CN114972935A - Information processing method and related equipment - Google Patents

Information processing method and related equipment

Info

Publication number
CN114972935A
CN114972935A (application CN202110221913.6A)
Authority
CN
China
Prior art keywords
information
touch
target
array
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110221913.6A
Other languages
Chinese (zh)
Inventor
朱启伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Huawei Technologies Co Ltd
Original Assignee
Shanghai Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Huawei Technologies Co Ltd filed Critical Shanghai Huawei Technologies Co Ltd
Priority to CN202110221913.6A (published as CN114972935A)
Priority to PCT/CN2021/131058 (published as WO2022179197A1)
Priority to JP2023550693A (published as JP2024507891A)
Priority to EP21927621.9A (published as EP4266211A4)
Publication of CN114972935A
Priority to US18/456,150 (published as US20230410353A1)

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867Combination of radar systems with cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/91Radar or analogous systems specially adapted for specific applications for traffic control
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/40Means for monitoring or calibrating
    • G01S7/4004Means for monitoring or calibrating of parts of a radar system
    • G01S7/4026Antenna boresight
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/803Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of input or preprocessed data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/017Detecting movement of traffic to be counted or controlled identifying vehicles
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/017Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/04Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30236Traffic on road, railway or crossing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Data Mining & Analysis (AREA)
  • Electromagnetism (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Image Analysis (AREA)
  • Position Input By Displaying (AREA)

Abstract

The embodiments of this application disclose an information processing method and related equipment, which are used to improve the efficiency of detection information fusion. The method in the embodiments of this application comprises the following steps: acquiring a plurality of pieces of detection information from a plurality of sensors, where the pieces of detection information include detection information obtained by different sensors for the same target objects; acquiring corresponding array information according to the plurality of pieces of detection information, and determining target array information according to the plurality of pieces of array information, where the target array information represents detection information obtained by different sensors for the same set of target objects; and, according to the array position information of each target object in the target object set, fusing the detection information corresponding to the same target object across the plurality of pieces of array information.

Description

Information processing method and related equipment
Technical Field
The embodiment of the application relates to the field of data processing, in particular to an information processing method and related equipment.
Background
For the same target object, the feature information that can be detected by different types of sensors is different, for example, a camera can detect the appearance feature of the target object, and a radar can detect the movement speed and distance of the target object. For the same target object, in order to obtain more characteristic information of the target object, detection results of different sensors need to be combined to obtain fusion detection information of the target object.
In order to fuse the detection results of different types of sensors, the space and time of the different sensors need to be aligned. The spatial alignment procedure is as follows: the picture that each sensor can observe is obtained, a calibration point is determined in the actual space, and the position of the calibration point in the actual space is associated with the position at which the calibration point appears in each picture. By performing this operation for a plurality of calibration points, a mapping relationship between the actual space and each sensor picture is established, and thus a mapping relationship between the sensor pictures themselves is established. The time of the different sensors is also aligned: when, at the same moment, object information is detected at a certain point in one sensor picture and object information is also detected at the corresponding point in another sensor picture, the two pieces of information are determined to belong to the same object. The detection results of the different sensors for that object can therefore be combined as the fused detection information of the object.
Because the method needs manual calibration, the efficiency of information fusion is low.
Disclosure of Invention
The embodiments of this application provide an information processing method for fusing detection information detected by different sensors, so as to improve the efficiency of detection information fusion.
In a first aspect, an embodiment of the present application provides an information processing method, where the method is applied to a processing device in a detection system, and the detection system further includes a plurality of sensors. The detection information obtained by each of the plurality of sensors includes detection information for the same plurality of target objects. The method comprises the following steps:
the processing device acquires a plurality of pieces of detection information from the plurality of sensors, where the plurality of pieces of detection information correspond to the plurality of sensors one to one, and each piece of detection information is detected by the sensor corresponding to it. The processing device determines a plurality of corresponding pieces of array information according to the plurality of pieces of detection information, where the pieces of array information correspond to the pieces of detection information one to one, and each piece of array information is used to describe the positional relationship between the objects detected by the corresponding sensor, the objects including the target objects. The processing device determines target array information according to the plurality of pieces of array information, where the coincidence degree between the target array information and each of the plurality of pieces of array information is higher than a preset threshold, the target array information is used to describe the positional relationship among the plurality of target objects, and the target array information includes the array position information of each target object. The processing device fuses, according to the array position information of each target object, the detection information corresponding to the same target object across the plurality of pieces of array information.
In the embodiments of this application, the array information describing the objects detected by each sensor is determined from the detection information of that sensor, and the target array information is determined according to the coincidence degree between the pieces of array information, so that the target objects are identified. Because the target array information is the array information whose characteristics are similar across different sensors, it reflects the detections made by different sensors for the same set of target objects. The correspondence between the detection results of any object at different sensors can therefore be determined from the target array information, and the detection results of different sensors for the same object can be fused according to this correspondence. Compared with a manual calibration method, acquiring fused detection information through array information can greatly improve the efficiency of acquiring fused detection information.
In addition, in the aspect of information acquisition, the method provided by the embodiment of the application only needs to provide detection information of different sensors, does not need to occupy an observed field, and expands the application range of detection information fusion.
With reference to the first aspect, in the first implementation manner of the first aspect of the embodiments of the present application, the detection information may include a position feature set, where the position feature set may include a plurality of position features, and the position features are used to represent a position relationship between an object detected by a corresponding sensor and objects around the object.
In the embodiment of the application, the detection information includes the position feature set, and the position relationship between the objects detected by the sensors can be accurately reflected through the position feature set, so that accurate formation information can be determined through the position relationship between the objects, and therefore, the detection information from different sensors for the same target object can be accurately fused.
With reference to the first implementation manner of the first aspect, in a second implementation manner of the first aspect of the embodiments of the present application, the determining, by the processing device, of the corresponding multiple pieces of array information according to the multiple pieces of detection information may specifically include: the processing device acquires multiple corresponding pieces of touch-line information according to the multiple position feature sets, where each piece of touch-line information is used to describe information about an object, detected by the corresponding sensor, touching a reference line, and the pieces of touch-line information correspond to the position feature sets one to one. The processing device then determines the multiple corresponding pieces of array information according to the multiple pieces of touch-line information, where the pieces of touch-line information and the pieces of array information are in one-to-one correspondence.
In the embodiments of this application, the touch-line information is acquired through the position feature set. Because the touch-line information describes objects touching the reference line, touching the reference line yields data with specific numerical values or specific position features, such as the touch time, the touch interval and the touch position. Therefore, from the specific values or position features of multiple objects touching the line, a set of touch-line data can be obtained, such as a number sequence of touch times, a number sequence of touch intervals, or a distribution of touch positions. Because this set of touch-line data consists of specific numerical values or position features, it can be operated on directly without further data processing, so the target array type information whose coincidence degree meets the preset threshold can be determined quickly.
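As an illustration of how such touch-line data could be derived in practice, the following sketch in Python extracts the touch time and touch position of each object crossing a reference line; the track format, the reference line y = y_ref and all function names are assumptions made for this example and are not part of this application. Touch intervals then follow directly from consecutive touch times.

```python
def touch_line_events(track_points, y_ref):
    """Return (touch_time, touch_x) for every object crossing the line y = y_ref.

    track_points: iterable of (timestamp, object_id, x, y) samples from one sensor.
    """
    events = []
    last_sample = {}                                   # object_id -> (t, x, y)
    for t, obj_id, x, y in sorted(track_points):       # process in time order
        if obj_id in last_sample:
            t0, x0, y0 = last_sample[obj_id]
            if (y0 < y_ref) != (y < y_ref):            # crossed the line between samples
                ratio = (y_ref - y0) / (y - y0)        # linear interpolation
                events.append((t0 + ratio * (t - t0), x0 + ratio * (x - x0)))
        last_sample[obj_id] = (t, x, y)
    return sorted(events)                              # ordered by touch time


def touch_intervals(events):
    """Touch-interval sequence: gaps between consecutive touch times."""
    times = [t for t, _ in events]
    return [b - a for a, b in zip(times, times[1:])]
```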
With reference to the second implementation manner of the first aspect, in a third implementation manner of the first aspect of the embodiment of the present application, the target array type information may be determined according to the touch partition sequence, specifically:
the touch-line information includes time sequence information and touch-point partition information of objects, detected by the corresponding sensor, touching the reference line, where the touch-point partition information represents the partition of the reference line in which the touch point of an object lies; the array type information includes a touch partition sequence, which represents the temporal (front-to-back) order of the partition positions at which the objects detected by the corresponding sensor touch the reference line.
The processing device determines the target array information according to the plurality of pieces of array information, which may specifically include: the processing device acquires a first subsequence of the touch partition sequences and takes the first subsequence as the target array type information, where the coincidence degree between the first subsequence and the touch partition sequences is higher than a first threshold.
The fusing, by the processing device, of the detection information corresponding to the same target object across the multiple pieces of array information according to the array position information of each target object may specifically include: the processing device fuses the detection information corresponding to the same target object across the touch partition sequences according to the touch-point partition information corresponding to each target object in the first subsequence.
In the embodiments of this application, the time sequence information represents the front-to-back order in which different target objects touch the reference line, and the touch-point partition information represents their left-to-right positions on the reference line; together, they embody in the touch partition sequence the positional relationship among the target objects touching the reference line. The time sequence information and the touch-point partition information are specific numerical values, so the touch partition sequence is a set of values reflecting the positional relationship between the target objects. Corresponding touch partition sequences are acquired from the detection information of the different sensors. Since the acquired touch partition sequences are sets of numerical values, determining whether their coincidence degree meets the preset threshold only requires comparing the corresponding values, without complex operations, which improves the efficiency of matching the target array type information.
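For illustration only, a touch partition sequence could be built from such touch events by mapping each touch position to a partition (lane) index and ordering by touch time; the lane width and the event format below are assumptions, not values prescribed by this application.

```python
def touch_partition_sequence(events, lane_width):
    """Partition indices of the touch points, in the order the line was touched."""
    return [int(x // lane_width) for _, x in sorted(events)]

# e.g. events = [(10.1, 4.2), (11.3, 7.9), (13.0, 1.1)] and lane_width = 3.5
# gives the touch partition sequence [1, 2, 0].
```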
With reference to the third implementation manner of the first aspect, in a fourth implementation manner of the first aspect of the embodiments of the present application, a first subsequence whose coincidence degree with each touch partition sequence is higher than the first threshold may be determined from the multiple touch partition sequences, derived from the detection information of different sensors, through a longest common subsequence (LCS) algorithm. In the embodiments of the present application, all common sequences of the multiple touch partition sequences may be obtained through the LCS algorithm, so that the identical position features of the multiple touch partition sequences are matched. Since the LCS algorithm computes the longest common subsequence, the first subsequence determined by the LCS algorithm may be the longest of the subsequences whose coincidence degree with the touch partition sequences is higher than the first threshold.
In the embodiments of the present application, all common sequences of the multiple touch partition sequences may be determined through the LCS algorithm, so that all fragments of the touch partition sequences that have the same position features are matched. If multiple fragments are common sequences and some non-common sequences are mixed within them, the non-common sequences mixed in the common sequences can be identified. These non-common sequences represent positional relationships that differ between sensors. In this case, a non-common sequence embedded in a common sequence can be regarded as being caused by a false detection or missed detection of a sensor, and fault tolerance can be applied to it, that is, the non-common sequence is still treated as corresponding to target objects detected by the different sensors, thereby realizing fusion of the detection information.
In the embodiments of this application, the first subsequence determined by the LCS algorithm may be the longest of the subsequences whose coincidence degree with the multiple touch partition sequences is higher than the first threshold. Because the positional relationships between target objects may be coincidentally similar, the longer the determined subsequence, the lower the probability of such accidental similarity, so determining the longest subsequence through the LCS algorithm allows the target array type information of the same set of target objects to be determined accurately. For example, the positional relationship between two objects may be coincidentally similar, but if the criterion is raised to a high coincidence degree of the positional relationship among ten objects, the probability that ten objects happen to share a similar positional relationship is far lower than for two objects. Therefore, if a first subsequence covering ten objects is determined by the LCS algorithm, it is much more likely that the detection results from the different sensors correspond to the same ten objects, and the probability of a matching error is reduced.
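A minimal longest-common-subsequence sketch for two such touch partition sequences is shown below; it is an illustrative textbook LCS with backtracking, not the reference implementation of this application, and more than two sensors could be handled, for example, by applying it pairwise. The sequence values are invented.

```python
def lcs(seq_a, seq_b):
    """Return one longest common subsequence of two touch partition sequences."""
    m, n = len(seq_a), len(seq_b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if seq_a[i] == seq_b[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    out, i, j = [], m, n                      # backtrack to recover the subsequence
    while i and j:
        if seq_a[i - 1] == seq_b[j - 1]:
            out.append(seq_a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

# Lane order of touch events as seen by two different sensors (toy values):
camera_seq = [2, 1, 3, 2, 1, 1, 3]
radar_seq  = [2, 3, 2, 1, 2, 1, 3]
print(lcs(camera_seq, radar_seq))   # prints one longest shared lane-order pattern
```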
With reference to the second implementation manner of the first aspect, in a fifth implementation manner of the first aspect of the embodiment of the present application, the target array type information may be determined according to the touch position sequence, specifically:
the touch-line information includes time sequence information and touch-point position information of objects, detected by the corresponding sensor, touching the reference line, where the touch-point position information represents the position, along the reference line, of the point at which an object touches the line and thus reflects the left-right positional relationship between the target objects; the array information includes a touch position sequence, which represents the temporal (front-to-back) order of the positions at which the objects detected by the corresponding sensor touch the reference line.
The processing device determines the target array information according to the multiple pieces of array information, which may specifically include: the processing device acquires a third subsequence of the multiple touch position sequences and takes the third subsequence as the target array type information, where the coincidence degree between the third subsequence and the multiple touch position sequences is higher than a third threshold.
The fusing, by the processing device, of the detection information corresponding to the same target object across the multiple pieces of array information according to the array position information of each target object may specifically include: the processing device fuses the detection information corresponding to the same target object across the multiple touch position sequences according to the touch-point position information corresponding to each target object in the third subsequence.
In the embodiment of the present application, the touch point position information represents a left-right relationship between different target objects touching the reference line, and may be a continuous numerical value or data. Therefore, based on the continuous numerical values or data, the array information of the target object can be more accurately distinguished from the array information of other non-target objects, and the fusion of the detection information of the same target object can be more accurately realized.
Furthermore, the movement trend between the objects can be analyzed or calculated through the continuous numerical values or data, and other information, such as the movement track of the object, can be calculated besides the movement trend, which is not limited herein.
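The matching idea above could be adapted to continuous touch positions by treating two positions as matching when they differ by less than a tolerance; the sketch below is a hedged illustration (the tolerance of 0.3 is an arbitrary assumption, and the function name is invented), returning only the length of the matched pattern.

```python
def lcs_length_with_tolerance(pos_a, pos_b, tol=0.3):
    """Length of the longest common pattern of two touch position sequences,
    where positions within `tol` of each other count as the same."""
    m, n = len(pos_a), len(pos_b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if abs(pos_a[i] - pos_b[j]) < tol:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n]
```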
With reference to the second implementation manner of the first aspect, in a sixth implementation manner of the first aspect of this embodiment of the present application, the target array type information may be determined according to the touch interval sequence, specifically:
the touch-line information includes time sequence information and touch time interval information of objects, detected by the corresponding sensor, touching the reference line, where the touch time interval information represents the time interval between successive objects touching the reference line; the array information includes a touch interval sequence, which represents the distribution of the time intervals at which the objects detected by the corresponding sensor touch the reference line.
The processing device determines the target array information according to the plurality of pieces of array information, which may specifically include: the processing device acquires a second subsequence of the touch interval sequences and takes the second subsequence as the target array type information, where the coincidence degree between the second subsequence and the touch interval sequences is higher than a second threshold.
The processing equipment fuses the detection information corresponding to the same target object in at least two pieces of array type information according to the array position information of each target object, and the method comprises the following steps: and the processing equipment fuses the detection information corresponding to the same target object in at least two touch interval sequences according to the touch time distribution information corresponding to each target object in the second subsequence.
In the embodiments of this application, the time sequence information represents the front-to-back order in which different target objects touch the reference line, and the touch time interval information represents the time intervals between successive touches of the reference line; together, they embody in the touch interval sequence the positional relationship among the target objects touching the reference line. The time sequence information and the touch time interval information are specific numerical values, so the touch interval sequence is a set of values reflecting the positional relationship between the target objects. Corresponding touch interval sequences are acquired from the detection information of the different sensors. Since the acquired touch interval sequences are sets of numerical values, determining whether their coincidence degree meets the preset threshold only requires comparing the corresponding values, without complex operations, which improves the efficiency of matching the target array type information.
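A worked toy example of why interval sequences can be compared directly: two sensors with unsynchronised clocks report different absolute touch times for the same vehicles, yet the interval sequences coincide. All numbers below are invented for illustration.

```python
camera_times = [10.1, 11.3, 13.0, 13.6, 15.2]   # touch times on the camera clock
radar_times  = [70.4, 71.6, 73.3, 73.9, 75.5]   # same vehicles on the radar clock

camera_gaps = [round(b - a, 2) for a, b in zip(camera_times, camera_times[1:])]
radar_gaps  = [round(b - a, 2) for a, b in zip(radar_times, radar_times[1:])]

print(camera_gaps)   # [1.2, 1.7, 0.6, 1.6]
print(radar_gaps)    # [1.2, 1.7, 0.6, 1.6]  -> identical pattern despite clock offset
```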
With reference to the sixth implementation manner of the first aspect, in a seventh implementation manner of the first aspect of the embodiments of the present application, a second subsequence whose coincidence degree with each touch interval sequence is higher than the second threshold may be determined from the multiple touch interval sequences, derived from the detection information of different sensors, through the LCS algorithm. In the embodiments of the present application, all common sequences of the multiple touch interval sequences may be obtained through the LCS algorithm, so that the identical position features of the multiple touch interval sequences are matched. Since the LCS algorithm computes the longest common subsequence, the second subsequence determined by the LCS algorithm may be the longest of the subsequences whose coincidence degree with the touch interval sequences is higher than the second threshold.
In the embodiments of the present application, all common sequences of the multiple touch interval sequences may be determined through the LCS algorithm, so that all fragments of the touch interval sequences that have the same position features are matched. If multiple fragments are common sequences and some non-common sequences are mixed within them, the non-common sequences mixed in the common sequences can be identified. These non-common sequences represent positional relationships that differ between sensors. In this case, a non-common sequence embedded in a common sequence can be regarded as being caused by a false detection or missed detection of a sensor, and fault tolerance can be applied to it, that is, the non-common sequence is still treated as corresponding to target objects detected by the different sensors, thereby realizing fusion of the detection information.
In the embodiments of this application, the second subsequence determined by the LCS algorithm may be the longest of the subsequences whose coincidence degree with the multiple touch interval sequences is higher than the second threshold. Because the time intervals at which target objects touch the reference line may be coincidentally similar, the longer the determined subsequence, the lower the probability of such accidental similarity, so determining the longest subsequence through the LCS algorithm allows the target array information of the same set of target objects to be determined accurately. For example, the time intervals at which two target objects touch the reference line may be coincidentally similar, but if the criterion is raised to a high coincidence degree of the touch intervals of ten target objects, the probability that ten target objects happen to share similar intervals is far lower than for two. Therefore, if a second subsequence covering ten target objects is determined by the LCS algorithm, it is much more likely that the detection results from the different sensors correspond to the same ten target objects, and the probability of a matching error is reduced.
With reference to the second implementation manner of the first aspect, in an eighth implementation manner of the first aspect of the present application, the target array type information may be determined according to the touch partition sequence and the touch interval sequence, specifically:
the touch-line information includes time sequence information, touch-point partition information and touch time interval information of objects, detected by the corresponding sensor, touching the reference line, where the touch-point partition information represents the partition of the reference line in which the touch point of an object lies, and the touch time interval information represents the time interval between successive objects touching the reference line; the array information includes a touch partition sequence and a touch interval sequence, where the touch partition sequence represents the temporal order of the partition positions at which the objects detected by the corresponding sensor touch the reference line, and the touch interval sequence represents the distribution of the time intervals at which those objects touch the reference line.
The processing device determines the target array information according to the plurality of array information, and specifically may include:
the processing device acquires a first subsequence of the at least two touch partition sequences, where the coincidence degree between the first subsequence and the touch partition sequences is higher than a first threshold; the processing device acquires a second subsequence of the at least two touch interval sequences, where the coincidence degree between the second subsequence and the touch interval sequences is higher than a second threshold; the processing device determines the intersection of a first object set and a second object set and takes the intersection as the target object set, where the first object set is the set of objects corresponding to the first subsequence and the second object set is the set of objects corresponding to the second subsequence; and the processing device takes the touch partition sequence and the touch interval sequence of the target object set as the target array type information.
In this embodiment of the present application, an intersection of the first object set and the second object set is determined by using the first object set corresponding to the first subsequence and the second object set corresponding to the second subsequence, and the intersection is used as the target object set. The objects in the intersection correspond to the first subsequence, that is, similar touch partition information can be obtained according to the detection information of different sensors; at the same time, the objects in the intersection correspond to the second subsequence, i.e. have similar touch interval information at the same time, depending on the detection information of the different sensors. If a plurality of similar pieces of information indicating the positional relationship of the objects can be acquired based on the detection information of the plurality of sensors, the possibility that the object set corresponding to the detection information is the same object set is higher than the case where only one piece of similar information indicating the positional relationship of the objects can be acquired. Therefore, by screening the intersection of the objects corresponding to the multiple subsequences, the array information of the target object can be more accurately distinguished from the array information of other non-target objects, and the fusion of the detection information aiming at the same target object can be more accurately realized.
In this embodiment, in addition to the intersection of the object corresponding to the first subsequence and the object corresponding to the second subsequence, an intersection between objects corresponding to other subsequences may also be taken, for example, an intersection between objects corresponding to the first subsequence and the third subsequence, or an intersection between objects corresponding to the second subsequence and the third subsequence, or an intersection between objects corresponding to other subsequences and an object corresponding to any one of the first to third subsequences. Other sub-sequences are also used to indicate the position relationship between the objects, such as the distance or direction between the objects, and are not limited herein. By taking the intersection of objects corresponding to different subsequences, a proper subsequence can be flexibly selected for operation, and feasibility and flexibility of the scheme are improved.
In the embodiment of the present application, in addition to taking the intersection between the respective corresponding objects of the two sub-sequences, the intersection between the respective corresponding objects of more sub-sequences may also be taken, for example, the intersection between the respective corresponding objects of the first sub-sequence, the second sub-sequence, and the third sub-sequence. The larger the number of subsequences taken, the more types of information that can be obtained from the detection information of the plurality of sensors and that indicates the positional relationship of the objects that are similar, the higher the possibility that the set of objects corresponding to the detection information is the same set of objects. Therefore, by screening the intersection of the objects corresponding to the multiple subsequences, the array information of the target object can be more accurately distinguished from the array information of other non-target objects, and the fusion of the detection information aiming at the same target object can be more accurately realized.
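A trivial sketch of the intersection step, assuming each matched subsequence has already been mapped back to the set of object identifiers it covers; the identifiers below are invented for illustration.

```python
# objects covered by the first subsequence (touch partition match)
objects_from_partition_match = {"obj3", "obj5", "obj7", "obj9"}
# objects covered by the second subsequence (touch interval match)
objects_from_interval_match = {"obj5", "obj7", "obj8", "obj9"}

# target object set: objects confirmed by both positional criteria
target_object_set = objects_from_partition_match & objects_from_interval_match
print(sorted(target_object_set))   # ['obj5', 'obj7', 'obj9']
```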
With reference to the first implementation manner of the first aspect, in a ninth implementation manner of the first aspect of the embodiment of the present application, the target array type information may be determined through a target group distribution map, specifically:
the formation information includes a target group profile, wherein the target group profile represents a positional relationship between the objects.
The processing device determines a plurality of corresponding array information according to the plurality of detection information, and may specifically include: the processing equipment acquires a plurality of corresponding initial target group distribution maps according to the plurality of position feature sets, wherein the initial target group distribution maps represent the position relation among the objects detected by the corresponding sensors; the processing equipment acquires standard view angle diagrams of a plurality of initial target group distribution diagrams through a view angle change algorithm, and takes the standard view angle diagrams as a plurality of corresponding target group distribution diagrams, wherein the position information of the target group distribution diagrams comprises target object distribution information of a target object, and the target object distribution information represents the position of the target object in an object detected by a corresponding sensor.
The processing device determines the target array information according to the at least two array information, and specifically may include: and the processing equipment acquires the image feature set of the plurality of target group distribution maps and takes the image feature set as target array type information, wherein the coincidence degree of the image feature set and the plurality of target group distribution maps is higher than a third threshold value.
The processing device fuses detection information corresponding to the same target object in the multiple pieces of array information according to the array information of each target object, and specifically may include: and the processing equipment fuses the detection information corresponding to the same target object in the plurality of target group distribution graphs according to the target object distribution information corresponding to each target object in the image feature set.
In the embodiment of the application, a plurality of corresponding initial target group distribution maps are obtained according to detection information from different sensors, a plurality of corresponding target group distribution maps are obtained through a view angle change algorithm, an image feature set of the plurality of target group distribution maps is obtained, and the image feature set is used as target array type information. And determining an image feature set with the coincidence degrees of the plurality of target group distribution graphs higher than a preset threshold value through the plurality of target group distribution graphs derived from the plurality of sensors. Because the image characteristics can intuitively reflect the position relation between the displayed objects in the image, the image characteristic set is determined through the plurality of target group distribution maps, the detection results with similar position relation can be intuitively reflected, the detection results of different sensors to the same target group can be intuitively matched, and the fusion of the detection information is accurately realized.
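One common way to realise a "view angle change" onto a standard (top-down) view is a plane homography; the sketch below uses OpenCV purely as an example, and the four reference correspondences and all coordinates are assumptions — the application does not prescribe this library or these values.

```python
import numpy as np
import cv2

# Four points of the road plane in one sensor's image (pixels) ...
src = np.float32([[100, 400], [540, 400], [620, 80], [20, 80]])
# ... and the same four points in the standard top-down view (metres).
dst = np.float32([[0, 0], [10, 0], [10, 50], [0, 50]])

H = cv2.getPerspectiveTransform(src, dst)     # 3x3 homography matrix

# Map detected object positions from the sensor view into the standard view,
# producing this sensor's target group distribution map in common coordinates.
detections = np.float32([[[320, 300]], [[200, 150]]])   # shape (N, 1, 2)
standard_view = cv2.perspectiveTransform(detections, H)
print(standard_view.reshape(-1, 2))
```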
With reference to the ninth implementation manner of the first aspect, in the tenth implementation manner of the first aspect of the embodiment of the present application, the obtaining of the image feature set may be implemented with reference to a reference line, specifically:
the processing device can acquire a plurality of touch line information of the target object corresponding to the position feature sets according to the plurality of position feature sets, wherein each touch line information in the plurality of touch line information is used for describing information of an object touch reference line detected by the corresponding sensor, and the plurality of touch line information and the plurality of position feature sets are in one-to-one correspondence.
The processing device obtains a plurality of corresponding initial target group distribution maps according to the plurality of position feature sets, and specifically may include: the processing equipment acquires a plurality of corresponding initial target group distribution graphs according to the plurality of contact line information, wherein objects in the plurality of initial target group distribution graphs have the same contact line information.
In the embodiments of this application, images captured at nearby moments are highly similar. If the same moment is not determined, then when the initial target group distribution maps from different sensors are matched, interference from distribution maps captured at nearby moments is introduced; this causes distribution maps to be matched incorrectly, the image feature set to be acquired incorrectly, and detection information from different moments to be fused, resulting in fusion errors. By determining the multiple initial target group distribution maps through the touch-line information, the fact that they share the same touch-line information indicates that they were acquired at the same moment, so the fused detection information is guaranteed to come from the same moment and the accuracy of detection information fusion is improved.
With reference to the first aspect, any one of the first implementation manner to the tenth implementation manner of the first aspect, in an eleventh implementation manner of the first aspect of the embodiment of the present application, mapping of spatial coordinate systems between different sensors may also be implemented, specifically:
the plurality of sensors include a first sensor and a second sensor, wherein a spatial coordinate system corresponding to the first sensor is a standard coordinate system, and a spatial coordinate system corresponding to the second sensor is a target coordinate system, and the method may further include:
the processing equipment determines the mapping relation between a plurality of standard point information and a plurality of target point information according to the fusion detection information, wherein the fusion detection information is obtained by fusing detection information corresponding to the same target object in a plurality of pieces of array type information, the standard point information represents the position information of each object in a target object set in a standard coordinate system, the target point information represents the position information of each object in the target object set in the target coordinate system, and the plurality of pieces of standard point information correspond to the plurality of pieces of target point information one to one; and the processing equipment determines the mapping relation between the standard coordinate system and the target coordinate system according to the mapping relation between the standard point information and the target point information.
In the embodiment of the application, the mapping relationship between the plurality of standard point information and the plurality of target point information is determined by fusing the detection information, and the mapping relationship between the standard coordinate system and the target coordinate system is determined by the mapping relationship between the plurality of standard point information and the plurality of target point information. The method of the embodiment of the application can realize the mapping of the coordinate systems among different sensors as long as the detection information from different sensors can be acquired. Subsequent steps of determining target array type information, mapping point information and the like can be automatically realized by processing equipment without manual calibration and mapping. The target array type information is matched through the processing equipment, and the accuracy of point information mapping is improved by the accuracy of equipment operation. Meanwhile, as long as the detection information from different sensors can be obtained, the fusion of the detection information and the mapping of a coordinate system can be realized, the scene limitation caused by manual calibration is avoided, and the accuracy and universality of the detection information fusion are ensured.
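A minimal sketch of how the coordinate-system mapping could then be estimated from the fused point pairs, assuming a 2-D affine model fitted by least squares; the point values are invented and the affine model is an illustrative choice, not a requirement of this application.

```python
import numpy as np

# Matched point pairs obtained from the fused detection information:
standard_pts = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 50.0], [0.0, 50.0], [5.0, 25.0]])
target_pts   = np.array([[2.0, 1.0], [12.1, 0.8], [11.9, 51.2], [1.8, 51.0], [7.0, 26.1]])

# Solve target ~ [x, y, 1] @ A for the 3x2 affine matrix A by least squares.
X = np.hstack([standard_pts, np.ones((len(standard_pts), 1))])
A, _, _, _ = np.linalg.lstsq(X, target_pts, rcond=None)

def standard_to_target(point):
    """Map a point from the standard coordinate system to the target coordinate system."""
    return np.append(point, 1.0) @ A

print(standard_to_target(np.array([5.0, 25.0])))   # close to the observed [7.0, 26.1]
```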
With reference to the first aspect, any one of the first implementation manner to the eleventh implementation manner of the first aspect, in a twelfth implementation manner of the first aspect of the embodiment of the present application, alignment of time axes between different sensors may also be implemented, and specifically, the method may further include:
the processing device calculates a time difference between time axes of the plurality of sensors based on a fusion result of detection information corresponding to the same target object among the plurality of pieces of array information.
In the embodiment of the present application, the time differences between the time axes of the plurality of sensors are calculated by the fusion result of the detection information of the same target object, and the time axes of different sensors can be aligned according to the time differences. The time axis alignment method provided by the embodiment of the application can be realized as long as the detection information of different sensors can be acquired, the application scenes of time axis alignment of different sensors are expanded without a plurality of sensors in the same time synchronization system, and meanwhile, the application range of information fusion is also expanded.
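For illustration, once touch events have been matched across sensors, the offset between their time axes can be estimated from the matched touch times; taking the median difference is one simple robust choice, and all numbers below are invented.

```python
camera_touch_times = [10.1, 11.3, 13.0, 13.6, 15.2]   # matched events, camera clock
radar_touch_times  = [70.4, 71.6, 73.3, 73.9, 75.5]   # the same events, radar clock

diffs = sorted(r - c for c, r in zip(camera_touch_times, radar_touch_times))
offset = diffs[len(diffs) // 2]      # median time difference between the two clocks
print(round(offset, 2))              # 60.3 -> add this to camera timestamps to align
```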
With reference to the first aspect, any one of the first implementation manner to the twelfth implementation manner of the first aspect, in a thirteenth implementation manner of the first aspect of the embodiment of the present application, error correction or screening of the sensors may also be implemented, specifically, the plurality of sensors include a standard sensor and a sensor to be measured, and the method may further include:
the processing equipment acquires standard array information corresponding to the target array information in the standard sensor; the processing equipment acquires the array information to be detected corresponding to the target array information in the sensor to be detected; the processing equipment determines the difference between the information of the array type to be detected and the information of the standard array type; and the processing equipment acquires an error parameter according to the difference and the standard array type information, wherein the error parameter is used for indicating the error of the array type information to be detected or indicating the performance parameter of the sensor to be detected.
In the embodiment of the application, the standard sensor is used as a detection standard, and the error parameter is obtained according to the difference between the information of the array to be detected and the information of the standard array. When the error parameter is used for indicating the error of the array information to be detected, the information corresponding to the error parameter in the array information to be detected can be corrected through the error parameter and the standard array information; when the error parameters are used for indicating the performance parameters of the sensor to be detected, the performance parameters such as the false detection rate of the sensor to be detected can be determined, and the data analysis of the sensor to be detected is realized, so that the selection of the sensor is realized.
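A toy sketch of deriving an error parameter, assuming the matched segment of the standard sensor's array information is taken as ground truth and compared element by element with the sensor under test; defining the error parameter as the fraction of mismatching entries is only an illustrative choice.

```python
standard_formation = [2, 3, 2, 1, 2, 1, 3]   # lane order from the standard sensor
measured_formation = [2, 3, 2, 1, 1, 1, 3]   # the same segment from the sensor under test

mismatches = sum(1 for s, m in zip(standard_formation, measured_formation) if s != m)
error_rate = mismatches / len(standard_formation)
print(round(error_rate, 3))   # 0.143 -> one mismatched entry out of seven
```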
The second aspect of the present application provides a processing device. The processing device is located in a detection system, the detection system further includes at least two sensors, and the detection information obtained by the at least two sensors includes detection information obtained by each of them for at least two of the same target objects. The processing device includes: a processor and a transceiver.
The transceiver is configured to acquire at least two pieces of detection information from at least two sensors, where the at least two sensors are in one-to-one correspondence with the at least two pieces of detection information.
The processor is configured to: determine at least two corresponding pieces of array information according to the at least two pieces of detection information, where each piece of array information is used to describe the positional relationship between the objects detected by the corresponding sensor, the objects including the target objects; determine target array information according to the at least two pieces of array information, where the coincidence degree between the target array information and each of the at least two pieces of array information is higher than a preset threshold, the target array information is used to describe the positional relationship between at least two target objects, and the target array information includes the array position information of each target object; and fuse, according to the array position information of each target object, the detection information corresponding to the same target object across the at least two pieces of array information.
The processing device is adapted to perform the method of the first aspect as described above.
For the beneficial effects of the second aspect, refer to the first aspect. Details are not described herein again.
A third aspect of embodiments of the present application provides a processing apparatus, including: a processor and a memory coupled to the processor. The memory is for storing executable instructions for instructing the processor to perform the method of the aforementioned first aspect.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, in which a program is stored, and when the program is executed by the computer, the method according to the first aspect is performed.
A fifth aspect of embodiments of the present application provides a computer program product, which when executed on a computer, performs the method of the first aspect.
Drawings
FIG. 1a is a schematic illustration of time axis alignment of multiple sensors;
FIG. 1b is a schematic view of the spatial coordinate system alignment of multiple sensors;
FIG. 2 is a schematic diagram of a matching target object provided in an embodiment of the present application;
FIG. 3a is a system diagram of an information processing method according to an embodiment of the present application;
FIG. 3b is a schematic view of an application scenario of the information processing method according to the embodiment of the present application;
FIG. 4 is a schematic flowchart of an information processing method according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating a feature of an information processing method according to an embodiment of the present application;
FIG. 6 is a schematic view of a scribing method provided in an embodiment of the present application;
FIG. 7 is another schematic flow chart of an information processing method according to an embodiment of the present application;
FIG. 8 is a schematic view of another application scenario of the information processing method according to the embodiment of the present application;
FIG. 9 is another schematic flow chart of an information processing method according to an embodiment of the present application;
FIG. 10 is a schematic view of another application scenario of the information processing method according to the embodiment of the present application;
FIG. 11 is another schematic flow chart of an information processing method according to an embodiment of the present application;
FIG. 12 is a schematic view of another application scenario of the information processing method according to the embodiment of the present application;
FIG. 13 is another schematic flow chart of an information processing method according to an embodiment of the present application;
FIG. 14 is a schematic view of another application scenario of the information processing method according to the embodiment of the present application;
FIG. 15 is another schematic flow chart of an information processing method according to an embodiment of the present application;
FIG. 16 is another schematic diagram of an information processing method according to an embodiment of the present application;
FIG. 17 is another schematic flow chart of an information processing method according to an embodiment of the present application;
FIG. 18 is a schematic view of another application scenario of the information processing method according to the embodiment of the present application;
FIG. 19 is a schematic view of another application scenario of the information processing method according to the embodiment of the present application;
FIG. 20 is a schematic structural diagram of a processing device according to an embodiment of the present application;
FIG. 21 is another schematic structural diagram of a processing apparatus according to an embodiment of the present application.
Detailed Description
The embodiment of the application provides an information processing method and related equipment, which are used for realizing fusion of detection information detected by different sensors so as to improve the efficiency of detection information fusion.
The sensor can detect the object, and different sensors can detect different detection information aiming at the same object. For example, a camera may detect appearance features such as shape and texture of an object, and a radar may detect motion information such as position and speed of the object. For the same object, if a plurality of kinds of information are to be acquired, detection information from different sensors needs to be fused.
To fuse the detection information, the time axes and the spatial coordinate systems of different sensors need to be aligned. Referring to fig. 1a, fig. 1a is a schematic diagram of the time axis alignment of multiple sensors. Alignment of the time axes requires that the sensors belong to the same time synchronization system. The time synchronization device in the system generates a time identifier and transmits the time identifier to the multiple sensors in the system; because the sensors in the system perform detection based on the same time identifier, their time axes can be aligned.
The time identifier of the time synchronization device can only be transmitted within the time synchronization system, and a sensor outside the system cannot receive it. Consequently, time axis alignment can only be achieved within the same time synchronization system, which limits the application scenarios of detection information fusion.
On the other hand, the alignment of the spatial coordinate systems needs to be realized by spatial calibration. Referring to fig. 1b, fig. 1b is a schematic diagram of the spatial coordinate system alignment of multiple sensors. Spatial calibration requires determining a calibration point in the actual space and manually calibrating the position of that point in the pictures of different sensors, for example, calibrating the calibration point 4 in the picture of the sensor A and the corresponding calibration point 4' in the picture of the sensor B, and then manually determining the mapping relationship between the positions of the same calibration point in the pictures of different sensors. To ensure the accuracy of the mapping relationship, multiple calibration points need to be calibrated to obtain a complete mapping of the spatial coordinate systems.
Because spatial calibration is performed manually, a person's subjective perception may deviate from the actual mapping relationship, so the actual mapping relationship cannot be truly reflected. For example, for the calibration point 4 and the calibration point 4' shown in fig. 1b, no point on the cylinder is clearly distinguishable from the other points, so the points calibrated in different pictures may not actually be the same point, resulting in calibration errors. Besides the cylinder, any other object without distinct feature points, such as a sphere, is prone to this kind of calibration error. Therefore, a manually calibrated mapping relationship is not necessarily accurate. If the spatial calibration is inaccurate, then in the process of fusing the detection information of multiple sensors, the same real target object may be determined as different target objects, or different target objects may be determined as the same target object, so that the fused information is erroneous.
In this embodiment of the present application, in addition to spatial calibration between the pictures of two cameras as shown in fig. 1b, spatial calibration may also be performed between sensors of different types, for example, between the picture of a camera and the picture of a radar. Calibration between the pictures of different types of sensors is likewise prone to the above calibration-point errors, and details are not described here.
Moreover, the efficiency of manual calibration is low, the space calibration needs to be performed manually for a plurality of calibration points, and the detected area cannot be used in the calibration process, which brings limitation to the actual operation. For example, if a train lane is to be spatially calibrated, manual calibration typically requires that the train lane be occupied for half a day or a day. Normally, the scheduling of train lanes does not allow such long occupancy times to occur. In this case, the spatial calibration and the fusion of the detection information cannot be realized.
In summary, current time axis alignment of different sensors is limited to a time synchronization system and cannot be achieved when the sensors are not in the same time synchronization system. Current alignment of the spatial coordinate systems of different sensors is limited by the low efficiency and low accuracy of manual calibration, so errors easily occur in the fusion of detection information, and the scenarios in which fusion can be implemented are limited.
In view of the above drawbacks, the embodiments of the present application provide an information processing method: formation information between the objects reflected in the detection information is obtained from the detection information of multiple sensors, and target array information with similar characteristics is matched to determine that different sensors have detected the same set of objects, so that the detection information of the different sensors is fused.
The method provided in the embodiments of the present application essentially reproduces, on a device, the process by which a person identifies the same target object in the pictures of different sensors. Each sensor has multiple pictures corresponding to multiple times, and the number, state, and other information of the target objects reflected in each picture differ. Faced with so much information, the human eye cannot directly capture all the details in a picture; it can only distinguish, as a whole, pictures of the same target object set among different pictures. Because multiple pictures of the same target object set are identified in different pictures, this process is also referred to as matching the target object set.
Matching the target object set by the human eye involves an abstraction process: other details in the picture are ignored, and only the positional relationship between the target objects in the picture is extracted, so that the array information between the target objects is abstracted.
To describe the abstraction process more clearly, it is explained next with reference to fig. 2. Referring to fig. 2, fig. 2 is a schematic diagram of matching a target object according to an embodiment of the present application. As shown in fig. 2, in the picture of the camera, that is, the detection information A, 5 motor vehicles form a shape similar to the number "9". In the picture of the radar, that is, the detection information B, 5 objects also form a shape similar to "9". It can therefore be considered that the two sets of 5 target objects in the two pictures have similar position characteristics, that is, similar formation information, and that the two target object sets are the same objects in the pictures of different sensors.
After the target object set is matched, the same single target object can be determined in the pictures of different sensors according to its position within the target object set.
As shown in fig. 2, in the detection information A detected by the sensor A, the target object at the bottom of the formation "9" is the target object a, and it can be considered that, in the detection information B detected by the sensor B, the target object a' at the bottom of the formation "9" is the same target object as the target object a.
Illustratively, the sensor A may be a camera and the sensor B may be a radar. Besides the foregoing combination, the sensor A and the sensor B may be other combinations, for example, the sensor A is a radar and the sensor B is an ETC sensor, or the sensor A and the sensor B are the same type of sensor, for example, both radars or both cameras, which is not limited herein.
In the embodiment of the present application, the number of sensors is not limited, and besides the sensor a and the sensor B, more sensors may be used to obtain more detection information, and the same target in the detection information may be analyzed, which is not limited herein.
The foregoing describes how a person identifies the picture of the same object among the pictures of different sensors; applying this idea to a device is the solution of the embodiments of the present application. Specifically, the solution of the embodiments of the present application mainly includes the following steps: 1. acquiring multiple pieces of detection information from different sensors; 2. determining corresponding array information according to the multiple pieces of detection information; 3. determining target array information with similar characteristics according to the multiple pieces of array information; 4. fusing the detection information of different sensors for the same target object according to the position information of each target object in the target array information.
Referring to fig. 3a, fig. 3a is a system diagram illustrating an information processing method according to an embodiment of the present disclosure. As shown in fig. 3a, the system is a detection system, which includes a processing device and a plurality of sensors. Taking sensor a and sensor B as an example, sensor a transmits detected detection information a to the processing device, and sensor B transmits detected detection information B to the processing device. And the processing equipment acquires the fusion information of the target object according to the detection information A and the detection information B.
It should be noted that, the devices in the detection system described in the present application may or may not have a fixed connection state, and data transmission is implemented in the form of data copy or the like. As long as the detection information of the sensor can be transmitted to the processing device, the sensor and the processing device can be referred to as a detection system, which is not limited herein. For example, the sensor a and the sensor B may respectively acquire the detection information, and then copy the detection information a and the detection information B to the processing device within a certain time, and the processing device processes the detection information a and the detection information B. This mode may also be referred to as offline processing.
It should be noted that the drawings only illustrate two sensors, and do not limit the number of sensors in the embodiments and detection systems of the present application.
Referring to fig. 3b, fig. 3b is a schematic view of an application scenario of an information processing method according to an embodiment of the present application. As shown in fig. 3b, the information processing method provided in the embodiment of the present application is mainly used for information fusion in a multi-sensor system. The multi-sensor system can receive detection information from a plurality of sensors and fuse the detection information from the plurality of sensors. The detection information may be a license plate from an Electronic Toll Collection (ETC) sensor, transaction flow information, and the like. In addition to the above information from the ETC sensor, the multi-sensor system may also acquire other detection information from other sensors, such as a license plate from a camera, vehicle type information, etc., distance from a radar, speed information, etc., and is not limited herein.
The information processing method provided by the embodiment of the application realizes the fusion of the detection information, and the fusion result can be applied to various scenes, such as toll audit on an expressway, off-site control, safety monitoring and the like. Besides the above-mentioned scenes on the expressway, the fusion result may also be applied to other scenes, such as a holographic intersection at an urban intersection, vehicle entry early warning, pedestrian early warning, or intrusion detection on a closed road, automatic parking, and the like, which is not limited herein.
The following describes an information processing method in the embodiments of the present application.
Based on the detection system shown in fig. 3a, the steps of the information processing method shown in the embodiment of the present application will be described in detail with reference to fig. 4. Referring to fig. 4, fig. 4 is a schematic flowchart of an information processing method according to an embodiment of the present disclosure. The method comprises the following steps:
401. The detection information A is acquired from the sensor A.
Optionally, the detection information a acquired by the sensor a may include a set of location features. The position feature set comprises a plurality of position features, and the position features are used for representing the position relation between the object detected by the sensor A and the objects around the object. For example, in the case where the sensor a is a camera, the detection information is a picture composed of pixels, and the position feature may be expressed as a distance between the pixels. The position feature may be expressed in other forms besides the distance between the pixels, for example, a left-right relationship or a front-back relationship between the pixels, and the like, and is not limited herein.
In the embodiment of the present application, the sensor a may be other types of sensors besides the camera, such as a radar, an Electronic Toll Collection (ETC) sensor, and the like, which is not limited herein. For different types of sensors, there may be corresponding location characteristics, for example, the location characteristics of the radar may be expressed as a distance between objects or a direction between objects, and the location characteristics of the ETC may be expressed as lane information of a vehicle and a front-back timing relationship, and the like, which is not limited herein.
402. The detection information B is acquired from the sensor B.
Optionally, the detection information B acquired by the sensor B may also include a set of location features. For the description of the sensor B, the detection information B, and the location feature, refer to the description of the sensor a, the detection information a, and the location feature in step 401, and are not described herein again.
It should be noted that, in the embodiment of the present application, the sensor a and the sensor B may be the same type of sensor or different types of sensors. For example, the sensor a and the sensor B may be cameras with different angles or different radars, or the sensor a may be a camera or a radar, and the sensor B may be ETC, which is not limited herein.
It should be noted that the number of sensors in the embodiment of the present application is not limited to two, and the number of sensors may be any integer greater than or equal to 2, which is not limited herein. As examples of the sensors in the monitoring system, if the detection system includes more sensors, the description of the sensors refers to the description of the sensors a and B in steps 401 and 402, and details thereof are not repeated herein. The plurality of sensors are not limited in kind, and may be the same kind of sensors or different kinds of sensors, and are not limited herein.
403. The array information A is determined according to the detection information A.
After acquiring the detection information A, the processing device can determine, according to the detection information A, the array information A indicating the positional relationship between the objects detected by the sensor A.
Optionally, if the detection information A includes a position feature set, the processing device may determine the array information A according to the position feature set. There are various methods for acquiring the array information A according to the position feature set, and the acquired array information A also differs accordingly; to describe these methods more clearly, they are presented separately in the following embodiments. For the specific processes, refer to the embodiments shown in fig. 7 to 17, and details are not described herein again.
404. The array information B is determined according to the detection information B.
After acquiring the detection information B, the processing device can determine, according to the detection information B, the array information B indicating the positional relationship between the objects detected by the sensor B. Specifically, the positional relationship between the objects may include at least one of a left-right positional relationship between the objects or a front-back positional relationship between the objects.
Optionally, if the detection information B includes a position feature set, the processing device may determine the array information B according to the position feature set. Specifically, the array information may be determined by a method such as the scribing method or an image feature matching method; for the process of obtaining the array information B, refer to the process of obtaining the array information A in step 403, and details are not described herein again.
In the embodiment of the present application, step 401 and step 402 have no necessary precedence relationship, that is, step 401 may be executed before or after step 402, and step 401 and step 402 may also be executed simultaneously, which is not limited herein. Step 403 and step 404 also have no necessary precedence relationship, that is, step 403 may be executed before or after step 404, and step 403 and step 404 may also be executed simultaneously, as long as step 403 is executed after step 401, and step 404 is executed after step 402, which is not limited herein.
In the embodiment of the present application, if the detection information from more sensors is obtained, the corresponding array type information is also determined according to the obtained detection information, and the process of determining the corresponding array type information refers to the description of step 403 and step 404, which is not described herein again.
405. The target array information is determined according to the array information A and the array information B.
After the array information A and the array information B are obtained, the target array information can be determined according to them. The coincidence degree between the target array information and the array information A and the coincidence degree between the target array information and the array information B are both higher than a preset threshold, indicating that the array information A and the array information B describe the same target object set.
In this embodiment of the present application, the array information may have multiple forms of expression, and the criteria for determining the coincidence degree differ accordingly. To describe the acquisition process and processing manner of different array information more clearly, detailed explanations are given below with reference to the embodiments of fig. 7 to fig. 17, and details are not repeated here.
406. The detection information for the same target object from the sensor A and the sensor B is fused according to the position information of the target object in the target array information.
The target array information includes the position information of each target object, which indicates the specific position of the target object in the target object set. Therefore, the detection results corresponding to the same target object in the detection information of different sensors can be determined according to the position information of the target object, and the corresponding pieces of detection information can be fused.
In this embodiment of the present application, the array information between the objects detected by the sensors is determined separately according to the detection information from different sensors, and the target array information is determined according to the coincidence degree of the pieces of array information, so that the target objects are determined. The target array information is array information with similar characteristics detected by different sensors and reflects the same target object set detected by the different sensors. Therefore, the correspondence between the detection results of any object at different sensors can be determined according to the target array information, and the detection results of different sensors for the same object can be fused according to this correspondence. Compared with the manual calibration method, acquiring fused detection information through array information can greatly improve the efficiency of acquiring fused detection information.
In addition, in the aspect of information acquisition, the method provided by the embodiment of the application only needs to provide detection information of different sensors, does not need to occupy an observed field, and expands the application range of detection information fusion.
Optionally, in step 403 and step 404, the corresponding array information may be determined according to the position feature set, and in step 405, the target array information needs to be determined according to the multiple pieces of array information. In this embodiment of the present application, the position feature set has different forms, and there are many ways to determine the array information, mainly including the scribing method and the image feature matching method, which are described separately below.
In the embodiment of the present application, the array information may include three types of information: 1. the lateral position relative relationship between the objects, such as the left-right position relationship or the left-right spacing between the objects; 2. the longitudinal position relative relationship between the objects, such as the front-back position relationship or the front-back spacing of the objects; 3. the characteristics of the object itself, such as length, width, height, shape, etc.
Taking fig. 5 as an example, fig. 5 is a schematic characteristic diagram of an information processing method according to an embodiment of the present application. As shown in fig. 5, the formation information may include the front-rear distance and the left-right distance between the vehicles, and may also include information of each vehicle, such as the model number and the license plate number of the vehicle, which is not limited herein.
It should be noted that fig. 5 only illustrates a vehicle on a road, and does not limit the object detected by the sensor, and the sensor may also be used to detect other objects, such as pedestrians, obstacles, and the like, which is not limited herein.
1. Scribing method.
For a human, the array information may appear as an overall shape, such as the shape "9" in the embodiment shown in FIG. 2. For a device, however, directly processing shapes or images is inefficient. Expressing the array information in a continuous or discrete numerical form can greatly improve data processing efficiency.
Converting the overall shape characteristic into a numerical characteristic can be achieved by the scribing method. A reference line is drawn in the pictures of different sensors, and information such as the timing and position at which objects touch the reference line is obtained, so that the shape characteristic is converted into a numerical characteristic that is convenient for the processing device to compute with.
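By way of illustration only (the data layout is assumed, not specified by this application), the touch timing of an object can be derived from its track relative to the reference line, for example as in the following Python sketch:

```python
def touch_time(track):
    # track: list of (timestamp, signed_distance_to_reference_line) samples in time order.
    # Returns the interpolated instant at which the object touches the reference line,
    # or None if it never crosses the line within the track.
    for (t0, d0), (t1, d1) in zip(track, track[1:]):
        if d0 > 0 >= d1:
            # Linear interpolation between the two samples around the crossing.
            return t0 + (t1 - t0) * d0 / (d0 - d1)
    return None

# Hypothetical track: the object crosses the reference line between t=2.0s and t=2.5s.
print(touch_time([(1.5, 3.0), (2.0, 1.0), (2.5, -0.5)]))  # ~2.33s
```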
In the embodiment of the present application, various information of the object touching the reference line is also referred to as touch line information. The touch line information may include timing information of the object touching the reference line, touch point partition information, touch point position information, touch time interval information, and the like, which are not limited herein.
The time sequence information represents the time sequence before and after the object detected by the sensor touches the datum line, and the front-back relation between the objects is reflected.
The touch point partition information represents partition information of a touch point of an object touching the reference line in the reference line. Referring to fig. 6, fig. 6 is a schematic view of a scribing method according to an embodiment of the present application. In the driving road, the reference line may be partitioned according to different lanes, for example, lane 1 is a zone 1, lane 2 is a zone 2, and lane 3 is a zone 3 in the figure.
The touch point position information represents position information of a touch point of an object touching the reference line in the reference line. For example, in fig. 6, the first vehicle in lane 1 is 1.5 meters away from the left end of the reference line, and the first vehicle in lane 3 is 7.5 meters away from the left end of the reference line.
The touch time interval information indicates a time interval before and after the object touches the reference line.
Among the three categories of the array information, the touch point partition information and the touch point position information can be categorized into the relative relationship between the horizontal positions of the objects, and the timing information and the touch time interval information can be categorized into the relative relationship between the vertical positions of the objects.
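For illustration, the four kinds of touch line information described above could be recorded per touch event roughly as follows (a sketch only; the field names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class TouchLineRecord:
    sequence_number: int   # timing information: order in which the object touches the reference line
    partition: int         # touch point partition information, e.g. the lane index
    position_m: float      # touch point position information: distance along the reference line, in metres
    interval_s: float      # touch time interval information: gap to the previous touch, in seconds

# Hypothetical records in the spirit of fig. 6: first vehicle in lane 1 at 1.5 m, first vehicle in lane 3 at 7.5 m.
records = [
    TouchLineRecord(sequence_number=1, partition=1, position_m=1.5, interval_s=0.0),
    TouchLineRecord(sequence_number=2, partition=3, position_m=7.5, interval_s=0.4),
]
print(records[0])
```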
There are various methods for determining the target array information from the above information, and they are described below by category:
1) Determining a first subsequence according to the timing information and the touch point partition information.
Referring to fig. 7, fig. 7 is a flowchart illustrating an information processing method according to an embodiment of the present disclosure. The method comprises the following steps:
701. The detection information A is acquired from a sensor A (camera).
Taking a camera as an example, in the case that the sensor a is a camera, the detection information is a picture composed of pixels, and the position features in the position feature set can be represented as distances between the pixels. The position feature may be expressed in other forms besides the distance between the pixels, for example, a left-right relationship or a front-back relationship between the pixels, and the like, and is not limited herein.
In the embodiment of the present application, the sensor a may be other types of sensors besides a camera, such as a radar, an ETC sensor, and the like, which is not limited herein. For different types of sensors, there may be corresponding location characteristics, for example, the location characteristics of the radar may be expressed as a distance between objects or a direction between objects, and the location characteristics of the ETC may be expressed as lane information of a vehicle and a front-back timing relationship, and the like, which is not limited herein.
702. The detection information B is acquired from a sensor B (radar).
Taking radar as an example, in the case where the sensor B is radar, the detection information is a picture of an object detected by the radar within a detection range, and the position features in the position feature set may be represented as distances between the objects. The position feature may be expressed in other forms besides the distance between the objects, for example, a left-right relationship or a front-back relationship between the objects, and the like, and is not limited herein.
In the embodiment of the present application, the sensor B may be other types of sensors besides radar, such as a camera, ETC sensor, and is not limited herein. For different types of sensors, there may be corresponding location features, which are not limited herein.
In the embodiment of the present application, the sensors a and B are merely examples of the sensors, and do not limit the kinds and the number of the sensors.
703. Timing information A and touch point partition information A of the object pixels touching the reference line are acquired according to the detection information A.
Since the detection information a is a picture composed of pixels, the touch line information is information of the pixels of the object touching the reference line. The processing device can acquire the timing information A and the touch point partition information A of the object pixel touching the reference line according to the detection information A.
Referring to fig. 8, fig. 8 is a schematic view of an application scenario of the information processing method according to an embodiment of the present application. As shown in fig. 8, the sequence number column indicates the order in which the objects touch the reference line, that is, the timing information A; the touch point partition information column indicates the partition of the touch point on the reference line when each object touches the reference line, that is, the touch point partition information A, where 1 indicates lane 1 and 3 indicates lane 3.
704. Timing information B and touch point partition information B of the objects touching the reference line are acquired according to the detection information B.
The detection information B is a picture of the objects detected by the radar within its detection range, and the touch line information is information of the objects touching the reference line. The processing device can acquire, according to the detection information B, the timing information B and the touch point partition information B of the objects touching the reference line. As shown in fig. 8, the sequence number column indicates the order in which the objects touch the reference line, that is, the timing information B; the touch point partition information column indicates the partition of the touch point on the reference line when each object touches the reference line, that is, the touch point partition information B, where 1 indicates lane 1 and 3 indicates lane 3.
In this embodiment of the present application, step 701 and step 702 do not have a necessary sequence, step 701 may be executed before or after step 702, or step 701 and step 702 may be executed simultaneously, which is not limited herein. Step 703 and step 704 have no necessary sequence, step 703 may be executed before or after step 704, or step 703 and step 704 may be executed simultaneously, as long as step 703 is executed after step 701, and step 704 is executed after step 702, which is not limited herein.
705. A touch partition sequence A is acquired according to the timing information A and the touch point partition information A.
As shown in fig. 8, according to the time sequence information a, the touch point partition information a may be arranged in sequence according to a time sequence, so as to obtain a touch partition sequence a.
706. A touch partition sequence B is acquired according to the timing information B and the touch point partition information B.
As shown in fig. 8, according to the time sequence information B, the touch point partition information B may be arranged in sequence according to a time sequence, so as to obtain a touch partition sequence B.
In this embodiment of the present application, step 705 and step 706 have no necessary sequence, step 705 may be executed before or after step 706, or step 705 and step 706 may be executed simultaneously, as long as step 705 is executed after step 703, and step 706 is executed after step 704, which is not limited herein.
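As a minimal sketch of steps 705 and 706 (assuming, for illustration, that the touch line information of each sensor is available as (sequence number, partition) pairs):

```python
def touch_partition_sequence(touch_events):
    # touch_events: iterable of (sequence_number, partition) pairs for one sensor.
    # Ordering the touch point partition information by the timing information
    # yields the touch partition sequence.
    return [partition for _, partition in sorted(touch_events)]

# Hypothetical events consistent with the segment (3, 3, 1, 3, 1) discussed below.
events_a = [(2, 3), (1, 3), (4, 3), (3, 1), (5, 1)]
print(touch_partition_sequence(events_a))  # [3, 3, 1, 3, 1]
```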
707. A first subsequence is acquired according to the touch partition sequence A and the touch partition sequence B.
The touch partition sequence A and the touch partition sequence B are essentially two sequences that the processing device can compare. When a sequence segment that is identical, or whose coincidence degree is sufficiently high, is found in both sequences, that segment can be considered a common part of the two sequences. In this embodiment of the present application, this sequence segment is also referred to as the first subsequence. A touch partition sequence reflects the positional relationship between the objects detected by a sensor, that is, the array information between the objects. When the two sequences include segments that are identical or have a high coincidence degree, the object sets corresponding to those segments have the same positional relationship, that is, the same array information. If different sensors detect the same or similar array information, the two sensors can be considered to have detected the same object set.
In this embodiment of the present application, the first subsequence is also referred to as target array information and represents the same or similar array information detected by multiple sensors.
Specifically, because a sensor has a certain missed detection rate, the first subsequence is not required to completely coincide with the segments in the touch partition sequence A and the touch partition sequence B; it is only required that the coincidence degree of the first subsequence with the touch partition sequence A and with the touch partition sequence B is higher than the first threshold. In the embodiments of the present application, the coincidence degree is also referred to as a similarity degree. Specifically, the first threshold may be 90%, but is not limited to 90%; the first threshold may also be other values, such as 95%, 99%, etc.
For example, the touch partition sequence A and the touch partition sequence B shown in fig. 8 each include the sequence segment (3, 3, 1, 3, 1). The processing device may treat this segment as the first subsequence. In this case, the coincidence degree of the first subsequence with both the touch partition sequence A and the touch partition sequence B is 100%.
Alternatively, the first subsequence may be determined by a longest common subsequence (LCS) algorithm. In this embodiment of the present application, all common sequences of the multiple touch partition sequences may be obtained through the LCS algorithm, so as to match the same position characteristics of the multiple touch partition sequences. Because the LCS algorithm calculates the longest common subsequence, the first subsequence calculated by the LCS algorithm may include the longest subsequence among the subsequences whose coincidence degrees with the touch partition sequences are all higher than the first threshold.
In this embodiment of the present application, all common sequences of the multiple touch partition sequences may be determined through the LCS algorithm, so as to match all segments of the touch partition sequences that have the same position characteristics. If multiple segments are common sequences and some non-common sequences are interspersed among them, these non-common sequences can be identified. The non-common sequences represent positional relationships that differ between the sensors. In this case, the non-common sequences interspersed in the common sequences can be regarded as being caused by false detection or missed detection of a sensor, so that the non-common sequences are tolerated, that is, they are still treated as corresponding to target objects detected by the different sensors, thereby realizing fusion of the detection information.
In this embodiment of the present application, the first subsequence determined by the LCS algorithm may include the longest subsequence among the subsequences whose coincidence degrees with the multiple touch partition sequences are higher than the first threshold. Because the positional relationships between target objects may be coincidentally similar, the longer the determined subsequence, the lower the possibility of a coincidentally similar positional relationship; determining the longest subsequence through the LCS algorithm therefore makes it possible to accurately determine the target array information of the same target object set.
For example, the positional relationship between two target objects may be coincidentally similar, but if the criterion is raised to a high coincidence of the positional relationships among ten target objects, the probability of ten target objects having similar positional relationships is far lower than that of two. Therefore, if a first subsequence covering ten target objects is determined by the LCS algorithm, it is more probable that the detection results of the different sensors are for the same ten target objects, and the probability of matching errors is reduced.
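A standard longest-common-subsequence computation is shown below purely as a sketch of how a first subsequence could be extracted from two touch partition sequences; it is not the specific algorithm of this application.

```python
def longest_common_subsequence(seq_a, seq_b):
    # Classic dynamic programming: dp[i][j] is the LCS length of seq_a[:i] and seq_b[:j].
    m, n = len(seq_a), len(seq_b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if seq_a[i - 1] == seq_b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Backtrack to recover one longest common subsequence.
    lcs, i, j = [], m, n
    while i > 0 and j > 0:
        if seq_a[i - 1] == seq_b[j - 1]:
            lcs.append(seq_a[i - 1])
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return lcs[::-1]

# Hypothetical touch partition sequences from two sensors.
print(longest_common_subsequence([2, 3, 3, 1, 3, 1], [3, 3, 1, 3, 1, 2]))  # [3, 3, 1, 3, 1]
```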
708. The detection information for the same target object from the sensor A and the sensor B is fused according to the position information of the target object in the first subsequence.
The first subsequence is composed of multiple pieces of touch point partition information, and for each piece of touch point partition information in the first subsequence, corresponding data can be found in the touch partition sequence A and the touch partition sequence B. For example, for the touch point partition information with sequence number 4 in the touch partition sequence A, its own touch point partition information is 3, and the touch point partition information immediately before and after it is 1.
In the embodiment of the present application, the single touch point partition information in the touch partition sequence or the first subsequence is also referred to as position information, and indicates a position of a single target object in the target object set.
In this embodiment of the present application, an object's own partition information is referred to as its own feature, and the partition information of the objects before, after, or near it is referred to as peripheral features. Besides the immediately preceding and following touch point partition information, the peripheral features may also include more nearby touch point partition information, which is not limited herein.
In the touch partition sequence B, the touch point partition information having the same own feature and peripheral features, namely the touch point partition information with sequence number 13, can also be found. Because the two pieces of touch point partition information are both in the first subsequence and have the same own feature and peripheral features, they can be considered to reflect the same object. Therefore, the processing device can fuse the detection information corresponding to sequence number 4 with the detection information corresponding to sequence number 13 to obtain the fused information of the target object.
For example, the camera corresponding to the touch partition sequence A can detect appearance information such as the size and shape of the object corresponding to sequence number 4, specifically, information such as the model, color, and license plate of the vehicle corresponding to sequence number 4. The radar corresponding to the touch partition sequence B can detect information such as the moving speed of the object corresponding to sequence number 13, specifically, information such as the speed and acceleration of the vehicle corresponding to sequence number 13. The processing device can fuse the model, color, license plate, and similar information with the speed, acceleration, and similar information to obtain the fused information of the vehicle.
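Continuing the illustration with hypothetical data structures (the detection records here are simplified dictionaries), once the first subsequence is known, a target object can be located in each sensor's touch partition sequence by its own feature together with its peripheral features, and the two detection records can then be merged:

```python
def find_index(sequence, own, before, after):
    # Locate the element whose own feature and neighbouring (peripheral) features match.
    for i in range(1, len(sequence) - 1):
        if (sequence[i - 1], sequence[i], sequence[i + 1]) == (before, own, after):
            return i
    return None

def fuse(seq_a, detections_a, seq_b, detections_b, own, before, after):
    # Merge the detection records that two sensors produced for the same target object.
    i = find_index(seq_a, own, before, after)
    j = find_index(seq_b, own, before, after)
    if i is None or j is None:
        return None
    return {**detections_a[i], **detections_b[j]}

# Simplified example: the camera contributes appearance information, the radar motion information.
seq_a = [1, 3, 1]
seq_b = [1, 3, 1]
det_a = [{}, {"model": "sedan", "plate": "A12345"}, {}]
det_b = [{}, {"speed_mps": 22.4, "accel_mps2": 0.3}, {}]
print(fuse(seq_a, det_a, seq_b, det_b, own=3, before=1, after=1))
```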
In this embodiment of the present application, the timing information represents the front-back order in which different target objects touch the reference line, and the touch point partition information represents the left-right position in which different target objects touch the reference line; through the timing information representing the front-back relationship and the touch point partition information representing the left-right relationship, the positional relationship of the multiple target objects touching the reference line is embodied in the touch partition sequence. The timing information and the touch point partition information are specific numerical values, and the touch partition sequence is a set of numerical values reflecting the positional relationship between the target objects. Corresponding touch partition sequences are acquired according to the detection information from different sensors. The acquired touch partition sequences are multiple sets of numerical values; determining whether their coincidence degree meets the preset threshold only requires comparing the corresponding numerical values, without complex operations, which improves the efficiency of matching target array information.
In this embodiment of the present application, in addition to determining the target array information according to the timing information and the touch point partition information, the target array information can also be determined according to the timing information and the touch time interval information.
2) Determining a second subsequence according to the timing information and the touch time interval information.
Referring to fig. 9, fig. 9 is a schematic flowchart of an information processing method according to an embodiment of the present disclosure. The method comprises the following steps:
901. The detection information A is acquired from a sensor A (camera).
902. The detection information B is acquired from a sensor B (radar).
For the description of step 901 and step 902, refer to step 701 and step 702 of the embodiment shown in fig. 7, and are not described herein again.
903. Timing information A and touch time interval information A of the object pixels touching the reference line are acquired according to the detection information A.
Since the detection information a is a picture composed of pixels, the touch line information is information of the pixels of the object touching the reference line. The processing device can acquire the timing information A and the touch time interval information A of the object pixel touching the reference line according to the detection information A.
Referring to fig. 10, fig. 10 is a schematic view of an application scenario of the information processing method according to an embodiment of the present application. As shown in fig. 10, the sequence number column indicates the order in which the objects touch the reference line, that is, the timing information A; the touch time interval information column indicates the time difference between each object touching the reference line and the previous object touching the reference line, that is, the touch time interval information A, where the touch time interval information is in seconds. The touch time interval information may also be in other units, such as milliseconds, which is not limited herein.
904. Timing information B and touch time interval information B of the objects touching the reference line are acquired according to the detection information B.
The detection information B is a picture of the object detected by the radar in the detection range, and the touch line information is information of the object touching the reference line. The processing device can acquire the timing information B and the touch time interval information B of the object touching the reference line according to the detection information B.
As shown in fig. 10, the sequence number column indicates the order in which the objects touch the reference line, that is, the timing information B; the touch time interval information column indicates the time difference between each object touching the reference line and the previous object touching the reference line, that is, the touch time interval information B, where the touch time interval information is in seconds. The touch time interval information may also be in other units, such as milliseconds, which is not limited herein.
In this embodiment of the application, step 901 and step 902 do not have a necessary sequence, step 901 may be executed before or after step 902, or step 901 and step 902 may be executed simultaneously, which is not limited herein. Step 903 and step 904 also have no necessary sequence, step 903 may be executed before or after step 904, or step 903 and step 904 may be executed simultaneously, as long as step 903 is executed after step 901, and step 904 is executed after step 902, which is not limited herein.
905. A touch interval sequence A is acquired according to the timing information A and the touch time interval information A.
As shown in fig. 10, according to the time sequence information a, the touch time interval information a may be arranged in sequence according to a time sequence, so as to obtain a touch interval sequence a.
906. A touch interval sequence B is acquired according to the timing information B and the touch time interval information B.
As shown in fig. 10, according to the time sequence information B, the touch time interval information B may be arranged in sequence according to a time sequence, so as to obtain a touch interval sequence B.
In this embodiment of the application, step 905 does not have a necessary sequence with step 906, step 905 may be executed before or after step 906, or step 905 and step 906 may be executed simultaneously, as long as step 905 is executed after step 903, and step 906 is executed after step 904, which is not limited herein.
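As a small illustrative sketch of steps 905 and 906 (the timestamps below are hypothetical), the touch interval sequence can be obtained by differencing the time-ordered touch timestamps:

```python
def touch_interval_sequence(touch_times):
    # touch_times: touch timestamps (in seconds) of one sensor.
    # Sorting by the timing information and taking successive differences
    # yields the touch interval sequence.
    ordered = sorted(touch_times)
    return [round(t1 - t0, 1) for t0, t1 in zip(ordered, ordered[1:])]

print(touch_interval_sequence([10.0, 12.0, 12.3, 14.2, 14.6]))  # [2.0, 0.3, 1.9, 0.4]
```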
907. A second subsequence is acquired according to the touch interval sequence A and the touch interval sequence B.
The touch interval sequence A and the touch interval sequence B are essentially two sequences that the processing device can compare. When a sequence segment that is identical, or whose coincidence degree is sufficiently high, is found in both sequences, that segment can be considered a common part of the two sequences. In this embodiment of the present application, this sequence segment is also referred to as the second subsequence. A touch interval sequence reflects the positional relationship between the objects detected by a sensor, that is, the array information between the objects. When the two sequences include segments that are identical or have a high coincidence degree, the object sets corresponding to those segments have the same positional relationship, that is, the same array information. If different sensors detect the same or similar array information, the two sensors can be considered to have detected the same object set.
In this embodiment of the present application, the second subsequence is also referred to as target array information and represents the same or similar array information detected by multiple sensors.
Specifically, because a sensor has a certain missed detection rate, the second subsequence is not required to completely coincide with the segments in the touch interval sequence A and the touch interval sequence B; it is only required that the coincidence degree of the second subsequence with the touch interval sequence A and with the touch interval sequence B is higher than the second threshold. In the embodiments of the present application, the coincidence degree is also referred to as a similarity degree. Specifically, the second threshold may be 90%, but is not limited to 90%; the second threshold may also be other values, such as 95%, 99%, etc., which are not limited herein.
For example, the touch interval sequence A and the touch interval sequence B shown in fig. 10 each include the sequence segment (2.0s, 0.3s, 1.9s, 0.4s). The processing device may treat this segment as the second subsequence. In this case, the coincidence degree of the second subsequence with both the touch interval sequence A and the touch interval sequence B is 100%.
Alternatively, the second subsequence may be determined by the LCS algorithm. In this embodiment of the present application, all common sequences of the multiple touch interval sequences may be obtained through the LCS algorithm, so as to match the same position characteristics of the multiple touch interval sequences. Because the LCS algorithm calculates the longest common subsequence, the second subsequence calculated by the LCS algorithm may include the longest subsequence among the subsequences whose coincidence degrees with the touch interval sequences are all higher than the second threshold.
In this embodiment of the present application, all common sequences of the multiple touch interval sequences may be determined through the LCS algorithm, so as to match all segments of the touch interval sequences that have the same position characteristics. If multiple segments are common sequences and some non-common sequences are interspersed among them, these non-common sequences can be identified. The non-common sequences represent positional relationships that differ between the sensors. In this case, the non-common sequences interspersed in the common sequences can be regarded as being caused by false detection or missed detection of a sensor, so that the non-common sequences are tolerated, that is, they are still treated as corresponding to target objects detected by the different sensors, thereby realizing fusion of the detection information.
In this embodiment of the present application, the second subsequence determined by the LCS algorithm may include the longest subsequence among the subsequences whose coincidence degrees with the multiple touch interval sequences are higher than the second threshold. Because the positional relationships between target objects may be coincidentally similar, the longer the determined subsequence, the lower the possibility of a coincidentally similar positional relationship; determining the longest subsequence through the LCS algorithm therefore makes it possible to accurately determine the target array information of the same target object set.
For example, the positional relationship between two target objects may be coincidentally similar, but if the criterion is raised to a high coincidence of the positional relationships among ten target objects, the probability of ten target objects having similar positional relationships is far lower than that of two. Therefore, if a second subsequence covering ten target objects is determined by the LCS algorithm, it is more probable that the detection results of the different sensors are for the same ten target objects, and the probability of matching errors is reduced.
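Because touch time intervals are real-valued measurements, exact equality in the LCS comparison would in practice be replaced by a tolerance test; the following sketch (tolerance chosen arbitrarily for illustration) adapts the dynamic program accordingly and returns the length of the matching part as a simple coincidence measure:

```python
def lcs_length_with_tolerance(seq_a, seq_b, tol=0.1):
    # LCS over touch interval sequences, treating two intervals as equal
    # when they differ by no more than `tol` seconds.
    m, n = len(seq_a), len(seq_b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if abs(seq_a[i - 1] - seq_b[j - 1]) <= tol:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

# Hypothetical interval sequences from two sensors with small measurement differences.
print(lcs_length_with_tolerance([2.0, 0.3, 1.9, 0.4], [2.1, 0.3, 1.8, 0.4]))  # 4
```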
908. The detection information for the same target object from the sensor A and the sensor B is fused according to the position information of the target object in the second subsequence.
The second subsequence is composed of a plurality of touch time interval information, and for each touch time interval information in the second subsequence, corresponding data can be found in the touch interval sequence a and the touch interval sequence B. For example, the touch time interval information with sequence number 3 in the touch interval sequence a has its own touch time interval information of 0.3s, and the previous and subsequent touch time interval information is 2.0s and 1.9s, respectively.
In the embodiment of the present application, the single touch time interval information in the touch interval sequence or the second subsequence is also referred to as the position information, and represents the position of the single target object in the target object set.
In the embodiment of the present application, the touch time interval information of an object itself is referred to as its self feature, and the touch time interval information of the objects before, after, or near it is referred to as its peripheral feature. In addition to the preceding and following touch time interval information, the peripheral feature may also include more nearby touch time interval information, which is not limited here.
In the touch interval sequence B, the touch interval information having the same self-feature and the same peripheral feature, i.e., the touch interval information with the sequence number 12, can also be found. Since the two pieces of touch time interval information are both in the second subsequence and have the same self characteristics and peripheral characteristics, it can be considered that the two pieces of touch time interval information reflect the same object. Therefore, the processing device can fuse the detection information corresponding to the serial number 3 with the detection information corresponding to the serial number 12 to obtain the fused information of the target object.
For example, the camera corresponding to the touch interval sequence A can detect appearance information such as the size and shape of the object corresponding to sequence number 3; specifically, the model, color, license plate and other information of the corresponding vehicle can be detected. The radar corresponding to the touch interval sequence B can detect information such as the moving speed of the object corresponding to sequence number 12; specifically, the vehicle speed, acceleration and other information of the corresponding vehicle can be detected. The processing device can fuse the model, color and license plate information with the vehicle speed and acceleration information to obtain the fused information of the vehicle.
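A minimal sketch of this fusion step is given below, assuming each sensor's touch interval sequence is a list of interval values and each detected object's attributes are held in a dictionary; the matching by self feature and peripheral feature and the attribute names are illustrative assumptions rather than the embodiment's interface.

```python
def find_match(idx_a, seq_a, seq_b, tol=0.05):
    """Find the index in seq_b whose self feature (its own touch time
    interval) and peripheral features (the previous and next intervals)
    match those of seq_a[idx_a]; returns None if there is no match."""
    def features(seq, i):
        prev = seq[i - 1] if i > 0 else None
        nxt = seq[i + 1] if i + 1 < len(seq) else None
        return (prev, seq[i], nxt)

    def close(x, y):
        # Boundary entries (None) are treated leniently in this sketch.
        return x is None or y is None or abs(x - y) <= tol

    target = features(seq_a, idx_a)
    for j in range(len(seq_b)):
        if all(close(x, y) for x, y in zip(target, features(seq_b, j))):
            return j
    return None


def fuse(camera_attributes, radar_attributes):
    """Merge the camera attributes (e.g. model, color, license plate)
    with the radar attributes (e.g. speed, acceleration) of one object."""
    fused = dict(camera_attributes)
    fused.update(radar_attributes)
    return fused
```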
In the embodiment of the application, the time sequence information represents the order in which different target objects touch the reference line, and the touch time interval information represents the time interval between successive target objects touching the reference line; together, within the touch interval sequence, they represent the positional relationship of the plurality of target objects touching the reference line. Both are specific numerical values, so the touch interval sequence is a set of numerical values reflecting the positional relationship between the target objects. Corresponding touch interval sequences are obtained according to the detection information from the different sensors, and determining whether the coincidence degree of these value sets meets the preset threshold only requires comparing the corresponding values, without complex operations, which improves the efficiency of matching the target array type information.
In the embodiment of the present application, in addition to determining the target array type information according to the two methods, the target array type information may also be determined according to the timing information and the touch point position information.
3) And determining a third subsequence according to the time sequence information and the touch point position information.
Referring to fig. 11, fig. 11 is a schematic flowchart of an information processing method according to an embodiment of the present disclosure. The method comprises the following steps:
1101. the detection information a is acquired from a sensor a (camera).
1102. The detection information B is acquired from a sensor B (radar).
For the description of step 1101 and step 1102, refer to step 701 and step 702 of the embodiment shown in fig. 7, and are not described herein again.
1103. And acquiring time sequence information A and touch point position information A of the object pixel touching the reference line according to the detection information A.
Since the detection information a is a picture composed of pixels, the touch line information is information of the pixels of the object touching the reference line. The processing device can acquire the timing information a and the touch point position information a of the object pixel touching the reference line according to the detection information a. The touch point position information a indicates the position of the touch point on the reference line. Specifically, the touch point position information a may represent a positional relationship between touch points of different objects, and may specifically represent a left-right relationship between the touch points, so that the left-right relationship between the objects may be represented.
Optionally, in order to represent the left-right positional relationship between objects, the touch point position information A may represent the distance between the touch point and a reference point on the reference line, so that the positional relationship between touch points follows from the distances of the different touch points. The embodiment of the present application takes the distance between the touch point and the left end point of the reference line as an example, but the touch point position information is not limited to this; it may represent the positional relationship between the touch point and any point on the reference line, which is not limited here.
Referring to fig. 12, fig. 12 is a schematic view of an application scenario of the information processing method according to the embodiment of the present application, as shown in fig. 12, a column of serial numbers indicates a front-back sequence of each object touching a reference line, i.e., a time sequence information a; the touch point position information column represents the distance between the touch point of each object touching the reference line and the left end point of the reference line, namely touch point position information A. In the embodiment of the present application, the touch point position information may indicate a position relationship between the touch point and any point on the reference line, and is not limited herein.
1104. And acquiring time sequence information B and touch point position information B of the object touching the reference line according to the detection information B.
The detection information B is a picture of the object detected by the radar in the detection range, and the touch line information is information of the object touching the reference line. The processing device may obtain timing information B and touch point position information B of the object touching the reference line according to the detection information B. For the description of the touch point location information B, refer to the description of the touch point location information a in step 1103, which is not described herein again.
Optionally, as shown in fig. 12, the serial number column indicates the order in which each object touches the reference line, i.e. the time sequence information B; the touch point position information column indicates the distance between the touch point of each object touching the reference line and the left end point of the reference line, i.e. the touch point position information B. In this embodiment, the touch point position information may represent the positional relationship between the touch point and any point on the reference line to represent the positional relationship between different touch points, which is not limited here.
In this embodiment of the present application, step 1101 and step 1102 have no necessary sequence, step 1101 may be executed before or after step 1102, or step 1101 and step 1102 may be executed simultaneously, which is not limited herein. Step 1103 and step 1104 have no necessary sequence, step 1103 may be executed before or after step 1104, or step 1103 and step 1104 may be executed simultaneously, as long as step 1103 is executed after step 1101, and step 1104 is executed after step 1102, which is not limited herein.
1105. And acquiring a touch position sequence A according to the time sequence information A and the touch position information A.
As shown in fig. 12, according to the time sequence information a, the touch position information a can be arranged in time sequence to obtain a touch position sequence a.
1106. And acquiring a touch position sequence B according to the time sequence information B and the touch position information B.
As shown in fig. 12, according to the time sequence information B, the touch position information B may be arranged in sequence according to a time sequence, so as to obtain a touch position sequence B.
In this embodiment of the present application, step 1105 has no necessary sequence with step 1106, step 1105 may be executed before or after step 1106, or step 1105 may be executed simultaneously with step 1106, as long as step 1105 is executed after step 1103, and step 1106 is executed after step 1104, which is not limited herein.
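As a small illustration of steps 1105 and 1106, the following sketch arranges touch point position information in time order; it assumes each detection record is available as a (touch time, distance to the left end point) pair, which is an assumption made for illustration rather than the embodiment's data format.

```python
def build_touch_position_sequence(records):
    """Arrange touch point position information in time order.

    `records` is assumed to be a list of (touch_time, distance_to_left_end)
    tuples taken from one sensor's detection information; the result is a
    touch position sequence such as [7.5, 7.3, 1.5, 7.6, 1.3] (metres).
    """
    ordered = sorted(records, key=lambda r: r[0])   # time sequence information
    return [distance for _, distance in ordered]    # touch point position information
```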
1107. And acquiring a third subsequence according to the touch position sequence A and the touch position sequence B.
The touch position sequence A and the touch position sequence B are essentially two sequences that the processing device can compare. When a sequence segment that is identical or has a high coincidence degree is found in both sequences, that segment can be considered a common part of the two sequences; in the embodiment of the present application, this sequence segment is also referred to as the third subsequence. The touch position sequence reflects the positional relationship between the objects detected by a sensor, i.e. the array information of the objects. When the two sequences include segments that are identical or have a high coincidence degree, the object sets corresponding to those segments have the same positional relationship, i.e. the same array information. When different sensors detect the same or similar array information, the two sensors can be considered to have detected the same object set.
In the embodiment of the present application, the third subsequence is also referred to as target matrix information and represents the same or similar matrix information detected by a plurality of sensors.
Specifically, since the sensors have a certain missed detection rate, the third subsequence is not required to coincide completely with segments of the touch position sequence A and the touch position sequence B; it is only required that the coincidence degree of the third subsequence with the touch position sequence A and with the touch position sequence B is higher than the third threshold. In the embodiments of the present application, the coincidence degree is also referred to as the similarity degree. Specifically, the third threshold may be 90%, but is not limited to 90%; it may also be another value, such as 95% or 99%, which is not limited here.
For example, the touch position sequence A and the touch position sequence B shown in fig. 12 each include the sequence segment (7.5m, 7.3m, 1.5m, 7.6m, 1.3m). The processing device may treat this segment as the third subsequence. In this case, the coincidence degree of the third subsequence with both the touch position sequence A and the touch position sequence B is 100%.
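A minimal sketch of how the coincidence degree against the third threshold might be checked is shown below; it assumes the coincidence degree is measured as the fraction of candidate values that can be matched, in order, within a touch position sequence, which is only one possible definition and is an assumption made for illustration.

```python
def coincidence_degree(candidate, sequence, tol=0.05):
    """Fraction of the candidate subsequence that can be matched, in
    order, inside the given touch position sequence (values in metres,
    compared with a small tolerance)."""
    matched, j = 0, 0
    for value in candidate:
        while j < len(sequence) and abs(sequence[j] - value) > tol:
            j += 1
        if j < len(sequence):
            matched += 1
            j += 1
    return matched / len(candidate) if candidate else 0.0


def is_third_subsequence(candidate, seq_a, seq_b, third_threshold=0.9):
    """The candidate qualifies when its coincidence degree with both
    touch position sequences is higher than the third threshold."""
    return (coincidence_degree(candidate, seq_a) > third_threshold and
            coincidence_degree(candidate, seq_b) > third_threshold)
```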
Alternatively, the third subsequence may be determined by an LCS algorithm. In the embodiment of the present application, all common sequences of a plurality of touch position sequences may be obtained through the LCS algorithm, so as to match the same position characteristics of the plurality of touch position sequences. Since the LCS algorithm calculates the longest common subsequence, the third subsequence calculated by the LCS algorithm may include the longest subsequence among the subsequences whose coincidence degrees with the touch position sequences are all higher than the third threshold.
1108. And fusing the detection information aiming at the same target object from the sensor A and the sensor B according to the position information of the target object in the third subsequence.
The third subsequence is composed of a plurality of pieces of touch point position information, and for each piece of touch point position information in the third subsequence, corresponding data can be found in the touch position sequence A and the touch position sequence B. For example, the touch point position information with sequence number 2 in the touch position sequence A is 7.3m, and the preceding and following touch point position information are 7.5m and 1.5m, respectively.
In the embodiment of the present application, the single touch point position information in the touch position sequence or the third subsequence is also referred to as position information, and represents a position of a single target object in the target object set.
In the embodiment of the present application, the touch point position information of an object itself is referred to as its self feature, and the touch point position information of the objects before, after, or near it is referred to as its peripheral feature. In addition to the preceding and following touch point position information, the peripheral feature may also include more nearby touch point position information, which is not limited here.
The touch point position information with the same self-feature and peripheral feature, i.e. the touch point position information with the serial number of 11, can also be found in the touch position sequence B. Since the two pieces of touch point position information are both in the third subsequence and have the same self characteristics and peripheral characteristics, it can be considered that the two pieces of touch point position information reflect the same object. Therefore, the processing device can fuse the detection information corresponding to the serial number 2 with the detection information corresponding to the serial number 11 to obtain the fused information of the target object.
For example, the camera corresponding to the touch position sequence A can detect appearance information such as the size and shape of the object corresponding to sequence number 2; specifically, the model, color, license plate and other information of the corresponding vehicle can be detected. The radar corresponding to the touch position sequence B can detect information such as the moving speed of the object corresponding to sequence number 11; specifically, the vehicle speed, acceleration and other information of the corresponding vehicle can be detected. The processing device can fuse the model, color and license plate information with the vehicle speed and acceleration information to obtain the fused information of the vehicle.
In the embodiment of the present application, all common sequences of a plurality of touch position sequences may be determined through the LCS algorithm, so as to match all fragments of the touch position sequences that have the same position characteristic. If a plurality of fragments are common sequences and some non-common sequences lie among them, those non-common sequences can be identified. The non-common sequences represent positional relationships that differ between the sensors. In this case, a non-common sequence enclosed by common sequences can be regarded as being caused by false detection or missed detection of a sensor, and can therefore be tolerated; that is, the non-common sequence is still treated as corresponding to the same target object detected by the different sensors, thereby realizing fusion of the detection information.
In this embodiment of the application, the third subsequence determined by the LCS algorithm may include the longest subsequence among the subsequences whose coincidence degrees with the plurality of touch position sequences are all higher than the third threshold. Because the positional relationships between target objects may be accidentally similar, the longer the determined subsequence is, the lower the possibility that another set of objects happens to have a similar positional relationship, and the more such coincidences can be avoided. Determining the longest subsequence through the LCS algorithm therefore allows the target array type information of the same target object set to be determined accurately.
For example, the positional relationship between two target objects may be accidentally similar, but if the criterion is raised to requiring a high coincidence degree between the positional relationships of ten target objects, the probability that ten target objects happen to have a similar positional relationship is much lower than that for two. Therefore, if a subsequence covering ten target objects is determined by the LCS algorithm, it is more likely that the detection results of the different sensors for those ten target objects refer to the same ten target objects, and the probability of matching errors is reduced.
In the embodiment of the present application, the touch point position information represents a left-right relationship between different target objects touching the reference line, and may be a continuous numerical value or data. Therefore, based on the continuous numerical values or data, the array information of the target object can be more accurately distinguished from the array information of other non-target objects, and the fusion of the detection information aiming at the same target object can be more accurately realized.
Furthermore, the movement trend between the objects can be analyzed or calculated through the continuous numerical values or data, and other information, such as the movement track of the object, can be calculated besides the movement trend, which is not limited herein.
In the embodiment of the present application, besides determining the corresponding subsequences separately, the subsequences may also be combined to improve the accuracy of array type matching.
4) And determining intersection according to the first subsequence and the second subsequence.
Referring to fig. 13, fig. 13 is a schematic flowchart of an information processing method according to an embodiment of the present disclosure. The method comprises the following steps:
1301. the detection information a is acquired from a sensor a (camera).
1302. The detection information B is acquired from a sensor B (radar).
For the description of step 1301 and step 1302, refer to step 701 and step 702 of the embodiment shown in fig. 7, which are not described herein again.
1303. And acquiring time sequence information A of the object pixel touching the reference line, touch point partition information A and touch time interval information A according to the detection information A.
Since the detection information a is a picture composed of pixels, the touch line information is information of the pixels of the object touching the reference line. The processing device can acquire the timing information A of the object pixel touching the reference line, the touch point partition information A and the touch time interval information A according to the detection information A.
Referring to fig. 14, fig. 14 is a schematic view of an application scenario of the information processing method according to the embodiment of the present application, as shown in fig. 14, a column of serial numbers indicates a front-back sequence of each object touching a reference line, i.e., a time sequence information a; the column of the touch point partition information indicates partition information of the touch point on the reference line when each object touches the reference line, namely touch point partition information a, wherein 1 indicates 1 lane, and 3 indicates 3 lanes. The touch time interval information indicates the time difference between each object touching the reference line and the previous object touching the reference line, namely touch time interval information a, wherein the touch time interval information is in seconds. The touch time interval information may be in units of milliseconds, other than seconds, and is not limited herein.
1304. And acquiring time sequence information B, touch point partition information B and touch time interval information B of the object touching the reference line according to the detection information B.
The detection information B is a picture of the object detected by the radar in the detection range, and the touch line information is information of the object touching the reference line. The processing device can acquire the timing information B of the object touching the reference line, the touch point partition information B and the touch time interval information B according to the detection information B.
As shown in fig. 14, the column of sequence numbers indicates the sequence of each object touching the reference line, i.e. the sequence information B; the touch point partition information column indicates partition information of a touch point on a reference line when each object touches the reference line, namely touch point partition information B, wherein 1 indicates 1 lane, and 3 indicates 3 lanes. The touch time interval information indicates the time difference between each object touching the reference line and the previous object touching the reference line, namely touch time interval information B, wherein the touch time interval information is in seconds. The touch time interval information may be in units of milliseconds, other than seconds, and is not limited herein.
In the embodiment of the present application, step 1301 and step 1302 do not have a necessary sequence, step 1301 may be executed before or after step 1302, or step 1301 and step 1302 may be executed simultaneously, which is not limited herein. Step 1303 and step 1304 are not necessarily in sequence, step 1303 may be executed before or after step 1304, or step 1303 and step 1304 may be executed simultaneously, as long as step 1303 is executed after step 1301, and step 1304 is executed after step 1302, which is not limited herein.
1305. And acquiring a touch partition sequence A according to the time sequence information A and the touch point partition information A, and acquiring a touch interval sequence A according to the time sequence information A and the touch time interval information A.
The step of the processing device obtaining the touch partition sequence a according to the timing information a and the touch point partition information is shown in step 705 of the embodiment shown in fig. 7, and is not repeated here.
The step of the processing device obtaining the touch interval sequence a according to the timing information a and the touch time interval information a is shown in step 905 of the embodiment shown in fig. 9, and is not repeated here.
1306. And acquiring a touch partition sequence B according to the time sequence information B and the touch point partition information B, and acquiring a touch interval sequence B according to the time sequence information B and the touch time interval information B.
The step of the processing device obtaining the touch partition sequence B according to the timing information B and the touch point partition information is shown in step 706 in the embodiment shown in fig. 7, and is not described here again.
The step of the processing device obtaining the touch interval sequence B according to the timing information B and the touch time interval information B is shown in step 906 of the embodiment shown in fig. 9, and is not repeated here.
In this embodiment of the present application, step 1305 and step 1306 have no necessary precedence order, and step 1305 may be executed before or after step 1306, or step 1305 and step 1306 may be executed simultaneously, as long as step 1305 is executed after step 1303, and step 1306 is executed after step 1304, which is not limited herein.
1307. And acquiring a first subsequence according to the touch partition sequence A and the touch partition sequence B.
The touch partition sequence A and the touch partition sequence B are essentially two sequences that the processing device can compare. When a sequence segment that is identical or has a high coincidence degree is found in both sequences, that segment can be considered a common part of the two sequences; in the present examples, this sequence segment is also referred to as the first subsequence. The touch partition sequence reflects the positional relationship between the objects detected by a sensor, i.e. the matrix information of the objects. When the two sequences include segments that are identical or have a high coincidence degree, the object sets corresponding to those segments have the same positional relationship, i.e. the same matrix information. When different sensors detect the same or similar matrix information, the two sensors can be considered to have detected the same object set.
In the embodiment of the present application, the first subsequence is also referred to as target matrix information and represents the same or similar matrix information detected by a plurality of sensors.
Specifically, since the sensors have a certain missed detection rate, the first subsequence is not required to coincide completely with segments of the touch partition sequence A and the touch partition sequence B; it is only required that the coincidence degree of the first subsequence with the touch partition sequence A and with the touch partition sequence B is higher than the first threshold. In the embodiments of the present application, the coincidence degree is also referred to as the similarity degree. Specifically, the first threshold may be 90%, but is not limited to 90%; it may also be another value, such as 95% or 99%, which is not limited here.
For example, the touch partition sequence a and the touch partition sequence B shown in fig. 8 each include a sequence segment of (3,3,1,3, 1). The processing device may treat the segment as a first subsequence. At this time, the coincidence degree of the first subsequence with the touch partition sequence a and the touch partition sequence B is 100%.
1308. And acquiring a second subsequence according to the touch interval sequence A and the touch interval sequence B.
The touch interval sequence A and the touch interval sequence B are essentially two sequences that the processing device can compare. When a sequence segment that is identical or has a high coincidence degree is found in both sequences, that segment can be considered a common part of the two sequences; in the present examples, this sequence segment is also referred to as the second subsequence. The touch interval sequence reflects the positional relationship between the objects detected by a sensor, i.e. the matrix information of the objects. When the two sequences include segments that are identical or have a high coincidence degree, the object sets corresponding to those segments have the same positional relationship, i.e. the same matrix information. When different sensors detect the same or similar matrix information, the two sensors can be considered to have detected the same object set.
In the embodiment of the present application, the second subsequence is also referred to as target matrix information and represents the same or similar matrix information detected by a plurality of sensors.
Specifically, since the sensors have a certain missed detection rate, the second subsequence is not required to coincide completely with segments of the touch interval sequence A and the touch interval sequence B; it is only required that the coincidence degree of the second subsequence with the touch interval sequence A and with the touch interval sequence B is higher than the second threshold. In the embodiments of the present application, the coincidence degree is also referred to as the similarity degree. Specifically, the second threshold may be 90%, but is not limited to 90%; it may also be another value, such as 95% or 99%, which is not limited here.
For example, the touch interval sequence A and the touch interval sequence B shown in fig. 10 each include the sequence segment (2.0s, 0.3s, 1.9s, 0.4s). The processing device may treat this segment as the second subsequence. In this case, the coincidence degree of the second subsequence with both the touch interval sequence A and the touch interval sequence B is 100%.
1309. And determining the intersection of the first object set corresponding to the first subsequence and the second object set corresponding to the second subsequence.
The objects indicated by the first subsequence (3,3,1,3,1), on the sensor a side, are numbered from 1 to 5, corresponding to the objects numbered from 10 to 14 on the sensor B side. In the embodiment of the present application, the set of objects corresponding to the first sub-sequence is also referred to as a first set of objects.
The objects indicated by the second subsequence (2.0s,0.3s,1.9s,0.4s), on the sensor a side, are numbered from 2 to 5, corresponding to the objects on the sensor B side, numbered from 11 to 14. In the embodiment of the present application, the set of objects corresponding to the second subsequence is also referred to as a second set of objects.
And taking the intersection of the two object sets, namely on the sensor A side, taking the intersection of the objects with the serial numbers 1 to 5 and the objects with the serial numbers 2 to 5, namely determining the set of the target objects with the intersection of the serial numbers 2 to 5. Correspondingly, on the sensor B side, the intersection is the set of objects with serial numbers 11 to 14. In the embodiment of the present application, the intersection of the first object set and the second object set is also referred to as a target object set.
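The intersection described in step 1309 can be expressed directly on the serial numbers, as in the following sketch that reuses the numbers from this example; the dictionary layout is an illustrative assumption.

```python
# Serial numbers covered by each subsequence, per sensor, as in the text.
first_object_set = {"sensor_a": set(range(1, 6)),    # serial numbers 1..5
                    "sensor_b": set(range(10, 15))}  # serial numbers 10..14
second_object_set = {"sensor_a": set(range(2, 6)),   # serial numbers 2..5
                     "sensor_b": set(range(11, 15))} # serial numbers 11..14

# The target object set is the per-sensor intersection of the two sets.
target_object_set = {
    side: first_object_set[side] & second_object_set[side]
    for side in ("sensor_a", "sensor_b")
}
# -> {'sensor_a': {2, 3, 4, 5}, 'sensor_b': {11, 12, 13, 14}}
```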
1310. And fusing the detection information aiming at the same target object from the sensor A and the sensor B according to the position information of the objects in the intersection.
The first subsequence is composed of a plurality of pieces of touch point partition information, and for each piece of touch point partition information in the first subsequence, corresponding data can be found in the touch partition sequence A and the touch partition sequence B. For example, the touch point partition information with sequence number 4 in the touch partition sequence A is 3, and the preceding and following touch point partition information are both 1.
In the embodiment of the present application, the single touch point partition information in the touch partition sequence or the first subsequence is also referred to as position information, and indicates a position of a single target object in the target object set.
In the embodiment of the present application, the touch point partition information of an object itself is referred to as its self feature, and the touch point partition information of the objects before, after, or near it is referred to as its peripheral feature. Besides the preceding and following touch point partition information, the peripheral feature may also include more nearby touch point partition information, which is not limited here.
In the touch partition sequence B, the touch point partition information having the same self feature and peripheral feature, i.e. the touch point partition information with sequence number 13, can also be found. Since the two pieces of touch point partition information are both in the first subsequence and have the same self feature and peripheral feature, it can be considered that they reflect the same target object. Therefore, the processing device can fuse the detection information corresponding to sequence number 4 with the detection information corresponding to sequence number 13 to obtain the fused information of the target object.
For example, the camera corresponding to the touch partition sequence A can detect appearance information such as the size and shape of the object corresponding to sequence number 4; specifically, the model, color, license plate and other information of the corresponding vehicle can be detected. The radar corresponding to the touch partition sequence B can detect information such as the moving speed of the object corresponding to sequence number 13; specifically, the vehicle speed, acceleration and other information of the corresponding vehicle can be detected. The processing device can fuse the model, color and license plate information with the vehicle speed and acceleration information to obtain the fused information of the vehicle.
Similarly, the detection information having the same self-feature and the same peripheral feature in the second subsequence may also be fused, for which reference is specifically made to the aforementioned fusion process according to the first subsequence, and details are not described here again.
For the description of the self feature and the peripheral feature of the second subsequence, refer to step 908 in the embodiment shown in fig. 9, which is not described herein again.
In this embodiment of the present application, an intersection between the first object set and the second object set is determined by using the first object set corresponding to the first subsequence and the second object set corresponding to the second subsequence, and the intersection is used as a target object set. The objects in the intersection correspond to the first subsequence, that is, similar touch partition information can be obtained according to the detection information of different sensors; at the same time, the objects in the intersection correspond to the second subsequence, i.e. have similar touch interval information at the same time, depending on the detection information of the different sensors. If a plurality of similar pieces of information indicating the positional relationship of the objects can be acquired based on the detection information of the plurality of sensors, the possibility that the object set corresponding to the detection information is the same object set is higher than the case where only one piece of similar information indicating the positional relationship of the objects can be acquired. Therefore, the array information of the target object can be more accurately distinguished from the array information of other non-target objects by screening the intersection of the objects corresponding to the multiple sub-sequences, so that the fusion of the detection information of the same target object can be more accurately realized.
In the embodiment of the present application, in addition to taking an intersection of an object corresponding to the first subsequence and an object corresponding to the second subsequence, an intersection between objects corresponding to other subsequences may also be taken, for example, an intersection between objects corresponding to the first subsequence and the third subsequence, or an intersection between objects corresponding to the second subsequence and the third subsequence, or an intersection between an object corresponding to another subsequence and an object corresponding to any one of the first to third subsequences. Other sub-sequences are also used to indicate the position relationship between the objects, such as the distance or direction between the objects, and the like, which is not limited herein. By taking the intersection of objects corresponding to different subsequences, a proper subsequence can be flexibly selected for operation, and feasibility and flexibility of the scheme are improved.
In the embodiment of the present application, in addition to taking the intersection between the respective corresponding objects of the two sub-sequences, the intersection between the respective corresponding objects of more sub-sequences may also be taken, for example, the intersection between the respective corresponding objects of the first sub-sequence, the second sub-sequence, and the third sub-sequence. The larger the number of subsequences taken, the more types of information that can be obtained from the detection information of the plurality of sensors and that indicates the positional relationship of the objects that are similar, the higher the possibility that the set of objects corresponding to the detection information is the same set of objects. Therefore, the array information of the target object can be more accurately distinguished from the array information of other non-target objects by screening the intersection of the objects corresponding to the multiple sub-sequences, so that the fusion of the detection information of the same target object can be more accurately realized.
In the embodiment of the application, the touch line information is acquired through the position feature set. Since the touch line information is information about objects touching the reference line, touching the reference line yields data with specific numerical values or specific position features, such as the touch time, the touch interval and the touch position. Therefore, from the specific values or position features of a plurality of objects touching the line, a set of touch line data can be obtained, such as a number sequence of touch times, a number sequence of touch intervals, or a distribution of touch positions. Because this set of touch line data consists of specific numerical values or position features, it can be operated on directly without further data processing, so the target array type information whose coincidence degree meets the preset threshold can be determined quickly.
In the embodiment of the present application, the matrix information may be determined by other methods, such as an image feature matching method, in addition to the line marking method.
2. And (3) an image feature matching method.
For humans, the matrix information may appear as an overall shape. For a device, this abstract overall shape can be represented by image features. In the embodiment of the present application, the method of determining the matrix information through overall image features is referred to as the image feature matching method.
Based on the detection system shown in fig. 3a, the steps of the information processing method shown in the embodiment of the present application will be described in detail with reference to fig. 15. Referring to fig. 15, fig. 15 is a schematic flowchart of an information processing method according to an embodiment of the present disclosure. The method comprises the following steps:
1501. the detection information a is acquired from a sensor a (camera).
1502. The detection information B is acquired from a sensor B (radar).
Steps 1501 and 1502 refer to steps 701 and 702 of the embodiment shown in FIG. 7, which are not described herein.
1503. And determining an initial target group distribution diagram A according to the detection information A.
Since the detection information a is a picture composed of pixels, the processing device can distinguish different objects according to the pixels in the picture and mark the objects with feature points. And the shape formed by each feature point is used as an initial target group distribution diagram A.
Specifically, the labeling of the feature points may follow a uniform rule, for example, for the labeling of the vehicle, the center point of the vehicle head may be used as the feature point. Besides the center point of the vehicle head, other points may also be used, such as the center point of the license plate, and the like, which are not limited herein.
For example, referring to fig. 16, fig. 16 is a schematic view of an application scenario of the information processing method according to the embodiment of the present application. As shown in fig. 16, the center point of the license plate is labeled and the labeled points are connected to form an initial target group distribution graph a having a shape similar to the number "9".
Optionally, the initial target group distribution map a may be obtained by extracting corresponding shape features through a scale-invariant feature transform (SIFT) algorithm.
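As an illustrative sketch only, the SIFT extraction mentioned above could be performed with OpenCV, assuming the detection information A is available as an image file; the use of OpenCV and the function name are assumptions made for illustration, not part of the embodiment.

```python
import cv2

def extract_distribution_features(image_path):
    """Extract SIFT keypoints and descriptors from a detection picture.

    The keypoints describe the overall shape formed by the labelled
    feature points, i.e. the initial target group distribution map."""
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(image, None)
    return keypoints, descriptors
```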
1504. And determining an initial target group distribution diagram B according to the detection information B.
The detection information B is a picture of the object detected by the radar in the detection range, and the object detected by the radar has the label information in the picture, where the label information represents the corresponding object. The processing device may take the shape in which each piece of label information is formed in the screen as the initial target group distribution map B.
Illustratively, as shown in fig. 16, the positions where the label information is located are connected to form an initial target group distribution diagram B, which also has a shape similar to the number "9".
Optionally, the corresponding shape feature may be extracted through a SIFT algorithm, so as to obtain the initial target group distribution map B.
1505. A target group distribution map A is obtained from the initial target group distribution map A, and a target group distribution map B is obtained from the initial target group distribution map B.
The processing device may acquire the standard view angle map of the initial target group distribution map a through a view angle variation algorithm, and use the standard view angle map of the initial target group distribution map a as the target group distribution map a. Similarly, the processing device may obtain the standard view angle map of the initial target group distribution map B through a view angle variation algorithm, and use the standard view angle map of the initial target group distribution map B as the target group distribution map B.
For example, as shown in fig. 16, the angle of view of the initial target group distribution diagram B is taken as a standard angle of view, and the angle of view of the initial target group distribution diagram a is changed to obtain the target group distribution diagram a with the same angle of view as the target group distribution diagram B.
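One possible realization of such a view angle change, sketched under the assumption that four or more reference point correspondences between the camera view and the standard view are known in advance, is a homography estimated with OpenCV; this is an illustrative assumption rather than the embodiment's view angle variation algorithm.

```python
import numpy as np
import cv2

def to_standard_view(points, src_ref, dst_ref):
    """Map labelled feature points from the camera view angle into the
    standard view angle using a homography estimated from reference
    point correspondences (at least four pairs are needed).

    points, src_ref, dst_ref: iterables of (x, y) coordinates; the
    reference correspondences are assumed to be known in advance."""
    h, _ = cv2.findHomography(np.float32(src_ref), np.float32(dst_ref))
    pts = np.float32(points).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, h).reshape(-1, 2)
```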
1506. And determining an image feature set according to the target group distribution diagram A and the target group distribution diagram B.
The target group distribution map A and the target group distribution map B are two shapes whose image features the processing device can compare. When the two sets of image features are found to include a feature set that is identical or has a high coincidence degree, that feature set can be considered a common part of the two sets of image features; in the embodiments of the present application, this feature set is also referred to as the image feature set. The image features represent the positional relationship between the objects detected by a sensor, i.e. the matrix information of the objects. When the two sets of image features include a feature set that is identical or has a high coincidence degree, the object sets corresponding to that feature set have the same positional relationship, i.e. the same matrix information. When different sensors detect the same or similar matrix information, the two sensors can be considered to have detected the same object set.
In the embodiment of the present application, the image feature set is also referred to as target matrix information, and represents the same or similar matrix information detected by a plurality of sensors.
Specifically, since the sensors have a certain missed detection rate, the image feature set is not required to coincide completely with the features in the target group distribution map A and the target group distribution map B; it is only required that the coincidence degree of the image feature set with the target group distribution map A and with the target group distribution map B is higher than the third threshold. In the embodiments of the present application, the coincidence degree is also referred to as the similarity degree. Specifically, the third threshold may be 90%, but is not limited to 90%; it may also be another value, such as 95% or 99%, which is not limited here.
Optionally, the image feature sets of different target group distribution maps may be matched through a face recognition algorithm or a fingerprint recognition algorithm.
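Besides face or fingerprint recognition algorithms, a simple illustrative possibility is to match the SIFT descriptors of the two target group distribution maps directly; the sketch below assumes OpenCV descriptors and uses Lowe's ratio test, which is an assumption made for illustration.

```python
import cv2

def match_feature_sets(desc_a, desc_b, ratio=0.75):
    """Match SIFT descriptors of two target group distribution maps and
    keep the matches that pass the ratio test; the surviving matches
    form the common image feature set."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(desc_a, desc_b, k=2)
    return [m for m, n in knn if m.distance < ratio * n.distance]
```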
1507. And fusing the detection information aiming at the same target object from the sensor A and the sensor B according to the position information of the target object in the image feature set.
The image feature set is composed of a plurality of pieces of label information or labeled points, and for each piece of label information or labeled point in the image feature set, corresponding data can be found in the target group distribution map A and the target group distribution map B. For example, in fig. 16, the target group distribution map A has a labeled point at the bottom of the shape "9".
In the embodiment of the present application, the single labeling information or labeling point in the target group distribution map or the image feature set is also referred to as position information, and indicates the position of a single target object in the target object set.
The label information at the same position, i.e. the label information at the bottom of the shape "9", can also be found in the target group distribution map B. Since the labeled point and the label information are both in the image feature set and have the same position feature, they can be considered to reflect the same target object. Therefore, the processing device can fuse the detection information corresponding to the labeled point at the bottom of the shape "9" with the detection information corresponding to the label information at the bottom of the shape "9" to obtain the fused information of the target object.
For example, the camera corresponding to the target group distribution map A can detect appearance information such as the size and shape of the object; specifically, the model, color, license plate and other information of the corresponding vehicle can be detected. The radar corresponding to the target group distribution map B can detect information such as the moving speed of the object; specifically, the vehicle speed, acceleration and other information of the corresponding vehicle can be detected. The processing device can fuse the model, color and license plate information with the vehicle speed and acceleration information to obtain the fused information of the vehicle.
In the embodiment of the application, a plurality of corresponding initial target group distribution maps are obtained according to detection information from different sensors, a plurality of corresponding target group distribution maps are obtained through a visual angle variation algorithm, an image feature set of the target group distribution maps is obtained, and the image feature set is used as target array type information. And determining an image feature set with the coincidence degrees of the plurality of target group distribution graphs higher than a preset threshold value through the plurality of target group distribution graphs derived from the plurality of sensors. Because the image characteristics can intuitively reflect the position relation between the displayed objects in the image, the image characteristic set is determined through the plurality of target group distribution maps, the detection results with similar position relation can be intuitively reflected, the detection results of different sensors to the same target group can be intuitively matched, and the fusion of the detection information is accurately realized.
In the embodiment of the application, the image feature matching method and the scribing method can be combined to obtain a more accurate result.
3. The image feature matching method is combined with the line marking method.
Referring to fig. 17, fig. 17 is a schematic flowchart of an information processing method according to an embodiment of the present application. The method comprises the following steps:
1701. the detection information a is acquired from a sensor a (camera).
1702. The detection information B is acquired from a sensor B (radar).
Steps 1701 and 1702 are referred to in steps 701 and 702 of the embodiment shown in FIG. 7 and will not be described herein.
1703. And acquiring touch line information A according to the detection information A.
In the embodiment shown in fig. 6, it has been described that the touch line information includes timing information when the object touches the reference line, touch point partition information, touch point position information, touch time interval information, and the like. The processing device may acquire any of the foregoing touch line information from the detection information, and for example, may acquire the timing information a and the touch point partition information a from the detection information a. For the process of acquiring the timing information a and the touch point partition information a, refer to step 703 in the embodiment shown in fig. 7, which is not described herein again.
In addition to acquiring the timing information a and the touch point partition information a, the processing device may acquire other touch line information, such as the timing information a and the touch time interval information a shown in step 903 in the embodiment shown in fig. 9, or the timing information a and the touch point position information a shown in step 1103 in the embodiment shown in fig. 11, or the timing information a, the touch point partition information a and the touch time interval information a shown in step 1303 in the embodiment shown in fig. 13, which is not limited herein.
1704. And acquiring touch line information B according to the detection information B.
Corresponding to step 1703, the processing device acquires which types of touch line information are obtained according to the detection information a, and accordingly, the processing device acquires touch line information of the same type according to the detection information B, which is referred to the foregoing embodiments shown in fig. 7, 9, 11, or 13, and is not described herein again.
1705. And determining an initial target group distribution diagram A according to the touch line information A.
An object touches the reference line only at one instant, so the touch line information can identify the moment to which the detection information corresponds. The processing device may determine the initial target group distribution map A based on the detection information A at the moment reflected by the touch line information A. The initial target group distribution map A obtained here reflects the array information at the moment of the touch line information A. The process of obtaining the initial target group distribution map A is referred to step 1503 of the embodiment shown in fig. 15, and will not be described herein again.
1706. And determining an initial target group distribution diagram B according to the touch line information B.
Since the touch line information mainly reflects the array information of an object set, the processing device may determine the touch line information B that has the same array information as the touch line information A; such touch line information B can be considered to be the same as the touch line information A.
The processing device may determine the initial target group distribution map B based on the detection information B at the moment reflected by the touch line information B. The initial target group distribution map B obtained here reflects the array information at the moment of the touch line information B. The process of obtaining the initial target group distribution map B refers to step 1504 in the embodiment shown in fig. 15, which is not described herein again.
1707. A target group distribution map A is obtained from the initial target group distribution map A, and a target group distribution map B is obtained from the initial target group distribution map B.
1708. And determining an image feature set according to the target group distribution diagram A and the target group distribution diagram B.
1709. And fusing the detection information aiming at the same target object from the sensor A and the sensor B according to the position information of the target object in the image feature set.
Steps 1707 to 1709 refer to steps 1505 to 1507 of the embodiment shown in FIG. 15, which are not described herein again.
In the embodiment of the application, images captured at nearby moments are highly similar. If the same moment is not determined, matching initial target group distribution maps from different sensors introduces interference from initial target group distribution maps of nearby moments, which causes distribution map matching errors and therefore image feature set acquisition errors, so that detection information from different moments is fused and the fusion is wrong. Specifically, the plurality of initial target group distribution maps are determined through the touch line information; having the same touch line information indicates that the plurality of initial target group distribution maps are acquired at the same moment, so the fused detection information is acquired at the same moment and the accuracy of detection information fusion is improved.
Second, applications of the information processing method in the embodiment of the application.
The method provided by the embodiment of the application can be used for acquiring the fusion information and can also have other purposes, such as the functions of realizing the mapping of space coordinate systems of different sensors, realizing the mapping of time axes of different sensors, correcting or screening the sensors and the like.
1. Mapping of spatial coordinate systems of different sensors is achieved.
Specifically, the plurality of sensors may include a first sensor and a second sensor, where a spatial coordinate system corresponding to the first sensor is a standard coordinate system, and a spatial coordinate system corresponding to the second sensor is a target coordinate system. In order to implement mapping of spatial coordinate systems of different sensors, after the embodiments shown in fig. 7 to 17, the method may further include:
and the processing equipment determines the mapping relation between the plurality of standard point information and the plurality of target point information according to the fusion detection information, wherein the fusion detection information is obtained by fusing the detection information corresponding to the same target object in the plurality of array type information. In the embodiments of the present application, this is also referred to as fusion information. The standard point information represents the position information of each object in the target object set in a standard coordinate system, the target point information represents the position information of each object in the target object set in the target coordinate system, and the plurality of standard point information and the plurality of target point information are in one-to-one correspondence.
After the mapping relationship is determined, the processing device may determine the mapping relationship between the standard coordinate system and the target coordinate system according to the mapping relationship between the standard point information and the target point information.
In this embodiment of the application, the mapping relationship between the plurality of pieces of standard point information and the plurality of pieces of target point information is determined from the fused detection information, and the mapping relationship between the standard coordinate system and the target coordinate system is then determined from that point-level mapping. As long as detection information from different sensors can be acquired, the method of this embodiment can map the coordinate systems of different sensors to one another. Subsequent steps such as determining the target array information and mapping the point information can be performed automatically by the processing device, without manual calibration and mapping. Because the target array information is matched by the processing device, the accuracy of the point mapping benefits from the accuracy of machine computation. Moreover, as long as detection information from different sensors can be obtained, the detection information can be fused and the coordinate systems can be mapped, which avoids the scene limitations of manual calibration and ensures the accuracy and universality of detection information fusion.
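A minimal sketch of the point-to-point mapping follows, assuming two-dimensional point correspondences and an affine model between the target coordinate system and the standard coordinate system; the embodiments of this application do not mandate this particular model, and a homography or rigid transform could be fitted in the same way.

    import numpy as np

    def estimate_affine_mapping(standard_pts, target_pts):
        """Least-squares estimate of a 2-D affine transform that maps points given
        in the target coordinate system onto the corresponding points in the
        standard coordinate system.

        standard_pts, target_pts: (N, 2) arrays of one-to-one corresponding points
        (the standard point information and the target point information).
        Returns a 2x3 matrix M such that standard ~= M @ [x, y, 1].
        """
        standard_pts = np.asarray(standard_pts, dtype=float)
        target_pts = np.asarray(target_pts, dtype=float)
        ones = np.ones((target_pts.shape[0], 1))
        A = np.hstack([target_pts, ones])                  # (N, 3) design matrix
        M_T, _, _, _ = np.linalg.lstsq(A, standard_pts, rcond=None)
        return M_T.T                                       # (2, 3)

    # Example: map a new detection from the target coordinate system into the
    # standard coordinate system.
    # M = estimate_affine_mapping(standard_pts, target_pts)
    # mapped = M @ np.array([x, y, 1.0])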
2. Mapping of the time axes of the different sensors is achieved.
The processing device calculates a time difference between the time axes of the plurality of sensors based on the fusion result of the detection information corresponding to the same target object in the plurality of pieces of array information. Based on this time difference, the time axes of different sensors can be mapped to one another.
In this embodiment of the application, the time difference between the time axes of the plurality of sensors is calculated from the fusion result of the detection information of the same target object, and the time axes of different sensors can then be aligned according to the time difference. The time axis alignment provided in this embodiment can be performed as long as the detection information of different sensors can be acquired, without requiring the sensors to belong to the same time synchronization system; this expands the application scenarios of time axis alignment across sensors and, at the same time, the application range of information fusion.
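For example, once the fusion step has matched the same target objects across two sensors, the offset between the two time axes can be estimated from the matched touch timestamps; the following sketch assumes such matched timestamp pairs and is illustrative only.

    import numpy as np

    def estimate_time_offset(touch_times_a, touch_times_b):
        """Estimate the offset between two sensors' time axes.

        touch_times_a[i] and touch_times_b[i] are the times at which the same
        (already matched) target object touches the reference line, expressed
        on each sensor's own time axis. Returns the offset to add to sensor B's
        timestamps to align them with sensor A's time axis.
        """
        diffs = np.asarray(touch_times_a, dtype=float) - np.asarray(touch_times_b, dtype=float)
        # The median is robust to a few noisy or mismatched pairs.
        return float(np.median(diffs))

    # offset = estimate_time_offset(times_a, times_b)
    # aligned_times_b = np.asarray(times_b) + offset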
3. Error correction or screening of sensors.
Specifically, the plurality of sensors may include a standard sensor and a sensor to be measured, and the method may further include:
the processing equipment acquires standard array information corresponding to the target array information in the standard sensor; the processing equipment acquires the array information to be detected corresponding to the target array information in the sensor to be detected; the processing equipment determines the difference between the information of the array type to be detected and the information of the standard array type; and the processing equipment acquires an error parameter according to the difference and the standard array type information, wherein the error parameter is used for indicating the error of the array type information to be detected or indicating the performance parameter of the sensor to be detected.
Referring to FIG. 18, FIG. 18 is a schematic diagram of an application scenario of the information processing method according to an embodiment of this application. As shown in FIG. 18, if sensor B falsely detects one piece of data, such as the v6a6 data in the figure, it can be determined, from the difference between touch partition sequence A and touch partition sequence B, that the data at serial number 15 is a false detection by sensor B.
In this way, the false detection information of the sensor can be acquired, and the false detection rate of the sensor can then be calculated to evaluate the performance of the sensor.
Referring to FIG. 19, FIG. 19 is a schematic diagram of an application scenario of the information processing method according to an embodiment of this application. As shown in FIG. 19, if sensor B misses one piece of data, for example, the target object in lane 3 corresponding to serial number 2 in the figure, it can be determined, from the difference between touch partition sequence A and touch partition sequence B, that a target object is missed between serial numbers 10 and 11.
In this way, the missed detection information of the sensor can be acquired, and the missed detection rate of the sensor can then be calculated to evaluate the performance of the sensor.
In this embodiment of the application, the standard sensor is used as the detection reference, and the error parameter is obtained from the difference between the array information to be detected and the standard array information. When the error parameter indicates an error in the array information to be detected, the information corresponding to the error parameter in the array information to be detected can be corrected by using the error parameter and the standard array information. When the error parameter indicates a performance parameter of the sensor under test, performance parameters such as the false detection rate of the sensor under test can be determined, which enables data analysis of the sensor under test and, in turn, sensor selection.
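As an illustrative sketch (assuming the touch partition sequences can be represented as plain Python sequences of hashable touch descriptors), the difference between the standard array information and the array information to be detected can be located with a sequence alignment, from which false detections and missed detections such as those in FIG. 18 and FIG. 19 can be read off.

    from difflib import SequenceMatcher

    def compare_with_standard(standard_seq, measured_seq):
        """Compare the sensor-under-test touch partition sequence with the
        standard sensor's sequence and report false and missed detections."""
        matcher = SequenceMatcher(None, standard_seq, measured_seq, autojunk=False)
        false_detections, missed_detections = [], []
        for tag, i1, i2, j1, j2 in matcher.get_opcodes():
            if tag in ("insert", "replace"):
                # Present in the sequence under test but not in the standard one.
                false_detections.extend(measured_seq[j1:j2])
            if tag in ("delete", "replace"):
                # Present in the standard sequence but missed by the sensor under test.
                missed_detections.extend(standard_seq[i1:i2])
        # One simple definition of the false detection rate.
        false_detection_rate = len(false_detections) / max(len(measured_seq), 1)
        return false_detections, missed_detections, false_detection_rate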
Thirdly, the processing device corresponding to the information processing method in the embodiments of this application.
The following describes a processing apparatus in an embodiment of the present application. Referring to fig. 20, fig. 20 is a schematic structural diagram of a processing apparatus according to an embodiment of the present disclosure. The processing device 2000 is located in a detection system, the detection system further includes at least two sensors, wherein the detection information acquired by the at least two sensors includes detection information of at least two same objects respectively detected by the at least two sensors, and the processing device 2000 may include a processor 2001 and a transceiver 2002.
The transceiver 2002 is configured to obtain at least two pieces of detection information from at least two sensors, where the at least two sensors are in one-to-one correspondence with the at least two pieces of detection information.
Wherein the processor 2001 is configured to: determining at least two pieces of corresponding array information according to the at least two pieces of detection information, wherein each piece of array information is used for describing the position relationship between the objects detected by the corresponding sensor, and the objects comprise the target objects; determining target array information according to the at least two pieces of array information, wherein the coincidence degree of the target array information and each of the at least two pieces of array information is higher than a preset threshold value, the target array information is used for describing the position relationship between the at least two target objects, and the target array information comprises the array information of each target object; and fusing, according to the array position information of any one of the target objects, the detection information corresponding to the same target object in the at least two pieces of array information.
In an alternative embodiment, the detection information includes a position feature set, and the position feature set includes at least two position features, and the position features represent the position relationship between the object detected by the corresponding sensor and the objects around the object.
In an alternative embodiment, the processor 2001 is specifically configured to: acquiring at least two corresponding touch line information according to the at least two position feature sets, wherein each touch line information in the at least two touch line information is used for describing information of an object touch reference line detected by a corresponding sensor, and the at least two touch line information and the at least two position feature sets are in one-to-one correspondence; and respectively determining at least two corresponding array information according to the at least two touch wire information, wherein the at least two touch wire information correspond to the at least two array information one to one.
In an optional implementation manner, the touch line information includes timing information and touch point partition information of an object touching the reference line, which are detected by the corresponding sensor, and the touch point partition information indicates partition information, in the reference line, of a touch point at which the object touches the reference line; the array type information includes a touch partition sequence, and the touch partition sequence represents the front-back time sequence relationship of the partition positions at which the objects detected by the corresponding sensor touch the reference line.
The processor 2001 is specifically configured to: acquiring a first subsequence of the at least two touch partition sequences, and taking the first subsequence as the target array type information, wherein the coincidence degree of the first subsequence and the at least two touch partition sequences is higher than a first threshold value; and fusing detection information corresponding to the same target object in the at least two touch partition sequences according to the touch point partition information corresponding to each target object in the first subsequence.
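One way to realize the acquisition of such a first subsequence in the two-sensor case (a non-authoritative sketch; the coincidence degree is assumed here to be the ratio of matched elements) is a longest-common-subsequence computation over the two touch partition sequences.

    def longest_common_subsequence(seq_a, seq_b):
        """Classic dynamic-programming LCS over two touch partition sequences,
        whose elements are e.g. partition identifiers of touch points."""
        n, m = len(seq_a), len(seq_b)
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(n):
            for j in range(m):
                if seq_a[i] == seq_b[j]:
                    dp[i + 1][j + 1] = dp[i][j] + 1
                else:
                    dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
        lcs, i, j = [], n, m
        while i > 0 and j > 0:  # backtrack to recover one LCS
            if seq_a[i - 1] == seq_b[j - 1]:
                lcs.append(seq_a[i - 1]); i -= 1; j -= 1
            elif dp[i - 1][j] >= dp[i][j - 1]:
                i -= 1
            else:
                j -= 1
        return lcs[::-1]

    def first_subsequence(seq_a, seq_b, first_threshold=0.8):
        """Return the candidate first subsequence if its coincidence degree with
        both touch partition sequences exceeds the (assumed ratio-style) threshold."""
        if not seq_a or not seq_b:
            return None
        lcs = longest_common_subsequence(seq_a, seq_b)
        coincidence = min(len(lcs) / len(seq_a), len(lcs) / len(seq_b))
        return lcs if coincidence > first_threshold else None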
In an alternative embodiment, the touch line information includes timing information and touch time interval information corresponding to the object detected by the sensor touching the reference line, and the touch time interval information indicates a time interval before and after the object touches the reference line; the array information includes a touch interval sequence representing a distribution of time intervals at which the object detected by the corresponding sensor touches the reference line.
The processor 2001 is specifically configured to: acquiring a second subsequence of the at least two touch interval sequences, and taking the second subsequence as the target array type information, wherein the coincidence degree of the second subsequence and the at least two touch interval sequences is higher than a second threshold value; and fusing detection information corresponding to the same target object in the at least two touch interval sequences according to the touch time distribution information corresponding to each target object in the second subsequence.
In an alternative embodiment, the touch line information includes the timing information of the object touching the reference line detected by the corresponding sensor, the touch point partition information, and the touch time interval information, the touch point partition information indicates partition information, in the reference line, of a touch point at which the object touches the reference line, and the touch time interval information indicates the time intervals before and after the object touches the reference line; the array information includes the touch partition sequence and the touch interval sequence, the touch partition sequence represents the front-back time sequence relationship of the partition positions at which the objects detected by the corresponding sensor touch the reference line, and the touch interval sequence represents the distribution of the time intervals at which the objects detected by the corresponding sensor touch the reference line.
The processor 2001 is specifically configured to: acquiring a first subsequence of the at least two touch partition sequences, wherein the coincidence degree of the first subsequence and the at least two touch partition sequences is higher than a first threshold value; acquiring a second subsequence of the at least two touch interval sequences, wherein the coincidence degree of the second subsequence and the at least two touch interval sequences is higher than a second threshold value; determining an intersection of a first object set and a second object set, and taking the intersection as a target object set, wherein the first object set is a set of objects corresponding to the first subsequence, and the second object set is a set of objects corresponding to the second subsequence; and taking the touch partition sequence and the touch interval sequence of the target object set as the target array type information.
In an alternative embodiment, the formation information includes a target cluster map, which represents the positional relationship between the objects.
The processor 2001 is specifically configured to: acquiring at least two corresponding initial target group distribution maps according to the at least two position feature sets, wherein the initial target group distribution maps represent the position relationship between the objects detected by the corresponding sensors; acquiring standard view angle maps of the at least two initial target group distribution maps through a view angle change algorithm, and taking the at least two standard view angle maps as the corresponding at least two target group distribution maps, wherein the position information of the target group distribution maps includes target object distribution information of a target object, and the target object distribution information represents the position of the target object among the objects detected by the corresponding sensor; acquiring an image feature set of the at least two target group distribution maps, and taking the image feature set as the target array type information, wherein the coincidence degrees of the image feature set and the at least two target group distribution maps are higher than a third threshold; and fusing detection information corresponding to the same target object in the at least two target group distribution maps according to the target object distribution information corresponding to each target object in the image feature set.
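The view angle change algorithm is not limited in this application; as one hedged example, a planar homography warp (here shown with OpenCV) can bring each initial target group distribution map into a common standard view, assuming that four reference-point correspondences between the sensor view and the standard view are available (an assumption made purely for illustration).

    import cv2
    import numpy as np

    def to_standard_view(distribution_map, src_pts, dst_pts, out_size):
        """Warp an initial target group distribution map into the standard view
        using a planar homography.

        src_pts: four reference points in the sensor's own view (pixel coordinates).
        dst_pts: the same four points in the standard view.
        out_size: (width, height) of the standard-view map.
        """
        H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
        return cv2.warpPerspective(distribution_map, H, out_size)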
In an alternative embodiment, the processor 2001 is further configured to: and acquiring at least two touch line information of the corresponding target object in the image feature set according to the at least two position feature sets, wherein each touch line information in the at least two touch line information is used for describing information of an object touch reference line detected by the corresponding sensor, and the at least two touch line information are in one-to-one correspondence with the at least two position feature sets.
The processor 2001 is specifically configured to obtain at least two corresponding initial target group distribution maps according to the at least two touch line information, where objects in the at least two initial target group distribution maps have the same touch line information.
In an alternative embodiment, the at least two sensors include a first sensor and a second sensor, the first sensor corresponds to a standard coordinate system, and the second sensor corresponds to a target coordinate system.
The processor 2001 is also configured to: determining a mapping relation between at least two pieces of standard point information and at least two pieces of target point information according to fusion detection information obtained by fusing detection information corresponding to the same target object in the at least two pieces of array type information, wherein the standard point information represents position information of each object in a target object set in a standard coordinate system, and the target point information represents position information of each object in a target coordinate system, wherein the at least two pieces of standard point information correspond to the at least two pieces of target point information one to one; and determining the mapping relation between the standard coordinate system and the target coordinate system according to the mapping relation between the standard point information and the target point information.
In an alternative embodiment, the processor 2001 is further configured to calculate a time difference between time axes of the at least two sensors according to a fusion result of the detection information corresponding to the same target object in the at least two pieces of array information.
In an alternative embodiment, the at least two sensors include a standard sensor and a sensor under test.
The processor 2001 is also configured to: acquiring standard array information corresponding to the target array information in the standard sensor; acquiring the array information to be detected corresponding to the target array information in the sensor to be detected; determining the difference between the information of the array type to be detected and the information of the standard array type; and acquiring error parameters according to the difference and the standard array type information, wherein the error parameters are used for indicating the error of the array type information to be detected or indicating the performance parameters of the sensor to be detected.
The processing device 2000 may perform the operations performed by the processing device in the embodiments shown in fig. 4 to fig. 17, which are not described herein again.
Referring to fig. 21, fig. 21 is a schematic structural diagram of a processing apparatus according to an embodiment of the present disclosure. The processing device 2100 may include one or more Central Processing Units (CPUs) 2101 and memory 2105. The memory 2105 stores one or more application programs and data.
The memory 2105 may be a volatile memory or a persistent memory. The program stored in the memory 2105 may include one or more modules, and each module may include a series of instruction operations on the processing device. Further, the central processing unit 2101 may be configured to communicate with the memory 2105 and execute, on the processing device 2100, the series of instruction operations stored in the memory 2105.
The processing device 2100 may also include one or more power supplies 2102, one or more wired or wireless network interfaces 2103, one or more transceiver interfaces 2104, and/or one or more operating systems, such as Windows Server, Mac OS X, Unix, Linux, or FreeBSD.
The processing device 2100 may perform the operations performed by the processing device in the embodiments shown in fig. 4 to fig. 17, which are not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Claims (25)

1. An information processing method is applied to a processing device in a detection system, the detection system further comprises at least two sensors, wherein detection information acquired by the at least two sensors comprises detection information of at least two same targets respectively acquired by the at least two sensors, and the method comprises the following steps:
the processing equipment acquires at least two pieces of detection information from the at least two sensors, wherein the at least two sensors correspond to the at least two pieces of detection information one to one;
the processing equipment determines at least two corresponding array information according to the at least two pieces of detection information, wherein each array information is used for describing a position relation between objects detected by corresponding sensors, and the objects comprise the target objects;
the processing equipment determines target array information according to the at least two pieces of array information, the coincidence degree of the target array information and each piece of array information in the at least two pieces of array information is higher than a preset threshold value, the target array information is used for describing the position relationship between the at least two target objects, and the target array information comprises the array information of each target object;
and the processing equipment fuses detection information corresponding to the same target object in the at least two pieces of array type information according to the array position information of any target object in each target object.
2. The method of claim 1, wherein the detection information comprises a set of location features, and the set of location features comprises at least two location features representing a location relationship between the object detected by the corresponding sensor and objects surrounding the object.
3. The method of claim 2, wherein the processing device determines at least two corresponding formation information according to the at least two detection information, comprising:
the processing equipment acquires at least two corresponding touch line information according to at least two position feature sets, wherein each touch line information in the at least two touch line information is used for describing information of an object touch reference line detected by a corresponding sensor, and the at least two touch line information are in one-to-one correspondence with the at least two position feature sets;
and the processing equipment respectively determines the corresponding at least two pieces of array information according to the at least two pieces of touch line information, and the at least two pieces of touch line information correspond to the at least two pieces of array information one to one.
4. The method of claim 3,
the touch line information comprises time sequence information and touch point partition information of an object touching the reference line, which are detected by a corresponding sensor, and the touch point partition information represents partition information of a touch point of the object touching the reference line in the reference line;
the array type information comprises a touch partition sequence which represents the front-back time sequence relation of the partition position of the object detected by the corresponding sensor touching the reference line;
the processing equipment determines target array information according to the at least two array information, and the method comprises the following steps:
the processing equipment acquires a first subsequence of the at least two touch partition sequences, and takes the first subsequence as the target array type information, wherein the coincidence degree of the first subsequence and the at least two touch partition sequences is higher than a first threshold;
the processing equipment fuses the detection information corresponding to the same target object in the at least two pieces of array type information according to the array position information of each target object, and the method comprises the following steps:
and the processing equipment fuses detection information corresponding to the same target object in the at least two touch partition sequences according to the touch point partition information corresponding to each target object in the first subsequence.
5. The method of claim 3,
the touch line information comprises time sequence information and touch time interval information of an object touching the reference line, which are detected by a corresponding sensor, and the touch time interval information represents the time intervals before and after the object touches the reference line;
the array information comprises a touch interval sequence, and the touch interval sequence represents the distribution of time intervals at which the objects detected by the corresponding sensor touch the reference line;
the processing equipment determines target array information according to the at least two array information, and the method comprises the following steps:
the processing equipment acquires a second subsequence of the at least two touch interval sequences, and takes the second subsequence as the target array type information, wherein the coincidence degree of the second subsequence and the at least two touch interval sequences is higher than a second threshold;
the processing equipment fuses the detection information corresponding to the same target object in the at least two pieces of array type information according to the array position information of each target object, and the method comprises the following steps:
and the processing equipment fuses the detection information corresponding to the same target object in the at least two touch interval sequences according to the touch time distribution information corresponding to each target object in the second subsequence.
6. The method of claim 3,
the touch line information includes the time sequence information of the object touching the reference line detected by the corresponding sensor, the touch point partition information and the touch time interval information, the touch point partition information represents partition information, in the reference line, of a touch point at which the object touches the reference line, and the touch time interval information represents the time intervals before and after the object touches the reference line;
the array information comprises the touch partition sequence and the touch interval sequence, the touch partition sequence represents the front-back time sequence relation of the partition position of the object detected by the corresponding sensor touching the reference line, and the touch interval sequence represents the distribution of the time intervals at which the objects detected by the corresponding sensor touch the reference line;
the processing equipment determines target array information according to the at least two array information, and the method comprises the following steps:
the processing device obtains the first subsequence of the at least two touch partition sequences, and the coincidence degrees of the first subsequence and the at least two touch partition sequences are higher than the first threshold value;
the processing device acquires a second subsequence of the at least two touch interval sequences, wherein the coincidence degree of the second subsequence and the at least two touch interval sequences is higher than the second threshold value;
the processing device determines an intersection of a first object set and a second object set, and takes the intersection as a target object set, where the first object set is a set of objects corresponding to the first subsequence, and the second object set is a set of objects corresponding to the second subsequence;
and the processing equipment takes the touch partition sequence and the touch interval sequence of the target object set as the target array type information.
7. The method of claim 2, wherein the formation information includes a target group profile, the target group profile representing a positional relationship between objects;
the processing device determines at least two corresponding array information according to the at least two pieces of detection information, and the method comprises the following steps:
the processing equipment acquires at least two corresponding initial target group distribution maps according to the at least two position feature sets, wherein the initial target group distribution maps represent the position relation between the objects detected by the corresponding sensors;
the processing equipment acquires standard view angle maps of the at least two initial target group distribution maps through a view angle change algorithm, and takes the at least two standard view angle maps as corresponding at least two target group distribution maps, wherein the position information of the target group distribution maps comprises target object distribution information of a target object, and the target object distribution information represents the position of the target object in the object detected by the corresponding sensor;
the processing equipment determines target array information according to the at least two array information, and the method comprises the following steps:
the processing equipment acquires an image feature set of the at least two target group distribution maps, and takes the image feature set as the target array type information, wherein the coincidence degrees of the image feature set and the at least two target group distribution maps are higher than a third threshold value;
the processing equipment fuses the detection information corresponding to the same target object in the at least two pieces of array type information according to the array position information of each target object, and the method comprises the following steps:
and the processing equipment fuses the detection information corresponding to the same target object in the at least two target group distribution graphs according to the target object distribution information corresponding to each target object in the image feature set.
8. The method of claim 7, further comprising:
the processing equipment acquires at least two touch line information of a target object corresponding to at least two position feature sets according to the at least two position feature sets, wherein each touch line information in the at least two touch line information is used for describing information of an object touch reference line detected by a corresponding sensor, and the at least two touch line information are in one-to-one correspondence with the at least two position feature sets;
the processing device acquires at least two corresponding initial target group distribution maps according to at least two position feature sets, and the method comprises the following steps:
and the processing equipment acquires at least two corresponding initial target group distribution graphs according to the at least two pieces of contact line information, wherein objects in the at least two initial target group distribution graphs have the same contact line information.
9. The method of any one of claims 1 to 8, wherein the at least two sensors comprise a first sensor and a second sensor, the first sensor corresponding to a spatial coordinate system that is a standard coordinate system and the second sensor corresponding to a spatial coordinate system that is a target coordinate system, the method further comprising:
the processing equipment determines a mapping relation between at least two pieces of standard point information and at least two pieces of target point information according to fusion detection information obtained by fusing detection information corresponding to the same target object in the at least two pieces of array information, wherein the standard point information represents position information of each object in the target object set in the standard coordinate system, and the target point information represents position information of each object in the target object set in the target coordinate system, wherein the at least two pieces of standard point information correspond to the at least two pieces of target point information one to one;
and the processing equipment determines the mapping relation between the standard coordinate system and the target coordinate system according to the mapping relation between the standard point information and the target point information.
10. The method according to any one of claims 1 to 9, further comprising:
and the processing equipment calculates the time difference between the time axes of the at least two sensors according to the fusion result of the detection information corresponding to the same target object in the at least two pieces of array information.
11. The method of any one of claims 1 to 10, wherein the at least two sensors include a standard sensor and a sensor under test, the method further comprising:
the processing equipment acquires standard array information corresponding to the target array information in the standard sensor;
the processing equipment acquires the array information to be detected corresponding to the target array information in the sensor to be detected;
the processing equipment determines the difference between the information of the array type to be detected and the information of the standard array type;
and the processing equipment acquires an error parameter according to the difference and the standard array type information, wherein the error parameter is used for indicating the error of the array type information to be detected or indicating the performance parameter of the sensor to be detected.
12. A processing apparatus, wherein the processing apparatus is located in a detection system, the detection system further includes at least two sensors, detection information obtained by the at least two sensors includes detection information of at least two same objects respectively detected by the at least two sensors, and the processing apparatus includes: a processor and a transceiver;
the transceiver is configured to acquire at least two pieces of detection information from the at least two sensors, where the at least two sensors are in one-to-one correspondence with the at least two pieces of detection information;
the processor is configured to:
determining at least two pieces of corresponding array information according to the at least two pieces of detection information, wherein each piece of array information is used for describing a position relation between objects detected by a corresponding sensor, and the objects comprise the target object;
determining target array information according to the at least two pieces of array information, wherein the coincidence degree of the target array information and each piece of array information in the at least two pieces of array information is higher than a preset threshold value, the target array information is used for describing the position relationship between the at least two target objects, and the target array information comprises the array information of each target object;
and according to the array position information of any one target object in each target object, fusing the detection information corresponding to the same target object in the at least two pieces of array information.
13. The processing device according to claim 12, wherein the detection information includes a set of location features including at least two location features representing a positional relationship between the object detected by the corresponding sensor and objects surrounding the object.
14. The processing device of claim 13, wherein the processor is specifically configured to:
acquiring at least two corresponding touch line information according to at least two position feature sets, wherein each touch line information in the at least two touch line information is used for describing information of an object touch reference line detected by a corresponding sensor, and the at least two touch line information are in one-to-one correspondence with the at least two position feature sets;
and respectively determining the corresponding at least two pieces of array information according to the at least two pieces of touch line information, wherein the at least two pieces of touch line information correspond to the at least two pieces of array information one to one.
15. The processing apparatus according to claim 14,
the touch line information comprises time sequence information and touch point partition information of an object touching the reference line, which are detected by a corresponding sensor, and the touch point partition information represents partition information of a touch point of the object touching the reference line in the reference line;
the array type information comprises a touch partition sequence which represents the front-back time sequence relation of the partition position of the object detected by the corresponding sensor touching the reference line;
the processor is specifically configured to:
acquiring a first subsequence of the at least two touch partition sequences, and taking the first subsequence as the target array type information, wherein the coincidence degree of the first subsequence and the at least two touch partition sequences is higher than a first threshold value;
and fusing detection information corresponding to the same target object in the at least two touch partition sequences according to the touch point partition information corresponding to each target object in the first subsequence.
16. The processing apparatus according to claim 14,
the touch line information comprises time sequence information and touch time interval information of an object touching the reference line, which are detected by a corresponding sensor, and the touch time interval information represents the time intervals before and after the object touches the reference line;
the array information comprises a touch interval sequence, and the touch interval sequence represents the distribution of time intervals at which the objects detected by the corresponding sensor touch the reference line;
the processor is specifically configured to:
acquiring a second subsequence of the at least two touch interval sequences, and taking the second subsequence as the target array type information, wherein the coincidence degree of the second subsequence and the at least two touch interval sequences is higher than a second threshold value;
and fusing detection information corresponding to the same target object in the at least two touch interval sequences according to the touch time distribution information corresponding to each target object in the second subsequence.
17. The processing apparatus according to claim 14,
the touch line information includes the time sequence information of the object touching the reference line detected by the corresponding sensor, the touch point partition information and the touch time interval information, the touch point partition information represents partition information, in the reference line, of a touch point at which the object touches the reference line, and the touch time interval information represents the time intervals before and after the object touches the reference line;
the array information comprises the touch partition sequence and the touch interval sequence, the touch partition sequence represents the front-back time sequence relation of the partition position of the object detected by the corresponding sensor touching the reference line, and the touch interval sequence represents the distribution of the time intervals at which the objects detected by the corresponding sensor touch the reference line;
the processor is specifically configured to:
acquiring the first subsequence of the at least two touch partition sequences, wherein the coincidence degree of the first subsequence and the at least two touch partition sequences is higher than the first threshold;
acquiring a second subsequence of the at least two touch interval sequences, wherein the coincidence degree of the second subsequence and the at least two touch interval sequences is higher than the second threshold value;
determining an intersection of a first object set and a second object set, and taking the intersection as a target object set, wherein the first object set is a set of objects corresponding to the first subsequence, and the second object set is a set of objects corresponding to the second subsequence;
and taking the touch partition sequence and the touch interval sequence of the target object set as the target array type information.
18. The processing apparatus according to claim 13, wherein the formation information includes a target group profile representing a positional relationship between objects;
the processor is specifically configured to:
acquiring at least two corresponding initial target group distribution maps according to the at least two position feature sets, wherein the initial target group distribution maps represent the position relation between the objects detected by the corresponding sensors;
acquiring standard view angle maps of the at least two initial target group distribution maps through a view angle change algorithm, and taking the at least two standard view angle maps as corresponding at least two target group distribution maps, wherein the position information of the target group distribution maps comprises target object distribution information of a target object, and the target object distribution information represents the position of the target object in an object detected by a corresponding sensor;
the processor is specifically configured to:
acquiring an image feature set of the at least two target group distribution maps, and taking the image feature set as the target array type information, wherein the coincidence degrees of the image feature set and the at least two target group distribution maps are higher than a third threshold;
the processor is specifically configured to:
and fusing detection information corresponding to the same target object in the at least two target group distribution graphs according to the target object distribution information corresponding to each target object in the image feature set.
19. The processing device of claim 18, wherein the processor is further configured to:
acquiring at least two touch line information of a corresponding target object in the image feature set according to at least two position feature sets, wherein each touch line information in the at least two touch line information is used for describing information of an object touch reference line detected by a corresponding sensor, and the at least two touch line information and the at least two position feature sets are in one-to-one correspondence;
the processor is specifically configured to obtain at least two corresponding initial target group distribution maps according to the at least two pieces of antenna information, where objects in the at least two initial target group distribution maps have the same antenna information.
20. The processing apparatus according to any of claims 12 to 19, wherein the at least two sensors comprise a first sensor and a second sensor, the first sensor corresponding to a spatial coordinate system that is a standard coordinate system and the second sensor corresponding to a spatial coordinate system that is a target coordinate system, the processor further configured to:
determining a mapping relation between at least two pieces of standard point information and at least two pieces of target point information according to fusion detection information obtained by fusing detection information corresponding to the same target object in the at least two pieces of array type information, wherein the standard point information represents position information of each object in the target object set in the standard coordinate system, and the target point information represents position information of each object in the target coordinate system, wherein the at least two pieces of standard point information correspond to the at least two pieces of target point information one to one;
and determining the mapping relation between the standard coordinate system and the target coordinate system according to the mapping relation between the standard point information and the target point information.
21. The processing device according to any one of claims 12 to 20, wherein the processor is further configured to calculate a time difference between time axes of the at least two sensors according to a fusion result of detection information corresponding to a same target object in the at least two pieces of formation information.
22. The processing apparatus according to any one of claims 12 to 21, wherein the at least two sensors comprise a standard sensor and a sensor under test, the processor further configured to:
acquiring standard array information corresponding to the target array information in the standard sensor;
acquiring to-be-detected array information corresponding to the target array information in the to-be-detected sensor;
determining the difference between the information of the array type to be detected and the information of the standard array type;
and acquiring an error parameter according to the difference and the standard array type information, wherein the error parameter is used for indicating the error of the array type information to be detected or indicating the performance parameter of the sensor to be detected.
23. A processing apparatus, comprising:
a processor and a memory coupled with the processor;
the memory stores executable instructions for execution by the processor, the executable instructions instructing the processor to perform the method of any of claims 1 to 11.
24. A computer-readable storage medium, wherein the computer-readable storage medium stores a program, and when the program is executed by a computer, the computer performs the method according to any one of claims 1 to 11.
25. A computer program product, characterized in that when the computer program product is executed on a computer, the computer performs the method according to any of claims 1 to 11.
CN202110221913.6A 2021-02-27 2021-02-27 Information processing method and related equipment Pending CN114972935A (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN202110221913.6A CN114972935A (en) 2021-02-27 2021-02-27 Information processing method and related equipment
PCT/CN2021/131058 WO2022179197A1 (en) 2021-02-27 2021-11-17 Information processing method and related device
JP2023550693A JP2024507891A (en) 2021-02-27 2021-11-17 Information processing methods and related devices
EP21927621.9A EP4266211A4 (en) 2021-02-27 2021-11-17 Information processing method and related device
US18/456,150 US20230410353A1 (en) 2021-02-27 2023-08-25 Information processing method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110221913.6A CN114972935A (en) 2021-02-27 2021-02-27 Information processing method and related equipment

Publications (1)

Publication Number Publication Date
CN114972935A true CN114972935A (en) 2022-08-30

Family

ID=82973145

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110221913.6A Pending CN114972935A (en) 2021-02-27 2021-02-27 Information processing method and related equipment

Country Status (5)

Country Link
US (1) US20230410353A1 (en)
EP (1) EP4266211A4 (en)
JP (1) JP2024507891A (en)
CN (1) CN114972935A (en)
WO (1) WO2022179197A1 (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019198076A1 (en) * 2018-04-11 2019-10-17 Ionterra Transportation And Aviation Technologies Ltd. Real-time raw data- and sensor fusion
CN109444911B (en) * 2018-10-18 2023-05-05 哈尔滨工程大学 Unmanned ship water surface target detection, identification and positioning method based on monocular camera and laser radar information fusion
CN111257866B (en) * 2018-11-30 2022-02-11 杭州海康威视数字技术股份有限公司 Target detection method, device and system for linkage of vehicle-mounted camera and vehicle-mounted radar
CN109615870A (en) * 2018-12-29 2019-04-12 南京慧尔视智能科技有限公司 A kind of traffic detection system based on millimetre-wave radar and video
EP3702802A1 (en) * 2019-03-01 2020-09-02 Aptiv Technologies Limited Method of multi-sensor data fusion
CN109977895B (en) * 2019-04-02 2020-10-16 重庆理工大学 Wild animal video target detection method based on multi-feature map fusion
CN112305576A (en) * 2020-10-31 2021-02-02 中环曼普科技(南京)有限公司 Multi-sensor fusion SLAM algorithm and system thereof

Also Published As

Publication number Publication date
EP4266211A1 (en) 2023-10-25
US20230410353A1 (en) 2023-12-21
JP2024507891A (en) 2024-02-21
EP4266211A4 (en) 2024-05-22
WO2022179197A1 (en) 2022-09-01

Similar Documents

Publication Publication Date Title
KR101758576B1 (en) Method and apparatus for detecting object with radar and camera
CN111830953B (en) Vehicle self-positioning method, device and system
CN106650705B (en) Region labeling method and device and electronic equipment
CN109471096B (en) Multi-sensor target matching method and device and automobile
CN110738150B (en) Camera linkage snapshot method and device and computer storage medium
CN110619279B (en) Road traffic sign instance segmentation method based on tracking
CN111814752B (en) Indoor positioning realization method, server, intelligent mobile device and storage medium
CN111045000A (en) Monitoring system and method
CN108594244B (en) Obstacle recognition transfer learning method based on stereoscopic vision and laser radar
CN105608417A (en) Traffic signal lamp detection method and device
JP6758160B2 (en) Vehicle position detection device, vehicle position detection method and computer program for vehicle position detection
CN112802092B (en) Obstacle sensing method and device and electronic equipment
CN113034586B (en) Road inclination angle detection method and detection system
CN106327461A (en) Image processing method and device used for monitoring
CN115965655A (en) Traffic target tracking method based on radar-vision integration
WO2023231991A1 (en) Traffic signal lamp sensing method and apparatus, and device and storage medium
CN112562005A (en) Space calibration method and system
CN115457084A (en) Multi-camera target detection tracking method and device
CN117392423A (en) Laser radar-based true value data prediction method, device and equipment for target object
CN114724104A (en) Method, device, electronic equipment, system and medium for detecting visual recognition distance
CN109903308B (en) Method and device for acquiring information
CN117130010A (en) Obstacle sensing method and system for unmanned vehicle and unmanned vehicle
CN117406212A (en) Visual fusion detection method for traffic multi-element radar
CN116659518A (en) Autonomous navigation method, device, terminal and medium for intelligent wheelchair
CN114972935A (en) Information processing method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination