CN114910901B - High-precision multi-sensor fusion ranging system of cooperative robot - Google Patents


Info

Publication number
CN114910901B
Authority
CN
China
Prior art keywords
depth
contour
key
division
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210598478.3A
Other languages
Chinese (zh)
Other versions
CN114910901A (en)
Inventor
洪俊填 (Hong Juntian)
王光能 (Wang Guangneng)
张国平 (Zhang Guoping)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Dazu Robot Co ltd
Original Assignee
Shenzhen Dazu Robot Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Dazu Robot Co ltd filed Critical Shenzhen Dazu Robot Co ltd
Priority to CN202210598478.3A
Publication of CN114910901A
Application granted
Publication of CN114910901B
Active legal status (current)
Anticipated expiration


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/02 Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S 13/06 Systems determining position data of a target
    • G01S 13/08 Systems for measuring distance only
    • G01S 13/86 Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S 13/862 Combination of radar systems with sonar systems
    • G01S 13/865 Combination of radar systems with lidar systems
    • G01S 13/867 Combination of radar systems with cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

The invention provides a high-precision multi-sensor fusion ranging system for a cooperative robot, which comprises: an acquisition layer, used for acquiring an acoustic panoramic imaging image, multi-sensor ranging data and an optical panoramic imaging image of the surroundings of the cooperative robot; a fusion layer, used for carrying out feature extraction and feature-association fusion on the acoustic panoramic imaging image, the multi-sensor ranging data and the optical panoramic imaging image to obtain a corresponding fusion result; and an output layer, used for constructing a three-dimensional panoramic model of the surroundings of the cooperative robot based on the fusion result and a point cloud model construction method, and determining a corresponding ranging result based on the three-dimensional panoramic model. By adopting a plurality of sensors and utilizing their redundancy and complementarity, together with the relatively complete information obtained as the external environment changes dynamically, the system solves the problem of fault tolerance or robustness in sensor data fusion, obtains a high-precision ranging result, and realizes truly high-precision ranging for the robot.

Description

High-precision multi-sensor fusion ranging system of cooperative robot
Technical Field
The invention relates to the technical field of ranging, in particular to a high-precision multi-sensor fusion ranging system of a cooperative robot.
Background
At present, with the development of related technologies such as sensor technology, data processing, computer technology, network communication, artificial intelligence, and parallel computing software and hardware, new and more effective data fusion methods are continuously being introduced. Multi-sensor data fusion is becoming an important technology for intelligent detection and data processing in complex industrial systems, and its application fields keep expanding. Multi-sensor data fusion is not a single technique, but a comprehensive, cross-disciplinary theory and methodology. One typical application field of multi-sensor data fusion technology is robotics. At present it is mainly applied to mobile robots and teleoperated robots, because such robots work in dynamic, uncertain and unstructured environments; these highly uncertain environments require the robots to have a strong capability of sensing the environment, and multi-sensor data fusion is an effective way of improving the sensing capability of a robot system. An intelligent robot employs multiple sensors and, by exploiting their redundancy and complementarity together with the relatively complete information obtained as the external environment changes dynamically, achieves high-precision sensing of changes in the external environment.
However, for realizing high-precision ranging of robots, multi-sensor fusion is still a very immature research field. It is still changing and developing: no unified fusion theory or effective generalized fusion model and algorithm has yet been established, research on specific data fusion methods is still at a preliminary stage, and the problem of fault tolerance or robustness in sensor data fusion has not been well solved, so the technology has certain limitations in achieving truly high-precision ranging.
Therefore, the invention provides a high-precision multi-sensor fusion ranging system of a cooperative robot.
Disclosure of Invention
The invention provides a high-precision multi-sensor fusion ranging system for a cooperative robot, which employs multiple sensors and, by exploiting their redundancy and complementarity together with the relatively complete information obtained as the external environment changes dynamically, solves the problem of fault tolerance or robustness in sensor data fusion, obtains a high-precision ranging result, and achieves truly high-precision ranging for the robot.
The invention provides a high-precision multi-sensor fusion ranging system of a cooperative robot, which comprises the following components:
the acquisition layer is used for acquiring an acoustic panoramic imaging image, multi-sensor ranging data and an optical panoramic imaging image of the surroundings of the cooperative robot;
the fusion layer is used for carrying out feature extraction and feature association fusion on the acoustic panoramic imaging image, the multi-sensor ranging data and the optical panoramic imaging image to obtain a corresponding fusion result;
and the output layer is used for constructing a three-dimensional panoramic model around the cooperative robot based on the fusion result and the point cloud model construction method, and determining a corresponding ranging result based on the three-dimensional panoramic model.
Preferably, the acquisition layer includes:
the acoustic acquisition module is used for acquiring an acoustic panoramic imaging image around the cooperative robot based on the microphone acoustic array;
the multi-sensing ranging module is used for acquiring ultrasonic ranging data, millimeter wave ranging data and laser radar ranging data around the cooperative robot based on an ultrasonic sensor, a millimeter wave sensor and a laser radar sensor which are uniformly arranged outside the cooperative robot, and taking the ultrasonic ranging data, the millimeter wave ranging data and the laser radar ranging data as corresponding multi-sensing ranging data;
And the optical acquisition module is used for acquiring an optical panoramic imaging image around the cooperative robot by controlling the rotation of a high-definition camera arranged outside the cooperative robot.
Preferably, the acoustic acquisition module comprises:
The array determining unit is used for determining a corresponding microphone acoustic array model based on the operation environment parameters of the cooperative robot and the current required ranging precision;
An image generation unit for controlling a plurality of acoustic sensor units arranged outside the collaborative robot to start capturing acoustic signals from the surroundings of the collaborative robot based on the microphone acoustic array model, obtaining corresponding captured signals based on the acoustic signals, and generating an acoustic panoramic imaging map of the surroundings of the collaborative robot based on the captured signals and acoustic imaging technology.
Preferably, the array determining unit includes:
The parameter determining subunit is used for determining corresponding array parameters based on the operation environment parameters of the cooperative robot and the current required ranging precision;
And the model generation subunit is used for generating a corresponding microphone acoustic array model based on the array parameters and a preset array arrangement form.
Preferably, the fusion layer includes:
the image fusion module is used for fusing the acoustic panoramic imaging image and the optical panoramic imaging image to obtain a corresponding fused panoramic image;
And the result fusion module is used for extracting image depth change characteristics contained in the fusion panoramic image and distance change characteristics in the multi-sensor distance measurement data, and carrying out corresponding region association correction on the fusion panoramic image based on the image depth change characteristics and the distance change characteristics to obtain a corresponding fusion result.
Preferably, the image fusion module includes:
The image conversion unit is used for extracting sound field cloud data around the cooperative robot from the acoustic panoramic imaging image and converting the acoustic panoramic imaging image into a corresponding depth image based on the sound field cloud data;
A data extraction unit configured to extract first depth distribution data included in the depth image and second depth distribution data included in the optical panoramic imaging image;
The key matching unit is used for carrying out key matching on the first depth distribution data and the second depth distribution data to obtain a corresponding key matching result;
and the image fusion unit is used for fusing the acoustic panoramic imaging image and the optical panoramic imaging image based on the key matching result to obtain a corresponding fused panoramic image.
Preferably, the key matching unit includes:
A contour recognition subunit, configured to generate a corresponding first depth distribution map based on the first depth distribution data, and recognize a corresponding first key contour in the first depth distribution map, and generate a corresponding second depth distribution map based on the second depth distribution data, and recognize a corresponding second key contour in the second depth distribution map;
The center determining subunit is used for determining a first center point corresponding to the first key contour and a second center point corresponding to the second key contour;
the first determining subunit is used for taking the first center point as a rotation center, and determining a plurality of first rotation lines by rotating the straight line passing through the first center point for a preset number of times according to a preset rotation angle gradient;
The second determining subunit is configured to determine a corresponding first division point in the first rotation starting line based on a maximum horizontal coordinate difference value corresponding to the first key contour and a preset division interval, and use a straight line that passes through the first division point and is perpendicular to the first rotation starting line as a first division line corresponding to the first rotation starting line to obtain a first division line set corresponding to the first rotation starting line;
A third determining subunit, configured to determine a first division intersection point of the first key contour and a corresponding first division line, determine a first division distance between adjacent first division intersection points on each first division line, generate a corresponding first division shape matrix based on the first division distances included in all the first division lines included in the first division line set, and determine a corresponding first key vector based on the first division shape matrices corresponding to all the first rotation lines;
A fourth determining subunit, configured to determine a plurality of second rotation lines by using the second center point as a rotation center and rotating a straight line passing through the second center point for a preset number of times according to a preset rotation angle gradient;
A fifth determining subunit, configured to determine a corresponding second division point in the second rotation starting line based on a maximum horizontal coordinate difference value corresponding to the second key contour and a preset division interval, and obtain a second division line set corresponding to the second rotation starting line by using a straight line that passes through the second division point and is perpendicular to the second rotation starting line as a second division line corresponding to the second rotation starting line;
A sixth determining subunit, configured to determine the second division intersection points of the second key contour and the corresponding second division lines, determine the second division distances between adjacent second division intersection points on each second division line, generate corresponding second division shape matrices based on the second division distances contained in all the second division lines included in the second division line set, and determine a corresponding second key vector based on the second division shape matrices corresponding to all the second rotation lines;
A contour judging subunit, configured to calculate a first similarity between the first key contour and each second key contour based on the first key vector and the second key vector, take the second key contour with the greatest first similarity to the first key contour as a preliminary matching contour, and judge whether more than one preliminary matching contour corresponds to the first key contour;
the first matching subunit is used for performing depth gradient feature matching on the first key contour and the corresponding preliminary matching contour when more than one preliminary matching contour corresponds to the first key contour, and determining a final matching contour corresponding to the first key contour;
A second matching subunit, configured to take the corresponding preliminary matching contour as the corresponding final matching contour when only one preliminary matching contour corresponding to the first key contour exists;
And the final matching subunit is used for matching the first depth distribution data with the second depth distribution data based on the first key profile and the corresponding final matching profile to obtain a corresponding key matching result.
Preferably, the first matching subunit includes:
the first generation subunit is used for randomly selecting a first starting point from the first key contour, determining first depth gradient data corresponding to the first key contour in a clockwise direction from the first starting point based on first sub-depth distribution data corresponding to the first key contour, and generating a corresponding first depth gradient curve based on the first depth gradient data;
The second generation subunit is used for randomly selecting a second starting point in the preliminary matching contour, determining second depth gradient data corresponding to the preliminary matching contour from the second starting point along the clockwise direction based on second sub-depth distribution data corresponding to the preliminary matching contour, and generating a corresponding second depth gradient curve based on the second depth gradient data;
the ordinal number determining subunit is used for dividing the first depth gradual change curve into a plurality of gradual change curve segments based on preset dividing precision, and determining an initial ordinal number corresponding to the gradual change curve segments based on the position of the gradual change curve segments in the first gradual change curve;
A multiple forward-shift subunit, configured to set, in increasing order of initial ordinal number, the current ordinal number of each gradual change curve segment to 1 in turn, each such setting being regarded as one forward-shift process and the gradual change curve segment whose current ordinal number is 1 being regarded as the leading curve segment of that forward-shift process; each time a new leading curve segment is obtained, to determine the forward-shift difference between the initial ordinal number of that leading curve segment and 1; to subtract the forward-shift difference from the current ordinal number of every gradual change curve segment whose current ordinal number is greater than the initial ordinal number of the current leading curve segment, obtaining its new ordinal number, and, at the same time, to add the initial ordinal number of the leading curve segment to the current ordinal number of every gradual change curve segment whose current ordinal number is less than the initial ordinal number of the current leading curve segment, obtaining its new ordinal number; and to sort all gradual change curve segments by the most recently determined ordinal numbers, so as to obtain the gradual change curve segment ordering sequence corresponding to the current forward-shift process;
The first calculating subunit is used for connecting all the gradual curve segments to generate corresponding depth gradual forward-shifting curves based on the gradual curve segment sequencing sequence, and calculating second similarity between the depth gradual forward-shifting curves and the second depth gradual curves;
The feature extraction subunit is used for determining a first region contained in the first key contour and a second region contained in the preliminary matching contour, determining a corresponding first transverse depth change feature sequence and a corresponding first longitudinal depth change feature sequence in the first region, and simultaneously determining a corresponding second transverse depth change feature sequence and a corresponding second longitudinal depth change feature sequence in the second region;
A second computing subunit configured to calculate a third similarity between the first lateral depth change feature sequence and the second lateral depth change feature sequence, and a fourth similarity between the first longitudinal depth change feature sequence and the second longitudinal depth change feature sequence;
and the final determining subunit is used for calculating the comprehensive similarity between the first key contour and the corresponding preliminary matching contour based on the second similarity, the third similarity and the fourth similarity, and taking the preliminary matching contour corresponding to the maximum comprehensive similarity as the corresponding final matching contour.
Preferably, the output layer includes:
The data determining module is used for extracting panoramic point cloud data around the cooperative robot from the fusion result;
The model construction module is used for constructing a three-dimensional panoramic model around the cooperative robot based on the panoramic point cloud data and the point cloud construction method;
and the result determining module is used for determining a ranging result of the corresponding position based on the three-dimensional panoramic data.
Preferably, the model building module includes:
the point cloud correction unit is used for calculating the local coherence corresponding to each point cloud data contained in the panoramic point cloud data, correcting the point cloud data with the local coherence smaller than a coherence threshold value, and obtaining corresponding accurate full-view point cloud data;
the model construction unit is used for constructing a three-dimensional panoramic model around the cooperative robot based on the accurate full-view point cloud data and the point cloud construction method.
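The local coherence measure is not defined further in this summary. Purely as an assumed illustration, the sketch below scores each point of the panoramic point cloud by its agreement with its k nearest neighbours and replaces low-coherence points with the local average; k, the threshold and this particular notion of coherence are all placeholders rather than details taken from the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def correct_low_coherence_points(points: np.ndarray, k: int = 10, threshold: float = 0.5) -> np.ndarray:
    """points: (N, 3) panoramic point cloud. Returns a corrected copy."""
    tree = cKDTree(points)
    corrected = points.copy()
    _, idx = tree.query(points, k=k + 1)          # each point's k nearest neighbours (plus itself)
    for i, neighbours in enumerate(idx):
        nbr = points[neighbours[1:]]
        centroid = nbr.mean(axis=0)
        # Assumed local coherence: how close the point lies to its neighbourhood centroid,
        # scaled by the neighbourhood spread (larger value = more coherent).
        spread = nbr.std(axis=0).mean() + 1e-9
        coherence = 1.0 / (1.0 + np.linalg.norm(points[i] - centroid) / spread)
        if coherence < threshold:
            corrected[i] = centroid               # replace an incoherent point with the local average
    return corrected
```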
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a schematic diagram of a high-precision multi-sensor fusion ranging system for a collaborative robot in an embodiment of the invention;
FIG. 2 is a schematic diagram of an acquisition layer according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an acoustic acquisition module according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an array determining unit according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a fusion layer according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an image fusion module according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a key matching unit according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a first matching subunit according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of an output layer according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a model building block according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
Example 1:
the invention provides a high-precision multi-sensor fusion ranging system of a cooperative robot, which referring to fig. 1, comprises:
the acquisition layer is used for acquiring an acoustic panoramic imaging image, multi-sensor ranging data and an optical panoramic imaging image of the surroundings of the cooperative robot;
the fusion layer is used for carrying out feature extraction and feature association fusion on the acoustic panoramic imaging image, the multi-sensor ranging data and the optical panoramic imaging image to obtain a corresponding fusion result;
and the output layer is used for constructing a three-dimensional panoramic model around the cooperative robot based on the fusion result and the point cloud model construction method, and determining a corresponding ranging result based on the three-dimensional panoramic model.
In this embodiment, the acoustic panoramic imaging map is a 360-degree panoramic acoustic imaging map around the collaborative robot acquired based on the microphone acoustic array.
In this embodiment, the multi-sensor ranging data comprise a plurality of kinds of sensing data, for example ultrasonic ranging data, millimeter wave ranging data and laser radar ranging data.
In this embodiment, the optical panoramic imaging map is a 360-degree panoramic optical imaging map around the collaborative robot acquired based on the optical camera.
In this embodiment, the fusion result is obtained after feature extraction and feature association fusion are performed on the acoustic panoramic imaging image and the multi-sensor ranging data and the optical panoramic imaging image.
In the embodiment, the three-dimensional panoramic model is a three-dimensional model of 360-degree panoramic around the cooperative robot constructed based on the fusion result and the point cloud model construction method.
In the embodiment, the distance measurement result is the distance between the cooperative robot and the surrounding corresponding point determined based on the three-dimensional panoramic model.
The beneficial effects of the technology are as follows: the method adopts a plurality of sensors, utilizes the redundancy and complementary characteristics of the sensors and relatively complete information obtained based on dynamic changes of external environments, solves the problem of fault tolerance or robustness in sensor data fusion, obtains a high-precision ranging result, and realizes real high-precision ranging of the robot.
Example 2:
On the basis of embodiment 1, the acquisition layer, referring to fig. 2, includes:
the acoustic acquisition module is used for acquiring an acoustic panoramic imaging image around the cooperative robot based on the microphone acoustic array;
the multi-sensing ranging module is used for acquiring ultrasonic ranging data, millimeter wave ranging data and laser radar ranging data around the cooperative robot based on an ultrasonic sensor, a millimeter wave sensor and a laser radar sensor which are uniformly arranged outside the cooperative robot, and taking the ultrasonic ranging data, the millimeter wave ranging data and the laser radar ranging data as corresponding multi-sensing ranging data;
And the optical acquisition module is used for acquiring an optical panoramic imaging image around the cooperative robot by controlling the rotation of a high-definition camera arranged outside the cooperative robot.
In this embodiment, the microphone acoustic array is an array of microphones arranged outside the collaborative robot.
In this embodiment, the ultrasonic ranging data is a measured distance between the cooperative robot and each point around the cooperative robot obtained based on ultrasonic sensors uniformly disposed outside the cooperative robot.
In this embodiment, the millimeter wave ranging data are the measured distances between the cooperative robot and the surrounding points, acquired by the millimeter wave sensors uniformly arranged outside the cooperative robot.
In this embodiment, the lidar ranging data are the measured distances between the cooperative robot and the surrounding points, acquired by the lidar sensors uniformly arranged outside the cooperative robot.
The beneficial effects of the technology are as follows: based on the microphone acoustic array, the ultrasonic sensor, the millimeter wave sensor, the laser radar sensor and the high-definition camera, multi-source data around the collaborative robot are obtained, complete information around the collaborative robot is obtained, and a data base is provided for subsequent determination of high-precision ranging results.
Example 3:
On the basis of embodiment 2, the acoustic acquisition module, referring to fig. 3, includes:
The array determining unit is used for determining a corresponding microphone acoustic array model based on the operation environment parameters of the cooperative robot and the current required ranging precision;
An image generation unit for controlling a plurality of acoustic sensor units arranged outside the collaborative robot to start capturing acoustic signals from the surroundings of the collaborative robot based on the microphone acoustic array model, obtaining corresponding captured signals based on the acoustic signals, and generating an acoustic panoramic imaging map of the surroundings of the collaborative robot based on the captured signals and acoustic imaging technology.
In this embodiment, the operating environment parameters are, for example, the desired range.
In this embodiment, the currently required ranging accuracy is the ranging accuracy, for example: millimeter or nanometer.
In this embodiment, the microphone acoustic array model is an arrangement pattern of microphones outside the collaborative robot.
In this embodiment, the captured signals are the signals obtained by controlling, based on the microphone acoustic array model, the plurality of acoustic sensor units arranged outside the collaborative robot to capture acoustic signals from the surroundings of the collaborative robot.
The beneficial effects of the technology are as follows: the corresponding microphone acoustic array is set based on the operation environment parameters of the cooperative robot and the current required distance measurement precision, so that the microphone acoustic array suitable for the current operation environment is selected, the anti-interference capability in the sound capturing process is enhanced, and the precision of the acoustic panoramic imaging image is ensured.
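The patent leaves the acoustic imaging step to existing acoustic imaging technology. Purely as an illustration of one conventional option, the sketch below applies delay-and-sum beamforming to the captured signals to obtain an acoustic power value per scan direction, from which a panoramic map could be assembled; all variable names (signals, mic_positions, fs, directions) are hypothetical.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, assumed propagation speed in air

def delay_and_sum_power(signals, mic_positions, fs, directions):
    """Rough delay-and-sum beamformer: returns acoustic power per scan direction.

    signals: (n_mics, n_samples) captured microphone signals (hypothetical input)
    mic_positions: (n_mics, 3) microphone coordinates in metres
    fs: sampling rate in Hz
    directions: (n_dirs, 3) unit vectors of candidate source directions
    """
    n_mics, n_samples = signals.shape
    powers = np.zeros(len(directions))
    for d, u in enumerate(directions):
        # Far-field assumption: per-microphone delay is the projection onto the look direction.
        delays = mic_positions @ u / SPEED_OF_SOUND          # seconds
        shifts = np.round((delays - delays.min()) * fs).astype(int)
        aligned = np.zeros(n_samples)
        for m in range(n_mics):
            aligned[: n_samples - shifts[m]] += signals[m, shifts[m]:]
        powers[d] = np.mean((aligned / n_mics) ** 2)
    return powers  # a panoramic acoustic map is obtained by scanning directions over 360 degrees
```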
Example 4:
on the basis of embodiment 3, the array determining unit, referring to fig. 4, includes:
The parameter determining subunit is used for determining corresponding array parameters based on the operation environment parameters of the cooperative robot and the current required ranging precision;
And the model generation subunit is used for generating a corresponding microphone acoustic array model based on the array parameters and a preset array arrangement form.
In the embodiment, the array parameters comprise two aspects of geometric parameters and characteristic parameters, wherein the geometric parameters mainly comprise microphone spacing, microphone space position, array aperture size, microphone number and the like, and the characteristic parameters comprise directivity, main lobe width, side lobe size, spatial resolution and the like of the array; the array aperture influences the response of the array to a low-frequency sound source, the larger the aperture size is, the smaller the measurable sound source frequency is, the lower the array spatial resolution is, the array element spacing determines the range of the identifiable sound source frequency of the array, and the space geometrical form of the microphone enables the array to have different main lobe widths and sidelobe number levels. In practical application, the factors such as equipment, requirements and the like are integrated, array parameters are required to be reasonably selected, different array shapes are compared, and a topological structure with better performance is selected; the unreasonable arrangement of microphones in an acoustic array can lead to increased side lobes and increased amplitude in a directivity pattern, and serious spatial confusion can also occur, so that the energy of a sound source leaks and the true position cannot be identified.
In this embodiment, the preset array arrangement form is, for example, a star, ring, checkerboard or spiral array.
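As a rough illustration of how a microphone acoustic array model might be generated from array parameters and a preset arrangement form, the sketch below builds two of the named layouts (ring and spiral); the element counts, aperture and number of spiral turns are arbitrary placeholder values, not figures from the patent.

```python
import numpy as np

def ring_array(n_mics: int, radius: float) -> np.ndarray:
    """Planar ring (uniform circular) layout: n_mics microphones on a circle of the given radius."""
    angles = 2 * np.pi * np.arange(n_mics) / n_mics
    return np.stack([radius * np.cos(angles), radius * np.sin(angles)], axis=1)

def spiral_array(n_mics: int, max_radius: float, turns: float = 3.0) -> np.ndarray:
    """Planar Archimedean-spiral layout, a common choice for suppressing side lobes."""
    t = np.linspace(0.0, 1.0, n_mics)
    angles = 2 * np.pi * turns * t
    radii = max_radius * t
    return np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=1)

# Example: pick the aperture (array size) and element count from the required ranging
# accuracy and operating environment, generate the layout, then check element spacing.
layout = spiral_array(n_mics=32, max_radius=0.25)
spacing = np.min(np.linalg.norm(layout[1:] - layout[:-1], axis=1))  # rough spacing check
```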
The beneficial effects of the technology are as follows: by selecting proper array parameters, higher array performance can be obtained, the range of the measurable sound source frequency and the identifiable sound source frequency is ensured to be large enough, and further the accuracy of a ranging result is ensured.
Example 5:
on the basis of embodiment 4, the fusion layer, referring to fig. 5, includes:
the image fusion module is used for fusing the acoustic panoramic imaging image and the optical panoramic imaging image to obtain a corresponding fused panoramic image;
And the result fusion module is used for extracting image depth change characteristics contained in the fusion panoramic image and distance change characteristics in the multi-sensor distance measurement data, and carrying out corresponding region association correction on the fusion panoramic image based on the image depth change characteristics and the distance change characteristics to obtain a corresponding fusion result.
In this embodiment, the fused panoramic image is an image obtained after the fusion of the acoustic panoramic imaging image and the optical panoramic imaging image.
In this embodiment, the image depth change feature is a feature characterizing the image depth change included in the fused panoramic image.
In this embodiment, the distance change feature is a feature that characterizes a distance change between different points in the collaborative robot and the surrounding environment, which is included in the fused panoramic image.
In this embodiment, the fusion result is a result obtained after performing corresponding region association correction on the fused panoramic image based on the image depth change feature and the distance change feature.
The beneficial effects of the technology are as follows: the acoustic panoramic imaging image and the optical panoramic imaging image are fused, and then the fused panoramic image is corrected based on the image depth change characteristics contained in the fused panoramic image and the distance change characteristics in the multi-sensor ranging data, so that the distance characterization information between the collaborative robot and different points of the surrounding environment can be accurately reflected by the fused panoramic image obtained after correction.
Example 6:
On the basis of embodiment 5, the image fusion module, referring to fig. 6, includes:
The image conversion unit is used for extracting sound field cloud data around the cooperative robot from the acoustic panoramic imaging image and converting the acoustic panoramic imaging image into a corresponding depth image based on the sound field cloud data;
A data extraction unit configured to extract first depth distribution data included in the depth image and second depth distribution data included in the optical panoramic imaging image;
The key matching unit is used for carrying out key matching on the first depth distribution data and the second depth distribution data to obtain a corresponding key matching result;
and the image fusion unit is used for fusing the acoustic panoramic imaging image and the optical panoramic imaging image based on the key matching result to obtain a corresponding fused panoramic image.
In this embodiment, the sound field cloud data is sound field data around the cooperative robot extracted from the acoustic panoramic imaging map.
In this embodiment, the depth image is a distribution image representing the distance between each point around the collaborative robot and the collaborative robot.
In this embodiment, the first depth distribution data is the depth distribution data in the depth image.
In this embodiment, the second depth distribution data is the depth distribution data in the optical panoramic imaging map.
In this embodiment, the key matching result is a result obtained after the first depth distribution data and the second depth distribution data are subjected to key matching.
The beneficial effects of the technology are as follows: and carrying out key matching on the depth distribution data in the acoustic panoramic imaging image and the depth distribution data in the optical panoramic imaging image to obtain a corresponding key matching result, realizing local alignment of the acoustic panoramic imaging image and the optical panoramic imaging image based on the key matching result, and reducing errors of the fused panoramic images.
Example 7:
On the basis of embodiment 6, the key matching unit, referring to fig. 7, includes:
A contour recognition subunit, configured to generate a corresponding first depth distribution map based on the first depth distribution data, and recognize a corresponding first key contour in the first depth distribution map, and generate a corresponding second depth distribution map based on the second depth distribution data, and recognize a corresponding second key contour in the second depth distribution map;
The center determining subunit is used for determining a first center point corresponding to the first key contour and a second center point corresponding to the second key contour;
the first determining subunit is used for taking the first center point as a rotation center, and determining a plurality of first rotation lines by rotating the straight line passing through the first center point for a preset number of times according to a preset rotation angle gradient;
The second determining subunit is configured to determine a corresponding first division point in the first rotation starting line based on a maximum horizontal coordinate difference value corresponding to the first key contour and a preset division interval, and use a straight line that passes through the first division point and is perpendicular to the first rotation starting line as a first division line corresponding to the first rotation starting line to obtain a first division line set corresponding to the first rotation starting line;
A third determining subunit, configured to determine a first division intersection point of the first key contour and a corresponding first division line, determine a first division distance between adjacent first division intersection points on each first division line, generate a corresponding first division shape matrix based on the first division distances included in all the first division lines included in the first division line set, and determine a corresponding first key vector based on the first division shape matrices corresponding to all the first rotation lines;
A fourth determining subunit, configured to determine a plurality of second rotation lines by using the second center point as a rotation center and rotating a straight line passing through the second center point for a preset number of times according to a preset rotation angle gradient;
A fifth determining subunit, configured to determine a corresponding second division point in the second rotation starting line based on a maximum horizontal coordinate difference value corresponding to the second key contour and a preset division interval, and obtain a second division line set corresponding to the second rotation starting line by using a straight line that passes through the second division point and is perpendicular to the second rotation starting line as a second division line corresponding to the second rotation starting line;
A sixth determining subunit, configured to determine the second division intersection points of the second key contour and the corresponding second division lines, determine the second division distances between adjacent second division intersection points on each second division line, generate corresponding second division shape matrices based on the second division distances contained in all the second division lines included in the second division line set, and determine a corresponding second key vector based on the second division shape matrices corresponding to all the second rotation lines;
A contour judging subunit, configured to calculate a first similarity between the first key contour and each second key contour based on the first key vector and the second key vector, take the second key contour with the greatest first similarity to the first key contour as a preliminary matching contour, and judge whether more than one preliminary matching contour corresponds to the first key contour;
the first matching subunit is used for performing depth gradient feature matching on the first key contour and the corresponding preliminary matching contour when more than one preliminary matching contour corresponds to the first key contour, and determining a final matching contour corresponding to the first key contour;
A second matching subunit, configured to take the corresponding preliminary matching contour as the corresponding final matching contour when only one preliminary matching contour corresponding to the first key contour exists;
And the final matching subunit is used for matching the first depth distribution data with the second depth distribution data based on the first key profile and the corresponding final matching profile to obtain a corresponding key matching result.
In this embodiment, a corresponding first depth distribution map is generated based on the first depth distribution data and a corresponding first key contour is identified in it; likewise, a corresponding second depth distribution map is generated based on the second depth distribution data and a corresponding second key contour is identified in it. The contour identification method adopted here can be any of various contour identification or edge detection algorithms in the prior art, for example:
identifying the corresponding key contour with the Canny edge detection algorithm, which executes the following steps: 1) image noise reduction; 2) image gradient calculation; 3) non-maximum suppression; 4) threshold screening;
or identifying the corresponding key contour through the gray-scale change rate of the pixel points contained in the first depth distribution map and the second depth distribution map, taking the contour formed by the pixel points whose gray-scale change rate is greater than the change-rate threshold as the corresponding key contour.
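As an illustrative sketch only, the Canny-based variant could be implemented with OpenCV roughly as follows; the blur kernel, thresholds and the minimum-area filter are assumptions, not values given in the patent.

```python
import cv2
import numpy as np

def key_contours(depth_map_8u: np.ndarray, low_thr: int = 50, high_thr: int = 150):
    """Identify candidate key contours in a depth distribution map using Canny edges.

    depth_map_8u: single-channel 8-bit depth distribution image (hypothetical input).
    """
    blurred = cv2.GaussianBlur(depth_map_8u, (5, 5), 0)        # 1) noise reduction
    edges = cv2.Canny(blurred, low_thr, high_thr)              # 2)-4) gradient, NMS, thresholding
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep only sufficiently large contours as "key" contours (the area threshold is an assumption).
    return [c for c in contours if cv2.contourArea(c) > 100.0]
```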
In this embodiment, the first depth profile is a depth profile generated based on the first depth profile data.
In this embodiment, the first key profile is a profile included in the first depth profile.
In this embodiment, the second depth profile is a depth profile generated based on the second depth profile data.
In this embodiment, the second critical profile is the profile included in the second depth profile.
In this embodiment, the first center point is the physical center point of the first key contour.
In this embodiment, the second center point is the physical center point of the second key contour.
In this embodiment, the first rotation line is a plurality of straight lines obtained by rotating the straight line passing through the first center point for a preset number of times according to a preset rotation angle gradient by taking the first center point as a rotation center.
In this embodiment, the preset rotation angle gradient is the preset angular step between rotation lines; for example, if 12 rotation lines are set, one rotation line is placed every 360° ÷ 12 = 30°.
In this embodiment, the maximum horizontal coordinate difference value is the difference between the maximum and minimum abscissa values of the first key contour.
In this embodiment, the preset division interval is set according to the specific case.
In this embodiment, the first division point is a point determined in the first rotation line based on the maximum horizontal coordinate difference value corresponding to the first key contour and the preset division interval.
In this embodiment, the corresponding first division point is determined in the first rotation line based on the maximum horizontal coordinate difference value corresponding to the first key contour and a preset division interval, for example, the maximum horizontal coordinate difference value is 10 unit lengths, and the preset division interval is 1 unit length, and then one first division point is set every 1 unit length.
In this embodiment, the first dividing line is a straight line passing through the first dividing point and perpendicular to the first rotation line.
In this embodiment, the first dividing line set is the set formed by all the first dividing lines corresponding to the first rotation line.
In this embodiment, the first dividing intersection point is an intersection point of the first key contour and the corresponding first dividing line.
In this embodiment, the first dividing distance is the distance between adjacent first dividing intersections on each first dividing line.
In this embodiment, the corresponding first division shape matrix is generated based on the first division distances contained in all the first division lines included in the first division line set, as follows:
the first division distances contained in each first division line are taken, in order, as the values of the corresponding row of the first division shape matrix, and a corresponding first division shape matrix is generated from the first division distances contained in all the first division lines; when the number of first division distances in a row is smaller than the dimension of the first division shape matrix, the end of that row is padded with zeros so that it reaches the dimension of the first division shape matrix.
In this embodiment, the corresponding first key vector is determined from the first division shape matrices corresponding to all the first rotation lines as follows: the first division shape matrices corresponding to all the first rotation lines are arranged and combined transversely, and the transpose of the resulting matrix is taken as the corresponding first key vector.
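Under the description above, a rough numpy sketch of assembling a division shape matrix and a key vector could look as follows; the fixed matrix dimension and the flattening of the transposed combined matrix into a single vector are my reading of the text, not details the patent states explicitly.

```python
import numpy as np

def division_shape_matrix(division_distances_per_line, dim: int) -> np.ndarray:
    """One row per division line; each row holds that line's division distances, zero-padded to `dim`."""
    matrix = np.zeros((len(division_distances_per_line), dim))
    for row, distances in enumerate(division_distances_per_line):
        distances = list(distances)[:dim]          # truncate if longer than the assumed matrix dimension
        matrix[row, : len(distances)] = distances  # otherwise the end of the row stays padded with zeros
    return matrix

def key_vector(shape_matrices) -> np.ndarray:
    """Arrange all rotation lines' shape matrices side by side, transpose, and flatten to a key vector."""
    combined = np.hstack(shape_matrices)           # transverse arrangement and combination
    return combined.T.reshape(-1)                  # transposed matrix read out as one key vector
```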
In this embodiment, the second rotation line is a straight line determined by rotating the straight line passing through the second center point for a preset number of times according to a preset rotation angle gradient by using the second center point as a rotation center.
In this embodiment, the second division point is a point determined in the second rotation line based on the maximum horizontal coordinate difference value corresponding to the second key contour and the preset division interval.
In this embodiment, the second dividing line is a straight line passing through the second dividing point and perpendicular to the second rotation line.
In this embodiment, the second dividing line set is a set formed by all the second dividing lines.
In this embodiment, the second dividing intersection point is an intersection point of the second key contour and the corresponding second dividing line.
In this embodiment, the second dividing distance is the distance between adjacent second dividing intersections on the corresponding second dividing line.
In this embodiment, the second division shape matrix is a matrix generated based on the second division distances included in all the second division lines included in the second division line set.
In this embodiment, the second key vector is a vector determined based on the second division shape matrix corresponding to all the second rotation lines.
In this embodiment, calculating a first similarity between the first key contour and the second key contour based on the first key vector and the second key vector includes:
where s₁ is the first similarity between the first key contour and the second key contour, X₁ is the first key vector, X₂ is the second key vector, |X₁| is the modulus of the first key vector, |X₂| is the modulus of the second key vector, and (|X₁|, |X₂|)_max is the maximum of the modulus of the first key vector and the modulus of the second key vector;
for example, if X₁ is (0, 1) and X₂ is (0, 2), then s₁ is 0.5.
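The similarity formula itself appears only as an image in the original publication and is not reproduced in the text above. One formula consistent with the symbols defined and with the worked example (X₁ = (0, 1), X₂ = (0, 2) giving s₁ = 0.5) is s₁ = (X₁ · X₂) / ((|X₁|, |X₂|)_max)²; the sketch below assumes exactly that form and merely checks the stated example.

```python
import numpy as np

def first_similarity(x1: np.ndarray, x2: np.ndarray) -> float:
    """Assumed form of the first similarity: dot product normalised by the squared larger modulus."""
    n = max(len(x1), len(x2))
    x1 = np.pad(x1, (0, n - len(x1)))  # zero-pad so key vectors of unequal length are comparable
    x2 = np.pad(x2, (0, n - len(x2)))
    return float(np.dot(x1, x2) / max(np.linalg.norm(x1), np.linalg.norm(x2)) ** 2)

# Reproduces the worked example from the description: s1 = 0.5.
assert abs(first_similarity(np.array([0.0, 1.0]), np.array([0.0, 2.0])) - 0.5) < 1e-9
```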
In this embodiment, the preliminary matching contour is the second key contour corresponding to the first key contour with the greatest similarity.
In this embodiment, the final matching profile is the final matching profile corresponding to the determined first key profile.
The beneficial effects of the technology are as follows: the corresponding segmentation shape matrix is determined based on the first depth distribution data in the first key outline and the corresponding segmentation shape matrix is determined based on the second depth distribution data in the second key outline to carry out key matching, so that the local alignment matching of the acoustic panoramic imaging image and the optical panoramic imaging image is realized, and the error of fusing panoramic images is reduced.
Example 8:
on the basis of embodiment 7, the first matching subunit, referring to fig. 8, includes:
the first generation subunit is used for randomly selecting a first starting point from the first key contour, determining first depth gradient data corresponding to the first key contour in a clockwise direction from the first starting point based on first sub-depth distribution data corresponding to the first key contour, and generating a corresponding first depth gradient curve based on the first depth gradient data;
The second generation subunit is used for randomly selecting a second starting point in the preliminary matching contour, determining second depth gradient data corresponding to the preliminary matching contour from the second starting point along the clockwise direction based on second sub-depth distribution data corresponding to the preliminary matching contour, and generating a corresponding second depth gradient curve based on the second depth gradient data;
the ordinal number determining subunit is used for dividing the first depth gradual change curve into a plurality of gradual change curve segments based on preset dividing precision, and determining an initial ordinal number corresponding to the gradual change curve segments based on the position of the gradual change curve segments in the first gradual change curve;
A multiple forward-shift subunit, configured to set, in increasing order of initial ordinal number, the current ordinal number of each gradual change curve segment to 1 in turn, each such setting being regarded as one forward-shift process and the gradual change curve segment whose current ordinal number is 1 being regarded as the leading curve segment of that forward-shift process; each time a new leading curve segment is obtained, to determine the forward-shift difference between the initial ordinal number of that leading curve segment and 1; to subtract the forward-shift difference from the current ordinal number of every gradual change curve segment whose current ordinal number is greater than the initial ordinal number of the current leading curve segment, obtaining its new ordinal number, and, at the same time, to add the initial ordinal number of the leading curve segment to the current ordinal number of every gradual change curve segment whose current ordinal number is less than the initial ordinal number of the current leading curve segment, obtaining its new ordinal number; and to sort all gradual change curve segments by the most recently determined ordinal numbers, so as to obtain the gradual change curve segment ordering sequence corresponding to the current forward-shift process;
The first calculating subunit is used for connecting all the gradual curve segments to generate corresponding depth gradual forward-shifting curves based on the gradual curve segment sequencing sequence, and calculating second similarity between the depth gradual forward-shifting curves and the second depth gradual curves;
The feature extraction subunit is used for determining a first region contained in the first key contour and a second region contained in the preliminary matching contour, determining a corresponding first transverse depth change feature sequence and a corresponding first longitudinal depth change feature sequence in the first region, and simultaneously determining a corresponding second transverse depth change feature sequence and a corresponding second longitudinal depth change feature sequence in the second region;
A second computing subunit configured to calculate a third similarity between the first lateral depth change feature sequence and the second lateral depth change feature sequence, and a fourth similarity between the first longitudinal depth change feature sequence and the second longitudinal depth change feature sequence;
and the final determining subunit is used for calculating the comprehensive similarity between the first key contour and the corresponding preliminary matching contour based on the second similarity, the third similarity and the fourth similarity, and taking the preliminary matching contour corresponding to the maximum comprehensive similarity as the corresponding final matching contour.
In this embodiment, the first starting point is a randomly selected point in the first key contour.
In this embodiment, the first sub-depth distribution data is the depth distribution data corresponding to the first key contour.
In this embodiment, the first depth gradient data is the depth gradient data of the first key contour determined in the clockwise direction from the first starting point.
In this embodiment, the first depth gradient curve is the depth gradient curve generated based on the first depth gradient data.
In this embodiment, the second starting point is a randomly selected point in the preliminary matching contour.
In this embodiment, the second sub-depth distribution data is the depth distribution data corresponding to the preliminary matching contour.
In this embodiment, the second depth gradient data is the depth gradient data of the preliminary matching contour determined in the clockwise direction from the second starting point.
In this embodiment, the second depth gradient curve is the depth gradient curve generated based on the second depth gradient data.
In this embodiment, the preset division accuracy is set according to the actual situation.
In this embodiment, the gradual change curve segments are the plurality of curve segments obtained by dividing the first depth gradient curve based on the preset division precision.
In this embodiment, the initial ordinal number is the ordinal number assigned to each gradual change curve segment according to its position in the first depth gradient curve.
In this embodiment, the current ordinal number is the ordinal number currently assigned to the corresponding gradual change curve segment.
In this embodiment, the leading curve segment is the gradual change curve segment whose current ordinal number is 1.
In this embodiment, the forward-shift difference is the difference between the initial ordinal number of the new leading curve segment and 1.
In this embodiment, the gradual change curve segment sorting sequence is the sequence obtained by sorting all gradual change curve segments according to the latest determined ordinal numbers.
In this embodiment, the depth gradient forward-shift curve is the curve generated by connecting all gradual change curve segments according to the gradual change curve segment sorting sequence.
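For illustration, a minimal Python sketch of the forward-shift process described above is given below. It assumes each gradual change curve segment is stored as a 1-D array of sampled depth values indexed by its initial ordinal number, and it condenses the ordinal bookkeeping of the text into a plain cyclic rotation derived from the initial ordering each time; the function names and data representation are assumptions for illustration, not part of the original disclosure.

import numpy as np

def forward_shift_orderings(segments):
    """Yield, for each choice of leading curve segment, the reordered list of
    gradual change curve segments. Taking segment k (0-based) as the leading
    segment realizes the forward-shift reordering: segments from the leading
    one onward move to the front, and the earlier segments follow them."""
    n = len(segments)
    for k in range(n):
        order = list(range(k, n)) + list(range(0, k))
        yield k, [segments[i] for i in order]

def depth_forward_shift_curve(ordered_segments):
    """Connect the reordered segments into one depth gradient forward-shift
    curve (here simply by concatenating the sampled depth values)."""
    return np.concatenate(ordered_segments)

Each candidate forward-shift curve would then be compared with the second depth gradient curve, and the ordering giving the highest second similarity retained.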
In this embodiment, calculating the second similarity between the depth gradient forward-shift curve and the second depth gradient curve includes:
determining the depth gradient forward-shift function corresponding to the depth gradient forward-shift curve, and determining the depth gradient function corresponding to the second depth gradient curve;
calculating the second similarity between the depth gradient forward-shift curve and the second depth gradient curve based on the depth gradient forward-shift function and the depth gradient function, including:
where s2 is the second similarity between the depth gradient forward-shift curve and the second depth gradient curve, x0 is the smaller of the maximum abscissa of the depth gradient forward-shift function and the maximum abscissa of the depth gradient function, f(x) is the depth gradient forward-shift function, h(x) is the depth gradient function, x is the independent variable of the two functions, f'(x) is the derivative of the depth gradient forward-shift function, and h'(x) is the derivative of the depth gradient function;
For example, when f(x) = 2x, h(x) = 3x, and x0 = 1, s2 is 0.5.
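The similarity formula itself is not reproduced in the text above, so the sketch below only assumes a simple normalized area-difference similarity, s2 = 1 - (1/x0)·∫|f(x) - h(x)|dx over [0, x0], which reproduces the worked example (f(x) = 2x, h(x) = 3x, x0 = 1 gives s2 = 0.5); the published formula may differ, in particular in how the derivatives f'(x) and h'(x) enter.

import numpy as np

def second_similarity(x, f_vals, h_vals):
    """Illustrative second similarity between two sampled curves.

    x is a common grid from 0 to x0 = min of the two maximum abscissas (the
    truncation is assumed to be done by the caller); f_vals and h_vals are
    the two curves sampled on that grid."""
    d = np.abs(f_vals - h_vals)
    area = np.sum(0.5 * (d[:-1] + d[1:]) * np.diff(x))  # trapezoidal integral of |f - h|
    return 1.0 - area / x[-1]

# Worked example from the text: f(x) = 2x, h(x) = 3x, x0 = 1  ->  s2 = 0.5
x = np.linspace(0.0, 1.0, 1001)
print(round(second_similarity(x, 2 * x, 3 * x), 3))  # 0.5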
In this embodiment, the first region is a region included in the first key contour.
In this embodiment, the second region is the region included in the preliminary matching contour.
In this embodiment, the first sequence of lateral depth change features is a sequence of lateral depth change features in the first region.
In this embodiment, the first longitudinal depth change feature sequence is a sequence of longitudinal depth change features in the first region.
In this embodiment, the second sequence of lateral depth variation features is a sequence of lateral depth variation features in the second region.
In this embodiment, the second longitudinal depth change feature sequence is a sequence of longitudinal depth change features in the second region.
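As one plausible reading of these feature sequences, the sketch below treats the lateral (row-wise) and longitudinal (column-wise) depth change features of a region as the successive differences of depth values along each row and each column; the exact feature definition used in the embodiment is not spelled out in the text, so this is an assumption.

import numpy as np

def depth_change_feature_sequences(region):
    """Return (lateral, longitudinal) depth change feature sequences for a
    rectangular depth region: one feature vector per row and per column, each
    being the successive depth differences along that row or column."""
    region = np.asarray(region, dtype=float)
    lateral = [np.diff(row) for row in region]         # lateral: along each row
    longitudinal = [np.diff(col) for col in region.T]  # longitudinal: along each column
    return lateral, longitudinal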
In this embodiment, calculating a third similarity between the first lateral depth variation feature sequence and the second lateral depth variation feature sequence and a fourth similarity between the first longitudinal depth variation feature sequence and the second longitudinal depth variation feature sequence includes:
Fitting a corresponding first lateral depth change curve based on first lateral depth change feature data contained in the first lateral depth change feature sequence, and fitting a corresponding second lateral depth change curve based on second lateral depth change feature data contained in the second lateral depth change feature sequence;
determining the sub-similarity between each first lateral depth change curve fitted from the first lateral depth change feature sequence and the corresponding second lateral depth change curve fitted from the second lateral depth change feature sequence;
and taking the average of the sub-similarities corresponding to all the first lateral depth change curves as the third similarity between the first lateral depth change feature sequence and the second lateral depth change feature sequence, with the fourth similarity between the first longitudinal depth change feature sequence and the second longitudinal depth change feature sequence obtained in the same way from the longitudinal depth change curves.
In this embodiment, the comprehensive similarity is the average of the second similarity, the third similarity and the fourth similarity.
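A minimal sketch of the third/fourth similarity and of the comprehensive similarity follows. The sub-similarity between two fitted depth change curves is not defined in the text above, so a simple range-normalized mean absolute difference is assumed here; all function names are illustrative.

import numpy as np

def sub_similarity(curve_a, curve_b):
    """Assumed sub-similarity between two fitted depth change curves:
    1 minus the mean absolute difference, normalised by the joint value range."""
    n = min(len(curve_a), len(curve_b))
    a = np.asarray(curve_a[:n], dtype=float)
    b = np.asarray(curve_b[:n], dtype=float)
    span = max(np.ptp(np.concatenate([a, b])), 1e-9)
    return 1.0 - np.mean(np.abs(a - b)) / span

def direction_similarity(first_curves, second_curves):
    """Average of the sub-similarities of corresponding curves; applied to the
    lateral curves this gives the third similarity, and to the longitudinal
    curves the fourth similarity."""
    return float(np.mean([sub_similarity(a, b)
                          for a, b in zip(first_curves, second_curves)]))

def comprehensive_similarity(s2, s3, s4):
    """Comprehensive similarity as the plain average of the three similarities."""
    return (s2 + s3 + s4) / 3.0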
The beneficial effects of the technology are as follows: based on the depth change features along the first key contour and the preliminary matching contour, and on the depth change features within the regions contained in the first key contour and the preliminary matching contour, further matching between the first key contour and the preliminary matching contour is achieved, local alignment and matching of the acoustic panoramic imaging image and the optical panoramic imaging image are realized, and the error of the fused panoramic image is reduced.
Example 9:
on the basis of embodiment 8, the output layer, referring to fig. 9, includes:
The data determining module is used for extracting panoramic point cloud data around the cooperative robot from the fusion result;
The model construction module is used for constructing a three-dimensional panoramic model around the cooperative robot based on the panoramic point cloud data and the point cloud construction method;
and the result determining module is used for determining a ranging result of the corresponding position based on the three-dimensional panoramic model.
In this embodiment, the panoramic point cloud data is the point cloud data of the panorama around the cooperative robot extracted from the fusion result.
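A rough sketch of how a ranging result for a given direction could be read out of the panoramic point cloud is given below; the cone-query approach and all parameter names are illustrative assumptions, not the point cloud model construction method of the embodiment.

import numpy as np

def ranging_result(points, direction, half_angle_deg=2.0):
    """Distance from the robot origin to the nearest point of the panoramic
    point cloud lying inside a narrow cone around `direction`.

    points is an (N, 3) array in the robot frame; returns inf when no point
    falls inside the cone."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    dists = np.linalg.norm(points, axis=1)
    cos_ang = (points @ d) / np.maximum(dists, 1e-9)
    mask = cos_ang >= np.cos(np.deg2rad(half_angle_deg))
    return float(dists[mask].min()) if mask.any() else float("inf")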
The beneficial effects of the technology are as follows: a three-dimensional panoramic model around the cooperative robot is constructed based on the fusion result, and the ranging result of the corresponding position is determined based on the three-dimensional panoramic model, so that a higher-precision ranging result is comprehensively determined after the various sensing data are fused.
Example 10:
on the basis of embodiment 9, the model building module, referring to fig. 10, includes:
the point cloud correction unit is used for calculating the local coherence corresponding to each point cloud data contained in the panoramic point cloud data, correcting the point cloud data with the local coherence smaller than a coherence threshold value, and obtaining corresponding accurate full-view point cloud data;
the model construction unit is used for constructing a three-dimensional panoramic model around the cooperative robot based on the accurate full-view point cloud data and the point cloud construction method.
In this embodiment, calculating the local coherence corresponding to each point cloud data included in the panoramic point cloud data includes:
and taking the ratio of the difference between each point cloud data and its adjacent point cloud data to the point cloud data itself as the local coherence of the corresponding point cloud data.
In this embodiment, the accurate full-view point cloud data is the point cloud data obtained by correcting the point cloud data whose local coherence is smaller than the coherence threshold.
In this embodiment, the coherence threshold is the minimum local coherence at which the coherence requirement is satisfied.
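A small sketch of the local-coherence check follows, applied to a 1-D slice of point values for simplicity. The coherence is computed as described above (difference to the neighbouring value divided by the value itself), flagged points follow the text's "smaller than the coherence threshold" rule as written, and replacing a flagged point by the mean of its neighbours is an assumption, since the correction step itself is not specified.

import numpy as np

def correct_by_local_coherence(values, coherence_threshold):
    """Compute the local coherence of each value and correct flagged values.

    Local coherence: |difference to the next value| / |own value|.
    Values whose coherence is below the threshold are replaced by the mean of
    their immediate neighbours (illustrative correction)."""
    values = np.asarray(values, dtype=float).copy()
    diffs = np.abs(np.diff(values, append=values[-1]))
    coherence = diffs / np.maximum(np.abs(values), 1e-9)
    for i in np.where(coherence < coherence_threshold)[0]:
        lo, hi = max(i - 1, 0), min(i + 2, len(values))
        values[i] = values[lo:hi].mean()
    return values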
The beneficial effects of the technology are as follows: the panoramic point cloud data is corrected based on the local coherence of each point cloud data, which further ensures the accuracy of the finally determined three-dimensional panoramic model.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (7)

1. A high-precision multi-sensor fusion ranging system of a cooperative robot, characterized in that the system comprises:
the acquisition layer is used for acquiring an acoustic panoramic imaging image and multi-sensing ranging data around the cooperative robot and an optical panoramic imaging image;
the fusion layer is used for carrying out feature extraction and feature association fusion on the acoustic panoramic imaging image, the multi-sensor ranging data and the optical panoramic imaging image to obtain a corresponding fusion result;
The output layer is used for constructing a three-dimensional panoramic model around the cooperative robot based on the fusion result and a point cloud model construction method, and determining a corresponding ranging result based on the three-dimensional panoramic model;
wherein, the fusion layer includes:
the image fusion module is used for fusing the acoustic panoramic imaging image and the optical panoramic imaging image to obtain a corresponding fused panoramic image;
the result fusion module is used for extracting image depth change characteristics contained in the fusion panoramic image and distance change characteristics in the multi-sensor distance measurement data, and carrying out corresponding region association correction on the fusion panoramic image based on the image depth change characteristics and the distance change characteristics to obtain a corresponding fusion result;
wherein, the image fusion module includes:
The image conversion unit is used for extracting sound field cloud data around the cooperative robot from the acoustic panoramic imaging image and converting the acoustic panoramic imaging image into a corresponding depth image based on the sound field cloud data;
A data extraction unit configured to extract first depth distribution data included in the depth image and second depth distribution data included in the optical panoramic imaging image;
The key matching unit is used for carrying out key matching on the first depth distribution data and the second depth distribution data to obtain a corresponding key matching result;
The image fusion unit is used for fusing the acoustic panoramic imaging image and the optical panoramic imaging image based on the key matching result to obtain a corresponding fused panoramic image;
wherein, the key matching unit includes:
A contour recognition subunit, configured to generate a corresponding first depth distribution map based on the first depth distribution data, and recognize a corresponding first key contour in the first depth distribution map, and generate a corresponding second depth distribution map based on the second depth distribution data, and recognize a corresponding second key contour in the second depth distribution map;
The center determining subunit is used for determining a first center point corresponding to the first key contour and a second center point corresponding to the second key contour;
the first determining subunit is used for taking the first center point as a rotation center, and determining a plurality of first rotation lines by rotating the straight line passing through the first center point for a preset number of times according to a preset rotation angle gradient;
The second determining subunit is configured to determine a corresponding first division point in the first rotation starting line based on a maximum horizontal coordinate difference value corresponding to the first key contour and a preset division interval, and use a straight line that passes through the first division point and is perpendicular to the first rotation starting line as a first division line corresponding to the first rotation starting line to obtain a first division line set corresponding to the first rotation starting line;
A third determining subunit, configured to determine a first division intersection point of the first key contour and a corresponding first division line, determine a first division distance between adjacent first division intersection points on each first division line, generate a corresponding first division shape matrix based on the first division distances included in all the first division lines included in the first division line set, and determine a corresponding first key vector based on the first division shape matrices corresponding to all the first rotation lines;
A fourth determining subunit, configured to determine a plurality of second rotation lines by using the second center point as a rotation center and rotating a straight line passing through the second center point for a preset number of times according to a preset rotation angle gradient;
A fifth determining subunit, configured to determine a corresponding second division point in the second rotation starting line based on a maximum horizontal coordinate difference value corresponding to the second key contour and a preset division interval, and obtain a second division line set corresponding to the second rotation starting line by using a straight line that passes through the second division point and is perpendicular to the second rotation starting line as a second division line corresponding to the second rotation starting line;
A sixth determining subunit, configured to determine second division intersection points of the second key contour and the corresponding second division lines, determine second division distances between adjacent second division intersection points on each second division line, generate corresponding second division shape matrices based on the second division distances in all second division lines included in the second division line set, and determine a corresponding second key vector based on the second division shape matrices corresponding to all second rotation lines;
A contour judging subunit, configured to calculate a first similarity between the first key contour and each second key contour based on the first key vector and the second key vector, determine, based on the first similarity, the second key contour corresponding to the first key contour as a preliminary matching contour, and judge whether more than one preliminary matching contour corresponds to the first key contour;
the first matching subunit is used for performing depth gradient feature matching on the first key contour and the corresponding preliminary matching contour when more than one preliminary matching contour corresponds to the first key contour, and determining a final matching contour corresponding to the first key contour;
A second matching subunit, configured to take the corresponding preliminary matching contour as the corresponding final matching contour when only one preliminary matching contour corresponding to the first key contour exists;
And the final matching subunit is used for matching the first depth distribution data with the second depth distribution data based on the first key profile and the corresponding final matching profile to obtain a corresponding key matching result.
2. The cooperative robot high precision multi-sensor fusion ranging system of claim 1, wherein the acquisition layer comprises:
the acoustic acquisition module is used for acquiring an acoustic panoramic imaging image around the cooperative robot based on the microphone acoustic array;
the multi-sensing ranging module is used for acquiring ultrasonic ranging data, millimeter wave ranging data and laser radar ranging data around the cooperative robot based on an ultrasonic sensor, a millimeter wave sensor and a laser radar sensor which are uniformly arranged outside the cooperative robot, and taking the ultrasonic ranging data, the millimeter wave ranging data and the laser radar ranging data as corresponding multi-sensing ranging data;
And the optical acquisition module is used for acquiring an optical panoramic imaging image around the cooperative robot by controlling the rotation of a high-definition camera arranged outside the cooperative robot.
3. The collaborative robotic high precision multi-sensor fusion ranging system of claim 1, wherein the acoustic acquisition module comprises:
The array determining unit is used for determining a corresponding microphone acoustic array model based on the operation environment parameters of the cooperative robot and the current required ranging precision;
An image generation unit for controlling a plurality of acoustic sensor units arranged outside the collaborative robot to start capturing acoustic signals from the surroundings of the collaborative robot based on the microphone acoustic array model, obtaining corresponding captured signals based on the acoustic signals, and generating an acoustic panoramic imaging map of the surroundings of the collaborative robot based on the captured signals and acoustic imaging technology.
4. A collaborative robotic high precision multisensor fusion ranging system as set forth in claim 3, wherein the array determination unit includes:
The parameter determining subunit is used for determining corresponding array parameters based on the operation environment parameters of the cooperative robot and the current required ranging precision;
And the model generation subunit is used for generating a corresponding microphone acoustic array model based on the array parameters and a preset array arrangement form.
5. The collaborative robotic high precision multi-sensor fusion ranging system of claim 1, wherein the first matching subunit comprises:
the first generation subunit is used for randomly selecting a first starting point from the first key contour, determining first depth gradient data corresponding to the first key contour in a clockwise direction from the first starting point based on first sub-depth distribution data corresponding to the first key contour, and generating a corresponding first depth gradient curve based on the first depth gradient data;
The second generation subunit is used for randomly selecting a second starting point in the preliminary matching contour, determining second depth gradient data corresponding to the preliminary matching contour from the second starting point along the clockwise direction based on second sub-depth distribution data corresponding to the preliminary matching contour, and generating a corresponding second depth gradient curve based on the second depth gradient data;
the ordinal number determining subunit is used for dividing the first depth gradual change curve into a plurality of gradual change curve segments based on preset dividing precision, and determining an initial ordinal number corresponding to the gradual change curve segments based on the position of the gradual change curve segments in the first gradual change curve;
A multiple sequential moving subunit, configured to set, in order from the initial ordinal number to the large number, a current ordinal number corresponding to each gradual change curve segment as 1, and regarding each setting process as a sequential moving process, regarding the gradual change curve segment with the current ordinal number of 1 as a leading curve segment corresponding to the sequential moving process, determining a sequential moving difference value between the new leading curve segment and 1 when each new leading curve segment is obtained, subtracting the sequential moving difference value from the current ordinal number corresponding to the gradual change curve segment with the current ordinal number greater than the initial ordinal number of the current leading curve segment, obtaining a new ordinal number corresponding to the gradual change curve segment, and meanwhile, adding the current ordinal number corresponding to the gradual change curve segment with the current ordinal number less than the initial ordinal number of the current leading curve segment to the initial ordinal number of the leading curve segment, obtaining a new ordinal number corresponding to the corresponding gradual change curve segment, and sorting all gradual change segments based on the latest determined ordinal numbers, so as to obtain a corresponding gradual change curve segment sorting sequence in the current sequential moving process;
The first calculating subunit is used for connecting all the gradual curve segments to generate corresponding depth gradual forward-shifting curves based on the gradual curve segment sequencing sequence, and calculating second similarity between the depth gradual forward-shifting curves and the second depth gradual curves;
The feature extraction subunit is used for determining a first region contained in the first key contour and a second region contained in the preliminary matching contour, determining a corresponding first transverse depth change feature sequence and a corresponding first longitudinal depth change feature sequence in the first region, and simultaneously determining a corresponding second transverse depth change feature sequence and a corresponding second longitudinal depth change feature sequence in the second region;
A second computing subunit configured to calculate a third similarity between the first lateral depth change feature sequence and the second lateral depth change feature sequence, and a fourth similarity between the first longitudinal depth change feature sequence and the second longitudinal depth change feature sequence;
and the final determining subunit is used for calculating the comprehensive similarity between the first key contour and the corresponding preliminary matching contour based on the second similarity, the third similarity and the fourth similarity, and taking the preliminary matching contour corresponding to the maximum comprehensive similarity as the corresponding final matching contour.
6. The cooperative robot high precision multisensor fusion ranging system of claim 5, wherein the output layer comprises:
The data determining module is used for extracting panoramic point cloud data around the cooperative robot from the fusion result;
The model construction module is used for constructing a three-dimensional panoramic model around the cooperative robot based on the panoramic point cloud data and the point cloud construction method;
and the result determining module is used for determining a ranging result of the corresponding position based on the three-dimensional panoramic model.
7. The collaborative robotic high precision multi-sensor fusion ranging system of claim 6, wherein the model construction module comprises:
the point cloud correction unit is used for calculating the local coherence corresponding to each point cloud data contained in the panoramic point cloud data, correcting the point cloud data with the local coherence smaller than a coherence threshold value, and obtaining corresponding accurate full-view point cloud data;
the model construction unit is used for constructing a three-dimensional panoramic model around the cooperative robot based on the accurate full-view point cloud data and the point cloud construction method.
CN202210598478.3A 2022-05-30 2022-05-30 High-precision multi-sensor fusion ranging system of cooperative robot Active CN114910901B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210598478.3A CN114910901B (en) 2022-05-30 2022-05-30 High-precision multi-sensor fusion ranging system of cooperative robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210598478.3A CN114910901B (en) 2022-05-30 2022-05-30 High-precision multi-sensor fusion ranging system of cooperative robot

Publications (2)

Publication Number Publication Date
CN114910901A CN114910901A (en) 2022-08-16
CN114910901B true CN114910901B (en) 2024-07-12

Family

ID=82768705

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210598478.3A Active CN114910901B (en) 2022-05-30 2022-05-30 High-precision multi-sensor fusion ranging system of cooperative robot

Country Status (1)

Country Link
CN (1) CN114910901B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110415342A (en) * 2019-08-02 2019-11-05 深圳市唯特视科技有限公司 A kind of three-dimensional point cloud reconstructing device and method based on more merge sensors
CN112799151A (en) * 2021-01-16 2021-05-14 蓓伟机器人科技(上海)有限公司 Six-dimensional accurate imaging, identifying and positioning technology and method for deep sea detection

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5276770A (en) * 1989-08-11 1994-01-04 Hughes Aircraft Company Training of neural network for multi-source data fusion
CN110873879A (en) * 2018-08-30 2020-03-10 沈阳航空航天大学 Device and method for deep fusion of characteristics of multi-source heterogeneous sensor
US11361470B2 (en) * 2019-05-09 2022-06-14 Sri International Semantically-aware image-based visual localization
CN110146846B (en) * 2019-06-06 2021-04-13 青岛理工大学 Sound source position estimation method, readable storage medium and computer equipment
CN111352112B (en) * 2020-05-08 2022-11-29 泉州装备制造研究所 Target detection method based on vision, laser radar and millimeter wave radar
CN113627473B (en) * 2021-07-06 2023-09-29 哈尔滨工程大学 Multi-mode sensor-based water surface unmanned ship environment information fusion sensing method
CN114255238A (en) * 2021-11-26 2022-03-29 电子科技大学长三角研究院(湖州) Three-dimensional point cloud scene segmentation method and system fusing image features

Also Published As

Publication number Publication date
CN114910901A (en) 2022-08-16

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant