CN112857314A - Bimodal terrain identification method, hardware system and sensor installation method thereof - Google Patents
- Publication number: CN112857314A
- Application number: CN202011602421.3A
- Authority: CN (China)
- Prior art keywords: module, terrain, FIFO, recognition, biped robot
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C7/00—Tracing profiles
- G01C7/02—Tracing profiles of land surfaces
- G01C7/04—Tracing profiles of land surfaces involving a vehicle which moves along the profile to be traced
Abstract
The invention provides a bimodal terrain recognition method, a hardware system, and a sensor installation method thereof. A terrain sensing, recognition, storage, and solving unit for a field-detection biped robot, based on the fusion of visual and tactile modalities, acquires road-condition information and learns prior knowledge while the biped robot walks in the field, enabling adaptive switching between a long-range recognition mode and a short-range recognition mode. A heterogeneous integrated structure of three processors, a DSP (digital signal processor), an FPGA (field-programmable gate array), and FLASH, serves as the hardware system: it acquires and generates in real time the navigation map under the visual semantic environment, the navigation map of tactile semantic information, and the real-time position and attitude of the robot, while the FLASH memory stores the visual images captured during walking and the data output by the sole sensor arrays. In addition, a video sensor, several inertial measurement units, and pressure sensor arrays are mounted on the biped robot body, ensuring that the information acquired during field walking is authentic and effective.
Description
Technical Field
The invention relates to the field of intelligent perception and navigation for field-detection biped robots, and in particular to a bimodal terrain recognition method, a hardware system, and a sensor installation method thereof.
Background
A mobile robot is a comprehensive intelligent system that integrates functions such as work-environment perception, decision planning, and intelligent control. The main types in use today include wheeled, walking, tracked, and crawling robots. Among walking robots there are one-legged, biped, and multi-legged designs. Compared with other types, the biped robot offers more degrees of freedom, greater flexibility and convenience, and adaptability to almost any complex terrain. If a biped robot cannot sense and accurately identify the terrain it is on, it cannot change its gait in time; sensing and accurately identifying terrain is therefore the key to flexible biped walking.
Disclosure of Invention
To solve these problems, the invention aims to provide a bimodal terrain recognition method, a hardware system, and a sensor installation method thereof. In the existing field of biped-robot sensing and recognition, a terrain sensing, recognition, storage, and solving unit for a field biped robot based on visual-tactile bimodal fusion is adopted. Information about the terrain on which the robot stands is obtained from a strapdown-mounted IMU (inertial measurement unit), a 3D vision sensor, and tactile sensors, and the terrain sensing, recognition, storage, and solving system identifies the robot's current terrain in real time, so that the gait of the biped robot can be adjusted promptly and accurately.
Specifically, the bimodal terrain recognition method provided by the invention is used for terrain recognition by the detection biped robot while walking in the field, and comprises the following steps:
S1: start the long-range recognition mode and acquire road-condition information, then go to S2 for terrain recognition based on prior knowledge; if the road-condition information cannot be acquired, go to S4 for mode switching;
S2: judge whether the terrain is safe; if so, continue the current mode, otherwise go to S3;
or judge whether the current terrain is passable; if so, continue the current mode, otherwise go to S3;
or judge whether the path needs to be re-planned; if not, continue the current mode, otherwise go to S3;
S3: re-plan the path or enter a pause mode;
S4: switch to the short-range recognition mode.
The long-range recognition mode in S1 performs terrain recognition using the visual modality, comprising:
S11: acquire in real time the color and depth images of the ground environment within a preset range while the biped robot walks;
S12: extract feature points from each acquired frame;
S13: obtain the navigation map in the visual semantic environment by a visual semantic segmentation method.
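As a toy illustration of the feature-point step (S12), the sketch below flags pixels whose local intensity gradient is large. A real system would use a proper detector such as ORB or FAST; this stdlib-only version and its names are assumptions, not the patent's method.

```python
def feature_points(img, thresh=10):
    """img: 2D list of grayscale values; returns candidate (row, col) points
    where the central-difference gradient magnitude exceeds thresh."""
    pts = []
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            gx = img[r][c + 1] - img[r][c - 1]   # horizontal gradient
            gy = img[r + 1][c] - img[r - 1][c]   # vertical gradient
            if gx * gx + gy * gy > thresh * thresh:
                pts.append((r, c))
    return pts
```

In the described pipeline such points would feed the camera pose solution and point-cloud reconstruction before semantic segmentation.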
Further, the short-range recognition mode in S4 performs terrain recognition using the fused visual and tactile bimodality, comprising:
S41: adjust the posture of the biped robot into the short-range recognition mode according to the platform attitude information solved by the inertial measurement unit;
S42: extract pressure-signal features from the output signals of the biped robot's sole sensor array, and complete a local terrain classification algorithm based on a deep neural network through offline learning and online detection, obtaining the navigation map of tactile semantic information;
S43: acquire in real time the color and depth images of the ground environment within a preset range in the short-range recognition mode, extract feature points from each acquired frame, and obtain the navigation map in the visual semantic environment by a visual semantic segmentation method;
S44: fuse the results of S42 and S43 with an association mechanism algorithm to obtain the short-range recognition result.
The states in which road-condition information cannot be acquired include at least limited light at long range and occlusion.
As another preferred aspect, the invention further provides a hardware system for the bimodal terrain recognition method, comprising:
A storage and solving hardware system: a heterogeneous integrated structure of three processors, a DSP, an FPGA, and FLASH. The FPGA unit further comprises the FIFO-1, FIFO-2, FIFO-3, and FIFO-4 modules and the processors CPU1 and CPU2. CPU1 and CPU2 are connected to the DSP, which fuses their processing results with an association mechanism algorithm to obtain the long-range or short-range recognition result, used for terrain recognition while the detection biped robot walks in the field. The FIFO-4 module is connected to 5G communication equipment for remote control of the robot.
A FLASH memory: connected to the FIFO-1 module, the FIFO-2 module, and an upper computer; it stores in real time the visual images acquired while the robot walks and the output data of the sole sensor arrays. The upper computer reads the data from the FLASH memory for offline data processing.
An inertial-Beidou integrated navigation module: composed of the inertial measurement unit IMU1, a BDS module, and the FIFO-3 module; the CPU2 processor performs the integrated navigation solution. The BDS module is connected to the FIFO-3 module and, from the platform pose obtained by jointly solving the inertial measurement data and the navigation data, yields the real-time position and attitude of the detection biped robot as it advances.
Further, the hardware system also comprises:
an image acquisition unit and a pressure sensing unit, connected to the FIFO-1 module and the FIFO-2 module respectively.
The image acquisition unit uses a 3D camera to collect in real time the color and depth images of the ground environment while the detection biped robot walks and sends them to the FIFO-1 module.
The pressure sensing unit comprises a pressure sensor array mounted under the feet of the biped robot; it acquires pressure signals in real time, which are processed by the signal conditioning circuit and the analog-to-digital conversion module and then sent to the FIFO-2 module.
Further, in the hardware system:
the CPU1 calls the visual data in the FIFO-1 module in real time, extracts feature points from each frame, and obtains the navigation map in the visual semantic environment by a visual semantic segmentation method;
the CPU2 extracts pressure-signal features from the output signals of the biped robot's sole sensor array and completes a local terrain classification algorithm based on a deep neural network through offline learning and online detection, obtaining the navigation map of tactile semantic information;
the DSP also controls switching of the terrain recognition mode: when road-condition information cannot be acquired, the long-range recognition mode is switched to the short-range recognition mode.
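To make the CPU2 step concrete, here is a hedged sketch of reducing one stance-phase window of sole-array samples to simple statistics that a downstream classifier (the patent's deep neural network) could consume. The patent does not specify the features; every name below is an assumption.

```python
import statistics

def pressure_features(window):
    """window: list of per-sample lists of sensor readings (N samples x M sensors).

    Returns illustrative summary features of the total sole force."""
    totals = [sum(sample) for sample in window]   # total force per sample
    return {
        "mean_force": statistics.fmean(totals),
        "peak_force": max(totals),
        # fraction of samples in which the sole carries any load
        "contact_ratio": sum(t > 0 for t in totals) / len(totals),
    }
```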
Further, in the hardware system:
the FLASH memory stores in real time the data of the image acquisition unit, the pressure sensing unit, and the inertial measurement units, and generates the trajectory of the biped robot's field walking for prior-knowledge learning.
As another preferred embodiment, the invention further provides a sensor installation method for the hardware system of the bimodal terrain recognition method, specifically comprising:
mounting a 3D camera on the head of the biped robot to measure image information of the environmental terrain; also mounting the inertial measurement unit IMU1, which senses the attitude of the robot's head pan-tilt platform; and installing the terrain recognition, storage, and solving system.
The pressure sensor arrays are mounted under the feet of the biped robot to measure the state of the terrain environment beneath the robot's feet.
Further, the sensor installation method also comprises mounting inertial measurement units on the legs of the biped robot, specifically: the inertial measurement units IMU2 and IMU3 are mounted on the left and right thighs, and IMU4 and IMU5 on the left and right shanks; these four units are connected to the analog-to-digital conversion module and measure in real time the changing attitude of the robot's two legs.
In summary, the invention provides a bimodal terrain recognition method, a hardware system, and a sensor installation method thereof. A terrain sensing, recognition, storage, and solving unit for a field-detection biped robot, based on visual-tactile bimodal fusion, acquires road-condition information and learns prior knowledge while the biped robot walks in the field, enabling adaptive switching between the long-range and short-range recognition modes. A heterogeneous integrated structure of three processors, a DSP (digital signal processor), an FPGA (field-programmable gate array), and FLASH, serves as the hardware system that acquires and generates in real time the navigation map under the visual semantic environment, the navigation map of tactile semantic information, and the robot's real-time position and attitude. In addition, unlike existing robots, the invention mounts a video sensor on the head of the biped robot and several inertial measurement units and pressure sensor arrays on its legs and feet, ensuring that the information acquired during field walking is authentic and effective.
Drawings
FIG. 1 is a diagram of a bimodal terrain recognition probe biped robot in one embodiment.
Fig. 2 is a schematic diagram illustrating a visual and tactile bimodal terrain recognition process of the biped robot shown in fig. 1.
Fig. 3 shows the hardware system for the bimodal terrain recognition method in an embodiment.
Detailed Description
The bimodal terrain recognition method and hardware system of the present invention, and the sensor installation method thereof, will be described in further detail with reference to the following embodiments and accompanying drawings.
Fig. 1 is a schematic view of the bimodal terrain recognition detection biped robot provided by the invention. The robot body has a bilaterally symmetric structure and specifically comprises:
Head: a video sensor, preferably a 3D camera, is mounted on the head, together with the inertial measurement unit IMU1, which senses the attitude of the robot's head pan-tilt platform; the terrain recognition, storage, and solving system is also installed here.
Legs: each comprises a thigh and a shank connected by a knee joint; the inertial measurement units IMU2, IMU3, IMU4, and IMU5 are mounted on the left thigh, right thigh, left shank, and right shank, respectively.
Feet: a pressure sensor array, connected below each shank and mounted under the foot, senses the road condition of the ground it contacts.
To help the biped robot understand an unknown environment, the terrain it is currently walking on must be identified quickly so that it can move smoothly and accurately. The bimodal terrain recognition detection biped robot uses a combined visual/tactile recognition, storage, and solving system for terrain recognition while walking in the field. Visual recognition is terrain recognition based on real-time image data acquired by the 3D camera; tactile recognition is contact-based recognition using the biped pressure sensor arrays and the onboard inertial measurement units (IMUs).
As shown in fig. 2, in the bimodal terrain recognition method provided by the invention, vision handles long-range terrain recognition: the terrain ahead is perceived and recognized visually, and prior knowledge is used to judge whether the ground is safe, whether it is passable, and whether the path must be re-planned. If the terrain ahead cannot be judged visually because light at long range is limited or the view is occluded, the method switches to the visual/tactile fusion short-range recognition mode and performs short-range terrain recognition using the biped pressure-sensor-array signals and short-range visual image information.
Short-range visual/tactile fusion recognition can adopt either data-level or semantic-level fusion. Data-level fusion operates on the raw bottom-layer data and involves analyzing, processing, and associating two very different kinds of data, visual and tactile, so it is difficult to implement and demands much of the hardware. The invention therefore adopts semantic-level fusion of vision and touch for the detection biped robot's visual/tactile fusion scheme.
Specifically, the bimodal terrain recognition method, used for terrain recognition by the detection biped robot while walking in the field, comprises the following steps:
S1: start the long-range recognition mode and acquire road-condition information, then go to S2 for terrain recognition based on prior knowledge; if the road-condition information cannot be acquired, go to S4 for mode switching;
S2: judge whether the terrain is safe; if so, continue the current mode, otherwise go to S3;
or judge whether the current terrain is passable; if so, continue the current mode, otherwise go to S3;
or judge whether the path needs to be re-planned; if not, continue the current mode, otherwise go to S3;
S3: re-plan the path or enter a pause mode;
S4: switch to the short-range recognition mode.
Further, the long-range recognition mode in S1 performs terrain recognition using the visual modality, comprising:
S11: acquire in real time the color and depth images of the ground environment within a preset range while the biped robot walks;
S12: extract feature points from each acquired frame;
S13: obtain the navigation map in the visual semantic environment by a visual semantic segmentation method.
Further, the short-range recognition mode in S4 performs terrain recognition using the fused visual and tactile bimodality, comprising:
S41: adjust the posture of the biped robot into the short-range recognition mode according to the platform attitude information solved by the inertial measurement unit;
S42: extract pressure-signal features from the output signals of the biped robot's sole sensor array, and complete a local terrain classification algorithm based on a deep neural network through offline learning and online detection, obtaining the navigation map of tactile semantic information;
S43: acquire in real time the color and depth images of the ground environment within a preset range in the short-range recognition mode, extract feature points from each acquired frame, and obtain the navigation map in the visual semantic environment by a visual semantic segmentation method;
S44: fuse the results of S42 and S43 with an association mechanism algorithm to obtain the short-range recognition result.
The states in which road-condition information cannot be acquired include at least limited light at long range and occlusion.
To illustrate the bimodal terrain recognition method in more detail, the following example is given:
While the biped robot walks in the field in the long-range recognition mode, the 3D camera collects in real time color and depth images of the ground environment within a preset range. Feature points are extracted from each acquired frame; the camera's position and attitude are solved, the depth images are reconstructed into point clouds, and the color images are segmented semantically, yielding the navigation map under the visual semantic environment in the long-range mode, i.e., the terrain recognition result. During monitoring and continuous walking, the platform position and attitude solved by the inertial measurement unit are further combined to obtain the robot's real-time position and attitude as it advances. When road-condition information cannot be acquired, the robot's posture is automatically adjusted, according to the platform pose solved by the inertial measurement unit, into the short-range recognition mode. A local terrain classification algorithm based on a deep neural network is then completed through feature extraction from the biped pressure-sensor-array output signals, offline learning, and online detection, yielding the navigation map of tactile semantic information. Finally, an association mechanism algorithm fuses this tactile semantic map with the navigation map under the visual semantic environment in the short-range mode, so that the two detections complement each other and the terrain recognition result in the short-range mode is obtained.
Fig. 3 shows the hardware system for the bimodal terrain recognition method provided by the invention, which specifically includes:
Visual image acquisition: preferably a 3D camera, which collects in real time the color and depth images of the ground environment while the detection biped robot walks; the 3D camera's enhanced light sensitivity improves the robot's perception of terrain in dark environments.
A storage and solving hardware system: a heterogeneous integrated structure of three processors, a DSP (digital signal processor), an FPGA (field-programmable gate array), and FLASH (flash memory). The FPGA unit further comprises the FIFO-1, FIFO-2, FIFO-3, and FIFO-4 modules and the processors CPU1 and CPU2. CPU1 and CPU2 are connected to the DSP, which fuses their processing results with an association mechanism algorithm to obtain the long-range or short-range recognition result, used for terrain recognition while the detection biped robot walks in the field. The FIFO-4 module is connected to 5G communication equipment for remote control of the robot.
The FIFO modules are first-in first-out data buffers: data leave in the order they arrive, so each module serves as a buffer for its sensor stream. The ARM-architecture CPU1 calls the visual data in the FIFO-1 module in real time to extract features and produce the navigation map under the visual semantic environment. The data-processing DSP is connected to CPU1 and CPU2 for fast data communication and visual-tactile semantic fusion: it receives the feature data extracted by CPU1 and CPU2 and completes the fusion and recognition of the two kinds of semantic information.
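A software analogue may make the FIFO modules' buffering role clearer: a bounded first-in first-out queue in which data leave in arrival order. This is illustrative only; the hardware FIFOs' depth and overflow behavior are not specified in the patent (here, as an assumption, the oldest sample is dropped when the buffer is full).

```python
from collections import deque

class Fifo:
    """Bounded first-in first-out buffer (illustrative analogue of FIFO-1..FIFO-4)."""

    def __init__(self, depth):
        self.buf = deque(maxlen=depth)  # oldest entries drop when full

    def push(self, sample):
        self.buf.append(sample)

    def pop(self):
        """Return the oldest buffered sample, or None when empty."""
        return self.buf.popleft() if self.buf else None
```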
A FLASH memory: the system is respectively connected with the FIFO-1 module, the FIFO-2 module and the upper computer and is used for storing the visual image acquired in the walking process of the robot and the output data of the foot sensor array in real time; and the upper computer reads the data in the FLASH memory to complete off-line data processing.
An inertial-Beidou integrated navigation module: composed of the inertial measurement unit IMU1, a BDS (BeiDou Navigation Satellite System) module, and the FIFO-3 module; the CPU2 processor performs the integrated navigation solution. The BDS module is connected to the FIFO-3 module and, from the platform pose obtained by jointly solving the inertial measurement data and the navigation data, provides the real-time position and attitude of the detection biped robot as it advances. When the data volume is small and the required communication speed is modest, the BDS module communicates with the FIFO-3 module over a serial link; when the data volume is large and high communication speed is required, it communicates with the FIFO-3 module over a parallel bus.
Further, the system also comprises:
an image acquisition unit and a pressure sensing unit, connected to the FIFO-1 module and the FIFO-2 module respectively;
the image acquisition unit uses the 3D camera to collect in real time the color and depth images of the ground environment while the detection biped robot walks and sends them to the FIFO-1 module;
the pressure sensing unit comprises the pressure sensor arrays mounted under the feet of the biped robot; it acquires pressure signals in real time, which are processed by the signal conditioning circuit and the analog-to-digital conversion module and then sent to the FIFO-2 module.
Further:
the CPU1 calls the visual data in the FIFO-1 module in real time, extracts feature points from each frame, and obtains the navigation map in the visual semantic environment by a visual semantic segmentation method;
the CPU2 extracts pressure-signal features from the output signals of the biped robot's sole sensor arrays and completes a local terrain classification algorithm based on a deep neural network through offline learning and online detection, obtaining the navigation map of tactile semantic information;
the DSP also controls the switching of the terrain recognition mode: when road-condition information cannot be acquired, the long-range recognition mode is switched to the short-range recognition mode.
The FLASH memory stores in real time the data of the image acquisition unit, the pressure sensing unit, and the inertial measurement units, and generates the trajectory of the biped robot's field walking for prior-knowledge learning.
The hardware system also comprises a system power supply module which is used for providing stable power supply for the hardware system.
As another preferred embodiment, the present invention further provides a sensor installation method for a hardware system of the bimodal terrain recognition method, specifically including:
the method comprises the following steps that a 3D camera is installed on the head of the biped robot and used for measuring environmental terrain image information; an inertial measurement unit IMU1 is also installed and used for sensing the attitude information of the head holder of the robot; a terrain recognition, storage and calculation system is also installed;
the pressure sensor array is arranged at the bottoms of the feet of the biped robot and used for measuring the topographic environment state information of the bottoms of the feet of the robot.
Further, the sensor installation method also comprises: the inertial measurement units IMU2 and IMU3 are mounted on the left and right thighs of the biped robot, and IMU4 and IMU5 on the left and right shanks; these four units are connected to the analog-to-digital conversion module and measure in real time the changing attitude of the robot's two legs.
Real-time data are also acquired from the inertial measurement units IMU1, IMU2, IMU3, IMU4, and IMU5 and from the robot's biped pressure-sensor-array signals. The inputs of the signal conditioner are connected to the five IMU modules and the two pressure sensor arrays; its output feeds the A/D conversion module, which writes the digitized signals into the FIFO-2 module, and the ARM-architecture CPU2 calls them to extract features and complete the tactile semantic information.
The above embodiments express only several implementations of the invention; although their description is specific and detailed, it should not be construed as limiting the scope of the invention. A person skilled in the art can make several variations and improvements without departing from the inventive concept, and these fall within the protection scope of the invention. The protection scope of this patent is therefore defined by the appended claims.
Claims (10)
1. A bimodal terrain recognition method, characterized in that it is used for terrain recognition by a detection biped robot while walking in the field, and comprises the following steps:
S1: start the long-range recognition mode and acquire road condition information; based on prior knowledge, go to S2 for terrain recognition; if road condition information cannot be acquired, go to S4 for mode switching;
S2: judge whether the terrain is safe; if so, continue the current mode; otherwise go to S3;
or judge whether the current terrain is passable; if so, continue the current mode; otherwise go to S3;
or judge whether the path needs to be replanned; if not, continue the current mode; otherwise go to S3;
S3: replan the path or enter a pause mode;
S4: switch to the short-range recognition mode.
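The decision flow of steps S1-S4 can be sketched as a small mode-selection function. This is a hedged illustration only: the function name, mode names, and boolean inputs are assumptions introduced for clarity, not part of the claimed method.

```python
from enum import Enum

class Mode(Enum):
    LONG_RANGE = "remote/long-range recognition"   # vision-only mode (S1)
    SHORT_RANGE = "short-range recognition"        # vision + tactile mode (S4)
    REPLAN_OR_PAUSE = "replan or pause"            # fallback of S3

def select_mode(road_info_available: bool, terrain_safe: bool,
                terrain_passable: bool, replan_needed: bool) -> Mode:
    # S1: without road condition information, switch modes (go to S4)
    if not road_info_available:
        return Mode.SHORT_RANGE
    # S2: any of the three checks failing sends control to S3
    if not terrain_safe or not terrain_passable or replan_needed:
        return Mode.REPLAN_OR_PAUSE
    # otherwise continue the current long-range mode
    return Mode.LONG_RANGE
```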
2. The bimodal terrain recognition method of claim 1, wherein the long-range recognition mode in S1 uses the visual modality for terrain recognition, comprising:
S11: acquiring a color image and a depth image of the ground environment within a preset range in real time while the biped robot walks;
S12: extracting feature points from each acquired image frame;
S13: obtaining a navigation map of the visual semantic environment by means of a visual semantic segmentation method.
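As one illustration of the feature-point extraction in S12, a per-pixel corner response can be computed from image gradients. The claim does not specify a detector; this NumPy sketch of a Harris-style measure (without gradient smoothing, for brevity) is an assumption for illustration only.

```python
import numpy as np

def harris_response(gray: np.ndarray, k: float = 0.04) -> np.ndarray:
    """Per-pixel corner response; high values mark candidate feature points."""
    gy, gx = np.gradient(gray.astype(float))   # image gradients
    ixx, iyy, ixy = gx * gx, gy * gy, gx * gy  # structure-tensor entries
    det = ixx * iyy - ixy * ixy
    trace = ixx + iyy
    return det - k * trace ** 2                # Harris corner measure
```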
3. The bimodal terrain recognition method of claim 1, wherein the short-range recognition mode of S4 uses bimodal visual-tactile fusion for terrain recognition, comprising:
S41: adjusting the posture of the biped robot into the short-range recognition mode according to the platform attitude information solved by the inertial measurement unit;
S42: extracting pressure-signal features from the output of the biped robot's sole sensor array, and completing a local terrain classification algorithm based on a deep neural network through offline learning and online detection to obtain a navigation map of tactile semantic information;
S43: acquiring a color image and a depth image of the ground environment within a preset range in real time in the short-range recognition mode, extracting feature points from each acquired image frame, and obtaining a navigation map of the visual semantic environment by means of a visual semantic segmentation method;
S44: fusing the results of S42 and S43 with an association mechanism algorithm to obtain the short-range recognition result.
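The final fusion step is specified only as an "association mechanism algorithm", with no details given. A minimal stand-in, assuming both modalities produce per-cell class-probability grids of the same shape, is a weighted per-cell combination; the weighting scheme and grid representation are illustrative assumptions, not the patented algorithm.

```python
import numpy as np

def fuse_semantic_maps(visual: np.ndarray, tactile: np.ndarray,
                       w_visual: float = 0.6) -> np.ndarray:
    """Fuse two (H, W, n_classes) probability grids into per-cell labels."""
    assert visual.shape == tactile.shape
    fused = w_visual * visual + (1.0 - w_visual) * tactile
    return fused.argmax(axis=-1)  # winning terrain class per map cell
```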
4. The bimodal terrain recognition method of claim 1, wherein the conditions under which road condition information cannot be acquired include at least a distant low-light condition or an occluded condition.
5. A hardware system for the bimodal terrain recognition method according to any one of claims 1-4, characterized by comprising:
a storage and solving hardware system: a heterogeneous integrated structure of a DSP, an FPGA and a FLASH memory; the FPGA unit further comprises a FIFO-1 module, a FIFO-2 module, a FIFO-3 module, a FIFO-4 module, a CPU1 and a CPU2; the CPU1 and the CPU2 are connected to the DSP, which fuses their processing results with an association mechanism algorithm to obtain the long-range or short-range recognition result, used for terrain recognition of the biped robot walking in the field; the FIFO-4 module is connected to 5G communication equipment for remote control of the robot;
the FIFO-3 module is connected to 5G communication equipment for remote control;
a FLASH memory: connected to the FIFO-1 module, the FIFO-2 module and an upper computer, and used for storing in real time the visual images acquired while the robot walks and the output data of the foot sensor arrays; the upper computer reads the data in the FLASH memory to complete offline data processing;
an inertial-BeiDou integrated navigation system module: composed of the inertial measurement unit IMU1, a BDS module and the FIFO-3 module, with the CPU2 processor completing the integrated navigation solution; the BDS module is connected to the FIFO-3 module and is used for obtaining the real-time position and attitude of the detected biped robot while it advances, from the platform pose solved jointly from the inertial measurement data and the navigation data.
6. The hardware system of claim 5, further comprising:
an image acquisition unit and a pressure sensing unit, connected to the FIFO-1 module and the FIFO-2 module respectively;
the image acquisition unit uses a 3D camera to acquire, in real time, the color image and depth image of the ground environment while the biped robot walks, and sends them to the FIFO-1 module;
the pressure sensing unit comprises the pressure sensor arrays mounted on the soles of the biped robot's feet and is used for acquiring pressure signals in real time; the signals are processed by the signal conditioning circuit and the analog-to-digital conversion module and then sent to the FIFO-2 module.
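The FIFO-1 and FIFO-2 modules here are FPGA hardware buffers sitting between the acquisition front ends and the CPUs. As a software analogue of that producer/consumer pattern (an illustration only, not the claimed FPGA hardware), a bounded FIFO might look like:

```python
from collections import deque

class SampleFIFO:
    """Bounded FIFO: producer pushes A/D samples, consumer drains batches."""
    def __init__(self, depth: int):
        self._buf = deque(maxlen=depth)  # oldest samples drop when full

    def push(self, sample) -> None:      # producer side (A/D output)
        self._buf.append(sample)

    def pop_all(self) -> list:           # consumer side (CPU read)
        batch = list(self._buf)
        self._buf.clear()
        return batch
```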
7. The hardware system of claim 6, wherein:
the CPU1 reads the visual data in the FIFO-1 module in real time, extracts feature points from each image frame, and obtains a navigation map of the visual semantic environment by means of a visual semantic segmentation method;
the CPU2 extracts pressure-signal features from the output of the biped robot's sole sensor arrays, and completes a local terrain classification algorithm based on a deep neural network through offline learning and online detection to obtain a navigation map of tactile semantic information;
the DSP is also used for controlling the switching of the terrain recognition mode: when road condition information cannot be acquired, the long-range recognition mode is switched to the short-range recognition mode.
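Before the deep-network terrain classifier runs on CPU2, pressure-signal features must be extracted from each sole-array sample. The patent does not enumerate which features; the simple statistics below are plausible assumptions chosen purely for illustration.

```python
import numpy as np

def pressure_features(frame: np.ndarray) -> np.ndarray:
    """frame: 1-D array holding one pressure-sensor-array sample."""
    return np.array([
        frame.mean(),        # mean contact pressure
        frame.std(),         # spatial variability (roughness cue)
        frame.max(),         # peak pressure
        (frame > 0).mean(),  # contact-area fraction
    ])
```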
8. The hardware system of claim 7, wherein the FLASH memory stores in real time the data of the image acquisition unit, the pressure sensing unit and the inertial measurement units, and generates track information of the biped robot walking in the field for prior-knowledge learning.
9. A sensor mounting method for the hardware system of claim 8, comprising:
mounting a 3D camera on the head of the biped robot to measure environmental terrain image information; also mounting an inertial measurement unit IMU1 on the head to sense the attitude information of the robot's head gimbal; and also installing the terrain-recognition storage and computation system;
mounting a pressure sensor array on the soles of the biped robot's feet to measure the state of the terrain environment beneath the robot's feet.
10. The sensor mounting method of claim 9, further comprising: mounting inertial measurement units on both legs of the biped robot, wherein inertial measurement units IMU2 and IMU3 are mounted on the left and right thighs respectively, and IMU4 and IMU5 on the left and right shanks respectively; these inertial measurement units are connected to the analog-to-digital conversion module and measure the posture change information of the robot's two legs in real time.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011602421.3A CN112857314A (en) | 2020-12-30 | 2020-12-30 | Bimodal terrain identification method, hardware system and sensor installation method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112857314A true CN112857314A (en) | 2021-05-28 |
Family
ID=75998389
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011602421.3A Pending CN112857314A (en) | 2020-12-30 | 2020-12-30 | Bimodal terrain identification method, hardware system and sensor installation method thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112857314A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6317652B1 (en) * | 1998-09-14 | 2001-11-13 | Honda Giken Kogyo Kabushiki Kaisha | Legged mobile robot |
CN106547237A (en) * | 2016-10-24 | 2017-03-29 | 华中光电技术研究所(中国船舶重工集团公司第七七研究所) | A kind of navigation calculation device based on heterogeneous polynuclear framework |
CN109249429A (en) * | 2018-09-25 | 2019-01-22 | 安徽果力智能科技有限公司 | A kind of biped robot's classification of landform system |
CN110956651A (en) * | 2019-12-16 | 2020-04-03 | 哈尔滨工业大学 | Terrain semantic perception method based on fusion of vision and vibrotactile sense |
CN111080659A (en) * | 2019-12-19 | 2020-04-28 | 哈尔滨工业大学 | Environmental semantic perception method based on visual information |
CN111179344A (en) * | 2019-12-26 | 2020-05-19 | 广东工业大学 | Efficient mobile robot SLAM system for repairing semantic information |
CN111444838A (en) * | 2020-03-26 | 2020-07-24 | 安徽果力智能科技有限公司 | Robot ground environment sensing method |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114474053A (en) * | 2021-12-30 | 2022-05-13 | 暨南大学 | Robot terrain recognition and speed control method and system |
CN114474053B (en) * | 2021-12-30 | 2023-01-17 | 暨南大学 | Robot terrain recognition and speed control method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10328575B2 (en) | Method for building a map of probability of one of absence and presence of obstacles for an autonomous robot | |
Talukder et al. | Real-time detection of moving objects from moving vehicles using dense stereo and optical flow | |
US9563528B2 (en) | Mobile apparatus and localization method thereof | |
KR20190024962A (en) | Systems and methods for robotic behavior around moving bodies | |
CN205898143U (en) | Robot navigation system based on machine vision and laser sensor fuse | |
EP2590042A1 (en) | Mobile apparatus performing position recognition using several local filters and a fusion filter | |
JP2008197884A (en) | Generation method for environmental map and mobile robot | |
Zhai et al. | Coal mine rescue robots based on binocular vision: A review of the state of the art | |
US20220245856A1 (en) | Position identification system for construction machinery | |
CN107038406B (en) | Method for analyzing gestures | |
US20110173831A1 (en) | Autonomous system and method for determining information representative of the movement of an articulated chain | |
CN114683290B (en) | Method and device for optimizing pose of foot robot and storage medium | |
CN107066937A (en) | The apparatus and method of the curb stone in surrounding environment for detecting vehicle and for vehicle curb stone control system | |
Weon et al. | Intelligent robotic walker with actively controlled human interaction | |
CN112857314A (en) | Bimodal terrain identification method, hardware system and sensor installation method thereof | |
CN115435772A (en) | Method and device for establishing local map, electronic equipment and readable storage medium | |
CN113701750A (en) | Fusion positioning system of underground multi-sensor | |
Miyagusuku et al. | Toward autonomous garbage collection robots in terrains with different elevations | |
CN113158779A (en) | Walking method and device and computer storage medium | |
CN112513931A (en) | System and method for creating a single-view composite image | |
CN110232301A (en) | A kind of detection method of human body, device and storage medium | |
CN110216675B (en) | Control method and device of intelligent robot, intelligent robot and computer equipment | |
CN106940208A (en) | Robot target demarcates the system with oneself state monitoring function | |
Wang | Autonomous mobile robot visual SLAM based on improved CNN method | |
Crnokić et al. | Fusion of infrared sensors and camera for mobile robot navigation system-simulation scenario |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20210528 |