CN111552289B - Detection method, virtual radar device, electronic apparatus, and storage medium

Detection method, virtual radar device, electronic apparatus, and storage medium

Info

Publication number
CN111552289B
Authority
CN
China
Prior art keywords
coordinate system
coordinate
area
point
travelable
Prior art date
Legal status
Active
Application number
CN202010352986.4A
Other languages
Chinese (zh)
Other versions
CN111552289A (en)
Inventor
侯林杰
沈孝通
秦宝星
程昊天
Current Assignee
Shanghai Gaussian Automation Technology Development Co Ltd
Suzhou Gaozhixian Automation Technology Co Ltd
Original Assignee
Shanghai Gaussian Automation Technology Development Co Ltd
Suzhou Gaozhixian Automation Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Gaussian Automation Technology Development Co Ltd, Suzhou Gaozhixian Automation Technology Co Ltd filed Critical Shanghai Gaussian Automation Technology Development Co Ltd
Priority to CN202010352986.4A
Publication of CN111552289A
Application granted
Publication of CN111552289B

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0242 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0259 Control of position or course in two dimensions specially adapted to land vehicles using magnetic or electromagnetic means
    • G05D1/0261 Control of position or course in two dimensions specially adapted to land vehicles using magnetic or electromagnetic means using magnetic plots
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/028 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using a RF signal

Abstract

The application provides a detection method, a virtual radar device, an electronic device, and a storage medium. The method comprises the following steps: acquiring visual data collected over a travelable area; extracting travelable area edge points from the visual data; converting the travelable area edge points in a first coordinate system into target points in a second coordinate system according to a calibrated coordinate conversion rule; and, for each of a plurality of sector scanning areas of a virtual radar constructed in the second coordinate system, extracting the target point closest to the origin of the virtual radar to form the travelable area boundary points in the second coordinate system. The scheme detects the travelable area without relying on a hardware laser radar while providing obstacle-detection capability comparable to that of a hardware laser radar, thereby reducing cost and enabling automatic obstacle avoidance.

Description

Detection method, virtual radar device, electronic apparatus, and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a detection method, a virtual radar apparatus, an electronic device, and a computer-readable storage medium.
Background
A mobile robot is a machine device that automatically performs work. It can accept human commands, run pre-programmed routines, or act according to strategies formulated with artificial intelligence technology. Unmanned vehicles, delivery robots, cleaning robots, balance scooters, and the like are common mobile robots.
In order to act autonomously, one key requirement for a mobile robot is reliable obstacle avoidance. In the prior art, a long-distance laser radar and a millimeter wave radar are generally used together, and their sensing results are fused to detect target objects: point clouds from the laser radar are clustered and segmented based on the Euclidean distances between data points, a convex hull is built for each cluster and treated as a potential obstacle, and the object detection results of the millimeter wave radar are then fused in to confirm the obstacle. This technique, however, has the following problem: the hardware laser radar is expensive and therefore difficult to popularize on mobile robots.
Disclosure of Invention
The embodiment of the application provides a method for detecting a travelable area, and the obstacle avoidance can be performed without a hardware laser radar, so that the cost is reduced.
The embodiment of the application provides a detection method, which is characterized by comprising the following steps:
acquiring visual data collected on a travelable area;
extracting travelable area edge points in the visual data;
converting the edge points of the travelable area in the first coordinate system into target points in the second coordinate system according to a calibrated coordinate conversion rule;
and constructing a plurality of sector scanning areas of the virtual radar in a second coordinate system, and extracting a target point which is closest to the origin of the virtual radar in each sector scanning area to form a travelable area boundary point in the second coordinate system.
According to the detection method provided by the embodiment of the application, the travelable area edge points in the first coordinate system are first determined from visual data collected by a camera or the like; their positions in the second coordinate system are then determined through coordinate transformation, and accurate travelable area boundary points are obtained through screening. Consequently, a hardware laser radar is not needed to detect the travelable area, and the production cost of the mobile robot is reduced.
In an embodiment, the extracting travelable region edge points in the visual data comprises:
and extracting travelable region edge points of the visual data through the constructed visual segmentation model.
The embodiment can quickly obtain the accurate travelable area edge point through the visual segmentation model.
In an embodiment, before the extracting, by the constructed visual segmentation model, travelable region edge points of the visual data, the method further comprises:
acquiring a sample visual set marked with edge points of a travelable area;
turning and/or changing the color of the sample image in the sample visual set to expand the sample visual set;
and training the visual segmentation model according to the extended sample visual set and the correspondingly marked travelable area edge points.
According to the embodiment, more sample images can be obtained by flipping and/or changing the color of existing sample images, so that the sample visual set is expanded, fewer original sample images need to be collected, and the training efficiency and model accuracy are improved.
In an embodiment, before the converting the travelable area edge point in the first coordinate system to the target point in the second coordinate system according to the calibrated coordinate conversion rule, the method further includes:
acquiring a first position coordinate of the same characteristic point in a first coordinate system and a second position coordinate in a second coordinate system;
and calculating the coordinate conversion rule according to the first position coordinate and the second position coordinate.
In the embodiment, the coordinate conversion rule is calculated according to the position coordinates of the same feature point in the two coordinate systems, so that a more accurate coordinate conversion rule can be obtained.
In an embodiment, the acquiring a first position coordinate of the same feature point in a first coordinate system and a second position coordinate in a second coordinate system includes:
acquiring a first image containing a corner index point and scanning radar data of the corner index point;
and obtaining a second position coordinate in the second coordinate system according to the position coordinate of the wall corner index point in the radar data and obtaining a first position coordinate of the same characteristic point in the first coordinate system according to the position coordinate of the wall corner index point in the first image.
In the embodiment, the corner is used as the feature point, and the radar data suddenly changes at the corner, so that the accurate second position coordinate can be obtained by detecting the radar data of the corner calibration point, and the coordinate conversion rule can be more accurate.
In an embodiment, the converting the edge points of the travelable area in the first coordinate system into the target points in the second coordinate system according to the calibrated coordinate conversion rule includes:
and multiplying the coordinate data of the travelable area edge point in the first coordinate system by a conversion matrix or a projection matrix corresponding to the coordinate conversion rule to obtain the coordinate data of the target point in the second coordinate system corresponding to the travelable area edge point.
In the embodiment, the coordinate conversion rule is expressed by the matrix, and then the coordinate data of the edge point is multiplied by the matrix, so that the coordinate data of the target point can be obtained, the calculation difficulty is reduced, and the calculation efficiency is improved.
In an embodiment, before the extracting, for each of the plurality of sector scanning areas according to the virtual radar constructed in the second coordinate system, a target point within the sector scanning area and closest to the virtual radar origin, the method further includes:
constructing a circular scanning area of the virtual radar in the second coordinate system;
the circular scan area is divided into a plurality of sector scan areas at a specified angular resolution.
According to the embodiment, the circular scanning area of the virtual radar is constructed, and the circular scanning area is divided into the plurality of fan-shaped scanning areas according to the specified angular resolution, so that the scanning mode of the laser radar can be accurately simulated, the scanning range of the laser radar can be simulated, and the meaningless target points can be conveniently and accurately filtered.
In one embodiment, constructing a circular scanning area of a virtual radar in the second coordinate system comprises:
constructing a circular scanning area of the virtual radar by taking the origin of the second coordinate system as the circle center and the designated distance as the radius;
alternatively,
according to a calibrated coordinate conversion rule, converting the center coordinate of the vision sensor into a second coordinate system to obtain the coordinate position of the virtual radar;
and constructing a circular scanning area of the virtual radar by taking the coordinate position of the virtual radar as a center and the designated distance as a radius.
In the above embodiment, two different methods can be adopted to construct the circular scanning area, so that the effective scanning range of the laser radar can be simulated, and the meaningless target points can be conveniently and accurately filtered.
In an embodiment, the extracting, for each of the sector scanning areas according to the plurality of sector scanning areas of the virtual radar constructed in the second coordinate system, a target point closest to the virtual radar origin within the sector scanning area to form a travelable area boundary point in the second coordinate system includes:
removing target points which are not in the circular scanning area;
for each sector scanning area, reserving a target point which is closest to the center of the circular scanning area, and removing the rest target points in the sector scanning area;
the target points reserved for each sector scanning area constitute travelable area boundary points in the second coordinate system.
According to the embodiment, only the target point which is closest to the center in the sector scanning area is reserved, so that the target point which has no significance on obstacle avoidance can be removed, and the calculation amount and the calculation difficulty in subsequent path planning are reduced.
In an embodiment, the reserving, for each sector scanning area, a target point closest to the center of the circular scanning area, and removing the remaining target points in the sector scanning area includes:
for each sector scanning area, if no target point exists in the sector scanning area, reserving a point which is farthest away from the center in the sector scanning area as a target point;
if at least one target point exists in the sector scanning area, reserving the target point which is closest to the center of the circular scanning area, and removing the rest target points in the sector scanning area.
In the embodiment, the farthest point in the sector scanning area is taken as the target point for the sector area without the target point, so that the target points reserved in each sector scanning area are ensured, and the complete boundary point of the travelable area can be obtained.
An embodiment of the application provides a virtual radar device, comprising:
the data acquisition module is used for acquiring visual data acquired on the drivable area;
the edge segmentation module is used for extracting travelable region edge points in the visual data;
the projection transformation module is used for transforming the edge points of the travelable area in the first coordinate system into target points in the second coordinate system according to a calibrated coordinate transformation rule;
and the boundary extraction module is used for constructing a plurality of sector scanning areas of the virtual radar in a second coordinate system, and extracting a target point which is closest to the origin of the virtual radar in each sector scanning area to form a travelable area boundary point in the second coordinate system.
In an embodiment, the edge segmentation module is specifically configured to: and extracting travelable region edge points of the visual data through the constructed visual segmentation model.
In an embodiment, the virtual radar apparatus further comprises:
the sample acquisition module is used for acquiring a sample visual set marked with the edge points of the travelable area;
the sample expansion module is used for turning and/or changing the color of a sample image in the sample visual set to expand the sample visual set;
and the model training module is used for training the visual segmentation model according to the extended sample visual set and the correspondingly marked travelable region edge points.
In an embodiment, the virtual radar apparatus further comprises:
the coordinate acquisition module is used for acquiring a first position coordinate of the same characteristic point in a first coordinate system and a second position coordinate in a second coordinate system;
and the rule calculation module is used for calculating the coordinate conversion rule according to the first position coordinate and the second position coordinate.
In one embodiment, the coordinate acquisition module comprises:
the system comprises a data acquisition unit, a data acquisition unit and a data processing unit, wherein the data acquisition unit is used for acquiring a first image containing a corner index point and scanning radar data of the corner index point;
and the coordinate obtaining unit is used for obtaining a second position coordinate in the second coordinate system according to the position coordinate of the corner index point in the radar data and obtaining a first position coordinate of the same characteristic point in the first coordinate system according to the position coordinate of the corner index point in the first image.
In one embodiment, the projective transformation module includes:
and the matrix transformation unit is used for multiplying the coordinate data of the travelable area edge point in the first coordinate system by a transformation matrix or a projection matrix corresponding to the coordinate transformation rule to obtain the coordinate data of the target point corresponding to the travelable area edge point in the second coordinate system.
In an embodiment, the virtual radar apparatus further comprises:
the area construction module is used for constructing a circular scanning area of the virtual radar in the second coordinate system;
and the sector dividing module is used for dividing the circular scanning area into a plurality of sector scanning areas according to the specified angular resolution.
In an embodiment, the region building module includes:
the first construction unit is used for constructing a circular scanning area of the virtual radar by taking the origin of the second coordinate system as a circle center and the designated distance as a radius;
alternatively,
the central coordinate determining unit is used for converting the central coordinate of the visual sensor into a second coordinate system according to a calibrated coordinate conversion rule to obtain the coordinate position of the virtual radar;
and the second construction unit is used for constructing a circular scanning area of the virtual radar by taking the coordinate position of the virtual radar as a center and the designated distance as a radius.
In one embodiment, the boundary extraction module includes:
a removing unit for removing a target point not located within the circular scanning area;
the reservation unit is used for reserving a target point which is closest to the center of the circular scanning area and removing the rest target points in the sector scanning area aiming at each sector scanning area;
and the forming unit is used for forming the boundary point of the travelable area in the second coordinate system by the target point reserved in each sector scanning area.
In one embodiment, the reservation unit includes:
a first reserving subunit, configured to reserve, for each sector scanning area, a point that is farthest from the center in the sector scanning area as a target point when the target point does not exist in the sector scanning area;
and the second reserving subunit is configured to reserve a target point closest to the center of the circular scanning area when at least one target point exists in the sector scanning area, and remove the remaining target points in the sector scanning area.
An embodiment of the present application further provides an electronic device, where the electronic device includes:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the detection method described above.
The embodiment of the application also provides a computer readable storage medium, wherein the storage medium stores a computer program, and the computer program can be executed by a processor to complete the detection method.
The virtual radar device, the electronic device, and the computer-readable storage medium provided by the embodiments of the application can determine the travelable area edge points in the first coordinate system based on visual data acquired by a camera or the like, determine their positions in the second coordinate system through coordinate transformation, and then obtain accurate travelable area boundary points through screening, so that the travelable area does not need to be detected by a hardware laser radar and the production cost of the mobile robot is reduced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required to be used in the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic view of an application scenario of a detection method provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of a detection method provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of the travelable area edge points provided in the embodiment of the present application;
FIG. 5 is a flow chart of the construction of a visual segmentation model provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a sector scanning area for constructing a virtual radar according to an embodiment of the present disclosure;
FIG. 7 is a schematic flow chart diagram of step 340 provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of filtering target points provided by an embodiment of the present application;
fig. 9 is a block diagram of a virtual radar apparatus provided in an embodiment of the present application;
fig. 10 is a schematic diagram of a computer-readable storage medium provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Fig. 1 is an application scenario schematic diagram of a method for detecting a travelable area according to an embodiment of the present application. As shown in fig. 1, the application scenario includes a mobile robot 100 mounted with a camera 101. The mobile robot 100 is internally provided with the control terminal 102, the control terminal 102 can acquire image data of the mobile robot 100 on a traveling road, which is acquired by the camera 101, and then the method provided by the embodiment of the application is adopted to determine the boundary point of the travelable area, so that the mobile robot 100 can avoid crossing the boundary point of the travelable area when planning a traveling path, and always travel in a safe range, thereby achieving the effect of avoiding obstacles.
As shown in fig. 2, an embodiment of the present application further provides an electronic device. The electronic device may be the control terminal 102 in the application scenario shown in fig. 1. As shown in fig. 2, the control terminal 102 may include a processor 201; a memory 202 for storing processor-executable instructions; the processor 201 is configured to execute the travelable region detection method provided by the embodiment of the present application.
In one embodiment, the processor 201 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital signal processors, digital signal processing devices, programmable logic devices, field programmable gate arrays, controllers, microcontrollers, microprocessors, or other electronic components for performing the methods described below.
The memory 202 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
The control terminal 102 may also include a power component 203, a sensor component 204, an audio component 205, and a communication component 206 coupled to the processor 201, as required. The power component 203 may supply power to the entire control terminal 102, and the sensor component 204 may include one or more sensors that provide status assessments of various aspects of the control terminal 102. In one embodiment, the sensor component 204 may be used to detect the open/closed state of the control terminal 102, a change in the position of the mobile robot 100, or a temperature change of a component. In an embodiment, the sensor component 204 may include a magnetic sensor, a pressure sensor, or a temperature sensor.
The audio component 205 may include a microphone and a speaker. The communication component 206 is used for wired or wireless communication between the control terminal 102 and other devices (such as a liquid crystal display). In one embodiment, the wireless transmission method may be WiFi (Wireless Fidelity), mobile communication (e.g., the Universal Mobile Telecommunications System), Bluetooth, ZigBee, and the like. In one embodiment, the communication component 206 may include an NFC (Near Field Communication) module to facilitate short-range communication. In an embodiment, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth technology, and other technologies.
Fig. 3 is a schematic flowchart of a detection method according to an embodiment of the present application. As shown in fig. 3, the method may include the following steps S310 to S340.
Step S310: visual data acquired over a travelable region is acquired.
The visual data refers to road images captured by a camera while the mobile robot is moving. Besides the road, the visual data may also include target objects such as vehicles, pedestrians, roadside flowers and trees, billboards, and the like. The travelable region may be regarded as the region of the road surface where no target object is present.
Step S320: and extracting travelable region edge points in the visual data.
The travelable region edge points are the pixel points at the edge of the travelable region in the image; as shown in fig. 4, 401 denotes the travelable region edge points. In one embodiment, the travelable region edge points of the visual data can be extracted through a constructed visual segmentation model: the captured road image is used as the input of the visual segmentation model, and the output of the visual segmentation model is the positions of the travelable region edge points. The visual segmentation model can be obtained in advance by training on a large number of sample images whose travelable region edge points are known.
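As an illustrative sketch only (not part of the original patent text), the function below shows one way to turn the travelable-area mask produced by such a visual segmentation model into edge points in the first (image) coordinate system; the column-wise topmost-pixel rule and the function name are assumptions, and NumPy is assumed to be available.

```python
import numpy as np

def extract_edge_points(drivable_mask):
    """drivable_mask: H x W boolean array from the visual segmentation model,
    True where a pixel belongs to the travelable area.
    Returns the (x, y) pixel coordinates of the upper edge of the travelable
    area, one point per image column that contains any travelable pixel."""
    w = drivable_mask.shape[1]
    edge_points = []
    for x in range(w):
        ys = np.flatnonzero(drivable_mask[:, x])
        if ys.size:
            # the topmost travelable pixel in this column is taken as the edge point
            edge_points.append((float(x), float(ys.min())))
    return np.asarray(edge_points)
```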
In an embodiment, before the step S320, the method provided by the embodiment of the present application further includes a step of constructing a visual segmentation model, and as shown in fig. 5, constructing the visual segmentation model may include the following steps S301 to S303.
Step S301: and acquiring a sample visual set marked with the edge points of the travelable area.
The sample image is image data in which the edge points of the travelable area have been marked, and is referred to as a sample image for distinction. The sample visual set includes a large number of sample images, and travelable region edge points for each sample image are known.
Step S302: and turning and/or changing the color of the sample image in the sample visual set to expand the sample visual set.
A new sample image can be obtained by horizontally flipping a sample image, or by uniformly increasing or decreasing the gray-scale values of a sample image. In this way, sample images acquired under different natural conditions can be simulated, and the new sample images can be added to the sample visual set, which enriches the variety of sample images in the set and improves the robustness of the visual segmentation model.
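A minimal augmentation sketch, assuming OpenCV, NumPy, and 8-bit images; the brightness offsets are arbitrary example values, and when an image is flipped the labeled travelable-area edge points must of course be flipped with it (not shown here).

```python
import numpy as np
import cv2

def augment(sample_img):
    """Return extra training samples derived from one 8-bit sample image:
    a horizontal flip plus globally brightened and darkened copies."""
    flipped = cv2.flip(sample_img, 1)   # mirror left-right (flip labels accordingly)
    as_int = sample_img.astype(np.int16)
    brighter = np.clip(as_int + 30, 0, 255).astype(np.uint8)
    darker = np.clip(as_int - 30, 0, 255).astype(np.uint8)
    return [flipped, brighter, darker]
```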
Step S303: and training the visual segmentation model according to the extended sample visual set and the correspondingly marked travelable area edge points.
To improve real-time performance and computational efficiency, the model parameters of an existing FCHardNet network pre-trained on the Cityscapes data set can be used as a starting point. On this basis, the sample visual set is used as input and the model parameters are fine-tuned so that the error between the travelable area edge points output for each sample image and the labeled travelable area edge points is minimized; the trained FCHardNet network can then be used as the visual segmentation model of the embodiment of the application. The trained visual segmentation model is subsequently used to extract the travelable region edge points of the image data.
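The following is a hedged fine-tuning sketch in PyTorch, not a definitive implementation. Because FCHardNet weights are not bundled with torchvision, a torchvision DeepLabV3 model is used here purely as a stand-in backbone; the two-class head, learning rate, and data-loader format are assumptions rather than values prescribed by the patent.

```python
import torch
import torchvision

# Stand-in backbone: torchvision's DeepLabV3 replaces FCHardNet for illustration only.
model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
model.classifier[4] = torch.nn.Conv2d(256, 2, kernel_size=1)  # 2 classes: travelable / not travelable

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

def train_one_epoch(loader, device="cpu"):
    """loader yields (image, mask): image float [B,3,H,W], mask long [B,H,W]."""
    model.train().to(device)
    for images, masks in loader:
        images, masks = images.to(device), masks.to(device)
        logits = model(images)["out"]      # [B, 2, H, W] per-pixel class scores
        loss = criterion(logits, masks)    # minimize error against labeled edges/masks
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```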
Step S330: and converting the edge points of the travelable area in the first coordinate system into target points in the second coordinate system according to a calibrated coordinate conversion rule.
The coordinate transformation rule is a mapping rule for mapping points in the first coordinate system to points in the second coordinate system; its calibration is described in detail below and is not repeated here. The target point is the point obtained when a travelable area edge point in the first coordinate system is mapped into the second coordinate system. The first coordinate system may take one of the vertices of the image as the origin (0,0), with the length and width directions as the x-axis and y-axis directions, respectively; alternatively, it may take the center of the image as the origin, the direction parallel to the length as the x-axis, and the direction parallel to the width as the y-axis. The second coordinate system is a coordinate system established in the radiation plane with the position of the laser radar as the origin.
In an embodiment, the coordinate transformation rule may be represented by a transformation matrix, a projection matrix, or transformation parameters, the travelable region edge point in the first coordinate system is transformed into the second coordinate system, and the coordinate data of the target point in the second coordinate system corresponding to the travelable region edge point may be obtained by multiplying the coordinate data of the travelable region edge point in the first coordinate system by the transformation matrix or the projection matrix corresponding to the coordinate transformation rule.
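As a sketch of this matrix multiplication (NumPy assumed; the function name is illustrative), each edge point is promoted to homogeneous coordinates, multiplied by the 3x3 matrix H, and divided by the resulting scale component:

```python
import numpy as np

def edge_points_to_targets(edge_points, H):
    """Map travelable-area edge points from the first (image) coordinate system
    to target points in the second (radar-plane) coordinate system using the
    3x3 homography H that encodes the calibrated coordinate conversion rule."""
    pts = np.asarray(edge_points, dtype=float)            # (N, 2) pixel coordinates
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])      # homogeneous (x, y, 1)
    mapped = (H @ pts_h.T).T                               # (N, 3)
    mapped /= mapped[:, 2:3]                               # normalize by the scale w
    return mapped[:, :2]                                   # (x', y') target points
```

If OpenCV is available, cv2.perspectiveTransform performs the same point mapping.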
Step S340: according to the plurality of sector scanning areas of the virtual radar constructed in the second coordinate system, aiming at each sector scanning area, extracting a target point which is closest to the origin of the virtual radar in the sector scanning area to form a boundary point of a travelable area in the second coordinate system.
The virtual radar differs from a real hardware laser radar: it is a simulation of the hardware laser radar, and a plurality of sector scanning areas can be obtained by simulating the scanning mode of the hardware laser radar. The position of the virtual radar in the second coordinate system and the sector scanning areas may be preset in advance. Thus, for each sector scanning area, the target point closest to the virtual radar can be extracted from within that sector scanning area. The target points closest to the virtual radar extracted from the sector scanning areas may be regarded as the travelable area boundary points in the second coordinate system. For distinction, the pixel points at the edge of the travelable region in the first coordinate system are referred to as travelable area edge points, and the edge points of the travelable region in the second coordinate system are referred to as travelable area boundary points.
In an embodiment, if there is no target point in a certain sector scanning area, a point in the sector scanning area farthest from the virtual radar may be taken as a target point extracted from the sector scanning area and closest to the virtual radar, and the target point and target points extracted from other sector scanning areas and closest to the virtual radar may together form a travelable area boundary point in the second coordinate system.
According to the technical scheme provided by the embodiments of the application, the travelable area edge points in the first coordinate system are first determined from the image data acquired by the camera, their positions in the second coordinate system are determined through coordinate transformation, and accurate travelable area boundary points are then obtained through screening. A hardware laser radar is therefore not needed to detect the travelable area, which reduces the production cost of the mobile robot.
In an embodiment, before the step S340, a circular scanning area of the virtual radar may be constructed in the second coordinate system; and then dividing the circular scanning area into a plurality of fan-shaped scanning areas according to the specified angular resolution.
In an embodiment, as shown in fig. 6, an origin (i.e., 0 point) of the second coordinate system may be used as a position where the virtual radar is located, and a circular scanning area of the virtual radar may be constructed by using the origin of the second coordinate system as a center of a circle and designating the distance r as a radius, where the circular scanning area may have a plurality of sector scanning areas. The specified distance r may be determined from the visible distance of the camera that acquired the image data, for example, the specified distance may be 10 meters.
In other embodiments, the center coordinates of the visual sensor (e.g., a camera) may be converted into the second coordinate system according to a calibrated coordinate conversion rule, so as to obtain the coordinate position of the virtual radar; and then, constructing a circular scanning area of the virtual radar by taking the coordinate position of the virtual radar as a center and the designated distance as a radius.
The central coordinate of the visual sensor may be a position coordinate of the camera center in a camera coordinate system, the position coordinate of the camera center in the first coordinate system may be obtained according to a mapping relationship between the camera coordinate system and an image coordinate system (i.e., the first coordinate system), and then the position coordinate of the camera center in the first coordinate system may be mapped into a second coordinate system according to a calibrated coordinate conversion rule to serve as a coordinate position of the virtual radar. And then, a circular scanning area of the virtual radar can be constructed by taking the position of the virtual radar as the center of a circle and designating the distance r as the radius.
The specified angular resolution is an empirical value related to the resolution of the image data, and it determines the number of sector scanning areas within the circular scanning area. The higher the resolution of the image data, the finer the specified angular resolution, i.e., the smaller the central angle of each sector scanning area and the larger the number of sector scanning areas.
In an embodiment, as shown in fig. 7, the step S340 may include the following steps S341 to S342.
Step S341: and removing target points which are not in the circular scanning area.
Step S342: and for each sector scanning area, reserving a target point which is closest to the center of the circular scanning area, and removing the rest target points in the sector scanning area.
Step S343: the target points reserved for each sector scanning area constitute travelable area boundary points in the second coordinate system.
As shown in fig. 8, 604 indicates the position of the virtual radar, 601 indicates the maximum effective range, and the effective range 601 may be regarded as a circular scanning area centered on the virtual radar 604 with the specified distance r as its radius. The target points 603 lying outside the effective range 601 are farther from the virtual radar 604 than the specified distance; such points are far from the mobile robot and do not help with obstacle avoidance, so the target points 603 outside the effective range 601 can be filtered out first.
In an embodiment, the central angle 602 of each sector scanning area may be 11.5°. Based on the scanning principle of the laser radar, a target point at a greater distance (e.g., point A) would be occluded, so for each sector scanning area only the target point closest to the virtual radar 604 is kept and the remaining target points in that sector scanning area are removed. If there is no target point in a sector scanning area, a point 605 on the sector arc (i.e., the point in the sector scanning area farthest from the center) may be selected as that sector's target point closest to the virtual radar 604. The target points retained in all the sector scanning areas together form the travelable area boundary points in the second coordinate system.
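A minimal sketch of this filtering step, assuming NumPy; the sector count (32 sectors of about 11.25°, close to the 11.5° example above), the default radius, and the function name are assumptions.

```python
import numpy as np

def virtual_radar_scan(targets, r=10.0, n_sectors=32, origin=(0.0, 0.0)):
    """Keep, for each sector scanning area, the target point closest to the
    virtual radar origin; a sector with no target point falls back to the point
    on its arc (the farthest point from the center, at range r)."""
    ox, oy = origin
    pts = np.asarray(targets, dtype=float)
    dx, dy = pts[:, 0] - ox, pts[:, 1] - oy
    dist = np.hypot(dx, dy)
    ang = np.mod(np.arctan2(dy, dx), 2 * np.pi)

    inside = dist <= r                                   # drop points outside the circular scanning area
    sector = (ang / (2 * np.pi / n_sectors)).astype(int)
    boundary = []
    for s in range(n_sectors):
        in_sector = inside & (sector == s)
        if np.any(in_sector):
            idx = np.flatnonzero(in_sector)[np.argmin(dist[in_sector])]
            boundary.append(pts[idx])                    # closest target point in this sector
        else:
            mid = (s + 0.5) * (2 * np.pi / n_sectors)
            boundary.append([ox + r * np.cos(mid), oy + r * np.sin(mid)])  # point on the arc
    return np.asarray(boundary)
```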
In an embodiment, the coordinate transformation rule between the first coordinate system and the second coordinate system may be obtained by calibrating a plurality of feature points in advance: the first position coordinates of each feature point in the first coordinate system and its second position coordinates in the second coordinate system are acquired, and the coordinate transformation rule is then calculated from the first position coordinates and the second position coordinates.
The first position coordinates refer to position coordinates of the feature point in a first coordinate system (i.e., an image coordinate system), and the second position coordinates refer to position coordinates of the feature point in a second coordinate system (i.e., a radar coordinate system), which are respectively referred to as first position coordinates and second position coordinates for distinction.
The feature points may be artificially calibrated points. Because the laser radar detects a sudden change in distance when scanning a wall corner, in an embodiment the position of a corner can be chosen as a feature point, and a corner index point is obtained by pasting a marker on the corner. The corner index point can then be photographed with the camera to obtain a first image containing the corner index point. A first coordinate system is established in the first image, and the position coordinates of the corner index point in the first coordinate system can be used as the first position coordinates.
Radar data can be obtained by scanning the corner with the laser radar; the radar data indicate, for each direction, the distance between the laser radar and the detected obstruction. In an embodiment, the second coordinate system may be established with the position of the laser radar as the origin, and the position where the sudden change in distance occurs may be taken as the position coordinates of the corner index point in the second coordinate system, thereby obtaining the second position coordinates.
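A sketch of locating the corner calibration point from a single radar scan by searching for the range discontinuity described above (NumPy assumed; the ranges/angles input format and the jump threshold are assumptions).

```python
import numpy as np

def corner_from_scan(ranges, angles, jump_thresh=0.3):
    """Locate the corner calibration point in one laser scan as the bearing at
    which the measured range changes most abruptly; returns (x, y) in the
    second (radar) coordinate system, or None if no clear jump is found."""
    ranges = np.asarray(ranges, dtype=float)
    angles = np.asarray(angles, dtype=float)
    jumps = np.abs(np.diff(ranges))
    i = int(np.argmax(jumps))          # index of the largest distance discontinuity
    if jumps[i] < jump_thresh:
        return None
    r, a = ranges[i], angles[i]
    return (r * np.cos(a), r * np.sin(a))
```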
In one embodiment, the coordinate transformation rule between the first coordinate system and the second coordinate system can be expressed by the following formula:
$$\begin{bmatrix} x_l \\ y_l \\ 1 \end{bmatrix} \sim H \begin{bmatrix} x_r \\ y_r \\ 1 \end{bmatrix}$$

where $(x_l, y_l)$ are the position coordinates of the feature point in the second coordinate system, $(x_r, y_r)$ are the position coordinates of the feature point in the first coordinate system, "$\sim$" denotes equality up to a non-zero scale factor, and $H$ is the homography transformation matrix, i.e., the coordinate transformation rule.
To calculate the homography matrix $H$, a set of matching points may be collected:

$$\{(x_i, y_i) \leftrightarrow (x'_i, y'_i)\}, \quad i = 1, \dots, N$$

where $(x_i, y_i)$ are the position coordinates of feature point $i$ in the first coordinate system (the first position coordinates) and $(x'_i, y'_i)$ are its position coordinates in the second coordinate system (the second position coordinates). Each pair satisfies:

$$\begin{bmatrix} x'_i \\ y'_i \\ 1 \end{bmatrix} \sim H \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}$$

The homography matrix $H$ is a 3x3 matrix with 9 elements $h_{jk}$:

$$H = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}$$
The correspondence between plane coordinates and homogeneous coordinates is

$$(x, y) \in \mathbb{R}^2 \;\Longleftrightarrow\; (xw, yw, w) \in \mathbb{P}^3, \quad w \neq 0$$

where $x$ is the abscissa, $y$ is the ordinate, $w$ is the scale factor, $\mathbb{R}^2$ denotes the two-dimensional planar coordinate system, and $\mathbb{P}^3$ denotes the homogeneous coordinate system.
Therefore, the above formula can be expressed as:

$$x'_i = \frac{h_{11} x_i + h_{12} y_i + h_{13}}{h_{31} x_i + h_{32} y_i + h_{33}}, \qquad y'_i = \frac{h_{21} x_i + h_{22} y_i + h_{23}}{h_{31} x_i + h_{32} y_i + h_{33}}$$
These can be further transformed into:

$$(h_{31} x_i + h_{32} y_i + h_{33}) \cdot x'_i = h_{11} x_i + h_{12} y_i + h_{13}$$

$$(h_{31} x_i + h_{32} y_i + h_{33}) \cdot y'_i = h_{21} x_i + h_{22} y_i + h_{23}$$
that is, a set of matching points can obtain 2 sets of equations, and then only 4 sets of non-collinear matching points are needed to solve the unique solution of H.
In this way, a unique coordinate transformation rule H can be calculated, so that the coordinates of the edge point of the travelable area in the first coordinate system are multiplied by the coordinate transformation rule H, and the position coordinates of the edge point in the second coordinate system, that is, the coordinates of the target point, can be obtained. The travelable region boundary point in the second coordinate system may then be determined and output through step S340 described above.
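In practice the same H can be obtained with a standard solver; the sketch below assumes OpenCV's cv2.findHomography applied to at least 4 non-collinear matching point pairs (function and variable names are illustrative).

```python
import numpy as np
import cv2

def calibrate_homography(first_pts, second_pts):
    """Estimate the coordinate conversion rule H from matching points:
    first_pts are first position coordinates (image), second_pts are second
    position coordinates (radar plane), both arrays of shape (N, 2), N >= 4."""
    src = np.asarray(first_pts, dtype=np.float32)
    dst = np.asarray(second_pts, dtype=np.float32)
    H, _ = cv2.findHomography(src, dst, method=0)   # plain least-squares fit, no RANSAC
    return H
```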
In an embodiment, the object type of each pixel point of the sample image may be labeled, and the sample image may be used to train the image recognition model. The image recognition model may then be used to identify the object type of each pixel in the image data obtained in step S310. The object type of the pixel point is used for indicating that the pixel point belongs to a pedestrian, a vehicle, a road surface or a tree. Therefore, the object type at the boundary point of the travelable region can be determined according to the boundary point of the travelable region in the second coordinate system and the object type of the corresponding pixel point of the boundary point in the first coordinate system, so that a proper safety distance can be set during path planning.
The following are embodiments of the apparatus of the present application that can be used to perform the above-described embodiments of the detection method of the present application. For details not disclosed in the embodiments of the apparatus of the present application, refer to the embodiments of the detection method of the present application.
Fig. 9 is a block diagram of a virtual radar apparatus according to an embodiment of the present application. As shown in fig. 9, the apparatus includes: a data acquisition module 910, an edge segmentation module 920, a projective transformation module 930, and a boundary extraction module 940.
A data acquisition module 910, configured to acquire visual data collected over a travelable region;
an edge segmentation module 920, configured to extract travelable region edge points in the visual data;
a projective transformation module 930, configured to transform the edge points of the travelable area in the first coordinate system into target points in the second coordinate system according to the calibrated coordinate transformation rule;
a boundary extraction module 940, configured to extract, for each sector scanning area, a target point that is closest to the virtual radar origin in the sector scanning area according to the plurality of sector scanning areas of the virtual radar constructed in the second coordinate system, so as to form a travelable area boundary point in the second coordinate system.
In an embodiment, the edge segmentation module 920 is specifically configured to: and extracting travelable region edge points of the visual data through the constructed visual segmentation model.
In an embodiment, the virtual radar apparatus further comprises:
a sample obtaining module 901, configured to obtain a sample visual set labeled with an edge point of a travelable region;
a sample expansion module 902, configured to perform flipping and/or color change on a sample image in the sample visual set, and expand the sample visual set;
and the model training module 903 is configured to train the visual segmentation model according to the extended sample visual set and the edge points of the travelable region labeled correspondingly.
In an embodiment, the virtual radar apparatus further comprises:
a coordinate obtaining module 970, configured to obtain a first position coordinate of the same feature point in a first coordinate system and a second position coordinate in a second coordinate system;
a rule calculation module 980, configured to calculate the coordinate transformation rule according to the first position coordinate and the second position coordinate.
In an embodiment, the coordinate acquiring module 970 includes:
a data obtaining unit 971, configured to obtain a first image including a corner index point and radar data for scanning the corner index point;
a coordinate obtaining unit 972, configured to obtain a second position coordinate in the second coordinate system according to the position coordinate of the corner calibration point in the radar data, and obtain a first position coordinate of the same feature point in the first coordinate system according to the position coordinate of the corner calibration point in the first image.
In an embodiment, the projective transformation module 930 is specifically configured to: and multiplying the coordinate data of the travelable area edge point in the first coordinate system by a conversion matrix or a projection matrix corresponding to the coordinate conversion rule to obtain the coordinate data of the target point in the second coordinate system corresponding to the travelable area edge point.
In an embodiment, the virtual radar apparatus further comprises:
a region construction module 950 for constructing a circular scanning region of the virtual radar in the second coordinate system;
a sector division module 960 for dividing the circular scan area into a plurality of sector scan areas according to a specified angular resolution.
In one embodiment, the region building module 950 includes:
a first constructing unit (not shown) configured to construct a circular scanning area of the virtual radar by using the origin of the second coordinate system as a center of a circle and an appointed distance as a radius;
alternatively,
a central coordinate determination unit 951, configured to convert the central coordinate of the vision sensor into a second coordinate system according to a calibrated coordinate conversion rule, so as to obtain a coordinate position of the virtual radar;
a second constructing unit 952, configured to construct a circular scanning area of the virtual radar by taking the coordinate position of the virtual radar as a center and the designated distance as a radius.
In one embodiment, the boundary extraction module 940 includes:
a removal unit 941 configured to remove a target point that is not within the circular scanning area;
a reserving unit 942, configured to reserve, for each sector scanning area, a target point closest to the center of the circular scanning area, and remove remaining target points in the sector scanning area;
a construction unit 943 for the target point reserved for each sector scan area, which constitutes a travelable area boundary point in the second coordinate system.
In one embodiment, reservation unit 942 includes:
a first retaining subunit 9421, configured to, for each sector scanning area, retain, as a target point, a point in the sector scanning area that is farthest from the center when the target point does not exist in the sector scanning area;
a second reserving subunit 9422, configured to reserve, when at least one target point exists in the sector scanning area, a target point closest to the center of the circular scanning area, and remove the remaining target points in the sector scanning area.
The implementation process of the functions and actions of each module in the device is specifically detailed in the implementation process of the corresponding step in the detection method, and is not described herein again.
As shown in fig. 10, an embodiment of the present application further provides a computer-readable storage medium 500, where the storage medium 500 stores a computer program (i.e., computer-executable instructions 501), and the computer program is executable by the processor 201 to perform the detection method provided by the embodiment of the present application.
In the embodiments provided in the present application, the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, or the portion thereof that substantially contributes over the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and other media capable of storing program code.

Claims (11)

1. A method of detection, comprising:
acquiring visual data collected from a travelable area;
extracting travelable area edge points in the visual data;
converting the edge points of the travelable area in the first coordinate system into target points in the second coordinate system according to a calibrated coordinate conversion rule;
constructing a plurality of sector scanning areas of a virtual radar in a second coordinate system, and extracting, in each sector scanning area, the target point closest to the origin of the virtual radar to form travelable area boundary points in the second coordinate system;
converting the edge point of the travelable area in the first coordinate system into the target point in the second coordinate system according to the calibrated coordinate conversion rule, comprising:
multiplying the coordinate data of the travelable area edge point in the first coordinate system by a conversion matrix or a projection matrix corresponding to the coordinate conversion rule to obtain the coordinate data of a target point in the second coordinate system corresponding to the travelable area edge point;
constructing a plurality of sector-shaped scanning areas of a virtual radar in the second coordinate system, including:
constructing a circular scanning area of the virtual radar in the second coordinate system;
the circular scan area is divided into a plurality of sector scan areas at a specified angular resolution.
2. The method of claim 1, wherein said extracting travelable area edge points in the visual data comprises:
extracting travelable area edge points of the visual data through the constructed visual segmentation model.
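The claim does not specify how edge points are read out of the segmentation result. One common approach, shown below purely as an assumed sketch, is to scan a binary travelable-area mask column by column and keep the farthest travelable pixel in each column.

```python
import numpy as np

def edge_points_from_mask(travelable_mask):
    """Given a binary segmentation mask (H x W, 1 = travelable area), return one
    edge pixel (u, v) per image column: the smallest row index that is still
    travelable, i.e. the far boundary of the travelable area seen by the camera."""
    edge_points = []
    for u in range(travelable_mask.shape[1]):
        rows = np.flatnonzero(travelable_mask[:, u])
        if rows.size:
            edge_points.append((u, int(rows[0])))
    return edge_points
```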
3. The method of claim 2, wherein before said extracting travelable area edge points of the visual data by the constructed visual segmentation model, the method further comprises:
acquiring a sample image set marked with edge points of a travelable area;
flipping and/or changing the color of the sample images in the sample image set to expand the sample image set;
training the visual segmentation model according to the expanded sample image set and the correspondingly marked travelable area edge points.
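As a hedged illustration of the augmentation step in claim 3, the sketch below expands one labelled sample by a horizontal flip (with the edge-point labels mirrored accordingly) and a simple per-channel color change; the exact augmentations, channel gains, and label format are assumptions.

```python
import numpy as np

def augment_sample(image, edge_points):
    """Expand one labelled sample (3-channel image + edge points (u, v)) into
    several samples by flipping and changing color."""
    w = image.shape[1]
    samples = [(image, edge_points)]
    # Horizontal flip: mirror the image and the u coordinate of every edge point.
    flipped = np.fliplr(image).copy()
    flipped_pts = [(w - 1 - u, v) for (u, v) in edge_points]
    samples.append((flipped, flipped_pts))
    # Color change: scale the channels slightly; the labels are unchanged.
    recoloured = np.clip(image.astype(np.float32) * [1.1, 0.9, 1.0], 0, 255).astype(np.uint8)
    samples.append((recoloured, edge_points))
    return samples
```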
4. The method according to claim 1, wherein before said converting edge points of the travelable area in the first coordinate system to target points in the second coordinate system according to the calibrated coordinate conversion rule, the method further comprises:
acquiring a first position coordinate of the same feature point in a first coordinate system and a second position coordinate in a second coordinate system;
calculating the coordinate conversion rule according to the first position coordinate and the second position coordinate.
5. The method of claim 4, wherein the obtaining a first position coordinate of the same feature point in a first coordinate system and a second position coordinate in a second coordinate system comprises:
acquiring a first image containing a wall corner index point and radar scan data of the wall corner index point;
obtaining the second position coordinate in the second coordinate system according to the position coordinate of the wall corner index point in the radar data, and obtaining the first position coordinate in the first coordinate system according to the position coordinate of the wall corner index point in the first image.
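One plausible way to compute the coordinate conversion rule of claims 4-5 is to fit a planar homography to the matched wall-corner coordinates; the sketch below uses OpenCV for that fit. The assumption that a single 3×3 matrix relates the two coordinate systems (i.e. that the ground is planar) and the use of cv2.findHomography are choices made for this illustration, not requirements of the claims.

```python
import numpy as np
import cv2  # assumption: OpenCV is available

def calibrate_conversion_rule(image_points, radar_points):
    """Estimate a conversion / projection matrix from matched wall-corner index
    points: pixel coordinates in the first image vs. coordinates in the radar
    (second) coordinate system. Needs at least four matches."""
    src = np.asarray(image_points, dtype=np.float32)
    dst = np.asarray(radar_points, dtype=np.float32)
    H, _ = cv2.findHomography(src, dst)
    return H
```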
6. The method of claim 1, wherein constructing a circular scan area of a virtual radar in the second coordinate system comprises:
constructing a circular scanning area of the virtual radar by taking the origin of the second coordinate system as the circle center and the designated distance as the radius;
alternatively,
according to the calibrated coordinate conversion rule, converting the center coordinate of the vision sensor into the second coordinate system to obtain the coordinate position of the virtual radar;
and constructing a circular scanning area of the virtual radar by taking the coordinate position of the virtual radar as a center and the designated distance as a radius.
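The two alternatives of claim 6 differ only in where the virtual radar is centred. A small sketch, with the homography H and the pixel center of the vision sensor as assumed inputs:

```python
import numpy as np

def virtual_radar_center(use_sensor_center=False, H=None, sensor_center_px=None):
    """Either the origin of the second coordinate system, or the vision-sensor
    center projected into the second coordinate system with the calibrated
    conversion rule H."""
    if not use_sensor_center:
        return (0.0, 0.0)
    u, v = sensor_center_px
    x, y, w = H @ np.array([u, v, 1.0])
    return (x / w, y / w)

# The circular scanning area is then the disc of the designated radius around
# this center, e.g. center = virtual_radar_center(); radius = 5.0
```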
7. The method according to claim 1, wherein the extracting, in each sector scanning area of the plurality of sector scanning areas of the virtual radar constructed in the second coordinate system, the target point closest to the origin of the virtual radar to form travelable area boundary points in the second coordinate system comprises:
removing target points which are not in the circular scanning area;
for each sector scanning area, reserving the target point which is closest to the center of the circular scanning area, and removing the remaining target points in the sector scanning area;
the target points reserved in each sector scanning area constitute the travelable area boundary points in the second coordinate system.
8. The method of claim 7, wherein the reserving, for each sector scanning area, the target point closest to the center of the circular scanning area and removing the remaining target points within the sector scanning area comprises:
for each sector scanning area, if no target point exists in the sector scanning area, reserving the point in the sector scanning area that is farthest from the center as the target point;
if at least one target point exists in the sector scanning area, reserving the target point closest to the center of the circular scanning area, and removing the remaining target points in the sector scanning area.
9. A virtual radar apparatus, comprising:
the data acquisition module is used for acquiring visual data collected from the travelable area;
the edge segmentation module is used for extracting travelable area edge points in the visual data;
the projection transformation module is used for converting the edge points of the travelable area in the first coordinate system into target points in the second coordinate system according to a calibrated coordinate conversion rule;
the boundary extraction module is used for constructing a plurality of sector scanning areas of the virtual radar in a second coordinate system, and extracting, in each sector scanning area, the target point closest to the origin of the virtual radar to form travelable area boundary points in the second coordinate system;
wherein, according to the calibrated coordinate conversion rule, converting the edge point of the travelable area in the first coordinate system into the target point in the second coordinate system comprises:
multiplying the coordinate data of the travelable area edge point in the first coordinate system by a conversion matrix or a projection matrix corresponding to the coordinate conversion rule to obtain the coordinate data of a target point in the second coordinate system corresponding to the travelable area edge point;
constructing a plurality of sector-shaped scanning areas of a virtual radar in the second coordinate system, comprising:
constructing a circular scanning area of the virtual radar in the second coordinate system;
the circular scan area is divided into a plurality of sector scan areas at a specified angular resolution.
10. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the detection method of any one of claims 1-8.
11. A computer-readable storage medium, characterized in that the storage medium stores a computer program executable by a processor to perform the detection method of any one of claims 1-8.
CN202010352986.4A 2020-04-28 2020-04-28 Detection method, virtual radar device, electronic apparatus, and storage medium Active CN111552289B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010352986.4A CN111552289B (en) 2020-04-28 2020-04-28 Detection method, virtual radar device, electronic apparatus, and storage medium


Publications (2)

Publication Number Publication Date
CN111552289A CN111552289A (en) 2020-08-18
CN111552289B true CN111552289B (en) 2021-07-06

Family

ID=72003326

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010352986.4A Active CN111552289B (en) 2020-04-28 2020-04-28 Detection method, virtual radar device, electronic apparatus, and storage medium

Country Status (1)

Country Link
CN (1) CN111552289B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348777B (en) * 2020-10-19 2024-01-12 深圳市优必选科技股份有限公司 Human body target detection method and device and terminal equipment
CN112835026B (en) * 2020-12-31 2024-02-20 福瑞泰克智能系统有限公司 Radar mirror image target detection method and device, radar equipment and vehicle
CN112987734B (en) * 2021-02-23 2023-05-02 京东科技信息技术有限公司 Robot travel method, robot travel device, electronic device, storage medium, and program product
CN113552574B (en) * 2021-07-13 2023-01-06 上海欧菲智能车联科技有限公司 Region detection method and device, storage medium and electronic equipment
CN113917450B (en) * 2021-12-07 2022-03-11 深圳佑驾创新科技有限公司 Multi-extended-target radar measurement set partitioning method and device
CN114476061B (en) * 2021-12-24 2024-02-09 中国电信股份有限公司 Interference positioning method and unmanned aerial vehicle

Family Cites Families (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3304687B2 (en) * 1995-05-24 2002-07-22 日産自動車株式会社 Vehicle lane recognition device, obstacle detection device, road departure notification device
JP4682809B2 (en) * 2005-11-04 2011-05-11 株式会社デンソー Parking assistance system
JP2008028478A (en) * 2006-07-18 2008-02-07 Sumitomo Electric Ind Ltd Obstacle detection system, and obstacle detecting method
JP5124147B2 (en) * 2007-02-01 2013-01-23 三洋電機株式会社 Camera calibration apparatus and method, and vehicle
CN101549683B (en) * 2009-04-23 2011-09-28 上海交通大学 Vehicle intelligent method for automatically identifying road pit or obstruction
US8848978B2 (en) * 2011-09-16 2014-09-30 Harman International (China) Holdings Co., Ltd. Fast obstacle detection
CN102538763B (en) * 2012-02-14 2014-03-12 清华大学 Method for measuring three-dimensional terrain in river model test
CN102840853A (en) * 2012-07-25 2012-12-26 中国航空工业集团公司洛阳电光设备研究所 Obstacle detection and alarm method for vehicle-mounted night vision system
CN103033817B (en) * 2012-11-25 2014-08-13 中国船舶重工集团公司第七一○研究所 Obstruction automatic recognition system for collision preventing of large-scale autonomous underwater vehicle (AUV)
KR102152641B1 (en) * 2013-10-31 2020-09-08 엘지전자 주식회사 Mobile robot
CN104036279B (en) * 2014-06-12 2017-04-05 北京联合大学 A kind of intelligent vehicle traveling control method and system
CN104390644B (en) * 2014-11-25 2017-05-24 浙江理工大学 Method for detecting field obstacle based on field navigation image collection equipment
CN104570147B (en) * 2014-12-26 2017-05-31 北京控制工程研究所 A kind of obstacle detection method based on monocular camera and initiating structure light
CN107835997B (en) * 2015-08-06 2021-07-30 埃森哲环球服务有限公司 Vegetation management for powerline corridor monitoring using computer vision
US20180025640A1 (en) * 2016-07-19 2018-01-25 Ford Global Technologies, Llc Using Virtual Data To Test And Train Parking Space Detection Systems
CN106020204A (en) * 2016-07-21 2016-10-12 触景无限科技(北京)有限公司 Obstacle detection device, robot and obstacle avoidance system
CN106338989B (en) * 2016-08-01 2019-03-26 内蒙古大学 A kind of field robot binocular vision navigation methods and systems
CN106485233B (en) * 2016-10-21 2020-01-17 深圳地平线机器人科技有限公司 Method and device for detecting travelable area and electronic equipment
CN106503653B (en) * 2016-10-21 2020-10-13 深圳地平线机器人科技有限公司 Region labeling method and device and electronic equipment
CN107977995B (en) * 2016-10-25 2022-05-06 菜鸟智能物流控股有限公司 Target area position detection method and related device
CN106679671B (en) * 2017-01-05 2019-10-11 大连理工大学 A kind of navigation identification figure recognition methods based on laser data
JP7018566B2 (en) * 2017-04-28 2022-02-14 パナソニックIpマネジメント株式会社 Image pickup device, image processing method and program
WO2019039733A1 (en) * 2017-08-21 2019-02-28 (주)유진로봇 Moving object and combined sensor using camera and lidar
CN108256413B (en) * 2017-11-27 2022-02-25 科大讯飞股份有限公司 Passable area detection method and device, storage medium and electronic equipment
CN108734124A (en) * 2018-05-18 2018-11-02 四川国软科技发展有限责任公司 A kind of laser radar dynamic pedestrian detection method
CN108961343A (en) * 2018-06-26 2018-12-07 深圳市未来感知科技有限公司 Construction method, device, terminal device and the readable storage medium storing program for executing of virtual coordinate system
CN109283538B (en) * 2018-07-13 2023-06-13 上海大学 Marine target size detection method based on vision and laser sensor data fusion
CN109977845B (en) * 2019-03-21 2021-08-17 百度在线网络技术(北京)有限公司 Driving region detection method and vehicle-mounted terminal
CN110728288B (en) * 2019-10-12 2022-06-28 上海高仙自动化科技发展有限公司 Corner feature extraction method based on three-dimensional laser point cloud and application thereof
CN110765922B (en) * 2019-10-18 2023-05-02 华南理工大学 Binocular vision object detection obstacle system for AGV

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110928320A (en) * 2020-02-10 2020-03-27 上海高仙自动化科技发展有限公司 Path generation method and generation device, intelligent robot and storage medium

Also Published As

Publication number Publication date
CN111552289A (en) 2020-08-18

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant