CN113064425A - AGV equipment and navigation control method thereof - Google Patents

AGV equipment and navigation control method thereof

Info

Publication number
CN113064425A
Authority
CN
China
Prior art keywords
agv
data processing
processing module
information
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110285196.3A
Other languages
Chinese (zh)
Inventor
黄夷
刘文龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Integrity Australia Technology Consulting Ltd
Original Assignee
Integrity Australia Technology Consulting Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Integrity Australia Technology Consulting Ltd
Priority to CN202110285196.3A
Publication of CN113064425A
Legal status: Pending

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides an AGV device and a navigation control method thereof. Images around the AGV are acquired by a high-definition camera and combined with depth-camera and temperature information to form a 3D perception fusion model; through this fusion model the position and attitude of the AGV are determined and the surrounding road conditions are accurately judged. Meanwhile, the AGV's on-board wireless module can receive more scheduling data, so that obstacle-avoidance adjustments are made and the route is re-planned in time, reducing unnecessary stops and improving the AGV's operating efficiency.

Description

AGV equipment and navigation control method thereof
Technical Field
The invention relates to the technical field of navigation, in particular to AGV equipment based on deep learning and image recognition and a navigation control method thereof.
Background
With the rapid development of machine vision, its applications have spread to many fields; in the autonomous driving industry in particular, unmanned driving technology has advanced by leaps and bounds. AGVs, however, still largely rely on laser point-cloud (SLAM) techniques for perception. With the development of machine vision technology, the progress of image recognition, and the application of semantic analysis and machine learning, the case for using machine vision on AGVs is becoming strong.
The mainstream technologies used by existing AGVs include magnetic navigation, laser navigation, inertial navigation, two-dimensional codes, visual navigation and the like. Magnetic navigation cannot achieve intelligent obstacle avoidance; although its positioning is accurate, the vehicle can only travel along the laid magnetic tape path. Laser navigation is the current mainstream: its precision is high and its requirements on ambient brightness are low, but site preparation and equipment costs are too high and route modification is troublesome. Inertial navigation is technically advanced and adopted by a number of foreign vendors, but it depends on gyroscopes with high manufacturing cost, and the guidance precision is limited by the manufacturing precision and service life of the gyroscope. Laser navigation itself mainly comprises reflector-based navigation and SLAM navigation. Reflector-based laser navigation is an older technique: a lidar continuously emits laser pulses, and a rotating optical mechanism sends them out in all directions within the scanning angle at fixed angular intervals, forming a two-dimensional scanning plane referenced to radial coordinates. The lidar identifies the positions of laser reflectors from the surface reflectivity of objects within the scanning range, and then computes the position and attitude of the AGV carrying the lidar from the positions of at least three reflectors. The interior of a laser reflector is a prism structure that reflects incident light back along its original path, and its reflectivity is far higher than that of an ordinary object surface, so the lidar can easily recognize the reflector panels. In practice, however, site deployment is troublesome and the map is prone to jumps, causing positioning failures. SLAM navigation, also called laser natural-contour navigation, is based on the SLAM principle (simultaneous localization and mapping): in an unknown environment, the robot localizes itself with its on-board internal sensors (encoders, IMU, etc.) and external sensors (laser or visual sensors), and incrementally builds an environment map from the environmental information gathered by the external sensors. During map building, the laser sensor detects and learns the contours of objects in the surrounding natural environment (such as walls, columns or other fixed objects), including the distance, angle and reflectivity of the measured objects, after which localization and navigation of the mobile robot are realized through SLAM algorithms and the like. Map construction with this method is difficult and the cost is high, which hinders the popularization of AGVs.
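To make the reflector-based pose computation concrete, the sketch below shows one common way of solving for a 2D position from ranges to at least three reflectors at known map positions, plus a heading from one measured bearing. It illustrates the general technique only; the function, the numbers and the least-squares formulation are assumptions for illustration, not taken from the patent.

```python
# Illustrative trilateration sketch (not from the patent): estimate a 2D AGV
# pose from lidar range/bearing measurements to reflectors at known positions.
import numpy as np

def estimate_pose(reflectors, ranges, bearings):
    """reflectors: (N, 2) known global reflector positions, N >= 3.
    ranges: (N,) measured distances; bearings: (N,) angles in the lidar frame (rad).
    Returns (x, y, heading) of the AGV."""
    refs = np.asarray(reflectors, dtype=float)
    r = np.asarray(ranges, dtype=float)
    # Linearize the range equations against the first reflector:
    # (xi^2 + yi^2 - ri^2) - (x0^2 + y0^2 - r0^2) = 2(xi - x0)*x + 2(yi - y0)*y
    c = refs[:, 0] ** 2 + refs[:, 1] ** 2 - r ** 2
    A = 2.0 * (refs[1:] - refs[0])
    b = c[1:] - c[0]
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Heading = global bearing to reflector 0 minus its bearing in the lidar frame.
    heading = np.arctan2(refs[0, 1] - xy[1], refs[0, 0] - xy[0]) - bearings[0]
    return xy[0], xy[1], (heading + np.pi) % (2 * np.pi) - np.pi

# Example: an AGV at (1, 2) with heading 0 observes three reflectors.
refs = [(5.0, 2.0), (1.0, 7.0), (-3.0, -1.0)]
truth = np.array([1.0, 2.0])
rng = [float(np.hypot(*(np.array(p) - truth))) for p in refs]
brg = [float(np.arctan2(p[1] - truth[1], p[0] - truth[0])) for p in refs]
print(estimate_pose(refs, rng, brg))  # approximately (1.0, 2.0, 0.0)
```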
There is therefore a need for improvements to existing navigation approaches to balance cost with accurate navigation.
Disclosure of Invention
In view of the above-mentioned drawbacks, the present disclosure provides an AGV navigation control method based on deep learning and image recognition, which replaces conventional AGV navigation methods and makes AGV navigation safer and more intelligent.
In order to achieve the above purpose, the following technical scheme is adopted in this application:
a control method for AGV navigation is characterized by comprising the following steps:
S11, obtaining the ambient temperature around the current AGV body based on a thermal imaging device and transmitting it to a data processing module,
S12, obtaining the current depth image information of the AGV based on a trinocular camera and transmitting it to the data processing module,
S13, the data processing module receiving and responding to the ambient temperature information and the depth image information and generating annotation information through processing,
S14, obtaining an environment image around the current AGV based on a high-definition camera and transmitting it to the data processing module, the data processing module receiving, responding to and processing the environment image,
labeling the matched annotation information onto the processed environment image,
and transmitting the labeled result to the control module,
and S15, the control module receiving the labeled result and controlling the motion attitude of the AGV accordingly. The processed temperature and depth information is annotated onto the image fed back by the high-definition camera, and the fusion model determines the position and attitude of the AGV and accurately judges the surrounding road conditions; unnecessary stops are reduced and the operating efficiency of the AGV is improved. An illustrative sketch of this flow is given below.
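As a minimal, non-authoritative sketch of the S11-S15 data flow, the following Python outline assumes hypothetical driver objects (thermal_imager, depth_camera, hd_camera) and module interfaces (build_annotations, label, update_motion) that the patent does not define:

```python
# Hypothetical sketch of the S11-S15 flow; object and method names are
# assumptions made for illustration only.
def navigation_step(thermal_imager, depth_camera, hd_camera,
                    data_processing_module, control_module):
    # S11: ambient temperature around the AGV body from the thermal imaging device
    temperature_map = thermal_imager.read()
    # S12: depth image information from the trinocular camera
    depth_image = depth_camera.read()
    # S13: temperature and depth data are processed into annotation information
    annotations = data_processing_module.build_annotations(temperature_map, depth_image)
    # S14: the HD camera image is processed (e.g. semantic segmentation) and the
    # matching annotation information is labeled onto it
    environment_image = hd_camera.read()
    labeled_result = data_processing_module.label(environment_image, annotations)
    # S15: the control module adjusts the AGV motion attitude from the labeled result
    control_module.update_motion(labeled_result)
```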
Preferably, the step S13 further includes:
the data processing module receives and responds to the depth image information and processes the depth image through semantic segmentation.
Preferably, the step S14 further includes:
the data processing module receives and responds to the environment image and processes the environment image based on semantic segmentation.
Further, the image information subjected to semantic segmentation processing is optimized based on a conditional random field model.
An AGV device is characterized by being operated according to the control method.
The embodiment of the application provides an AGV device comprising:
a high-definition camera electrically connected with the data processing module, used for transmitting real-time images of the environment around the AGV to the data processing module,
an infrared thermal imager electrically connected with the data processing module, used for detecting the environment around the AGV and transmitting the detected ambient temperature information to the data processing module,
and a wireless module connected with a remote control end, through which the AGV device and the remote control end exchange information. The remote control end can therefore monitor the AGV device remotely in real time and intervene in time.
Preferably, the high-definition camera is a binocular high-definition camera.
Preferably, the AGV device further includes a gyroscope electrically connected to the data processing module for timely deviation correction.
Preferably, the AGV device further includes an ultrasonic sensor arranged on the AGV housing and electrically connected to the data processing module; the ultrasonic sensor is used to monitor the surrounding traffic in time.
Preferably, a plurality of AGV devices are included within a preset area, each with an ad hoc networking function; when the AGV devices are in automatic mode, one of them is configured as the master AGV and the others are configured as slave AGVs,
and only the master AGV connects to the remote control end for information interaction, while the master AGV sends control instructions to the slave AGVs connected to it.
Advantageous effects:
the AGV navigation method based on combination of deep learning and image recognition is improved on the basis of the SLAM technology, a three-dimensional map is established, a camera is used for obtaining images in real time, a processing module receives and responds to image information fed back by the camera and processes the image information, semantic segmentation is conducted to mark objects in the images according to classes respectively, semantic prediction results are optimized, unnecessary parking is reduced, and the operation efficiency of the AGV is improved.
Drawings
Fig. 1 and fig. 2 are schematic flow charts of control methods according to embodiments of the present application;
FIG. 3 is a schematic view of a topology of a conditional random field according to an embodiment of the present application;
FIG. 4 is a functional topology diagram of an AGV according to an embodiment of the present application.
Detailed Description
The above-described scheme is further illustrated below with reference to specific examples. It should be understood that these examples are for illustrative purposes and are not intended to limit the scope of the present invention. The conditions employed in the examples may be further adjusted as determined by the particular manufacturer, and the conditions not specified are typically those used in routine experimentation.
In the present application, the terms "upper", "lower", "inside", "middle", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings. These terms are used primarily to better describe the present application and its embodiments, and are not used to limit the indicated devices, elements or components to a particular orientation or to be constructed and operated in a particular orientation.
The invention provides an AGV device and a navigation control method thereof. Images around the AGV are acquired by a high-definition camera and combined with depth-camera and temperature information to form a 3D perception fusion model; through this fusion model the position and attitude of the AGV are determined and the surrounding road conditions are accurately judged. Meanwhile, the AGV's on-board wireless module can receive more scheduling data, so that obstacle-avoidance adjustments are made and the route is re-planned in time, reducing unnecessary stops and improving the AGV's operating efficiency.
The control method of the AGV navigation will be described in detail with reference to FIG. 1.
The AGV navigation control method comprises the following steps:
S1, establishing a three-dimensional map and acquiring image information of the environment around the AGV based on a camera,
S2, the data processing module receiving and responding to the image information captured by the camera and processing the image through semantic segmentation,
S3, a labeling module labeling the processed image information by class according to a preset model,
and S4, the control module receiving the optimized information and controlling the motion attitude of the AGV accordingly.
The method improves on SLAM-based three-dimensional map building: the environment information around the AGV is obtained in real time through a camera, the captured image is transmitted to the data processing module, and the data processing module processes and labels the received image through semantic segmentation. Semantic segmentation labels the objects in the picture at pixel level, marking each object by class. The labeled classes (categories) may be: pedestrians, vehicles, obstacles, cargo, and the like. In this way the environmental data around the AGV is collected through semantic segmentation, and the control module of the AGV controls the corresponding motion attitude (behaviors such as stopping, obstacle avoidance, acceleration and deceleration). In this embodiment, the image semantic segmentation uses a Conditional Random Field (CRF) as the final post-processing step to optimize the semantic prediction result. The CRF treats the category of each pixel in the image as a variable x_k ∈ {y_0, y_1, ..., y_n} and then considers the relationship between any two variables to build a complete graph (as shown in FIG. 3, with n = 5, though other embodiments are not so limited). In a fully-connected CRF model, the corresponding energy function is:
E(X) = Σ_k θ_p(x_k) + Σ_{k<l} θ_pq(x_k, x_l)
where θ_p(x_k) is the unary term, representing the semantic category assigned to pixel x_k, which can be obtained from the prediction of an FCN or other semantic segmentation model; the second term θ_pq(x_k, x_l) is a binary (pairwise) term that captures the semantic relationship between pixels. For example, the probability that pixels such as "shelf" and "goods" are adjacent in physical space should be greater than the probability that pixels such as "shelf" and "door" are adjacent. Finally, by optimizing the CRF energy function, the FCN's semantic prediction of the image is refined to obtain the final semantic segmentation result. It is worth mentioning that the CRF step, originally separate from deep-model training, can be embedded into the neural network, i.e. FCN + CRF is integrated into an end-to-end system; the advantage is that the CRF energy function of the final prediction can directly guide the training of the FCN model parameters, yielding a better image semantic segmentation result. Images then only need image-level annotations (e.g. "person", "car", "shelf") rather than expensive pixel-level labels to achieve segmentation accuracy comparable to existing methods. To improve the accuracy of the model, it can be trained on road conditions with a convolutional neural network so that the AGV can respond to various situations, decelerating, detouring, stopping, sounding a warning and so on as appropriate, improving safety and the intelligence of its judgments.
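A toy NumPy sketch of the energy function above is shown below; it uses a simple colour-similarity Potts penalty as a stand-in for the learned pairwise term, which is an assumption for illustration rather than the patent's exact model, and in practice an efficient dense-CRF inference routine would replace this O(P²) loop.

```python
# Toy sketch of the fully-connected CRF energy E(X); the weights are invented.
import numpy as np

def crf_energy(labels, unary, colours, w=1.0, sigma=10.0):
    """labels: (P,) class index per pixel; unary: (P, C) unary costs, e.g. negative
    log class probabilities from an FCN; colours: (P, 3) pixel colours."""
    P = labels.shape[0]
    energy = unary[np.arange(P), labels].sum()            # sum_k theta_p(x_k)
    for k in range(P):                                    # sum_{k<l} theta_pq(x_k, x_l)
        for l in range(k + 1, P):
            if labels[k] != labels[l]:
                diff = colours[k] - colours[l]
                # similar-looking pixels pay a higher price for disagreeing
                energy += w * np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))
    return energy
```

Minimizing this energy over all labelings is what refines the FCN prediction into the final segmentation.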
As a variation of the above implementation, as shown in FIG. 2,
the AGV navigation control method comprises the following steps:
S11, obtaining the ambient temperature around the AGV body based on an infrared thermal imager and transmitting it to the data processing module,
S12, obtaining the current depth image information of the AGV based on the trinocular camera and transmitting it to the data processing module,
S13, the data processing module receiving and responding to the ambient temperature information and the depth image information and generating annotation information through processing,
S14, the data processing module receiving, responding to and processing the image of the environment around the AGV body transmitted by the high-definition camera, labeling the generated annotation information onto the matching positions of that image, and transmitting the processing result to the control module,
and S15, the control module receiving the optimized information and controlling the motion attitude of the AGV accordingly.
In one embodiment, when the AGV runs, the infrared thermal imager is started first and performs a preliminary collection and analysis of the ambient temperature around the AGV body; depth image information is then obtained and analyzed from the trinocular depth camera, and the data processing module receives, responds to and processes the ambient temperature information and the depth image information to obtain the annotation information.
The data processing module receives the images captured in real time by a high-definition camera (such as a binocular high-definition camera) and processes them (the images are processed through semantic segmentation),
the processed annotation information is labeled at the matching positions of the image captured by the high-definition camera, and the optimized information is fed back to the control module,
and the control module receives and responds to the optimized information to control the attitude of the AGV. In this way the AGV obtains the annotation information around the vehicle body from the infrared thermal imager and the depth camera through the optimization processing of the data processing module, and this annotation information is matched and labeled onto the image acquired by the (binocular) high-definition camera. Images around the AGV are thus acquired by the high-definition camera and combined with the depth-camera and temperature information into a 3D perception fusion model, through which the position and attitude of the AGV are determined and the surrounding road conditions are accurately judged. After a large amount of test data has been accumulated, a data processing module (such as a GPU) performs comprehensive weighting over the data to obtain a three-dimensional 3D model in which the vehicle-body position, the surrounding road conditions, the shelf situation and the like are marked. Meanwhile, the AGV's on-board wireless module can receive more scheduling data, so that obstacle-avoidance adjustments are made and the route is re-planned in time, reducing unnecessary stops and improving the AGV's operating efficiency.
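The "comprehensive weighting" of the three sensor streams is not specified in the patent; the following sketch simply shows one plausible per-pixel fusion of segmentation, depth and thermal scores into an annotated obstacle mask, with weights and threshold chosen arbitrarily for illustration:

```python
# Hypothetical per-pixel fusion of the three sensor-derived score maps.
import numpy as np

def fuse_maps(seg_obstacle_prob, depth_obstacle_prob, thermal_anomaly_prob,
              weights=(0.5, 0.3, 0.2), threshold=0.6):
    """Each input is an (H, W) array in [0, 1], registered to the HD camera image."""
    w_seg, w_depth, w_thermal = weights
    fused = (w_seg * seg_obstacle_prob
             + w_depth * depth_obstacle_prob
             + w_thermal * thermal_anomaly_prob)
    return fused > threshold  # boolean mask of cells to annotate as obstacles/anomalies
```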
The present application proposes an AGV (Automated Guided Vehicle) device whose functional topology is shown in FIG. 4.
the AGV is configured with:
a high-definition camera electrically connected to the data processing module of the AGV device, used for transmitting images captured in real time to the data processing module,
the system comprises a gyroscope, an ultrasonic sensor, an infrared thermal imager and the like, and is used for detecting the surrounding environment of the AGV. The AGV has the advantages that the surrounding environment of the AGV body can be detected more accurately, sensing of a human simulator to the surrounding environment is achieved, the AGV runs more humanoid, and the safety and the intelligent degree of the AGV are improved. This AGV equipment disposes wireless module (like 5G module) for AGV can be real-time with self current information transmission for distal end (like the dispatch room), and remote operation's personnel can long-rangely know in real time AGV automobile body and behavior like this, make things convenient for the remote processing emergency. Data exchange is accelerated through the 5G module, and real-time data monitoring and prediction of the AGV are achieved. Meanwhile, the AGV is vehicle-mounted and can receive more scheduling data, obstacle avoidance adjustment is carried out on the AGV, the route is planned again in time, unnecessary parking is reduced, and the operating efficiency of the AGV is accelerated. In this embodiment, the AGV is equipped with a laser radar, and a three-dimensional map is created by using the SLAM-based method.
When the AGV operates, the infrared thermal imager first performs a preliminary collection and analysis of the ambient temperature around the AGV (for example, when a person is present their body temperature differs markedly from the ambient temperature; when a fire breaks out on site the scene temperature differs greatly from that of the vehicle body and exceeds a preset threshold); depth image information is then obtained and analyzed from the trinocular depth camera, and real-time images from the binocular high-definition camera are acquired for image analysis and processing. The image acquired by the high-definition camera is weighted with the infrared thermal imager and depth camera data for a preliminary annotation, and by accumulating a large amount of data experience a GPU performs comprehensive weighting over the data to obtain a three-dimensional 3D model in which the vehicle-body position, the surrounding road conditions, the shelf situation and the like are marked.
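The preliminary thermal screening can be pictured with the short sketch below; the deviation threshold and the absolute limit are hypothetical values, not figures from the patent:

```python
# Hypothetical thermal pre-screening: flag pixels that deviate from the ambient
# estimate by more than a threshold (e.g. a person) or exceed an absolute limit
# (e.g. a fire).
import numpy as np

def thermal_anomalies(temperature_map, deviation_threshold_c=8.0, absolute_limit_c=60.0):
    """temperature_map: (H, W) temperatures in degrees Celsius from the thermal imager."""
    ambient = np.median(temperature_map)                 # rough ambient estimate
    deviation = np.abs(temperature_map - ambient)
    return (deviation > deviation_threshold_c) | (temperature_map > absolute_limit_c)
```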
The ultrasonic sensor is arranged on the housing of the AGV device (for example on the front and/or rear side or on both sides, front/rear being taken with respect to the AGV's normal travel direction) and is used to monitor the surrounding environment in time so that the AGV can adjust its attitude promptly. The infrared thermal imager is arranged on the front side of the AGV.
The gyroscope is used to further improve the accuracy of the AGV and to correct deviations in time.
The infrared sensor can sense both its own temperature and the ambient temperature, adjust its own temperature in time, give early warning of abnormal environmental conditions, and avoid obstacles in abnormal environments. Through real-time photography and video recording, the environment around the AGV is remotely monitored, making remote inspection and remote-controlled operation convenient.
In an embodiment, the AGV is configured with an integrated image recognition module that fuses the depth camera and the high-definition camera into one unit; through it, the 3D-model fusion perception technique accurately determines the position and attitude of the AGV and accurately judges the surrounding road conditions, improving the accuracy of intelligent operation and road-condition judgment. The high-definition camera is a binocular high-definition camera, and it is configured to rotate, so that when the AGV is unloaded it can rotate to acquire 360-degree panoramic information around the whole vehicle body. Experiments show that the 3D perception fusion model based on this method can raise the detection rate and accuracy of the system to 98%.
In an embodiment, the AGVs have an ad hoc networking function. Specifically, within the same preset area, when multiple AGVs are in automatic mode one of them is configured as the master AGV and the others are configured as slave AGVs; only the master AGV connects to the remote end (such as the dispatch room) for information interaction, and in preset scenarios (such as a fault or a fire) the master AGV sends instructions to the slave AGVs connected to it. All other slave AGVs within the same preset area autonomously connect to the master AGV, and the slave AGVs automatically exchange fault information, surrounding-environment information, obstacle-avoidance and path-optimization data with one another to improve the overall efficiency of multi-AGV operation. In this way the master AGV and slave AGVs within the same preset area interact with each other, avoiding the drawback of poor information transmission in a confined space.
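A toy sketch of the master/slave arrangement is given below. The lowest-id election rule is an assumption (the patent only states that one AGV is configured as master), and the message contents are placeholders:

```python
# Hypothetical master/slave AGV networking sketch within one preset area.
class AGVNode:
    def __init__(self, agv_id, area_id):
        self.agv_id, self.area_id = agv_id, area_id
        self.is_master = False
        self.peers = []                       # other AGVNodes in the same area

    def elect_master(self):
        # Assumed rule: the AGV with the lowest id becomes the master.
        ids = [self.agv_id] + [p.agv_id for p in self.peers]
        self.is_master = (self.agv_id == min(ids))

    def report(self, remote_link):
        status = {"id": self.agv_id, "faults": [], "obstacles": []}
        if self.is_master:
            remote_link.send(status)          # only the master talks to the dispatch room
            for peer in self.peers:
                peer.receive({"cmd": "sync", "from": self.agv_id})
        else:
            master = next(p for p in self.peers if p.is_master)
            master.receive(status)            # slaves report to the master

    def receive(self, message):
        pass  # handle fault info, surrounding-environment info, path updates, etc.
```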
Although the present invention has been described with reference to the preferred embodiments, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A control method for AGV navigation is characterized by comprising the following steps:
S11, obtaining the ambient temperature around the current AGV body based on a thermal imaging device and transmitting it to a data processing module,
S12, obtaining the current depth image information of the AGV based on a trinocular camera and transmitting it to the data processing module,
S13, the data processing module receiving and responding to the ambient temperature information and the depth image information and generating annotation information through processing,
S14, obtaining an environment image around the current AGV based on a high-definition camera and transmitting it to the data processing module, the data processing module receiving, responding to and processing the environment image,
labeling the matched annotation information onto the processed environment image and transmitting the labeled result to a control module,
and S15, the control module receiving the labeled result and controlling the motion attitude of the AGV accordingly.
2. The AGV navigation control method of claim 1,
step S13 further includes:
the data processing module receives and responds to the depth image information and processes the depth image through semantic segmentation.
3. The AGV navigation control method of claim 1,
step S14 further includes:
the data processing module receives and responds to the environment image and processes the depth image based on semantic segmentation.
4. The AGV navigation control method according to claim 2 or 3,
image information for semantic segmentation processing is optimized based on a conditional random field model.
5. An AGV apparatus, characterized by operating according to the control method of any one of claims 1 to 4.
6. The AGV apparatus of claim 5,
a high-definition camera electrically connected with the data processing module, used for transmitting real-time images of the environment around the AGV to the data processing module,
an infrared thermal imager electrically connected with the data processing module, used for detecting the environment around the AGV and transmitting the detected ambient temperature information to the data processing module,
and a wireless module connected with a remote control end, through which the AGV apparatus and the remote control end exchange information.
7. The AGV apparatus of claim 6, wherein said high definition camera is a binocular high definition camera.
8. The AGV apparatus of claim 6, further comprising a gyroscope electrically connected to the data processing module for timely deviation correction.
9. The AGV apparatus of claim 6, further comprising an ultrasonic sensor disposed on the AGV apparatus housing and electrically connected to the data processing module, wherein said ultrasonic sensor is configured to monitor the surrounding environment in a timely manner.
10. The AGV apparatus of claim 6, wherein a plurality of AGV apparatuses, each having an ad hoc networking function, are included within a preset area; when said AGV apparatuses operate in an automatic mode, one of them is configured as a master AGV and the others are configured as slave AGVs,
and only the master AGV connects to the remote control end for information interaction, and the master AGV sends control instructions to the slave AGVs connected to it.
CN202110285196.3A 2021-03-17 2021-03-17 AGV equipment and navigation control method thereof Pending CN113064425A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110285196.3A CN113064425A (en) 2021-03-17 2021-03-17 AGV equipment and navigation control method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110285196.3A CN113064425A (en) 2021-03-17 2021-03-17 AGV equipment and navigation control method thereof

Publications (1)

Publication Number Publication Date
CN113064425A true CN113064425A (en) 2021-07-02

Family

ID=76560948

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110285196.3A Pending CN113064425A (en) 2021-03-17 2021-03-17 AGV equipment and navigation control method thereof

Country Status (1)

Country Link
CN (1) CN113064425A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113805580A (en) * 2021-07-09 2021-12-17 北京京东乾石科技有限公司 Equipment control method, system, device and storage medium thereof


Similar Documents

Publication Publication Date Title
CN111837083B (en) Information processing apparatus, information processing method, and storage medium
US11314249B2 (en) Teleoperation system and method for trajectory modification of autonomous vehicles
JP6962926B2 (en) Remote control systems and methods for trajectory correction of autonomous vehicles
CN111033590B (en) Vehicle travel control device, vehicle travel control method, and program
EP3371668B1 (en) Teleoperation system and method for trajectory modification of autonomous vehicles
US9916703B2 (en) Calibration for autonomous vehicle operation
EP3477616A1 (en) Method for controlling a vehicle using a machine learning system
Zhao et al. SLAM in a dynamic large outdoor environment using a laser scanner
KR100669250B1 (en) System and method for real-time calculating location
JP7259274B2 (en) Information processing device, information processing method, and program
WO2019181284A1 (en) Information processing device, movement device, method, and program
WO2019188391A1 (en) Control device, control method, and program
Sridhar et al. Cooperative perception in autonomous ground vehicles using a mobile‐robot testbed
US20230282000A1 (en) Multi-object tracking
US20230045416A1 (en) Information processing device, information processing method, and information processing program
WO2021153176A1 (en) Autonomous movement device, autonomous movement control method, and program
US11080530B2 (en) Method and system for detecting an elevated object situated within a parking facility
CN114489112A (en) Cooperative sensing system and method for intelligent vehicle-unmanned aerial vehicle
CN116504053A (en) Sensor fusion
KR20220150212A (en) Method and assistance device for supporting driving operation of a motor vehicle and motor vehicle
CN115249066A (en) Quantile neural network
CN113064425A (en) AGV equipment and navigation control method thereof
Valente et al. Evidential SLAM fusing 2D laser scanner and stereo camera
US20230186587A1 (en) Three-dimensional object detection
WO2022004333A1 (en) Information processing device, information processing system, information processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination