WO2019128496A1 - Controlling Device Motion - Google Patents


Info

Publication number
WO2019128496A1
Authority
WO
WIPO (PCT)
Prior art keywords
candidate values
target
image information
target device
values
Application number
PCT/CN2018/114999
Other languages
English (en)
French (fr)
Inventor
刘宇达
Original Assignee
北京三快在线科技有限公司
Application filed by 北京三快在线科技有限公司 filed Critical 北京三快在线科技有限公司
Priority to US16/959,126 priority Critical patent/US20200401151A1/en
Priority to EP18897378.8A priority patent/EP3722906A4/en
Publication of WO2019128496A1 publication Critical patent/WO2019128496A1/zh

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/0202 Control of position or course in two dimensions specially adapted to aircraft
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0242 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0248 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means in combination with a laser
    • G05D1/0253 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods

Definitions

  • the present application relates to the field of driverless technology, and in particular, to a method, device, and electronic device for controlling motion of a device.
  • the present application provides a method, device, and electronic device for controlling motion of a device.
  • a method for controlling motion of a device comprising:
  • the initial data including a plurality of sets of candidate values of motion parameters
  • a target value of a motion parameter currently used to control motion of the target device is selected from the plurality of sets of candidate values based on the image information.
  • the initial data further includes a weight of each set of the candidate values
  • a target value of a motion parameter currently used to control motion of the target device from the plurality of sets of candidate values, including:
  • the target value is selected from the plurality of sets of candidate values according to the corrected weight of each set of candidate values.
  • the modifying the weight of each set of the candidate values based on the image information comprises:
  • the determining, according to the image information, the correction data corresponding to each set of the candidate values including:
  • Correction data corresponding to each set of the candidate values is determined based on the target angle.
  • Correction data corresponding to each set of the candidate values is determined according to the normal distribution function.
  • the probability density values obtained by substituting each of the values into the normal distribution function are respectively obtained as correction data corresponding to each set of the candidate values.
  • an apparatus for controlling motion of a device including:
  • a detecting module configured to detect location information of an object within a target range around the target device with respect to the target device
  • a determining module configured to determine initial data based on the location information, the initial data including a plurality of sets of candidate values of the motion parameter;
  • a selection module configured to select, from the plurality of sets of candidate values, a target value of a motion parameter currently used to control motion of the target device based on the image information.
  • the initial data further includes a weight of each set of the candidate values
  • the selection module includes:
  • a selecting submodule configured to select the target value from the plurality of sets of candidate values according to the corrected weight of each set of candidate values.
  • a computer readable storage medium storing a computer program which, when executed by a processor, implements the method for controlling device motion according to any one of the above first aspects.
  • an electronic device comprising a storage medium, a processor, and a computer program stored on the storage medium and operable on the processor, the processor, when executing the program, implementing the method for controlling device motion according to any one of the above first aspects.
  • according to the method and device for controlling device motion provided by the embodiments of the present application, location information of objects around a target device relative to the target device is detected, image information collected by the target device is acquired, and initial data is determined based on the location information, the initial data including a plurality of sets of candidate values of the motion parameters; the target value of the motion parameters currently used to control motion of the target device is then selected from the plurality of sets of candidate values based on the image information. The candidate values obtained from the location information can quite accurately keep the target device away from nearby objects, while the collected image information quite accurately reflects the distribution of distant obstacles around the target device.
  • by taking full account of obstacles both near and far from the target device, the target device is controlled to travel along a more optimal path, which improves the smoothness of the travel path and achieves path optimization.
  • FIG. 2 is a flow chart of another method for controlling motion of a device, according to an exemplary embodiment of the present application.
  • FIG. 3 is a flow chart showing another method for controlling motion of a device according to an exemplary embodiment of the present application.
  • FIG. 4 is a block diagram of an apparatus for controlling motion of a device, according to an exemplary embodiment of the present application.
  • FIG. 5 is a block diagram of another apparatus for controlling motion of a device, according to an exemplary embodiment of the present application.
  • FIG. 6 is a block diagram of another apparatus for controlling motion of a device, according to an exemplary embodiment of the present application.
  • FIG. 7 is a block diagram of another apparatus for controlling motion of a device, according to an exemplary embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application.
  • FIG. 1 is a flowchart of a method for controlling motion of a device, which may be applied to an electronic device, according to an exemplary embodiment.
  • the electronic device is an unmanned device, and may include, but is not limited to, a navigation positioning robot, a drone, an unmanned vehicle, and the like.
  • the method includes the following steps.
  • step 101 position information of an object within a target range around the target device with respect to the target device is detected.
  • the target device is an unmanned device, and may be a navigation positioning robot, or a drone, or an unmanned vehicle.
  • a ranging sensor (for example, a laser ranging sensor or an infrared ranging sensor) may be disposed on the target device, and the ranging sensor may be used to detect the position information of the object around the target device relative to the target device.
  • the target range may be a range that the above-described ranging sensor can detect.
  • the location information may include distance information of the object around the target device with respect to the target device, orientation information, and the like, which is not limited in this application. It should be noted that an object farther from the target device may be difficult to be detected. Therefore, the detected position information can better reflect the distribution of objects closer to the target device.
  • step 102 image information collected by the target device is acquired.
  • an image acquisition device (such as a camera or a video camera) may be disposed on the target device, and the image acquisition device may be used to collect image information of the environment around the target device.
  • the image information includes not only images of objects near the target device but also images of distant objects around the target device. Therefore, the image information described above is more capable of reflecting the distribution of objects farther from the target device than the position information.
  • the initial data may be determined from the above location information using any reasonable path planning algorithm.
  • for example, a PRM (Probabilistic Roadmap Method) algorithm or an RRT (Rapidly-exploring Random Tree) algorithm may be used to determine the multiple sets of candidate values.
  • optionally, a DWA (Dynamic Window Approach) algorithm may be used to determine the multiple sets of candidate values; determining them with the DWA algorithm yields a more accurate result.
  • of course, any other path planning algorithm known in the art, or that may appear in the future, can be applied to the present application, which is not limited in this respect.
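As a rough illustration of how such candidate values might be produced, here is a minimal DWA-style sketch that samples linear/angular velocity pairs from a dynamic window around the current velocities. The window limits, acceleration bounds, sampling step, and function name are assumptions for illustration only, not taken from the patent:

```python
def sample_candidates(v_now, w_now, dt=0.5,
                      v_limits=(0.0, 1.0), w_limits=(-1.0, 1.0),
                      a_max=0.5, alpha_max=1.0, steps=5):
    """Sample candidate (linear velocity, angular velocity) pairs from the
    dynamic window: velocities reachable within dt given acceleration limits,
    clipped to the device's absolute velocity limits."""
    v_lo = max(v_limits[0], v_now - a_max * dt)
    v_hi = min(v_limits[1], v_now + a_max * dt)
    w_lo = max(w_limits[0], w_now - alpha_max * dt)
    w_hi = min(w_limits[1], w_now + alpha_max * dt)
    vs = [v_lo + (v_hi - v_lo) * i / (steps - 1) for i in range(steps)]
    ws = [w_lo + (w_hi - w_lo) * i / (steps - 1) for i in range(steps)]
    return [(v, w) for v in vs for w in ws]

# Hypothetical current state: 0.5 m/s forward, no rotation.
candidates = sample_candidates(v_now=0.5, w_now=0.0)
```

In a full DWA implementation each candidate would additionally be checked against the detected obstacle positions and scored, which is one way the per-set weights mentioned in this application could arise.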
  • a target value of a motion parameter currently used to control the motion of the target device is selected from the plurality of sets of candidate values based on the image information.
  • although each set of candidate values of the motion parameters can control the motion of the target device and keep it away from detectable surrounding objects, different candidate values produce different travel paths. Therefore, a set of target values that makes the travel path of the target device more optimal must be selected from the plurality of sets of candidate values as the parameter values currently used to control the motion of the target device.
  • weights of each set of candidate values may be determined, the weights may be corrected based on the image information, and a set of target values may be selected from the plurality of sets of candidate values according to the corrected weights.
  • alternatively, an obstacle feature vector may be extracted from the candidate values of the motion parameters and the image information, and the obstacle feature vector may be input into a pre-trained convolutional neural network, with the result output by the network used as the target value of the motion parameters.
  • target value can also be selected from the above alternative values by other means, and the present application does not limit the specific manner of selecting the target value.
  • according to the method for controlling device motion described above, location information of objects around the target device relative to the target device is detected, the collected image information is acquired, and initial data is determined based on the location information, the initial data including a plurality of sets of candidate values; a target value currently used to control motion of the target device is then selected from the candidate values based on the image information. The candidate values obtained from the location information can quite accurately keep the target device away from nearby objects, while the collected image information quite accurately reflects the distribution of distant obstacles around the target device. Selecting the target value from the candidate values in combination with the image information takes full account of obstacles both near and far from the target device, controls the target device to travel along a more optimal path, improves the smoothness of the travel path, and achieves path optimization.
  • FIG. 2 is a flowchart of another method for controlling motion of a device according to an exemplary embodiment.
  • this embodiment describes a process of selecting a target parameter; the method may be applied to an electronic device and includes the following steps:
  • initial data is determined based on the location information, the initial data including a plurality of sets of candidate values of the motion parameters and a weight of each set of candidate values.
  • step 204 the weight of each set of candidate values is corrected based on the image information described above.
  • the correction data corresponding to each set of candidate values may be determined based on the image information, and the weight of each set of candidate values may be corrected according to the correction data corresponding to each set of candidate values. It can be understood that the weights of each set of candidate values can also be modified by other means, and the application is not limited in this regard.
  • a set of target values is selected from the plurality of sets of candidate values according to the weight of each set of candidate values after the correction.
  • the candidate value with the largest weight after correction may be selected as the target value.
  • FIG. 3 is a flowchart of another method for controlling motion of a device according to an exemplary embodiment, which describes a process of correcting weights of each set of candidate values, the method Can be applied to electronic devices, including the following steps:
  • step 301 location information of an object within a target range around the target device relative to the target device is detected.
  • step 302 image information collected by the target device is acquired.
  • step 304 correction data corresponding to each set of candidate values is determined based on the image information described above.
  • the correction data is used to correct the weights of the candidate values, and each set of candidate values corresponds to a set of correction data, and the weights of the set of candidate values may be corrected by using the correction data corresponding to each set of candidate values.
  • the correction data corresponding to each set of candidate values may be determined by inputting the acquired image information into a pre-trained target convolutional neural network, acquiring the target angle output by the target convolutional neural network, and determining, based on the target angle, the correction data corresponding to each set of candidate values.
  • the target angle is a desired angle of the current target device motion, and if the target device moves according to the desired angle, not only the obstacle can be avoided, but also the optimal path can be traveled.
  • the target convolutional neural network is a pre-trained convolutional neural network, which can pre-collect sample data and train the target convolutional neural network based on the sample data.
  • a human operator controls the target device to move in a designated field along a relatively smooth path while avoiding surrounding obstacles.
  • multiple image acquisition devices are used to acquire image information in real time.
  • each piece of collected image information is stored in association with the desired angle corresponding to the acquisition device that collected it, to obtain the sample data.
  • the collected image information is input into the convolutional neural network to be trained, the angle output by the network is compared with the expected angle corresponding to the image information, and the parameters of the network are continuously adjusted based on the comparison result until the angle output by the convolutional neural network to be trained and the expected angle corresponding to the image data satisfy a preset condition; the parameter-adjusted convolutional neural network is then used as the target convolutional neural network.
  • the correction data corresponding to each set of candidate values may be determined based on the target angle. For example, a normal distribution function with the target angle as the expected value may be generated, and each set of candidate values is determined according to the normal distribution function. Corrected data.
  • a normal distribution function can be generated:
  • the mathematical expectation value ⁇ can be the target angle
  • the standard deviation ⁇ can be an empirical value, and any reasonable value can be taken as the standard deviation.
  • the candidate angular velocity included in each set of candidate values may be taken out and multiplied by a preset duration to obtain multiple values of the random variable x of the normal distribution function, and each value is substituted into the above normal distribution function to obtain the corresponding probability density value as the correction data corresponding to each set of candidate values.
  • the preset duration may be an empirical value and may take any reasonable value, for example, 1 second or 2 seconds; the specific value of the preset duration is not limited in this application.
  • the weight of each set of candidate values may be modified according to the correction data corresponding to each set of candidate values. Specifically, the correction data corresponding to each set of candidate values may be multiplied by the weight of the set of candidate values to obtain the weight of each set of candidate values after the correction.
  • step 306 a set of target values is selected from the plurality of sets of candidate values according to the weight of each set of candidate values after the correction.
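The correction-and-selection pipeline of steps 304 through 306 can be sketched as follows: evaluate the normal probability density (centered on the target angle) at each candidate angular velocity times the preset duration, multiply it into that candidate's weight, and pick the candidate with the largest corrected weight. The standard deviation, preset duration, initial weights, and sample candidate values below are illustrative assumptions:

```python
import math

def select_target(candidates, weights, target_angle, sigma=0.5, preset_dt=1.0):
    """For each candidate (linear velocity, angular velocity) pair, take the
    candidate angular velocity times the preset duration as the random
    variable x, evaluate the normal pdf with the target angle as its expected
    value to get the correction data, multiply it into the candidate's weight,
    and return the candidate with the largest corrected weight."""
    def normal_pdf(x):
        return math.exp(-((x - target_angle) ** 2) / (2 * sigma ** 2)) / (
            sigma * math.sqrt(2 * math.pi))
    corrected = [w * normal_pdf(omega * preset_dt)
                 for (v, omega), w in zip(candidates, weights)]
    best = max(range(len(candidates)), key=lambda i: corrected[i])
    return candidates[best]

# Hypothetical candidates (v, omega) with equal initial weights; the candidate
# whose turn angle best matches target_angle = 0.4 should win.
target = select_target([(1.0, 0.0), (0.8, 0.5), (0.6, -0.5)],
                       [1.0, 1.0, 1.0], target_angle=0.4)
```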
  • according to the method for controlling device motion described above, location information of objects around the target device relative to the target device is detected, the collected image information is acquired, and initial data is determined based on the location information, the initial data including a plurality of sets of candidate values and the weight of each set; correction data corresponding to each set of candidate values is determined based on the image information, the weight of each set is corrected according to its correction data, and a set of target values is selected from the plurality of sets of candidate values according to the corrected weights.
  • the present application also provides an embodiment of a device for controlling the motion of the device.
  • FIG. 4 is a block diagram of a device for controlling motion of a device according to an exemplary embodiment.
  • the device may include: a detecting module 401, an obtaining module 402, a determining module 403, and a selecting module 404.
  • the detecting module 401 is configured to detect location information of an object within a target range around the target device with respect to the target device.
  • the determining module 403 is configured to determine initial data based on the location information, where the initial data includes multiple sets of candidate values of the motion parameters.
  • the selecting module 404 is configured to select, from the plurality of sets of candidate values, a target value of a motion parameter currently used to control motion of the target device based on the image information.
  • FIG. 5 is a block diagram of another apparatus for controlling motion of a device according to an exemplary embodiment of the present application.
  • the embodiment may be based on the foregoing embodiment shown in FIG.
  • the selection module 404 may include: a correction submodule 501 and a selection submodule 502.
  • the correction sub-module 501 is configured to correct the weight of each set of candidate values based on the image information.
  • the selecting sub-module 502 is configured to select a set of target values from the plurality of sets of candidate values according to the weight of each set of candidate values after the correction.
  • the initial data also includes the weight of each set of alternative values.
  • the determining submodule 601 is configured to determine, according to the image information, correction data corresponding to each set of candidate values.
  • the weight correction sub-module 602 is configured to correct the weight of each set of candidate values according to the correction data.
  • FIG. 7 is a block diagram of another apparatus for controlling motion of a device according to an exemplary embodiment of the present application. Based on the foregoing embodiment shown in FIG. 6, the determining sub-module 601 may include: an information input sub-module 701, an information output sub-module 702, and a data determination sub-module 703.
  • the information input sub-module 701 is configured to input the image information into a pre-trained target convolutional neural network.
  • the information output sub-module 702 is configured to acquire a target angle of the target convolutional neural network output.
  • the data determination sub-module 703 is configured to determine correction data corresponding to each set of candidate values based on the target angle.
  • the data determination sub-module 703 is configured to: generate a normal distribution function with the target angle as an expected value, and determine correction data corresponding to each set of candidate values according to the normal distribution function.
  • the above device may be preset in the electronic device, or may be loaded into the electronic device by downloading or the like.
  • Corresponding modules in the above described devices can cooperate with modules in the electronic device to implement a solution for controlling the motion of the device.
  • the embodiment of the present application further provides a computer readable storage medium, where the storage medium stores a computer program, and the computer program can be used to perform the method for controlling device motion provided by any of the foregoing embodiments of FIG. 1 to FIG.
  • the embodiment of the present application further provides an electronic device; FIG. 8 shows a schematic structural diagram of the electronic device according to an exemplary embodiment of the present application.
  • the electronic device includes a processor 801, an internal bus 802, a network interface 803, a memory 804, and a non-volatile storage medium 805.
  • the processor 801 reads the corresponding computer program from the non-volatile storage medium 805 into the memory 804 and runs it, forming, at the logical level, an apparatus for controlling device motion.
  • the present application does not exclude other implementation manners, such as a logic device or a combination of software and hardware; that is, the execution body of the following processing flow is not limited to logical units, and may also be hardware or a logic device.


Abstract

A method, an apparatus, and an electronic device for controlling device motion. The method includes: detecting position information of objects within a target range around a target device relative to the target device (101); acquiring image information collected by the target device (102); determining initial data based on the position information, the initial data being multiple sets of candidate values of motion parameters (103); and selecting, from the multiple sets of candidate values based on the image information, a target value of the motion parameters currently used to control motion of the target device (104), so that the target device can adjust in real time according to the surrounding environment.

Description

Controlling Device Motion
CROSS-REFERENCE TO RELATED APPLICATIONS
This patent application claims priority to Chinese Patent Application No. 201711471549.9, filed on December 29, 2017 and entitled "Method, Apparatus, and Electronic Device for Controlling Device Motion", the entire content of which is incorporated herein by reference.
TECHNICAL FIELD
The present application relates to the field of driverless technology, and in particular to a method, an apparatus, and an electronic device for controlling device motion.
BACKGROUND
Generally, when operating autonomously, unmanned devices such as robots, unmanned vehicles, and drones need to adjust their motion according to the surrounding environment in order to avoid obstacles. Path planning is usually performed according to the distribution of obstacles around the unmanned device so that the device avoids them, but the practical effect is often poor.
SUMMARY
To solve at least one of the above technical problems, the present application provides a method, an apparatus, and an electronic device for controlling device motion.
According to a first aspect of the embodiments of the present application, a method for controlling device motion is provided, including:
detecting position information of objects within a target range around a target device relative to the target device;
acquiring image information collected by the target device;
determining initial data based on the position information, the initial data including multiple sets of candidate values of motion parameters; and
selecting, from the multiple sets of candidate values based on the image information, a target value of the motion parameters currently used to control motion of the target device.
Optionally, the initial data further includes a weight of each set of candidate values;
selecting, from the multiple sets of candidate values based on the image information, the target value of the motion parameters currently used to control motion of the target device includes:
correcting the weight of each set of candidate values based on the image information; and
selecting the target value from the multiple sets of candidate values according to the corrected weight of each set of candidate values.
Optionally, correcting the weight of each set of candidate values based on the image information includes:
determining, based on the image information, correction data corresponding to each set of candidate values; and
correcting the weight of each set of candidate values according to the correction data.
Optionally, determining, based on the image information, the correction data corresponding to each set of candidate values includes:
inputting the image information into a pre-trained target convolutional neural network;
acquiring a target angle output by the target convolutional neural network; and
determining, based on the target angle, the correction data corresponding to each set of candidate values.
Optionally, determining, based on the target angle, the correction data corresponding to each set of candidate values includes:
generating a normal distribution function with the target angle as its expected value; and
determining, according to the normal distribution function, the correction data corresponding to each set of candidate values.
Optionally, the motion parameters include a linear velocity and an angular velocity;
determining, according to the normal distribution function, the correction data corresponding to each set of candidate values includes:
determining the product of the candidate angular velocity in each set of candidate values and a preset duration, as multiple values of the random variable x of the normal distribution function; and
obtaining the probability density value produced by substituting each of the values into the normal distribution function, as the correction data corresponding to the respective set of candidate values.
According to a second aspect of the embodiments of the present application, an apparatus for controlling device motion is provided, including:
a detecting module configured to detect position information of objects within a target range around a target device relative to the target device;
an acquiring module configured to acquire image information collected by the target device;
a determining module configured to determine initial data based on the position information, the initial data including multiple sets of candidate values of motion parameters; and
a selecting module configured to select, from the multiple sets of candidate values based on the image information, a target value of the motion parameters currently used to control motion of the target device.
Optionally, the initial data further includes a weight of each set of candidate values;
the selecting module includes:
a correction submodule configured to correct the weight of each set of candidate values based on the image information; and
a selecting submodule configured to select the target value from the multiple sets of candidate values according to the corrected weight of each set of candidate values.
According to a third aspect of the embodiments of the present application, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the method for controlling device motion according to any one of the first aspect.
According to a fourth aspect of the embodiments of the present application, an electronic device is provided, including a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor, where the processor, when executing the program, implements the method for controlling device motion according to any one of the first aspect.
The technical solutions provided by the embodiments of the present application may include the following beneficial effects:
According to the method and apparatus for controlling device motion provided by the embodiments of the present application, position information of objects around a target device relative to the target device is detected, image information collected by the target device is acquired, and initial data is determined based on the position information, the initial data including multiple sets of candidate values of motion parameters; a target value of the motion parameters currently used to control motion of the target device is then selected from the multiple sets of candidate values based on the image information. The candidate values obtained from the position information can quite accurately keep the target device away from nearby objects, while the collected image information quite accurately reflects the distribution of distant obstacles around the target device. Selecting the target value from the candidate values in combination with the image information therefore takes full account of obstacles both near and far from the target device, controls the target device to travel along a more optimal path, improves the smoothness of the travel path, and achieves path optimization.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present application.
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本申请的实施例,并与说明书一起用于解释本申请的原理。
FIG. 1 is a flowchart of a method for controlling device motion according to an exemplary embodiment of the present application.
FIG. 2 is a flowchart of another method for controlling device motion according to an exemplary embodiment of the present application.
FIG. 3 is a flowchart of another method for controlling device motion according to an exemplary embodiment of the present application.
FIG. 4 is a block diagram of an apparatus for controlling device motion according to an exemplary embodiment of the present application.
FIG. 5 is a block diagram of another apparatus for controlling device motion according to an exemplary embodiment of the present application.
FIG. 6 is a block diagram of another apparatus for controlling device motion according to an exemplary embodiment of the present application.
FIG. 7 is a block diagram of another apparatus for controlling device motion according to an exemplary embodiment of the present application.
FIG. 8 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
Exemplary embodiments will be described in detail herein, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application as detailed in the appended claims.
The terms used in the present application are for the purpose of describing particular embodiments only and are not intended to limit the present application. The singular forms "a", "said" and "the" used in the present application and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the present application to describe various pieces of information, such information should not be limited by these terms. These terms are only used to distinguish pieces of information of the same type from one another. For example, without departing from the scope of the present application, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon" or "in response to determining".
As shown in FIG. 1, FIG. 1 is a flowchart of a method for controlling device motion according to an exemplary embodiment; the method may be applied to an electronic device. In this embodiment, as will be appreciated by those skilled in the art, the electronic device is an unmanned device, which may include, but is not limited to, a navigation and positioning robot, an unmanned aerial vehicle, an unmanned vehicle, and the like. The method includes the following steps.
In step 101, position information of objects within a target range around a target device relative to the target device is detected.
In this embodiment, the target device is an unmanned device, which may be a navigation and positioning robot, an unmanned aerial vehicle, an unmanned vehicle, or the like. A ranging sensor (e.g., a laser ranging sensor or an infrared ranging sensor) may be provided on the target device and used to detect the position information of objects around the target device relative to the target device. The target range may be the range that the ranging sensor can detect. The position information may include distance information, orientation information and the like of the objects around the target device relative to the target device; the present application is not limited in this respect. It should be noted that objects far from the target device may be difficult to detect; the detected position information therefore better reflects the distribution of objects close to the target device.
In step 102, image information collected by the target device is acquired.
In this embodiment, an image collecting apparatus (e.g., a camera or a video camera) may also be provided on the target device and used to collect image information of the environment around the target device. The image information includes not only images of nearby objects around the target device, but also images of distant objects. Compared with the position information, the image information therefore better reflects the distribution of objects far from the target device.
In step 103, initial data is determined based on the position information, the initial data including multiple groups of candidate values of motion parameters.
In this embodiment, the initial data may be determined based on the position information of the objects around the target device relative to the target device, and may include at least multiple groups of candidate values of motion parameters. For example, if the motion parameters include a linear velocity and an angular velocity, each group of candidate values may include one candidate linear velocity value and one candidate angular velocity value. Each group of candidate values of the motion parameters can be used to control the motion of the target device so that the target device avoids the detectable surrounding objects while moving.
Specifically, the initial data may be determined from the position information using any reasonable path planning algorithm. For example, the multiple groups of candidate values may be determined using the PRM (Probabilistic Roadmap Method) algorithm or the RRT (Rapidly-exploring Random Tree) algorithm. Optionally, the multiple groups of candidate values may also be determined using the DWA (Dynamic Window Approach) algorithm for local obstacle avoidance, which yields more accurate results. Of course, any other path planning algorithm known in the art or developed in the future can be applied to the present application; the present application is not limited in this respect.
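To make the dynamic-window idea concrete, the following minimal Python sketch samples candidate (linear velocity, angular velocity) pairs reachable within one control period. The function name, sampling resolution and kinematic limits are illustrative assumptions rather than values from this application, and a complete DWA planner would additionally discard or penalize candidates whose simulated trajectories collide with the detected objects.

```python
import itertools

def dwa_candidates(v, w, dt=0.1,
                   a_max=0.5, alpha_max=1.0,          # assumed accel limits
                   v_bounds=(0.0, 1.0), w_bounds=(-1.5, 1.5),
                   n=5):
    """Sample (linear, angular) velocity pairs reachable from the current
    state (v, w) within one control period dt -- the 'dynamic window'."""
    v_lo = max(v_bounds[0], v - a_max * dt)
    v_hi = min(v_bounds[1], v + a_max * dt)
    w_lo = max(w_bounds[0], w - alpha_max * dt)
    w_hi = min(w_bounds[1], w + alpha_max * dt)
    vs = [v_lo + i * (v_hi - v_lo) / (n - 1) for i in range(n)]
    ws = [w_lo + i * (w_hi - w_lo) / (n - 1) for i in range(n)]
    # Collision checking against the detected objects is omitted here.
    return list(itertools.product(vs, ws))

candidates = dwa_candidates(v=0.4, w=0.0)
```

With the assumed limits and a current state of v = 0.4 m/s, w = 0 rad/s, the window spans 0.35–0.45 m/s and ±0.1 rad/s, yielding 25 candidate pairs.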
In step 104, target values of the motion parameters currently used for controlling the motion of the target device are selected from the multiple groups of candidate values based on the image information.
In this embodiment, although each group of candidate values of the motion parameters can control the target device to move while avoiding the detectable surrounding objects, different candidate values produce different paths. It is therefore necessary to select, from the multiple groups of candidate values, the group of target values that makes the traveling path of the target device better, as the parameter values currently used for controlling the motion of the target device.
In this embodiment, a group of target values may be selected from the candidate values based on the collected image information. Specifically, if the motion parameters include a linear velocity and an angular velocity, the target values may include a target linear velocity value and a target angular velocity value, which may be the expected values of the current linear velocity and the current angular velocity. Control data for the motion of the target device (e.g., the traction force applied to the target device, or the traction direction) may be determined from the detected current instantaneous linear velocity and instantaneous angular velocity of the target device in combination with the target linear velocity and target angular velocity.
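The application does not prescribe a particular control law, but as a hedged sketch, one simple way to turn the selected target values into control data is to command the accelerations that close the gap between the instantaneous and target velocities over one control period (the function name and period are illustrative assumptions):

```python
def control_command(v_now, w_now, v_target, w_target, dt=0.1):
    """Accelerations that drive the current instantaneous velocities
    toward the selected target values over one control period dt."""
    a_lin = (v_target - v_now) / dt   # linear acceleration command
    a_ang = (w_target - w_now) / dt   # angular acceleration command
    return a_lin, a_ang
```

A real controller would clamp these commands to the device's actuator limits before applying them.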
In one implementation, a weight of each group of candidate values may be determined, the weight of each group may be modified based on the image information, and a group of target values may be selected from the multiple groups of candidate values according to the modified weights.
In another implementation, an obstacle feature vector may be extracted from the candidate values of the motion parameters and the image information, the obstacle feature vector may be input into a pre-trained convolutional neural network, and the result output by the convolutional neural network may be taken as the target values of the motion parameters.
It can be understood that the target values may also be selected from the candidate values in other ways; the present application does not limit the specific way of selecting the target values.
It should be noted that although the operations of the method of the present application are described in a specific order in the embodiment of FIG. 1 above, this does not require or imply that the operations must be performed in that order, or that all of the illustrated operations must be performed to achieve the desired results. On the contrary, the steps depicted in the flowchart may be performed in a different order: for example, step 102 may be performed before step 101, between step 101 and step 103, after step 103, simultaneously with step 101, or simultaneously with step 103. Additionally or alternatively, some steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
In the method for controlling device motion provided by the above embodiment of the present application, position information of objects around a target device relative to the target device is detected, collected image information is acquired, initial data including multiple groups of candidate values is determined based on the position information, and target values currently to be used for controlling the motion of the target device are selected from the candidate values based on the image information. The candidate values obtained from the position information can reliably make the target device avoid nearby objects, while the collected image information more accurately reflects the distribution of obstacles farther from the target device. Selecting the target values from the candidate values in combination with the image information therefore takes both nearby and distant obstacles fully into account, so that the target device travels along a better path, improving the smoothness of the traveling path and achieving path optimization.
As shown in FIG. 2, FIG. 2 is a flowchart of another method for controlling device motion according to an exemplary embodiment. This embodiment describes the process of selecting the target values. The method may be applied to an electronic device and includes the following steps:
In step 201, position information of objects around the target device relative to the target device is detected.
In step 202, image information collected by the target device is acquired.
In step 203, initial data is determined based on the position information, the initial data including multiple groups of candidate values of motion parameters and a weight of each group of candidate values.
In step 204, the weight of each group of candidate values is modified based on the image information.
In this embodiment, in addition to the multiple groups of candidate values, the initial data may also include the weight of each group of candidate values. Optionally, the multiple groups of candidate values and their weights may be obtained using the DWA algorithm, the PRM algorithm, or the like. The weight of a group of candidate values reflects how much that group optimizes the traveling path, but it is obtained by considering only the distribution of detectable nearby objects. Since the image data collected by the target device also reflects the distribution of obstacles farther from the target device, the weight of each group of candidate values may be modified based on the image information, so that the modified weights further take the distribution of distant objects into account.
Specifically, modification data corresponding to each group of candidate values may be determined based on the image information, and the weight of each group may be modified according to its corresponding modification data. It can be understood that the weights may also be modified in other ways; the present application is not limited in this respect.
In step 205, a group of target values is selected from the multiple groups of candidate values according to the modified weight of each group of candidate values.
In this embodiment, the group of candidate values with the largest modified weight may be selected as the target values.
It should be noted that the steps that are the same as those in the embodiment of FIG. 1 are not described again in the embodiment of FIG. 2; for related content, reference may be made to the embodiment of FIG. 1.
In the method for controlling device motion provided by the above embodiment of the present application, position information of objects around a target device relative to the target device is detected, image information collected by the target device is acquired, initial data including multiple groups of candidate values of motion parameters and the weight of each group is determined based on the position information, the weight of each group of candidate values is modified based on the image information, and a group of target values is selected from the multiple groups of candidate values according to the modified weights. Since the weights obtained from the position information consider only the distribution of detectable nearby objects, modifying them based on the image information further takes the distribution of distant objects into account. This helps control the target device to travel along a better path, further improves the smoothness of the traveling path, and helps achieve path optimization.
As shown in FIG. 3, FIG. 3 is a flowchart of another method for controlling device motion according to an exemplary embodiment. This embodiment describes the process of modifying the weight of each group of candidate values. The method may be applied to an electronic device and includes the following steps:
In step 301, position information of objects within a target range around the target device relative to the target device is detected.
In step 302, image information collected by the target device is acquired.
In step 303, initial data is determined based on the position information, the initial data including multiple groups of candidate values of motion parameters and a weight of each group of candidate values.
In step 304, modification data corresponding to each group of candidate values is determined based on the image information.
In this embodiment, the modification data is used to modify the weights of the candidate values. Each group of candidate values corresponds to a group of modification data, and the weight of a group may be modified using its corresponding modification data. For example, the modification data corresponding to each group of candidate values may be determined as follows: the collected image information is input into a pre-trained target convolutional neural network, a target angle output by the target convolutional neural network is acquired, and the modification data corresponding to each group of candidate values is determined based on the target angle.
Here, the target angle is the expected angle of the current motion of the target device; if the target device moves at this expected angle, it can not only avoid obstacles but also travel along a better path. The target convolutional neural network is a pre-trained convolutional neural network: sample data may be collected in advance, and the target convolutional neural network may be obtained by training on the sample data.
Specifically, in the training phase, multiple image collecting devices (e.g., cameras or video cameras) may first be arranged on the target device, such that one image collecting device faces straight ahead of the target device and the others form different angles with the straight-ahead direction (e.g., 10°, 20°, or 30°). The straight-ahead direction of the target device is set as the expected direction, and the angle an image collecting device forms with the expected direction is set as its expected angle; each image collecting device therefore corresponds to a different expected angle.
Next, a human operator controls the target device to move along a relatively smooth path in a designated venue while avoiding the surrounding obstacles. Meanwhile, image information is collected in real time by the multiple image collecting devices. The collected image information is stored in association with the expected angle corresponding to the device that collected it, to obtain the sample data.
Finally, the collected image information is input into the convolutional neural network to be trained, the angle output by the network is compared with the expected angle corresponding to the image information, and the parameters of the network are adjusted continually based on the comparison results until the degree of approximation between the output angle and the expected angle satisfies a preset condition; the network with the adjusted parameters is taken as the target convolutional neural network.
In this embodiment, the modification data corresponding to each group of candidate values may be determined based on the target angle. For example, a normal distribution function with the target angle as its expected value may be generated, and the modification data corresponding to each group of candidate values may be determined according to the normal distribution function.
Specifically, a normal distribution function may first be generated:
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}}
where the mathematical expectation μ may be the target angle and the standard deviation σ may be an empirical value; any reasonable value may be taken as the standard deviation. Once the value of the random variable x is determined, the corresponding probability density value may be determined from the above normal distribution function.
Next, the candidate angular velocity included in each group of candidate values may be taken out and multiplied by a preset duration to obtain multiple values of the random variable x of the normal distribution function; each value is substituted into the normal distribution function, and the resulting probability density value is taken as the modification data corresponding to that group of candidate values. The preset duration may be an empirical value and may take any reasonable value, for example 1 second or 2 seconds; the present application is not limited to a specific value of the preset duration.
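The correction step described above can be sketched in a few lines of Python. The function name is illustrative, and `sigma` and `dt` stand in for the empirical standard deviation and preset duration, neither of which the application fixes:

```python
import math

def modification_data(candidates, target_angle, sigma=0.5, dt=1.0):
    """For each (v, w) candidate pair, evaluate the normal pdf
    (mean = the CNN's target angle, std dev = an assumed sigma) at
    x = w * dt, the heading change that the candidate angular
    velocity would produce over the preset duration dt."""
    def pdf(x):
        return (math.exp(-((x - target_angle) ** 2) / (2 * sigma ** 2))
                / (sigma * math.sqrt(2 * math.pi)))
    return [pdf(w * dt) for _, w in candidates]
```

Candidates whose heading change lies close to the target angle receive the largest probability density, i.e. the strongest boost to their weight.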
In step 305, the weight of each group of candidate values is modified according to the modification data corresponding to that group.
In this embodiment, the weight of each group of candidate values may be modified according to its corresponding modification data. Specifically, the modification data corresponding to each group of candidate values may be multiplied by the weight of that group to obtain the modified weight of that group.
In step 306, a group of target values is selected from the multiple groups of candidate values according to the modified weight of each group of candidate values.
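Steps 305 and 306 together amount to scaling each planner weight by its correction factor and taking the argmax; a minimal sketch (function name illustrative):

```python
def select_target(candidates, weights, corrections):
    """Multiply each candidate's planner weight by its image-based
    correction factor and return the (v, w) pair whose modified
    weight is largest."""
    modified = [w * c for w, c in zip(weights, corrections)]
    best = max(range(len(candidates)), key=modified.__getitem__)
    return candidates[best]
```

In the example below, the second candidate wins even though its original planner weight is the smallest, because its heading best matches the CNN's target angle.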
It should be noted that the steps that are the same as those in the embodiments of FIG. 1 and FIG. 2 are not described again in the embodiment of FIG. 3; for related content, reference may be made to the embodiments of FIG. 1 and FIG. 2.
In the method for controlling device motion provided by the above embodiment of the present application, position information of objects around a target device relative to the target device is detected, collected image information is acquired, initial data including multiple groups of candidate values and the weight of each group is determined based on the position information, modification data corresponding to each group of candidate values is determined based on the image information, the weight of each group is modified according to its corresponding modification data, and a group of target values is selected from the multiple groups of candidate values according to the modified weights. Because the modification data is determined from the image information and the weights are modified accordingly, this embodiment further controls the target device to select an optimized traveling path more accurately.
Corresponding to the foregoing embodiments of the method for controlling device motion, the present application also provides embodiments of an apparatus for controlling device motion.
As shown in FIG. 4, FIG. 4 is a block diagram of an apparatus for controlling device motion according to an exemplary embodiment of the present application. The apparatus may include: a detecting module 401, an acquiring module 402, a determining module 403 and a selecting module 404.
The detecting module 401 is configured to detect position information of objects within a target range around a target device relative to the target device.
The acquiring module 402 is configured to acquire image information collected by the target device.
The determining module 403 is configured to determine initial data based on the position information, the initial data including multiple groups of candidate values of motion parameters.
The selecting module 404 is configured to select, based on the image information, target values of the motion parameters currently used for controlling motion of the target device from the multiple groups of candidate values.
As shown in FIG. 5, FIG. 5 is a block diagram of another apparatus for controlling device motion according to an exemplary embodiment of the present application. On the basis of the embodiment shown in FIG. 4, the selecting module 404 may include: a modifying submodule 501 and a selecting submodule 502.
The modifying submodule 501 is configured to modify the weight of each group of candidate values based on the image information.
The selecting submodule 502 is configured to select a group of target values from the multiple groups of candidate values according to the modified weight of each group of candidate values.
Here, the initial data further includes the weight of each group of candidate values.
As shown in FIG. 6, FIG. 6 is a block diagram of another apparatus for controlling device motion according to an exemplary embodiment of the present application. On the basis of the embodiment shown in FIG. 5, the modifying submodule 501 may include: a determining submodule 601 and a weight modifying submodule 602.
The determining submodule 601 is configured to determine, based on the image information, modification data corresponding to each group of candidate values.
The weight modifying submodule 602 is configured to modify the weight of each group of candidate values according to the modification data.
As shown in FIG. 7, FIG. 7 is a block diagram of another apparatus for controlling device motion according to an exemplary embodiment of the present application. On the basis of the embodiment shown in FIG. 6, the determining submodule 601 may include: an information inputting submodule 701, an information outputting submodule 702 and a data determining submodule 703.
The information inputting submodule 701 is configured to input the image information into a pre-trained target convolutional neural network.
The information outputting submodule 702 is configured to acquire a target angle output by the target convolutional neural network.
The data determining submodule 703 is configured to determine, based on the target angle, the modification data corresponding to each group of candidate values.
In some optional implementations, the data determining submodule 703 is configured to generate a normal distribution function with the target angle as its expected value, and determine the modification data corresponding to each group of candidate values according to the normal distribution function.
In other optional implementations, the data determining submodule 703 determines the modification data corresponding to each group of candidate values according to the normal distribution function as follows: the product of the candidate angular velocity in each group of candidate values and a preset duration is determined as multiple values of the random variable of the normal distribution function, and the probability density value obtained by substituting each value into the normal distribution function is taken as the modification data corresponding to that group of candidate values. Here, the motion parameters include a linear velocity and an angular velocity, and the candidate values include a candidate linear velocity and a candidate angular velocity.
In other optional implementations, the determining module 403 is configured to determine the initial data from the position information using the DWA (Dynamic Window Approach) algorithm for local obstacle avoidance.
It should be understood that the above apparatus may be preset in the electronic device, or may be loaded into the electronic device by downloading or the like. The corresponding modules in the apparatus may cooperate with modules in the electronic device to implement the solution for controlling device motion.
As for the apparatus embodiments, since they basically correspond to the method embodiments, reference may be made to the relevant descriptions of the method embodiments. The apparatus embodiments described above are merely illustrative: the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present application, which can be understood and implemented by those of ordinary skill in the art without creative effort.
An embodiment of the present application also provides a computer-readable storage medium storing a computer program, where the computer program may be used to perform the method for controlling device motion provided by any one of the embodiments of FIG. 1 to FIG. 3.
Corresponding to the above method for controlling device motion, an embodiment of the present application also proposes the schematic structural diagram of an electronic device according to an exemplary embodiment of the present application shown in FIG. 8. Referring to FIG. 8, at the hardware level, the electronic device includes a processor 801, an internal bus 802, a network interface 803, a memory 804 and a non-volatile storage medium 805, and may of course also include hardware required by other services. The processor 801 reads the corresponding computer program from the non-volatile storage medium 805 into the memory 804 and runs it, forming the apparatus for controlling device motion at the logical level. Of course, in addition to software implementations, the present application does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the processing flow below is not limited to logical units and may also be hardware or logic devices.
Other embodiments of the present application will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. The present application is intended to cover any variations, uses or adaptations of the present application that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present application being indicated by the following claims.
It should be understood that the present application is not limited to the precise structures that have been described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present application is limited only by the appended claims.

Claims (10)

  1. A method for controlling device motion, comprising:
    detecting position information of objects within a target range around a target device relative to the target device;
    acquiring image information collected by the target device;
    determining initial data based on the position information, the initial data comprising multiple groups of candidate values of motion parameters; and
    selecting, based on the image information, target values of the motion parameters currently used for controlling motion of the target device from the multiple groups of candidate values.
  2. The method according to claim 1, wherein the initial data further comprises a weight of each group of the candidate values;
    the selecting, based on the image information, the target values of the motion parameters currently used for controlling motion of the target device from the multiple groups of candidate values comprises:
    modifying the weight of each group of the candidate values based on the image information; and
    selecting the target values from the multiple groups of candidate values according to the modified weight of each group of the candidate values.
  3. The method according to claim 2, wherein the modifying the weight of each group of the candidate values based on the image information comprises:
    determining, based on the image information, modification data corresponding to each group of the candidate values; and
    modifying the weight of each group of the candidate values according to the modification data.
  4. The method according to claim 3, wherein the determining, based on the image information, the modification data corresponding to each group of the candidate values comprises:
    inputting the image information into a pre-trained target convolutional neural network;
    acquiring a target angle output by the target convolutional neural network; and
    determining, based on the target angle, the modification data corresponding to each group of the candidate values.
  5. The method according to claim 4, wherein the determining, based on the target angle, the modification data corresponding to each group of the candidate values comprises:
    generating a normal distribution function with the target angle as an expected value; and
    determining the modification data corresponding to each group of the candidate values according to the normal distribution function.
  6. The method according to claim 5, wherein the motion parameters comprise a linear velocity and an angular velocity;
    the determining the modification data corresponding to each group of the candidate values according to the normal distribution function comprises:
    determining the product of the candidate angular velocity in each group of the candidate values and a preset duration as multiple values of a random variable x of the normal distribution function; and
    acquiring the probability density value obtained by substituting each of the values into the normal distribution function as the modification data corresponding to each group of the candidate values.
  7. An apparatus for controlling device motion, wherein the apparatus comprises:
    a detecting module configured to detect position information of objects within a target range around a target device relative to the target device;
    an acquiring module configured to acquire image information collected by the target device;
    a determining module configured to determine initial data based on the position information, the initial data comprising multiple groups of candidate values of motion parameters; and
    a selecting module configured to select, based on the image information, target values of the motion parameters currently used for controlling motion of the target device from the multiple groups of candidate values.
  8. The apparatus according to claim 7, wherein the initial data further comprises a weight of each group of the candidate values;
    the selecting module comprises:
    a modifying submodule configured to modify the weight of each group of the candidate values based on the image information; and
    a selecting submodule configured to select the target values from the multiple groups of candidate values according to the modified weight of each group of the candidate values.
  9. A computer-readable storage medium, wherein the storage medium stores a computer program, and the computer program, when executed by a processor, implements the method for controlling device motion according to any one of claims 1-6.
  10. An electronic device, comprising a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor, wherein the processor, when executing the program, implements the method according to any one of claims 1-6.
PCT/CN2018/114999 2017-12-29 2018-11-12 Device motion control WO2019128496A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/959,126 US20200401151A1 (en) 2017-12-29 2018-11-12 Device motion control
EP18897378.8A EP3722906A4 (en) 2017-12-29 2018-11-12 DEVICE MOTION CONTROL

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711471549.9A CN108121347B (zh) 2017-12-29 2017-12-29 Method, apparatus and electronic device for controlling device motion
CN201711471549.9 2017-12-29

Publications (1)

Publication Number Publication Date
WO2019128496A1 true WO2019128496A1 (zh) 2019-07-04

Family

ID=62232519

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/114999 WO2019128496A1 (zh) 2017-12-29 2018-11-12 控制设备运动

Country Status (4)

Country Link
US (1) US20200401151A1 (zh)
EP (1) EP3722906A4 (zh)
CN (1) CN108121347B (zh)
WO (1) WO2019128496A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108121347B (zh) * 2017-12-29 2020-04-07 北京三快在线科技有限公司 Method, apparatus and electronic device for controlling device motion
JP2021143830A (ja) * 2018-06-15 2021-09-24 ソニーグループ株式会社 Information processing apparatus and information processing method
CN111309035B (zh) * 2020-05-14 2022-03-04 浙江远传信息技术股份有限公司 Method, apparatus, device and medium for multi-robot cooperative movement and dynamic obstacle avoidance
CN114355917B (zh) * 2021-12-27 2023-11-21 广州极飞科技股份有限公司 Hyperparameter determination method, path planning method, apparatus, electronic device and readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103576680A (zh) * 2012-07-25 2014-02-12 中国原子能科学研究院 Robot path planning method and apparatus
US20150292891A1 (en) * 2014-04-11 2015-10-15 Nissan North America, Inc Vehicle position estimation system
CN105739495A (zh) * 2016-01-29 2016-07-06 大连楼兰科技股份有限公司 Driving path planning method and apparatus, and automatic steering system
US20160209846A1 (en) * 2015-01-19 2016-07-21 The Regents Of The University Of Michigan Visual Localization Within LIDAR Maps
CN106020201A (zh) * 2016-07-13 2016-10-12 广东奥讯智能设备技术有限公司 Mobile robot 3D navigation and positioning system and method
CN106598054A (zh) * 2017-01-16 2017-04-26 深圳优地科技有限公司 Robot path adjustment method and apparatus
CN106767823A (zh) * 2016-12-14 2017-05-31 智易行科技(武汉)有限公司 Intelligent mobile path planning method under incomplete information
CN108121347A (zh) * 2017-12-29 2018-06-05 北京三快在线科技有限公司 Method, apparatus and electronic device for controlling device motion

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101807484B1 (ko) * 2012-10-29 2017-12-11 한국전자통신연구원 Apparatus and method for creating a probability distribution map based on object and system characteristics
US9463571B2 (en) * 2013-11-01 2016-10-11 Brain Corporation Apparatus and methods for online training of robots
KR101610502B1 (ko) * 2014-09-02 2016-04-07 현대자동차주식회사 Apparatus and method for recognizing the driving environment of an autonomous vehicle
EP3256815A1 (en) * 2014-12-05 2017-12-20 Apple Inc. Autonomous navigation system
US20170160751A1 (en) * 2015-12-04 2017-06-08 Pilot Ai Labs, Inc. System and method for controlling drone movement for object tracking using estimated relative distances and drone sensor inputs
DE102015225242A1 (de) * 2015-12-15 2017-06-22 Volkswagen Aktiengesellschaft Method and system for automatically controlling a following vehicle with a scout vehicle
JP6558239B2 (ja) * 2015-12-22 2019-08-14 アイシン・エィ・ダブリュ株式会社 Automated driving assistance system, automated driving assistance method and computer program
JP6606442B2 (ja) * 2016-02-24 2019-11-13 本田技研工業株式会社 Path plan generation apparatus for a moving body
WO2017147747A1 (en) * 2016-02-29 2017-09-08 SZ DJI Technology Co., Ltd. Obstacle avoidance during target tracking
CN105955273A (zh) * 2016-05-25 2016-09-21 速感科技(北京)有限公司 Indoor robot navigation system and method
CN106558058B (zh) * 2016-11-29 2020-10-09 北京图森未来科技有限公司 Segmentation model training method, road segmentation method, vehicle control method and apparatus
CN106598055B (zh) * 2017-01-19 2019-05-10 北京智行者科技有限公司 Local path planning method and apparatus for an intelligent vehicle, and vehicle
CN106970615B (zh) * 2017-03-21 2019-10-22 西北工业大学 Real-time online path planning method based on deep reinforcement learning
CN107168305B (zh) * 2017-04-01 2020-03-17 西安交通大学 Trajectory planning method for unmanned vehicles at intersections based on Bezier curves and VFH
CN107065883A (zh) * 2017-05-18 2017-08-18 广州视源电子科技股份有限公司 Movement control method, apparatus, robot and storage medium
JP6285597B1 (ja) * 2017-06-05 2018-02-28 大塚電子株式会社 Optical measurement apparatus and optical measurement method
CN107515606A (zh) * 2017-07-20 2017-12-26 北京格灵深瞳信息技术有限公司 Robot implementation method, control method, robot and electronic device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3722906A4 *

Also Published As

Publication number Publication date
US20200401151A1 (en) 2020-12-24
EP3722906A4 (en) 2020-12-30
EP3722906A1 (en) 2020-10-14
CN108121347B (zh) 2020-04-07
CN108121347A (zh) 2018-06-05


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18897378; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2018897378; Country of ref document: EP; Effective date: 20200706)