CN115035396A - Robot sight distance path determining method and device - Google Patents


Info

Publication number
CN115035396A
Authority
CN
China
Prior art keywords
robot
information
traveling
attribute
condition
Prior art date
Legal status
Pending
Application number
CN202210950009.3A
Other languages
Chinese (zh)
Inventor
陈企华
黄永军
张春林
Current Assignee
Beijing Dongfang Tongwangxin Technology Co ltd
Original Assignee
Beijing Dongfang Tongwangxin Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dongfang Tongwangxin Technology Co ltd
Priority to CN202210950009.3A
Publication of CN115035396A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06V20/00 Scenes; Scene-specific elements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a method and a device for determining a robot sight distance path. The method acquires the traveling environment information of a robot; acquires the robot's traveling sight distance information based on that environment information; acquires the scene state captured by the robot's camera when the traveling sight distance information satisfies a first condition; acquires the robot's own attribute state information when the traveling sight distance information satisfies a second condition; determines travel attribute information based on the captured scene state or the robot's own attribute state information; inputs the travel attribute information into a trained neural network to calculate an expected travel angle; and uses the delay time to calculate the travel path change within that time and correct the expected travel angle. On the basis of the robot's overall travel path, the invention determines the travel path and travel direction by intelligent analysis within different sight distance ranges, according to the scene state captured by the camera and the traveling device's own attribute characteristics.

Description

Method and device for determining sight distance path of robot
Technical Field
Embodiments of the invention relate generally to the field of network information technology, and more particularly to a robot sight distance path determining method and device.
Background
With the development of science and technology, unmanned aerial vehicles and robots are applied ever more widely; their use in different industries brings innovation to industrial applications and compounding leaps across industries, and the ambitious goal of "a robot in every home" is gradually being realized. On the road to popularizing unmanned aerial vehicles and robots, the most important element is the automatic obstacle avoidance and navigation technology that is closely tied to safety: if the safety of the vehicles, the robots, and personnel cannot be guaranteed, nothing else matters.
A navigation system in the prior art generally includes a server, a positioning sensor base station, and a navigation receiving terminal, and provides a navigation route by setting a start point and an end point and applying a routing algorithm based mainly on static map information. A more advanced indoor navigation system also exists that takes the dynamic position and density of personnel into account when calculating the navigation route, making the route more accurate.
However, the actual environment presents many influencing factors that navigation systems in the prior art ignore: different scene types, such as a road idle state scene, a road busy state scene, a field scene, or a busy market scene; and different traveling device characteristics, such as the travel mileage, the average traveling speed, and the tortuosity of the travel route. These factors play a key role in efficient real-time navigation and decision-assistance services, yet no current indoor navigation system takes them as input data. Existing navigation systems usually offer only two-dimensional or three-dimensional maps or a voice navigation mode; they can provide neither navigation paths and guidance information suited to different user characteristics nor the personalized navigation services users require.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides a method and a device for determining a robot sight distance path.
The invention relates to a robot sight distance path determining method, characterized by comprising the following steps:
acquiring the traveling environment information of the robot;
acquiring the traveling sight distance information of the robot based on the traveling environment information;
when the traveling sight distance information of the robot satisfies a first condition, acquiring the scene state captured by the robot's camera;
when the traveling sight distance information of the robot satisfies a second condition, acquiring the robot's own attribute state information;
and determining travel attribute information based on the captured scene state or the robot's own attribute state information, inputting the travel attribute information into a trained neural network, calculating an expected travel angle, using the delay time to calculate the travel path change within that time, and correcting the expected travel angle.
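Read together, the five steps above form a small decision pipeline: the weather-derived sight distance selects the data source, and a trained network turns the selected travel attributes into a delay time and a forward-looking distance. The following Python sketch illustrates that flow; every name, type, and numeric value in it is an illustrative assumption, not an identifier from the patent.

    from dataclasses import dataclass
    from typing import Callable, Tuple

    @dataclass
    class TravelAttributes:
        # The three inputs the description later feeds to the network.
        speed: float          # x1: robot speed
        lateral_error: float  # x2: lateral error
        path_error: float     # x3: travel path error

    def sight_distance_pipeline(
        low_visibility: bool,
        scene_from_camera: Callable[[], TravelAttributes],
        self_state_from_sensors: Callable[[], TravelAttributes],
        network: Callable[[TravelAttributes], Tuple[float, float]],
    ) -> Tuple[float, float, TravelAttributes]:
        # Steps 1-2: traveling environment info -> traveling sight distance info.
        sight = "short" if low_visibility else "long"
        # Step 3 (first condition, long sight distance): camera scene state.
        # Step 4 (second condition, short sight distance): own attribute state.
        attrs = scene_from_camera() if sight == "long" else self_state_from_sensors()
        # Step 5: the trained network yields the delay time t_d and the
        # forward-looking distance L_sd used later for the angle correction.
        t_d, l_sd = network(attrs)
        return t_d, l_sd, attrs

    # Trivial stand-ins to show the wiring:
    stub_attrs = lambda: TravelAttributes(speed=0.8, lateral_error=0.1, path_error=0.05)
    stub_net = lambda a: (0.12, 1.5)  # pretend (t_d, L_sd)
    print(sight_distance_pipeline(False, stub_attrs, stub_attrs, stub_net))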
Further, acquiring the traveling sight distance information of the robot based on the traveling environment information specifically includes:
the traveling environment information of the robot includes the rainfall, the snowfall, and the visibility of the robot's traveling environment;
when at least one of the rainfall, the snowfall, or the visibility of the traveling environment reaches the low-sight-distance standard, setting the sight distance of the traveling robot to a short distance;
and when none of the rainfall, the snowfall, or the visibility of the traveling environment reaches the low-sight-distance standard, setting the sight distance of the traveling robot to a long distance.
Further, acquiring the scene state captured by the robot's camera when the traveling sight distance information of the robot satisfies the first condition includes:
when the sight distance of the traveling robot is a long distance, the traveling sight distance information of the robot satisfies the first condition;
the scene state is obtained by photographing the surrounding environment with a camera arranged on the robot;
and the scene state captured by the robot's camera includes: a road idle state scene, a road busy state scene, a field scene, and a busy market scene.
Further, acquiring the robot's own attribute state information when the traveling sight distance information of the robot satisfies the second condition includes:
when the sight distance of the traveling robot is a short distance, the traveling sight distance information of the robot satisfies the second condition;
the robot's own attribute state information is acquired by multiple types of sensors arranged on the robot;
and the robot's own attribute state information includes: the travel mileage, the average traveling speed, and the tortuosity of the travel route.
Further, determining the travel attribute information based on the captured scene state or the robot's own attribute state information, inputting it into the trained neural network, calculating the expected travel angle, using the delay time to calculate the travel path change within that time, and correcting the expected travel angle includes:
when the scene state captured by the robot's camera satisfies a third condition or the robot's own attribute state information satisfies a fourth condition, determining the traveling speed attribute information of the robot;
when the captured scene state satisfies a fifth condition or the robot's own attribute state information satisfies a sixth condition, determining the alarm attribute information of the robot's travel route;
when the captured scene state satisfies a seventh condition or the robot's own attribute state information satisfies an eighth condition, determining the traveling direction attribute information of the robot;
obtaining the delay time and the forward-looking distance from the captured scene state or the robot's own attribute state information;
and calculating the current turning radius and the expected travel path without delay time, then acquiring the angle change of the robot's travel path within the delay time to obtain the expected travel path with delay time.
The invention also claims a robot sight distance path determining device, characterized by comprising:
an acquisition device for acquiring the traveling environment information of the robot;
a sight distance capture device for acquiring the traveling sight distance information of the robot based on the traveling environment information obtained by the acquisition device;
a judging device for acquiring the scene state captured by the robot's camera when the traveling sight distance information of the robot satisfies a first condition, and acquiring the robot's own attribute state information when the traveling sight distance information satisfies a second condition;
and a travel attribute determining device for determining travel attribute information based on the captured scene state or the robot's own attribute state information, inputting it into the trained neural network, calculating an expected travel angle, using the delay time to calculate the travel path change within that time, and correcting the expected travel angle.
Further, the sight distance capture device, in acquiring the traveling sight distance information of the robot based on the traveling environment information obtained by the acquisition device, further provides that:
the traveling environment information of the robot includes the rainfall, the snowfall, and the visibility of the robot's traveling environment;
when at least one of the rainfall, the snowfall, or the visibility of the traveling environment reaches the low-sight-distance standard, the sight distance of the traveling robot is set to a short distance;
and when none of them reaches the low-sight-distance standard, the sight distance of the traveling robot is set to a long distance.
Further, the judging device, in acquiring the scene state captured by the robot's camera when the traveling sight distance information of the robot satisfies the first condition, further provides that:
when the sight distance of the traveling robot is a long distance, the traveling sight distance information of the robot satisfies the first condition;
the scene state is obtained by photographing the surrounding environment with a camera arranged on the robot;
and the scene state captured by the robot's camera includes: a road idle state scene, a road busy state scene, a field scene, and a busy market scene.
Further, in acquiring the robot's own attribute state information when the traveling sight distance information of the robot satisfies the second condition:
when the sight distance of the traveling robot is a short distance, the traveling sight distance information of the robot satisfies the second condition;
the robot's own attribute state information is acquired by multiple types of sensors arranged on the robot;
and the robot's own attribute state information includes: the travel mileage, the average traveling speed, and the tortuosity of the travel route.
Further, the travel attribute determining device, in determining the travel attribute information based on the captured scene state or the robot's own attribute state information, inputting it into the trained neural network, calculating the expected travel angle, using the delay time to calculate the travel path change within that time, and correcting the expected travel angle, provides that:
when the scene state captured by the robot's camera satisfies a third condition or the robot's own attribute state information satisfies a fourth condition, the traveling speed attribute information of the robot is determined;
when the captured scene state satisfies a fifth condition or the robot's own attribute state information satisfies a sixth condition, the alarm attribute information of the robot's travel route is determined;
when the captured scene state satisfies a seventh condition or the robot's own attribute state information satisfies an eighth condition, the traveling direction attribute information of the robot is determined;
the delay time and the forward-looking distance are obtained from the captured scene state or the robot's own attribute state information;
and the current turning radius and the expected travel path without delay time are calculated, after which the angle change of the robot's travel path within the delay time is acquired to obtain the expected travel path with delay time.
The invention discloses a method and a device for determining a robot sight distance path: acquiring the traveling environment information of a robot; acquiring the robot's traveling sight distance information based on that environment information; acquiring the scene state captured by the robot's camera when the traveling sight distance information satisfies a first condition; acquiring the robot's own attribute state information when the traveling sight distance information satisfies a second condition; and determining the travel attribute information based on the captured scene state or the robot's own attribute state information. On the basis of the robot's overall travel path, the invention determines the travel path and direction by intelligent analysis within different sight distance ranges, according to the scene state captured by the camera and the traveling device's own attribute characteristics.
Drawings
To illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings used in describing the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating a method for determining a line-of-sight path of a robot according to the present invention;
fig. 2 is a flowchart illustrating a first embodiment of a method for determining a line-of-sight path of a robot according to the present invention;
fig. 3 is a block diagram illustrating a structure of a robot sight distance path determining apparatus according to the present invention.
Detailed Description
Illustrative embodiments of the present application include, but are not limited to, a robot sight distance path determining method and device.
It is understood that, as used herein, the terms "module" and "unit" may refer to or comprise an application-specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable hardware components that provide the described functionality, or may be part of such hardware components.
It is to be appreciated that in various embodiments of the present application, the processor may be a microprocessor, a digital signal processor, a microcontroller, or the like, and/or any combination thereof. According to another aspect, the processor may be a single-core processor, a multi-core processor, the like, and/or any combination thereof.
It is to be appreciated that a robot line-of-sight path determination method provided herein may be implemented on a variety of electronic devices including, but not limited to, a server, a distributed server cluster of multiple servers, a cell phone, a tablet, a laptop, a desktop computer, a wearable device, a head-mounted display, a mobile email device, a portable game console, a portable music player, a reader device, a personal digital assistant, a virtual reality or augmented reality device, a television or other electronic device having one or more processors embedded or coupled therein, and the like.
Referring to fig. 1, the invention claims a robot sight distance path determining method, characterized by comprising:
acquiring the traveling environment information of the robot;
acquiring the traveling sight distance information of the robot based on the traveling environment information;
when the traveling sight distance information of the robot satisfies a first condition, acquiring the scene state captured by the robot's camera;
when the traveling sight distance information of the robot satisfies a second condition, acquiring the robot's own attribute state information;
and determining travel attribute information based on the captured scene state or the robot's own attribute state information, inputting the travel attribute information into a trained neural network, calculating an expected travel angle, using the delay time to calculate the travel path change within that time, and correcting the expected travel angle.
Further, acquiring the traveling sight distance information of the robot based on the traveling environment information specifically includes:
the traveling environment information of the robot includes the rainfall, the snowfall, and the visibility of the robot's traveling environment;
when at least one of the rainfall, the snowfall, or the visibility of the traveling environment reaches the low-sight-distance standard, setting the sight distance of the traveling robot to a short distance;
and when none of the rainfall, the snowfall, or the visibility of the traveling environment reaches the low-sight-distance standard, setting the sight distance of the traveling robot to a long distance.
The sight distance during navigation is a very important factor for a traveling device: the specific sight distance determines many travel attribute values, and its measurement must be based on multiple conditions.
In the embodiment of the invention, the current weather conditions are obtained, including the rainfall, the snowfall, and the presence of haze;
when the rainfall reaches the heavy-rain level or above, or the snowfall reaches the medium-snow level or above, or haze is present, it is determined that at least one of the rainfall, the snowfall, or the visibility of the traveling environment reaches the low-sight-distance standard;
in this case, the sight distance of the traveling robot is set to a short distance, and the subsequent travel attribute values are set according to the short-distance sight distance.
When the rainfall does not reach the heavy-rain level, the snowfall does not reach the medium-snow level, and no haze is present, it is determined that neither the rainfall nor the snowfall nor the visibility of the traveling environment reaches the low-sight-distance standard;
in this case, the sight distance of the traveling robot is set to a long distance, and the subsequent travel attribute values are set according to the long-distance sight distance.
Further, acquiring the scene state captured by the robot's camera when the traveling sight distance information of the robot satisfies the first condition includes:
when the sight distance of the traveling robot is a long distance, the traveling sight distance information of the robot satisfies the first condition;
the scene state is obtained by photographing the surrounding environment with a camera arranged on the robot;
and the scene state captured by the robot's camera includes: a road idle state scene, a road busy state scene, a field scene, and a busy market scene.
Since a traveling device with a long sight distance must travel mainly with reference to the surrounding environment, the invention acquires the scene state captured by the robot's camera when the sight distance of the traveling robot is long.
Further, acquiring the robot's own attribute state information when the traveling sight distance information of the robot satisfies the second condition includes:
when the sight distance of the traveling robot is a short distance, the traveling sight distance information of the robot satisfies the second condition;
the robot's own attribute state information is acquired by multiple types of sensors arranged on the robot;
and the robot's own attribute state information includes: the travel mileage, the average traveling speed, and the tortuosity of the travel route.
Since a traveling device with a short sight distance must travel mainly with reference to its own condition, the invention acquires the robot's own attribute state information when the sight distance of the traveling robot is short.
Further, referring to fig. 2, determining the travel attribute information based on the captured scene state or the robot's own attribute state information, inputting it into the trained neural network, calculating the expected travel angle, using the delay time to calculate the travel path change within that time, and correcting the expected travel angle includes:
when the scene state captured by the robot's camera satisfies a third condition or the robot's own attribute state information satisfies a fourth condition, determining the traveling speed attribute information of the robot;
when the captured scene state satisfies a fifth condition or the robot's own attribute state information satisfies a sixth condition, determining the alarm attribute information of the robot's travel route;
when the captured scene state satisfies a seventh condition or the robot's own attribute state information satisfies an eighth condition, determining the traveling direction attribute information of the robot;
obtaining the delay time and the forward-looking distance from the captured scene state or the robot's own attribute state information;
and calculating the current turning radius and the expected travel path without delay time, then acquiring the angle change of the robot's travel path within the delay time to obtain the expected travel path with delay time.
Specifically, when the scene state captured by the robot's camera is a road idle state scene, the third condition is satisfied; when the average traveling speed in the robot's own attribute state information is greater than a first threshold, the fourth condition is satisfied.
Since the robot is more likely to adopt a high-speed travel mode when an idle road allows its traveling speed to reach a certain value, or when its historical average traveling speed has reached a certain value, the traveling speed attribute information of the robot is primarily determined under these conditions.
When the scene state captured by the robot's camera is a field scene, the fifth condition is satisfied; when the travel mileage in the robot's own attribute state information is greater than a second threshold, the sixth condition is satisfied.
Since the robot's safety is more exposed in a field scene, and a historical travel mileage that has reached a certain value also implies a higher safety risk, the alarm attribute information of the robot's travel route is primarily determined under these conditions.
When the scene state captured by the robot's camera is a road busy state scene or a busy market scene, the seventh condition is satisfied; when the tortuosity of the travel route in the robot's own attribute state information is greater than a third threshold, the eighth condition is satisfied.
Since the robot's traveling direction is disturbed in a busy road or busy market scene, and a historical route tortuosity that has reached a certain value means the traveling direction changes more frequently, the traveling direction attribute information of the robot is primarily determined under these conditions.
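The three condition pairs above define a simple dispatch from the observed scene (long sight distance) or the robot's own state (short sight distance) to the travel attribute that is determined. A Python sketch of this mapping follows, with threshold values assumed for illustration.

    from typing import Optional

    def select_travel_attribute(scene: Optional[str],
                                avg_speed: float,
                                mileage: float,
                                tortuosity: float,
                                first_threshold: float = 1.0,      # assumed m/s
                                second_threshold: float = 1000.0,  # assumed km
                                third_threshold: float = 0.5) -> str:
        """scene is the camera classification (long sight distance) or None
        (short sight distance, fall back to the robot's own attributes)."""
        # Third/fourth conditions -> traveling speed attribute.
        if scene == "road idle" or avg_speed > first_threshold:
            return "speed"
        # Fifth/sixth conditions -> travel-route alarm attribute.
        if scene == "field" or mileage > second_threshold:
            return "alarm"
        # Seventh/eighth conditions -> traveling direction attribute.
        if scene in ("road busy", "busy market") or tortuosity > third_threshold:
            return "direction"
        return "none"

    print(select_travel_attribute("field", 0.5, 10.0, 0.1))  # -> alarm
    print(select_travel_attribute(None, 0.5, 10.0, 0.9))     # -> direction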
First, a neural network is established and trained, comprising the following steps:
determining the BP neural network model: the input layer has three nodes, there is one hidden layer, and the output layer has two nodes; the number of hidden-layer nodes is determined by the following formula:
$h = \sqrt{i + o} + a$
h is the number of nodes of the hidden layer, o is the number of nodes of the output layer, i is the number of nodes of the input layer, and a is a constant between 0 and 10;
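For example, with $i = 3$ input nodes and $o = 2$ output nodes, $h = \sqrt{5} + a \approx 2.24 + a$; taking $a = 3$ gives $h \approx 5$, matching the five hidden nodes ($j = 1, 2, \dots, 5$) used below. (The form $h = \sqrt{i + o} + a$ is the customary empirical rule these symbol definitions describe; the original renders the formula only as an image.)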
the output of the hidden layer is calculated by:
$H_j = f\left(\sum_{i=1}^{3} \omega_{ij} x_i - \alpha_j\right), \quad j = 1, 2, \dots, 5$
where $\omega_{ij}$ is the hidden-layer weight ($i = 1, 2, 3$; $j = 1, 2, \dots, 5$) and $x_i$ is the input parameter: $x_1$ is the robot speed, $x_2$ is the lateral error, and $x_3$ is the travel path error; $\alpha_j$ is the hidden-layer threshold, and $f(x)$ is the excitation function:
$f(x) = \dfrac{1}{1 + e^{-x}}$
the output of the output layer is calculated by:
$O_k = \sum_{j=1}^{5} H_j \delta_{jk} - \beta_k, \quad k = 1, 2$
where $\delta_{jk}$ is the output-layer weight ($j = 1, 2, \dots, 5$; $k = 1, 2$), $\beta_k$ is the output-layer threshold, and $O_k$ is the output parameter: $O_1$ is the delay time $t_d$ and $O_2$ is the forward-looking distance $L_{sd}$;
calculating the error from the desired output and the actual output of the network:
$e_k = Y_k - O_k$
where $Y_k$ is the expected output of node $k$;
updating each network weight according to the network error $e_k$:
$\omega_{ij}^{(p+1)} = \omega_{ij}^{(p)} + \eta\, H_j (1 - H_j)\, x_i \sum_{k=1}^{2} \delta_{jk} e_k$
$\delta_{jk}^{(p+1)} = \delta_{jk}^{(p)} + \eta\, H_j e_k$
where $\eta$ is the learning rate and $p$ is the serial number of the current learning sample (the update formulas appear only as images in the original; the forms above are the standard BP updates consistent with the surrounding definitions);
obtaining delay time and forward looking distance by using the trained neural network;
$(t_d, L_{sd}) = \mathrm{network}(x_1, x_2, x_3)$
where $\mathrm{network}(x)$ is the trained neural network;
calculating the current turning radius;
calculating the expected travel path without delay time, where $L_w$ is the robot width;
calculating the change of the robot's travel path angle within the delay time;
and calculating the expected travel path with delay time.
[The formulas for these four steps appear in the original only as equation images and could not be recovered from the text.]
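The following runnable Python sketch puts the pieces together: the 3-5-2 BP network with sigmoid hidden layer and linear output as reconstructed above, trained by the error-driven updates and then queried for $t_d$ and $L_{sd}$. The initial weights, the toy training data, and the pure-pursuit-style turning-radius and angle-correction formulas at the end are assumptions (the original gives those formulas only as images); this is a sketch of the described technique, not the patent's exact implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    # 3 inputs (x1 speed, x2 lateral error, x3 path error), 5 hidden nodes,
    # 2 linear outputs (O1 = delay time t_d, O2 = forward-looking distance L_sd).
    W = rng.normal(scale=0.5, size=(3, 5))    # hidden weights  omega_ij
    alpha = np.zeros(5)                       # hidden thresholds alpha_j
    D = rng.normal(scale=0.5, size=(5, 2))    # output weights  delta_jk
    beta = np.zeros(2)                        # output thresholds beta_k

    def f(x):
        return 1.0 / (1.0 + np.exp(-x))       # sigmoid excitation

    def forward(x):
        H = f(x @ W - alpha)                  # hidden output H_j
        return H, H @ D - beta                # linear output O_k

    def train_step(x, y, eta=0.1):
        """One BP update: e_k = Y_k - O_k, gradients through the sigmoid layer."""
        global W, alpha, D, beta
        H, O = forward(x)
        e = y - O                             # e_k
        D += eta * np.outer(H, e)             # delta_jk update
        beta -= eta * e                       # output threshold update
        g = H * (1.0 - H) * (D @ e)           # back-propagated hidden error
        W += eta * np.outer(x, g)             # omega_ij update
        alpha -= eta * g                      # hidden threshold update

    # Toy training set: an invented mapping (speed, errors) -> (t_d, L_sd).
    X = rng.uniform(0.0, 1.0, size=(200, 3))
    Y = np.stack([0.1 + 0.2 * X[:, 0], 0.5 + 1.0 * X[:, 0]], axis=1)
    for _ in range(300):
        for x, y in zip(X, Y):
            train_step(x, y)

    t_d, L_sd = forward(np.array([0.8, 0.05, 0.02]))[1]

    # Assumed pure-pursuit-style geometry standing in for the image-only formulas:
    e_lat, v = 0.05, 0.8                      # lateral error (m), speed (m/s)
    R = L_sd**2 / (2.0 * e_lat)               # current turning radius
    theta = np.arctan2(2.0 * e_lat, L_sd)     # expected travel angle, no delay
    theta_corrected = theta - v * t_d / R     # subtract angle change during t_d
    print(f"t_d={t_d:.3f}  L_sd={L_sd:.3f}  theta={theta:.4f} -> {theta_corrected:.4f}")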
with reference to fig. 3, the invention also claims a robot line-of-sight path determining device, characterized by comprising:
the acquisition device acquires the traveling environment information of the robot;
the sight distance grabbing device is used for acquiring the advancing sight distance information of the robot based on the advancing environment information of the robot acquired by the acquisition device;
the judging device is used for acquiring the scene state of camera shooting of the robot when the advancing sight distance information of the robot meets a first condition; when the traveling sight distance information of the robot meets a second condition, acquiring the attribute state information of the robot;
and a travel attribute determining device for determining travel attribute information based on the scene state of the camera shot by the robot or the self attribute state information of the robot, inputting the determined travel attribute information into the trained neural network, calculating an expected travel angle, calculating a travel path change within the time by using the delay time, and correcting the expected travel angle.
Further, the sight distance capture device, in acquiring the traveling sight distance information of the robot based on the traveling environment information obtained by the acquisition device, further provides that:
the traveling environment information of the robot includes the rainfall, the snowfall, and the visibility of the robot's traveling environment;
when at least one of the rainfall, the snowfall, or the visibility of the traveling environment reaches the low-sight-distance standard, the sight distance of the traveling robot is set to a short distance;
and when none of them reaches the low-sight-distance standard, the sight distance of the traveling robot is set to a long distance.
Further, the judging device, in acquiring the scene state captured by the robot's camera when the traveling sight distance information of the robot satisfies the first condition, further provides that:
when the sight distance of the traveling robot is a long distance, the traveling sight distance information of the robot satisfies the first condition;
the scene state is obtained by photographing the surrounding environment with a camera arranged on the robot;
and the scene state captured by the robot's camera includes: a road idle state scene, a road busy state scene, a field scene, and a busy market scene.
Further, in acquiring the robot's own attribute state information when the traveling sight distance information of the robot satisfies the second condition:
when the sight distance of the traveling robot is a short distance, the traveling sight distance information of the robot satisfies the second condition;
the robot's own attribute state information is acquired by multiple types of sensors arranged on the robot;
and the robot's own attribute state information includes: the travel mileage, the average traveling speed, and the tortuosity of the travel route.
Further, the travel attribute determining device, in determining the travel attribute information based on the captured scene state or the robot's own attribute state information, inputting it into the trained neural network, calculating the expected travel angle, using the delay time to calculate the travel path change within that time, and correcting the expected travel angle, provides that:
when the scene state captured by the robot's camera satisfies a third condition or the robot's own attribute state information satisfies a fourth condition, the traveling speed attribute information of the robot is determined;
when the captured scene state satisfies a fifth condition or the robot's own attribute state information satisfies a sixth condition, the alarm attribute information of the robot's travel route is determined;
and when the captured scene state satisfies a seventh condition or the robot's own attribute state information satisfies an eighth condition, the traveling direction attribute information of the robot is determined.
The neural network is established, trained, and used to obtain the delay time, the forward-looking distance, the current turning radius, and the expected travel path with delay time in the same manner as described above for the method embodiment.
it should be noted that, in the embodiments of the apparatuses in the present application, each unit/module is a logical unit/module, and physically, one logical unit/module may be one physical unit/module, or may be a part of one physical unit/module, and may also be implemented by a combination of multiple physical units/modules, where the physical implementation manner of the logical unit/module itself is not the most important, and the combination of the functions implemented by the logical unit/module is the key to solve the technical problem provided by the present application. Furthermore, in order to highlight the innovative part of the present application, the above-mentioned device embodiments of the present application do not introduce units/modules which are not so closely related to solve the technical problems presented in the present application, which does not indicate that no other units/modules exist in the above-mentioned device embodiments.
It is noted that, in the examples and descriptions of this application, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element defined by the statement "comprises a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises it.
While the present application has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present application.

Claims (10)

1. A robot sight distance path determining method, characterized by comprising:
acquiring the traveling environment information of the robot;
acquiring the traveling sight distance information of the robot based on the traveling environment information;
when the traveling sight distance information of the robot satisfies a first condition, acquiring the scene state captured by the robot's camera;
when the traveling sight distance information of the robot satisfies a second condition, acquiring the robot's own attribute state information;
and determining travel attribute information based on the captured scene state or the robot's own attribute state information, inputting the travel attribute information into a trained neural network, calculating an expected travel angle, using the delay time to calculate the travel path change within that time, and correcting the expected travel angle.
2. The robot sight distance path determining method according to claim 1, wherein acquiring the traveling sight distance information of the robot based on the traveling environment information specifically comprises:
the traveling environment information of the robot includes the rainfall, the snowfall, and the visibility of the robot's traveling environment;
when at least one of the rainfall, the snowfall, or the visibility of the traveling environment reaches the low-sight-distance standard, setting the sight distance of the robot to a short distance;
and when none of the rainfall, the snowfall, or the visibility of the traveling environment reaches the low-sight-distance standard, setting the sight distance of the robot to a long distance.
3. The robot sight distance path determining method according to claim 1, wherein acquiring the scene state captured by the robot's camera when the traveling sight distance information of the robot satisfies the first condition comprises:
when the sight distance of the robot is a long distance, the traveling sight distance information of the robot satisfies the first condition;
the scene state is obtained by photographing the surrounding environment with a camera arranged on the robot;
and the scene state captured by the robot's camera includes: a road idle state scene, a road busy state scene, a field scene, and a busy market scene.
4. The robot sight distance path determining method according to claim 1, wherein acquiring the robot's own attribute state information when the traveling sight distance information of the robot satisfies the second condition comprises:
when the sight distance of the robot is a short distance, the traveling sight distance information of the robot satisfies the second condition;
the robot's own attribute state information is acquired by multiple types of sensors arranged on the robot;
and the robot's own attribute state information includes: the travel mileage, the average traveling speed, and the tortuosity of the travel route.
5. The robot sight distance path determining method according to claim 1, wherein determining the travel attribute information based on the captured scene state or the robot's own attribute state information, inputting it into the trained neural network, calculating the expected travel angle, using the delay time to calculate the travel path change within that time, and correcting the expected travel angle comprises:
when the scene state captured by the robot's camera satisfies a third condition or the robot's own attribute state information satisfies a fourth condition, determining the traveling speed attribute information of the robot;
when the captured scene state satisfies a fifth condition or the robot's own attribute state information satisfies a sixth condition, determining the alarm attribute information of the robot's travel route;
when the captured scene state satisfies a seventh condition or the robot's own attribute state information satisfies an eighth condition, determining the traveling direction attribute information of the robot;
obtaining the delay time and the forward-looking distance from the captured scene state or the robot's own attribute state information;
and calculating the current turning radius and the expected travel path without delay time, then acquiring the angle change of the robot's travel path within the delay time to obtain the expected travel path with delay time.
6. A robot sight distance path determining device, characterized by comprising:
an acquisition device for acquiring the traveling environment information of the robot;
a sight distance capture device for acquiring the traveling sight distance information of the robot based on the traveling environment information obtained by the acquisition device;
a judging device for acquiring the scene state captured by the robot's camera when the traveling sight distance information of the robot satisfies a first condition, and acquiring the robot's own attribute state information when the traveling sight distance information satisfies a second condition;
and a travel attribute determining device for determining travel attribute information based on the captured scene state or the robot's own attribute state information, inputting it into the trained neural network, calculating an expected travel angle, using the delay time to calculate the travel path change within that time, and correcting the expected travel angle.
7. The robot sight distance path determining device according to claim 6, wherein the sight distance capture device, in acquiring the traveling sight distance information of the robot based on the traveling environment information obtained by the acquisition device, further provides that:
the traveling environment information of the robot includes the rainfall, the snowfall, and the visibility of the robot's traveling environment;
when at least one of the rainfall, the snowfall, or the visibility of the traveling environment reaches the low-sight-distance standard, the sight distance of the robot is set to a short distance;
and when none of them reaches the low-sight-distance standard, the sight distance of the robot is set to a long distance.
8. The robot sight distance path determining device according to claim 6, wherein the judging device, in acquiring the scene state captured by the robot's camera when the traveling sight distance information of the robot satisfies the first condition, further provides that:
when the sight distance of the robot is a long distance, the traveling sight distance information of the robot satisfies the first condition;
the scene state is obtained by photographing the surrounding environment with a camera arranged on the robot;
and the scene state captured by the robot's camera includes: a road idle state scene, a road busy state scene, a field scene, and a busy market scene.
9. The robot sight distance path determining device according to claim 6, wherein acquiring the robot's own attribute state information when the traveling sight distance information of the robot satisfies the second condition further provides that:
when the sight distance of the robot is a short distance, the traveling sight distance information of the robot satisfies the second condition;
the robot's own attribute state information is acquired by multiple types of sensors arranged on the robot;
and the robot's own attribute state information includes: the travel mileage, the average traveling speed, and the tortuosity of the travel route.
10. The robot sight distance path determining device according to claim 6, wherein the travel attribute determining device, in determining the travel attribute information based on the captured scene state or the robot's own attribute state information, inputting it into the trained neural network, calculating the expected travel angle, using the delay time to calculate the travel path change within that time, and correcting the expected travel angle, provides that:
when the scene state captured by the robot's camera satisfies a third condition or the robot's own attribute state information satisfies a fourth condition, determining the traveling speed attribute information of the robot;
when the captured scene state satisfies a fifth condition or the robot's own attribute state information satisfies a sixth condition, determining the alarm attribute information of the robot's travel route;
when the captured scene state satisfies a seventh condition or the robot's own attribute state information satisfies an eighth condition, determining the traveling direction attribute information of the robot;
obtaining the delay time and the forward-looking distance from the captured scene state or the robot's own attribute state information;
and calculating the current turning radius and the expected travel path without delay time, then acquiring the angle change of the robot's travel path within the delay time to obtain the expected travel path with delay time.
CN202210950009.3A 2022-08-09 2022-08-09 Robot sight distance path determining method and device Pending CN115035396A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210950009.3A CN115035396A (en) 2022-08-09 2022-08-09 Robot sight distance path determining method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210950009.3A CN115035396A (en) 2022-08-09 2022-08-09 Robot sight distance path determining method and device

Publications (1)

Publication Number Publication Date
CN115035396A 2022-09-09

Family

ID=83130021

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210950009.3A Pending CN115035396A (en) 2022-08-09 2022-08-09 Robot sight distance path determining method and device

Country Status (1)

Country Link
CN (1) CN115035396A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107168324A (en) * 2017-06-08 2017-09-15 中国矿业大学 A kind of robot path planning method based on ANFIS fuzzy neural networks
JP2018054456A (en) * 2016-09-29 2018-04-05 アイシン・エィ・ダブリュ株式会社 Route guidance system, and route guidance program
CN109405846A (en) * 2018-10-08 2019-03-01 东南大学 A kind of route tracing method of adaptive adjustment forward sight distance and delay parameter
CN114494848A (en) * 2021-12-21 2022-05-13 重庆特斯联智慧科技股份有限公司 Robot sight distance path determining method and device


Similar Documents

Publication Publication Date Title
US20230400567A1 (en) Extended Object Tracking Using RADAR
US11320836B2 (en) Algorithm and infrastructure for robust and efficient vehicle localization
JP7351487B2 (en) Intelligent navigation method and system based on topology map
US11933622B2 (en) Collective vehicle traffic routing
JP2019139762A (en) Method for providing information for vehicle travel
CN108981730A (en) For generating the method and system of reference path for operation automatic driving vehicle
CN112683291B (en) Vehicle turning path planning method and device, vehicle and storage medium
JP2020516880A (en) Method and apparatus for reducing midpoints in a polygon
CN109813332B (en) Method and device for adding virtual guide line
CN111126327A (en) Lane line detection method and system, vehicle-mounted system and vehicle
CN114494848A (en) Robot sight distance path determining method and device
CN115512336B (en) Vehicle positioning method and device based on street lamp light source and electronic equipment
CN115035396A (en) Robot sight distance path determining method and device
US11946757B2 (en) Identifying and displaying smooth and demarked paths
CN113850915A (en) Vehicle tracking method based on Autoware
CN113806607B (en) Track display method and device, electronic equipment and storage medium
CN113534214B (en) Vehicle positioning method and device
CN113923774B (en) Target terminal position determining method and device, storage medium and electronic equipment
CN118010038A (en) Vehicle position correction method, device, equipment and storage medium
CN114241439A (en) Acquisition range determining method and device, electronic equipment and storage medium
CN115859069A (en) Method, device, equipment and storage medium for identifying road object
CN118129756A (en) Track prediction method, vehicle, device and storage medium
CN115588174A (en) Lane line processing method, lane line processing device, computer equipment and storage medium
CN117804480A (en) Method, device, equipment and medium for generating tracking route of automatic driving vehicle
CN115129797A (en) Electronic map processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220909