CN111142529B - Virtual wall decision method and device and robot

Virtual wall decision method and device and robot

Info

Publication number: CN111142529B
Authority: CN (China)
Prior art keywords: robot, virtual wall, information, neural network, network model
Legal status: Active (granted)
Application number: CN201911415298.1A
Other languages: Chinese (zh)
Other versions: CN111142529A
Inventor: 徐慎华 (Xu Shenhua)
Current Assignee: Cloudminds Robotics Co Ltd
Original Assignee: Cloudminds Shanghai Robotics Co Ltd
Priority date / filing date: 2019-12-31
Application filed by Cloudminds Shanghai Robotics Co Ltd
Publication of CN111142529A: 2020-05-12
Application granted; publication of CN111142529B: 2023-06-09

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 Control of position or course in two dimensions
    • G05D 1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D 1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

Embodiments of the invention relate to the technical field of robot navigation, and in particular to a virtual wall decision method, a virtual wall decision device, a robot, and a computer-readable storage medium. The virtual wall decision method comprises the following steps: acquiring current position information of a robot; if the current position of the robot is located in a preset virtual wall area, acquiring environment information of the current position; inputting the environment information into a preset neural network model, and evaluating the safety of the environment information through the neural network model; and if the evaluation result is safe, moving the robot out of the virtual wall area. The method reduces the probability that manual intervention is needed because the robot falsely triggered entry into the virtual wall area, improves working efficiency, and saves labor cost.

Description

Virtual wall decision method and device and robot
Technical Field
Embodiments of the invention relate to the technical field of robot navigation, and in particular to a virtual wall decision method, a virtual wall decision device, a robot, and a computer-readable storage medium.
Background
With the increasing maturity of robotics, service robots have entered everyday human life, including security patrol robots in communities, cleaning robots in large malls, companion robots in homes, and so forth. For a robot to serve people well, it must have strong autonomous mobility: the robot needs to know where it is, which areas it can enter, and which it cannot. For areas unsuitable for the robot to enter, such as bushes, pools and ditches, the system can keep the robot out either by sending it an instruction or by marking them as inaccessible areas on the navigation map.
However, the applicant found that, because navigation positioning accuracy has a certain error in practice, there is a certain probability that the robot enters an area unsuitable for it, so that it cannot travel along the planned path; on a complex road surface the robot may even become stuck and unable to move, and must be manually carried out of the area before it can continue traveling.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a virtual wall decision method, apparatus, robot, and computer readable storage medium, which overcome or at least partially solve the above-described problems.
According to an aspect of an embodiment of the present invention, there is provided a virtual wall decision method, including:
acquiring current position information of a robot;
acquiring environment information of the current position of the robot if the current position of the robot is located in a preset virtual wall area;
inputting the environmental information into a preset neural network model, and evaluating the safety of the environmental information through the neural network model;
and if the evaluation result is safe, the robot moves out of the virtual wall area.
Further, before the method is executed, the method further comprises:
acquiring a passing environment information set of the robot and a corresponding passing result set of the robot;
and inputting the passing environment information set and the passing result set into the neural network model for training.
Further, the inputting of the passing environment information set and the passing result set into the neural network model for training includes:
collecting picture data on the robot navigation path;
preprocessing the acquired picture data;
acquiring passing capability information of the robot;
and inputting the preprocessed picture data, the passing capability information and the passing results into the neural network model for training.
Further, the moving of the robot out of the virtual wall area further comprises:
if the robot fails to move out of the virtual wall area, updating the evaluation result to dangerous and sending out danger information.
Further, the moving of the robot out of the virtual wall area further comprises:
if the robot fails to move out of the virtual wall area, recording the number of times that the robot has failed to move out of the virtual wall area;
if the number of times that the robot has failed to move out of the virtual wall area exceeds a first threshold, updating the evaluation result to dangerous and sending out danger information;
otherwise, continuing to acquire the environment information of the robot's current position and sending it to the preset neural network model for evaluation.
Further, after the evaluation result is updated to dangerous and danger information is sent out, the method further includes:
acquiring the environment information of the robot's current position, and inputting the environment information and the passing result into the neural network model.
Further, the moving of the robot out of the virtual wall area further comprises:
if the robot successfully moves out of the virtual wall area, searching for the path point nearest to the robot and continuing to navigate.
The embodiment of the invention also provides a virtual wall decision device, which comprises:
a position information acquisition module, configured to acquire current position information of a robot;
an environment information acquisition module, configured to acquire environment information of the current position when the current position is located in a preset virtual wall area;
an evaluation module, configured to input the environment information into a preset neural network model and evaluate the safety of the environment information through the neural network model;
and a decision module, configured to instruct the robot to move out of the virtual wall area when the evaluation result is safe.
The embodiment of the invention also provides a robot, which comprises: a camera, a processor, a memory, a communication interface and a communication bus, wherein the camera, the processor, the memory and the communication interface communicate with each other through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the virtual wall decision method.
The embodiment of the invention also provides a computer readable storage medium, wherein at least one executable instruction is stored in the storage medium, and when the executable instruction runs on the virtual wall decision device, the virtual wall decision device executes the virtual wall decision method.
According to the virtual wall decision method, environment recognition capability is combined with the virtual wall. When the robot mistakenly enters the virtual wall area, the surrounding environment is analyzed through the neural network model, and whether the robot is able to move out autonomously is determined based on the robot's own capability and the surrounding environment; when it is, the robot recovers by itself. In this way, the probability that manual intervention is needed because the robot falsely triggered entry into the virtual wall area is reduced, working efficiency is improved, and labor cost is saved.
The foregoing is only an overview of the technical solutions of the embodiments of the present invention; it may be implemented according to the content of the specification so that the technical means of the embodiments can be understood more clearly. The specific embodiments of the present invention that follow are given for clarity and ease of understanding.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 shows a flow chart of a virtual wall decision method provided by an embodiment of the invention;
FIG. 2 is a flow chart of a virtual wall decision method according to another embodiment of the present invention;
fig. 3 is a schematic structural diagram of a virtual wall decision device according to an embodiment of the present invention;
fig. 4 shows a schematic structural diagram of a robot device according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The embodiment of the invention provides a virtual wall decision method applied to a robot system. For example, the method can run directly on the robot device: the robot collects environment information of its current position, and analysis and decision-making are performed by the robot's on-board processor. The method can also run in a robot system formed by a robot together with a remote server or a cloud server: the robot collects environment information of the current position through an on-board camera, laser radar or similar device and sends it over the network to the remote or cloud server, which analyzes the information, makes the decision, and sends the decision result back to the robot to instruct it to perform the corresponding operation. A sketch of this exchange follows.
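To make the robot/cloud split concrete, the following is a minimal robot-side sketch in Python. The endpoint URL, field names and message format are illustrative assumptions of this sketch, not part of the patent.

import base64
import json
import urllib.request

CLOUD_URL = "http://cloud.example.com/virtual-wall/decide"  # hypothetical endpoint

def request_decision(image_bytes: bytes, robot_id: str) -> str:
    # Package the captured environment image and ask the server to decide.
    payload = json.dumps({
        "robot_id": robot_id,
        "image": base64.b64encode(image_bytes).decode("ascii"),
    }).encode("utf-8")
    req = urllib.request.Request(
        CLOUD_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)["decision"]  # e.g. "safe" or "dangerous"

The on-robot variant simply replaces request_decision with a local call to the evaluation model.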
At present, a robot can navigate by acquiring a navigation map and combining it with the position information of the robot device; the position information may be GPS position information or position information determined by WiFi or another system. In grid-map positioning, a virtual wall area is defined for the robot in advance in a preset grid map. The robot normally moves within the area enclosed by the virtual wall, and when it approaches the virtual wall it automatically turns away so that it does not enter the virtual wall area. The virtual wall is generally set in advance and then sent to the robot; it can also be modified in real time on a server and then pushed to the robot for navigation, for example as sketched below.
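As an illustration, the following sketch marks a virtual wall area in a 2-D occupancy grid and checks whether the robot is close enough that it should turn away. The cell values, map size and margin are illustrative assumptions.

import numpy as np

FREE, OBSTACLE, VIRTUAL_WALL = 0, 1, 2

grid = np.zeros((100, 100), dtype=np.uint8)  # a 100 x 100 cell navigation map
grid[30:33, 10:60] = VIRTUAL_WALL            # e.g. a flower bed marked as virtual wall

def near_virtual_wall(row: int, col: int, margin: int = 1) -> bool:
    # True if any cell within `margin` cells of (row, col) is a virtual wall
    # cell, so the robot can turn away before reaching it.
    r0, r1 = max(row - margin, 0), min(row + margin + 1, grid.shape[0])
    c0, c1 = max(col - margin, 0), min(col + margin + 1, grid.shape[1])
    return bool((grid[r0:r1, c0:c1] == VIRTUAL_WALL).any())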
This application takes a patrol robot as an example: the patrol robot is connected to a cloud server through the network, and the cloud server and the patrol robot jointly run the virtual wall decision method. A patrol robot generally operates in a residential district or a factory, where the environment is complex and typically contains trees, flower beds, vehicles, steps, swimming pools and the like. The system first acquires map information of the district and marks virtual wall areas on the district map; for example, the position of a flower bed on the map is a virtual wall area. The robot's current position is determined through GPS information, and navigation is guided so that the robot does not enter a virtual wall area. However, because the position fix is inaccurate or the position information is not updated in time, the robot often strays into a virtual wall area, after which it can no longer move and cannot patrol normally.
The embodiment of the invention provides a virtual wall decision method, as shown in fig. 1, comprising the following steps:
step 101: acquiring current position information of a robot;
the robot acquires the current position information of the robot through the GPS equipment or other positioning equipment carried by the robot, the current position information comprises longitude, latitude and other information, and the system can convert the current position information into position information matched with a map carried by the robot, such as grid information and the like, so as to identify the position of the robot on the current map.
Step 102: acquiring environment information of the current position of the robot if the current position of the robot is located in a preset virtual wall area;
Whether the robot is located in a preset virtual wall area is judged according to the robot's current position information, so as to determine whether the robot has entered the virtual wall area. "Located in" includes, but is not limited to, the robot being inside the virtual wall area, the robot's position overlapping the virtual wall area, and the like. Of course, the system can also derive the area the robot is prohibited from entering from the virtual wall area, for example by extending the preset virtual wall area outward by a margin to serve as the prohibited area, as sketched below.
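A minimal sketch of this membership test, reusing the grid of the earlier sketch; the binary dilation and the two-cell extension are illustrative assumptions about how the outward margin might be realized.

import numpy as np

def inflate(mask: np.ndarray, cells: int) -> np.ndarray:
    # Grow a boolean region outward by `cells` cells (simple binary dilation).
    out = mask.copy()
    for _ in range(cells):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]
        grown[:-1, :] |= out[1:, :]
        grown[:, 1:] |= out[:, :-1]
        grown[:, :-1] |= out[:, 1:]
        out = grown
    return out

def in_prohibited_area(grid: np.ndarray, row: int, col: int,
                       extension_cells: int = 2) -> bool:
    # True if the robot's cell lies inside the virtual wall area or inside
    # the extended margin around it (VIRTUAL_WALL == 2 as above).
    prohibited = inflate(grid == 2, extension_cells)
    return bool(prohibited[row, col])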
When the robot is located within the virtual wall area, it stops traveling in order to determine the next plan. In this application, the robot automatically starts its on-board camera, photographs or records the environment at its current position, and collects surrounding environment information in order to judge whether the current environment is safe. Of course, the robot can also collect surrounding environment information in other ways, such as scanning with a laser radar, or in combination with other instruments such as altimeters.
"Safe" can carry various meanings, and the system can define it as needed. For example, if the robot is on grass and judges that it can leave the area by its own capability, the situation is determined to be safe. If the robot is in a pool, then even if it could drive out by its own capability, water might enter the system or certain components and damage them, so the system can define this situation as unsafe. In other words, what counts as safe or dangerous can be set according to the specific situation; no single unified criterion can be given.
Step 103: inputting the environmental information into a preset neural network model, and evaluating the safety of the environmental information through the neural network model;
the robot sends the collected environmental information of the current position to a server, and the server evaluates the safety of the environment through a preset neural network model.
The neural network model may be of various types, such as a neural network based on convolution operations or a multi-layer self-encoding neural network. The neural network model needs to be trained in advance. Training can be performed on a preset data set, for example by inputting a large amount of preset robot passing environment information together with the robot's passing result in each environment; the model can also learn autonomously from data the robot acquires during daily patrols. For example, the robot can continuously photograph the surrounding environment during patrol and train with the corresponding passing results and its own passing capability.
During patrol, picture data on the navigation path is collected and preprocessed, for example by cropping, flipping and mirroring, so that the system can recognize it conveniently. At the same time, the passing capability information of the robot is acquired, including the robot's chassis height, tire strength, wading capability and the like. After these data are obtained, the preprocessed picture data, the passing capability information and the corresponding passing results are input into the neural network model to generate an environment information matrix; the neural network model is trained on this matrix and outputs an evaluation result. The evaluation result is usually an evaluation value: the system presets a safety threshold, and the evaluation is safe when the evaluation value is greater than the safety threshold and dangerous when it is smaller. Of course, the training methods and algorithms of different neural network models may differ and are not described in detail here.
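The following training sketch uses PyTorch, which is an assumption of this illustration; the patent names neither a framework nor an architecture. A small CNN encodes the preprocessed picture, the passing capability vector (chassis height, tire strength, wading capability, ...) is concatenated to the image features, and the model is fitted to the recorded passing results (1 = passed, 0 = failed).

import torch
import torch.nn as nn

class PassabilityNet(nn.Module):
    def __init__(self, capability_dim: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())   # image -> (N, 32) features
        self.head = nn.Sequential(
            nn.Linear(32 + capability_dim, 32), nn.ReLU(),
            nn.Linear(32, 1))                        # -> scalar evaluation value

    def forward(self, image, capability):
        feats = torch.cat([self.encoder(image), capability], dim=1)
        return torch.sigmoid(self.head(feats)).squeeze(1)

model = PassabilityNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def train_step(images, capabilities, passing_results):
    # One optimization step over a batch of (picture, capability, result) samples.
    optimizer.zero_grad()
    pred = model(images, capabilities)
    loss = loss_fn(pred, passing_results.float())
    loss.backward()
    optimizer.step()
    return loss.item()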
Step 104: and if the evaluation result is safe, the robot moves out of the virtual wall area.
The neural network model evaluates whether the robot can pass based on the input preprocessed picture data, passing capability information and corresponding passing result information. If the evaluation result is safe, the robot is capable of moving out of the current virtual wall area and does so. If the evaluation result is dangerous, the robot is not capable of moving out of the current virtual wall area, or forcing its way out would damage it; the robot then stays where it is and reports the evaluation result to a service desk or a staff terminal to request manual assistance.
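Continuing the sketch above, the threshold decision can be written as follows; the 0.5 safety threshold is an illustrative assumption, to be tuned per deployment.

import torch

SAFETY_THRESHOLD = 0.5

def evaluate(image: torch.Tensor, capability: torch.Tensor) -> str:
    # Map the model's evaluation value (batch of one) to the two outcomes
    # used in the text.
    with torch.no_grad():
        value = model(image, capability).item()
    return "safe" if value > SAFETY_THRESHOLD else "dangerous"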
When the evaluation result is safe and the robot moves out successfully, the robot continues to navigate and patrol by searching the nearest path point.
Therefore, as can be appreciated from the above, the virtual wall decision method provided by the embodiment of the application combines environment recognition capability with the virtual wall. When the robot mistakenly enters the virtual wall area, the surrounding environment is first analyzed through the neural network model, whether the robot is able to move out autonomously is determined based on the robot's own capability and the surrounding environment, and when it is, the robot recovers by itself. In this way, the probability that manual intervention is needed because the robot falsely triggered entry into the virtual wall area is reduced, working efficiency is improved, and labor cost is saved.
Further, another embodiment of the present invention provides a virtual wall decision method, as shown in fig. 2, which improves and refines the above embodiment so that the method is more efficient.
Before steps 201 and 202, the robot performs the conventional steps of the above embodiment, such as acquiring its current position information and judging whether the current position is located in the virtual wall area; these are not repeated here. This embodiment focuses on the steps after the robot has entered the virtual wall area.
Step 203: judging whether the evaluation result is safe or not;
the robot inputs the environmental information acquired in step 201 into the neural network model in step 202, which outputs the evaluation result.
When the evaluation result is safe, go to step 205; when the evaluation result is dangerous, go to step 204.
Step 204: sending out danger information;
if the evaluation result is dangerous, the robot will stop traveling and issue dangerous information. There are various ways to send out the hazard information, such as: after the robot stops continuing to travel, the robot can automatically send out an alarm to inform surrounding staff to perform manual assistance; the evaluation result can also be directly reported to a robot background system, the background program is used for processing after analysis, and the background system is used for notifying related staff to perform manual assistance; the dangerous signal can also be directly sent to the handheld terminal of the nearest staff to request manual assistance.
Step 205: judging whether the robot successfully moves out of the virtual wall or not;
and when the evaluation result is safe, the robot moves out of the virtual wall area autonomously.
If the autonomous removal of the robot from the virtual wall area is successful, step 206 is entered.
However, owing to the complexity of the environment and to errors in the neural network model's analysis, the evaluation result may be safe even though the robot does not actually have the ability to move out of the virtual wall area autonomously. In this case the system makes a re-judgment, and step 207 is performed;
of course, after a failed attempt to move out of the virtual wall area, the robot may also skip further attempts, directly update the evaluation result to dangerous, send out danger information, and go to step 208.
Step 206: searching the nearest path point for navigation;
and when the robot successfully moves out of the virtual wall area, the robot autonomously searches a path point closest to the robot to navigate, and resumes travelling. The path points are points which are preset by the system and passed by patrol routes or some patrol points, the system can send the points to the robot through a network at the beginning, and the robot can navigate along the patrol routes or with the path points as targets.
Step 207: judging whether the number of unsuccessful attempts exceeds a first threshold;
and when the robot does not successfully move out of the virtual wall area, recording the failure times of the robot to move out of the virtual wall area, and accumulating. Of course, this count is only for this entry into the virtual wall area, and is re-counted when the robot enters other virtual wall areas at other times.
If the number of failures of the robot to move out of the virtual wall area exceeds the first threshold, updating the evaluation result to be dangerous, and sending out dangerous information, which indicates that the robot still cannot move out of the virtual wall area after multiple attempts, and manual assistance is needed, at this time, the robot updates the evaluation result to be dangerous, and reports the result to the manual assistance, and step 208 is performed.
If the number of failures of the robot to move out of the virtual wall area does not exceed the first threshold, the robot may try again, go to step 201, and re-analyze the environment. The robot shoots the surrounding environment of the current position again, sends the shooting result to the neural network model, and carries out re-analysis and evaluation by the neural network model and corresponding operation according to the evaluation result.
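Steps 203 to 208 can be summarized in one retry loop, sketched below. The helper callables (capturing the environment, evaluating it, attempting to move out, sending danger information) are injected and assumed to behave as in the earlier sketches; the value 3 for the first threshold is an illustrative assumption.

def escape_virtual_wall(robot_id, capture, evaluate, try_move_out, send_danger,
                        first_threshold: int = 3) -> bool:
    # Returns True if the robot moved out on its own (the caller then resumes
    # navigation via the nearest path point), False if manual assistance is needed.
    failures = 0  # counted per entry into a virtual wall area only
    while True:
        env = capture()                      # re-photograph the surroundings
        if evaluate(env) != "safe":          # step 203 -> step 204
            send_danger(robot_id)
            return False
        if try_move_out():                   # step 205 -> step 206 on success
            return True
        failures += 1                        # step 207
        if failures > first_threshold:       # step 208: update result to dangerous
            send_danger(robot_id)
            return False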
Step 208: updating the evaluation result and sending out danger information;
If the evaluation result was safe but the robot still cannot move out of the virtual wall area after multiple attempts, the neural network's evaluation was in error; the robot updates the evaluation result to dangerous and reports the danger information. The ways of sending danger information are as described above and are not repeated here.
As can be appreciated from the above, in this embodiment of the invention, when the evaluation result is safe, the robot's subsequent actions are analyzed to determine whether it really has the ability to move out independently, and the robot is given multiple attempts, so that its capability is exploited to the fullest. This further reduces the probability that manual intervention is needed because the robot falsely triggered entry into the virtual wall area, improves working efficiency, and saves labor cost.
Another embodiment of the present invention further provides a virtual wall decision device 300, as shown in fig. 3, including:
the position information acquisition module 301, configured to acquire current position information of a robot;
the environment information acquisition module 302, configured to acquire environment information of the current position when the current position is located in a preset virtual wall area;
the evaluation module 303, configured to input the environment information into a preset neural network model, evaluate the safety of the environment information through the neural network model, and output an evaluation result;
the decision module 304, configured to instruct the robot to move out of the virtual wall area when the evaluation result is safe.
Further, before performing the evaluation, the evaluation module 303 is configured to acquire a passing environment information set of the robot and a corresponding passing result set of the robot,
and to input the passing environment information set and the passing result set into the neural network model for training.
Further, before performing the evaluation, the evaluation module 303 is configured to collect picture data on the robot's navigation path,
preprocess the collected picture data,
acquire passing capability information of the robot,
and input the preprocessed picture data, the passing capability information and the passing results into the neural network model for training.
Further, the decision module 304 is configured to update the evaluation result to dangerous and send out danger information when the robot fails to move out of the virtual wall area.
Further, the decision module 304 is configured to record the number of times that the robot has failed to move out of the virtual wall area when the robot fails to move out of the virtual wall area;
if the number of times that the robot has failed to move out of the virtual wall area exceeds a first threshold, the evaluation result is updated to dangerous and danger information is sent out;
otherwise, the environment information of the robot's current position continues to be acquired and is sent to the preset neural network model for evaluation.
Further, the evaluation module 303 is further configured, after the evaluation result is updated to dangerous and danger information is sent out, to acquire the environment information of the robot's current position and input the environment information and the passing result into the neural network model.
Further, the virtual wall decision device 300 further includes a navigation module 305, configured to, when the robot successfully moves out of the virtual wall area, search for the path point nearest to the robot and continue navigation.
Therefore, as can be appreciated from the above, the virtual wall decision device provided by the embodiment of the application likewise combines environment recognition capability with the virtual wall: when the robot mistakenly enters the virtual wall area, the surrounding environment is first analyzed through the neural network model, whether the robot is able to move out autonomously is determined based on the robot's own capability and the surrounding environment, and when it is, the robot recovers by itself. In this way, the probability that manual intervention is needed because the robot falsely triggered entry into the virtual wall area is reduced, working efficiency is improved, and labor cost is saved.
Embodiments of the present invention provide a computer-readable storage medium storing at least one executable instruction for performing the virtual wall decision method of any of the above method embodiments.
The executable instruction may specifically cause a processor to:
acquiring current position information of a robot;
acquiring environment information of the current position of the robot if the current position of the robot is located in a preset virtual wall area;
inputting the environmental information into a preset neural network model, and evaluating the safety of the environmental information through the neural network model;
and if the evaluation result is safe, the robot moves out of the virtual wall area.
Further, before the method is executed, the method further comprises:
acquiring a passing environment information set of the robot and a corresponding passing result set of the robot;
and inputting the passing environment information set and the passing result set into the neural network model for training.
Further, the inputting of the passing environment information set and the passing result set into the neural network model for training includes:
collecting picture data on the robot navigation path;
preprocessing the acquired picture data;
acquiring passing capability information of the robot;
and inputting the preprocessed picture data, the passing capability information and the passing results into the neural network model for training.
Further, the moving of the robot out of the virtual wall area further comprises:
if the robot fails to move out of the virtual wall area, updating the evaluation result to dangerous and sending out danger information.
Further, the moving of the robot out of the virtual wall area further comprises:
if the robot fails to move out of the virtual wall area, recording the number of times that the robot has failed to move out of the virtual wall area;
if the number of times that the robot has failed to move out of the virtual wall area exceeds a first threshold, updating the evaluation result to dangerous and sending out danger information;
otherwise, continuing to acquire the environment information of the robot's current position and sending it to the preset neural network model for evaluation.
Further, after updating the evaluation result to dangerous and issuing danger information, the method further comprises:
acquiring the environment information of the robot's current position, and inputting the environment information and the passing result into the neural network model.
Further, the moving of the robot out of the virtual wall area further comprises:
if the robot successfully moves out of the virtual wall area, searching for the path point nearest to the robot and continuing to navigate.
The computer-readable storage medium provided by the embodiment of the invention can execute the virtual wall decision method, which combines environment recognition capability with the virtual wall. When the robot mistakenly enters the virtual wall area, the surrounding environment is first analyzed through the neural network model, whether the robot is able to move out autonomously is determined based on the robot's own capability and the surrounding environment, and when it is, the robot recovers by itself. In this way, the probability that manual intervention is needed because the robot falsely triggered entry into the virtual wall area is reduced, working efficiency is improved, and labor cost is saved.
Fig. 4 shows a schematic structural diagram of an embodiment of the robot provided by the present invention, and the specific embodiment of the present invention is not limited to the specific implementation of the robot.
As shown in fig. 4, the robot may include: a camera 401, a processor 402, a communication interface 404, a memory 406, and a communication bus 408.
The camera 401, the processor 402, the communication interface 404 and the memory 406 communicate with each other through the communication bus 408. The camera 401 is used for capturing video or images. The processor 402 is configured to execute the program 410, and may specifically perform the relevant steps of the virtual wall decision method embodiments described above.
In particular, program 410 may include program code including computer-operating instructions.
The processor 402 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors of the robot may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
The memory 406 is used for storing the program 410. The memory 406 may comprise high-speed RAM and may also include non-volatile memory, such as at least one disk memory.
Program 410 may be specifically operable to cause processor 402 to:
acquiring current position information of a robot;
acquiring environment information of the current position of the robot if the current position of the robot is located in a preset virtual wall area;
inputting the environmental information into a preset neural network model, and evaluating the safety of the environmental information through the neural network model;
and if the evaluation result is safe, the robot moves out of the virtual wall area.
Further, before the method is executed, the method further comprises:
acquiring a passing environment information set of the robot and a corresponding passing result set of the robot;
and inputting the passing environment information set and the passing result set into the neural network model for training.
Further, the inputting of the passing environment information set and the passing result set into the neural network model for training includes:
collecting picture data on the robot navigation path;
preprocessing the acquired picture data;
acquiring passing capability information of the robot;
and inputting the preprocessed picture data, the passing capability information and the passing results into the neural network model for training.
Further, the moving of the robot out of the virtual wall area further comprises:
if the robot fails to move out of the virtual wall area, updating the evaluation result to dangerous and sending out danger information.
Further, the moving of the robot out of the virtual wall area further comprises:
if the robot fails to move out of the virtual wall area, recording the number of times that the robot has failed to move out of the virtual wall area;
if the number of times that the robot has failed to move out of the virtual wall area exceeds a first threshold, updating the evaluation result to dangerous and sending out danger information;
otherwise, continuing to acquire the environment information of the robot's current position and sending it to the preset neural network model for evaluation.
Further, after updating the evaluation result to dangerous and issuing danger information, the method further comprises:
acquiring the environment information of the robot's current position, and inputting the environment information and the passing result into the neural network model.
Further, the moving of the robot out of the virtual wall area further comprises:
if the robot successfully moves out of the virtual wall area, searching for the path point nearest to the robot and continuing to navigate.
According to the robot provided by the embodiment of the invention, environment recognition capability is combined with the virtual wall. When the robot mistakenly enters the virtual wall area, the surrounding environment is analyzed through the neural network model, whether the robot is able to move out autonomously is determined based on the robot's own capability and the surrounding environment, and when it is, the robot recovers by itself. In this way, the probability that manual intervention is needed because the robot falsely triggered entry into the virtual wall area is reduced, working efficiency is improved, and labor cost is saved.
The embodiment of the invention provides a virtual wall decision device which is used for executing the virtual wall decision method.
Embodiments of the present invention provide a computer program that may be invoked by a processor to cause a robot to perform the virtual wall decision method of any of the method embodiments described above.
Embodiments of the present invention provide a computer program product comprising a computer program stored on a computer readable storage medium, the computer program comprising program instructions which, when run on a computer, cause the computer to perform the virtual wall decision method of any of the method embodiments described above.
The algorithms or displays presented herein are not inherently related to any particular computer, virtual system or other apparatus. Various general-purpose systems may also be used with the teachings herein, and the structure required to construct such a system is apparent from the description above. In addition, embodiments of the present invention are not directed to any particular programming language; it will be appreciated that the teachings described herein may be implemented in a variety of programming languages, and the above description of specific languages is provided to disclose the enablement and best mode of the present invention.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the above description of exemplary embodiments of the invention, various features of the embodiments are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the various inventive aspects. The disclosed method, however, should not be construed as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. The claims following the detailed description are therefore expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments can be used in any combination.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless specifically stated.

Claims (8)

1. A virtual wall decision method, comprising:
acquiring current position information of a robot;
acquiring environment information of the current position of the robot if the current position of the robot is located in a preset virtual wall area;
inputting the environmental information into a preset neural network model, and evaluating the safety of the environmental information through the neural network model;
if the evaluation result is safe, the robot moves out of the virtual wall area;
before the environmental information is input into a preset neural network model and the safety of the environmental information is evaluated through the neural network model, the method comprises the following steps:
acquiring a passing environment information set of the robot and a corresponding passing result set of the robot;
inputting the passing environment information set and the passing result set into the neural network model for training;
the step of inputting the passing environment information set and the passing result set into the neural network model for training comprises the following steps:
collecting picture data on the robot navigation path;
preprocessing the acquired picture data;
acquiring passing capability information of the robot;
and inputting the preprocessed picture data, the passing capability information and the passing results into the neural network model for training.
2. The virtual wall decision method of claim 1, wherein the moving of the robot out of the virtual wall area further comprises:
if the robot fails to move out of the virtual wall area, updating the evaluation result to dangerous and sending out danger information.
3. The virtual wall decision method of claim 1, wherein the moving of the robot out of the virtual wall area further comprises:
if the robot fails to move out of the virtual wall area, recording the number of times that the robot has failed to move out of the virtual wall area;
if the number of times that the robot has failed to move out of the virtual wall area exceeds a first threshold, updating the evaluation result to dangerous and sending out danger information;
otherwise, continuously collecting the environment information of the current position of the robot, and sending the environment information to the preset neural network model for evaluation.
4. The virtual wall decision method as claimed in claim 2 or 3, wherein the updating of the evaluation result to dangerous and issuing of danger information further comprises:
and acquiring the environment information of the current position of the robot, and inputting the environment information and the passing result into the neural network model.
5. The virtual wall decision method of claim 1, wherein the moving of the robot out of the virtual wall area further comprises:
if the robot successfully moves out of the virtual wall area, searching for the path point nearest to the robot and continuing to navigate.
6. A virtual wall decision making apparatus, comprising:
a position information acquisition module, configured to acquire current position information of a robot;
an environment information acquisition module, configured to acquire environment information of the current position when the current position is located in a preset virtual wall area;
an evaluation module, configured to input the environment information into a preset neural network model and evaluate the safety of the environment information through the neural network model, wherein before the environment information is input into the preset neural network model and its safety is evaluated through the neural network model, a passing environment information set of the robot and a corresponding passing result set of the robot are acquired and input into the neural network model for training, the training comprising: collecting picture data on the robot navigation path; preprocessing the acquired picture data; acquiring passing capability information of the robot; and inputting the preprocessed picture data, the passing capability information and the passing results into the neural network model for training;
and a decision module, configured to instruct the robot to move out of the virtual wall area when the evaluation result is safe.
7. A robot, comprising: a camera, a processor, a memory, a communication interface and a communication bus, wherein the camera, the processor, the memory and the communication interface communicate with each other through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to perform the virtual wall decision method of any one of claims 1-5.
8. A computer readable storage medium having stored therein at least one executable instruction that, when run on a virtual wall decision device, causes the virtual wall decision device to perform the virtual wall decision method of any of claims 1-5.
CN201911415298.1A 2019-12-31 2019-12-31 Virtual wall decision method and device and robot Active CN111142529B (en)

Priority Applications (1)

Application number CN201911415298.1A; priority date 2019-12-31; filing date 2019-12-31; title: Virtual wall decision method and device and robot (CN111142529B)

Applications Claiming Priority (1)

Application number CN201911415298.1A; priority date 2019-12-31; filing date 2019-12-31; title: Virtual wall decision method and device and robot (CN111142529B)

Publications (2)

Publication number and publication date:
CN111142529A (en): 2020-05-12
CN111142529B (en): 2023-06-09

Family

Family ID: 70522746

Family Applications (1)

Application number CN201911415298.1A; status Active (CN111142529B); title: Virtual wall decision method and device and robot

Country Status (1)

Country Link
CN (1) CN111142529B (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8355818B2 (en) * 2009-09-03 2013-01-15 Battelle Energy Alliance, Llc Robots, systems, and methods for hazard evaluation and visualization
CN104407616B (en) * 2014-12-03 2017-04-26 沈阳工业大学 Dynamic path planning method for mobile robot based on immune network algorithm
DE102016125408A1 (en) * 2016-12-22 2018-06-28 RobArt GmbH AUTONOMOUS MOBILE ROBOT AND METHOD FOR CONTROLLING AN AUTONOMOUS MOBILE ROBOT
CN108007464B (en) * 2017-12-04 2021-03-16 国网智能科技股份有限公司 Autonomous navigation method and system for line patrol robot of high-voltage transmission line
CN110347152B (en) * 2019-06-11 2022-08-16 深圳拓邦股份有限公司 Virtual wall setting method, system and device

Also Published As

Publication number and date: CN111142529A (en), 2020-05-12


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
TA01: Transfer of patent application right
Effective date of registration: 2021-01-29
Address after: 200000 second floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai
Applicant after: Dalu Robot Co.,Ltd.
Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)
Applicant before: CLOUDMINDS (SHENZHEN) ROBOTICS SYSTEMS Co.,Ltd.
CB02: Change of applicant information
Address after: 201111 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai
Applicant after: Dayu robot Co.,Ltd.
Address before: 200000 second floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai
Applicant before: Dalu Robot Co.,Ltd.
GR01: Patent grant
GR01 Patent grant