CN108829095B - Geo-fence setting method and method for limiting robot movement - Google Patents

Info

Publication number
CN108829095B
Authority
CN
China
Prior art keywords
information
geo
fence
virtual wall
robot
Prior art date
Legal status
Active
Application number
CN201810453657.1A
Other languages
Chinese (zh)
Other versions
CN108829095A (en)
Inventor
李畅
张峻彬
Current Assignee
Yunjing Intelligent Innovation Shenzhen Co ltd
Original Assignee
Yunjing Intelligence Technology Dongguan Co Ltd
Priority date
Filing date
Publication date
Application filed by Yunjing Intelligence Technology Dongguan Co Ltd filed Critical Yunjing Intelligence Technology Dongguan Co Ltd
Priority to CN201810453657.1A priority Critical patent/CN108829095B/en
Publication of CN108829095A publication Critical patent/CN108829095A/en
Application granted granted Critical
Publication of CN108829095B publication Critical patent/CN108829095B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0259 Control of position or course in two dimensions specially adapted to land vehicles using magnetic or electromagnetic means
    • G05D1/0263 Control of position or course in two dimensions specially adapted to land vehicles using magnetic or electromagnetic means using magnetic strips

Abstract

The present disclosure provides a geo-fence setting method, including: acquiring geo-fence information; and marking a geo-fence area on an electronic map according to the geo-fence information, where the geo-fence area marked on the electronic map can be used to control the movement range of a robot. Acquiring the geo-fence information includes acquiring it from a robot, which obtains it as follows: a robot equipped with a virtual wall detection device moves near a virtual wall placed in the area where the geo-fence is to be established; and, in response to the virtual wall detection device detecting the virtual wall, the position information of the detection device is recorded as the geo-fence information, or graphic or picture information derived from the position information is used as the geo-fence information. The present disclosure also provides a geo-fence setting system.

Description

Geo-fence setting method and method for limiting robot movement
Technical Field
The invention relates to the field of electronic technology, and in particular to a geo-fence setting method and a method for limiting the movement of a robot.
Background
An intelligent robot has map construction and storage capabilities and works on the basis of the map; for example, an intelligent cleaning robot can build a map of a home and clean according to it. During operation, however, two common situations arise: there are areas the user does not want the robot to enter, so the user needs to set a virtual wall to keep the robot out; and there are tasks for which the user wants to confine the robot's movement to a particular area.
Existing methods for setting a virtual wall include placing an infrared emitting device, a magnetic strip, an electronic tag, and the like. In these methods a signal transmitter or a passive signal emitter must be placed near the area where the geo-fence is needed, and the robot performs a response action once it senses the virtual wall. Such methods require the virtual wall hardware to remain in place for a long time and have several drawbacks: they damage the environment (for example, magnetic strips laid around a carpet spoil the home's original decoration), they obstruct or interfere with people's movement (for example, an infrared emitter placed in a doorway can trip passers-by), and they require additional equipment. Methods that build virtual walls from visual beacons require an extra camera and cannot mark low objects such as the edge of a carpet. Methods using an infrared emitting device cannot form a closed geo-fence, and the emitter is easily knocked over or moved by the robot, which invalidates the geo-fence.
Disclosure of Invention
One aspect of the present disclosure provides a geo-fence setting method, including: acquiring geo-fence information; and marking a geo-fence area on an electronic map according to the geo-fence information, where the geo-fence area marked on the electronic map can be used to control the movement range of a robot. Acquiring the geo-fence information includes acquiring it from a robot, which obtains it as follows: a robot equipped with a virtual wall detection device moves near a virtual wall placed in the area where the geo-fence is to be established; and, in response to the virtual wall detection device detecting the virtual wall, the position information of the detection device is recorded as the geo-fence information, or graphic or picture information derived from the position information is used as the geo-fence information.
In an embodiment of the present disclosure, the virtual wall includes a signal generating device or a contact feedback device, and the virtual wall detection device detects the virtual wall when: it senses a signal emitted by the signal generating device; or the magnitude of the sensed signal exceeds a threshold; or it receives feedback information from the contact feedback device.
In an embodiment of the present disclosure, the signal generating device or contact feedback device includes: an infrared transmitter, a magnetic strip, a radio wave transmitter, an electronic tag, a color card, or a metal pole piece. The virtual wall detection device includes: an infrared receiving tube, a magnetic sensor, a radio wave receiver, a Hall sensor, an image capturing device, a metal contact, or a pressure/deformation sensor. The virtual wall detection device can sense a signal emitted by the virtual wall or sense contact with it.
In an embodiment of the present disclosure, acquiring the geo-fence information by the robot further comprises: stopping the recording of position information in response to the robot completing one full loop along the virtual wall; or in response to the robot reaching the end of the virtual wall; or in response to the robot's travel time reaching a preset duration.
In an embodiment of the present disclosure, the geofence setting method includes obtaining geofence information from an input module of a first device.
In an embodiment of the disclosure, obtaining geofence information from an input module of the first device comprises: obtaining a plurality of input points from the input module of the first device; and connecting the plurality of input points to form a graphic or a picture that serves as the geo-fence information.
In an embodiment of the present disclosure, the input module of the first device includes a plurality of preset graphics; obtaining geofence information from an input module of the first device includes: and acquiring one or more preset graphs from an input module of the first device as the geo-fence information.
In an embodiment of the disclosure, obtaining geofence information from an input module of the first device comprises: acquiring a starting point and an end point from an input module of the first device; and obtaining a graph or a picture as the geo-fence information according to the starting point, the end point and the grid information on the electronic map.
In an embodiment of the present disclosure, the geofence setting method further comprises adjusting the geo-fence information, namely moving, scaling, stretching, or rotating the position information, graphic information, or picture information.
In an embodiment of the present disclosure, the geo-fence information includes one or more of position information, graphic information, and picture information. Marking the geo-fence information on an electronic map comprises projecting the geo-fence information onto the electronic map, the electronic map being a grid map, and then: marking the grids inside the picture projection as the geo-fence area; or marking the grids outside the picture projection as the geo-fence area; or marking the grids occupied by the graphic projection, together with nearby grids, as the geo-fence area; or marking the grids occupied by the projected position points, together with nearby grids, as the geo-fence area.
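The first marking rule above (grids inside a closed projection become geo-fence area) can be sketched in code. This is a purely illustrative example, not part of the claimed method; the names (`mark_geofence`, `point_in_polygon`, `GEOFENCE`) and the dictionary-based grid are assumptions, and a standard ray-casting inside test stands in for whatever projection the patent's implementation uses.

```python
RES = 0.05       # grid resolution in metres (the 5 cm cells mentioned below)
GEOFENCE = 0b10  # illustrative bit flag for the geo-fence layer

def point_in_polygon(x, y, poly):
    """Ray-casting test: is the point (x, y) inside the closed polygon `poly`?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray through y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def mark_geofence(grid, poly, res=RES):
    """Mark every cell of `grid` ({(row, col): flags}) whose centre lies
    inside `poly` (vertices in metres) as geo-fence area."""
    for (row, col) in grid:
        cx, cy = (col + 0.5) * res, (row + 0.5) * res
        if point_in_polygon(cx, cy, poly):
            grid[(row, col)] |= GEOFENCE

# Usage: a 10 x 10 map and a 0.2 m square fence in its lower-left corner.
grid = {(r, c): 0 for r in range(10) for c in range(10)}
fence = [(0.0, 0.0), (0.2, 0.0), (0.2, 0.2), (0.0, 0.2)]
mark_geofence(grid, fence)
fenced = {cell for cell, flags in grid.items() if flags & GEOFENCE}
```

With 5 cm cells, the 0.2 m square covers a 4 × 4 block of cells; testing the cell centre keeps the rule unambiguous for cells cut by the fence boundary.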
In an embodiment of the present disclosure, the geofence setting method further comprises: acquiring an electronic map marked with a geo-fence area; locating the robot's position while it moves; and causing the robot to perform a preset behavior when its position overlaps the geo-fence area.
In an embodiment of the disclosure, the robot position overlapping the geo-fenced area includes: the grid occupied by the projection of the robot position on the map overlaps the grid marked as a geofenced area on the map.
In an embodiment of the present disclosure, the preset behavior includes one or more of: stopping, decelerating, changing the direction of motion, moving away from the geo-fence area, and moving along the geo-fence boundary.
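The overlap check of the preceding embodiments — the grid occupied by the robot's projected position intersecting grids marked as geo-fence — can be sketched as follows. All names, the circular footprint, and the 0.15 m radius are illustrative assumptions; the patent only requires that occupied and marked grids overlap.

```python
RES = 0.05           # grid resolution in metres
ROBOT_RADIUS = 0.15  # assumed robot footprint radius in metres

def cells_under_robot(x, y, radius=ROBOT_RADIUS, res=RES):
    """Grid cells overlapped by the robot's circular footprint at (x, y)."""
    cells = set()
    reach = int(radius / res) + 1
    row0, col0 = int(y / res), int(x / res)
    for dr in range(-reach, reach + 1):
        for dc in range(-reach, reach + 1):
            cx = (col0 + dc + 0.5) * res   # centre of the candidate cell
            cy = (row0 + dr + 0.5) * res
            if (cx - x) ** 2 + (cy - y) ** 2 <= radius ** 2:
                cells.add((row0 + dr, col0 + dc))
    return cells

def check_geofence(x, y, fenced_cells):
    """Return a preset behavior when the robot overlaps the geo-fence area."""
    if cells_under_robot(x, y) & fenced_cells:
        return "change_direction"  # any of the behaviors listed above would do
    return "continue"

# Usage: a fenced strip along grid row 10 (y around 0.5 m).
fenced = {(10, col) for col in range(20)}
```

A robot positioned at (0.5, 0.5) overlaps the strip and triggers the behavior; one at (2.0, 2.0) does not.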
Another aspect of the present disclosure provides a geo-fence setting system, comprising: a device to which a virtual wall detection apparatus is fixed, used to record the geo-fence information; the system obtains the geo-fence information from the device and marks a geo-fence area on an electronic map according to the geo-fence information.
Yet another aspect of the present disclosure provides a geo-fencing setup system comprising: one or more processors; a storage device to store one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to obtain geo-fence information and mark a geo-fence area on an electronic map according to the geo-fence information.
Yet another aspect of the disclosure provides a computer-readable medium having stored thereon executable instructions that, when executed by a processor, cause the processor to: acquiring geo-fence information; marking a geo-fenced area on an electronic map according to the geo-fenced information.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
fig. 1 schematically illustrates an application scenario of a geo-fence setting method according to an embodiment of the present disclosure;
fig. 2 schematically shows a flow chart of a geofence setting method, in accordance with an embodiment of the present disclosure;
FIG. 3 schematically illustrates a grid map according to an embodiment of the disclosure;
FIG. 4 schematically illustrates a system block diagram of a robot in accordance with an embodiment of the present disclosure;
FIG. 5 schematically illustrates a display of recorded sensing points and inflection points according to an embodiment of the present disclosure;
fig. 6 schematically shows a geo-fence information schematic according to an embodiment of the present disclosure;
FIG. 7 schematically illustrates a block diagram of a robot in accordance with an embodiment of the present disclosure;
fig. 8 schematically illustrates geofence recording points, in accordance with an embodiment of the present disclosure;
FIG. 9 schematically illustrates a schematic diagram of an input module including preset graphics, in accordance with an embodiment of the present disclosure;
FIG. 10 schematically illustrates a first schematic of a starting point and an ending point according to an embodiment of the disclosure;
FIG. 11 schematically illustrates a second schematic of a starting point and an ending point according to an embodiment of the disclosure;
FIG. 12 schematically illustrates a responsive action of a robot entering a geo-fenced area, in accordance with an embodiment of the present disclosure;
fig. 13 schematically illustrates a geofence setting system block diagram, in accordance with an embodiment of the present disclosure;
fig. 14 schematically illustrates a block diagram of a computer system for geofence settings, in accordance with an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). Where a convention analogous to "at least one of A, B, or C, etc." is used, such a construction is likewise intended in the sense one having skill in the art would understand the convention. It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase "A or B" should be understood to include the possibility of "A", or "B", or "A and B".
An embodiment of the present disclosure provides a geo-fence setting method, including: acquiring geo-fence information; and marking a geo-fence area on an electronic map according to the geo-fence information, where the geo-fence area marked on the electronic map can be used to control the movement range of a robot. Acquiring the geo-fence information includes acquiring it from a robot, which obtains it as follows: a robot equipped with a virtual wall detection device moves near a virtual wall placed in the area where the geo-fence is to be established; and, in response to the virtual wall detection device detecting the virtual wall, the position information of the detection device is recorded as the geo-fence information, or graphic or picture information derived from the position information is used as the geo-fence information.
In embodiments of the present disclosure, marking the geo-fence area on the electronic map not only avoids leaving an extra signal transmitter or passive signal emitter in the real scene for a long time, but also allows the robot's path to be planned better on the basis of the marked map, reducing the time overhead of response actions at the geo-fence boundary and lowering the user's cost.
Fig. 1 schematically illustrates an application scenario of the geo-fence setting method according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example, given to help those skilled in the art understand the technical content of the present disclosure; it does not mean that the embodiments cannot be used in other apparatuses, environments, or scenarios.
As shown in fig. 1, the method can be used to set a geo-fence for a household cleaning robot 100. A home is generally divided into sub-areas such as a living room 101, a bedroom 102, a kitchen 103, and a bathroom 104, and each sub-area contains furniture, appliances, and other household articles. During cleaning there are areas the user does not want the robot to enter, for example a bathroom, a children's play area, or a carpet. In another situation the user wishes to restrict the robot to a certain area; for example, if the user wants the robot to work only in the dining room, a geo-fence must be set around the dining room area to prevent the robot from leaving it.
The household cleaning robot includes but is not limited to: vacuum cleaning robots, mopping and sweeping integrated robots, and the like.
It is understood that the application scenario in fig. 1 is only an example. Besides household cleaning robots, the method can also be used in other scenarios where a geo-fence is needed, such as handling robots and patrol robots in factories or shopping malls, agricultural robots on farms, underwater robots, aerial robots, and the like.
Fig. 2 schematically shows a flow chart of a geofence setting method, in accordance with an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S200 to S300.
In operation S200, geo-fence information is acquired.
According to embodiments of the disclosure, the geo-fence information may be any information, such as position, orientation, shape, or picture, that describes the actual position and extent of the geo-fence. It may be entered directly by a user on an external device or measured in situ by a detection device.
In operation S300, a geo-fence area is marked on the electronic map according to the geo-fence information, wherein the geo-fence area marked on the electronic map can be used to control a range of motion of the robot.
In the embodiment of the present disclosure, the electronic map may be a map in which obstacle information is stored, for example, an electronic map for an intelligent cleaning robot stores room area division information, room contour information, obstacle information, a walkable area, and the like, and the electronic map includes, but is not limited to, a grid map, a tree map, and the like, which are storage structures capable of storing the above information.
Fig. 3 schematically illustrates a grid map according to an embodiment of the present disclosure.
As shown in fig. 3, the grid map consists of a plurality of grids, each represented as a square cell of a certain resolution (e.g., 5 cm × 5 cm), and each cell can store information for multiple map layers, including an obstacle map, a room-division map, a geo-fence map, and so on. The obstacle map distinguishes two kinds of cells: (a) walkable cells and (b) obstacle cells. In fig. 3 walkable cells are shown in white and obstacle cells in black; the obstacles include walls and in-room obstacles such as an area where toys are piled.
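The multi-layer grid map described above can be sketched as a small data structure. This is an illustrative assumption rather than the patent's implementation: the class name, the bit-flag encoding, and the helper methods are invented for the example.

```python
OBSTACLE = 0b001   # obstacle-map layer: cell blocked (walls, furniture, ...)
GEOFENCE = 0b010   # geo-fence layer, illustrative encoding only

class GridMap:
    """Square-cell grid map storing per-cell bit flags for several layers."""

    def __init__(self, rows, cols, res=0.05):
        self.res = res                          # cell size in metres (5 cm)
        self.cells = [[0] * cols for _ in range(rows)]

    def set_obstacle(self, row, col):
        self.cells[row][col] |= OBSTACLE

    def walkable(self, row, col):
        return not (self.cells[row][col] & OBSTACLE)

    def world_to_cell(self, x, y):
        """Convert a metric position to the (row, col) cell containing it."""
        return int(y / self.res), int(x / self.res)
```

A position such as (1.02 m, 0.51 m) maps to cell (10, 20) at 5 cm resolution; marking that cell as an obstacle immediately makes it non-walkable, while all other cells stay walkable.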
Marking a geo-fence area on the electronic map according to the geo-fence information may mean mapping the geo-fence information onto the electronic map and then marking the relevant area according to the mapping result. The marking can be done during map construction or on an already-built map.
The geo-fence area marked on the electronic map can control the robot's range of motion while it works: the robot responds by detecting the geo-fence area on the electronic map, regardless of whether any physical device remains near the geo-fence boundary. The response action may be, for example, a change of direction.
Acquiring the geo-fence information in operation S200 may include operation S210:
in operation S210, geo-fence information is acquired by a robot. The robot may acquire geo-fence information through operations S211 to S212:
in operation S21l, a robot having a virtual wall detection apparatus mounted thereon is moved near a virtual wall, which is disposed in an area where a geofence needs to be established.
The virtual wall is placed in the area where the geo-fence is needed and may be set at the geo-fence boundary; for example, if the user does not want the robot to clean on a carpet, the virtual wall is laid around the carpet.
Fig. 4 schematically illustrates a system block diagram of a robot according to an embodiment of the present disclosure.
As shown in fig. 4, the robot includes a robot body 301 and, mounted on it, a control unit 302, a driving unit 303, and a virtual wall detection device 304. The control unit 302 may be a microprocessor (MCU); the driving unit 303 may consist of a pair of wheels with their motors and driver chips. The control unit 302 sends control instructions to the driving unit 303, which drives the robot body 301 accordingly, changing its speed, direction, and so on. The virtual wall detection device 304 senses information such as the position and direction of the virtual wall and passes it to the control unit 302, which collects and processes the returned signals and extracts the distance between the robot and the virtual wall, or a threshold signal based on that distance.
The robot may also include a display and input unit 305 through which the user enters instructions, steers the robot close to the virtual wall, or starts the recording function during normal operation. During or after recording, the result is shown on the display and input unit 305, where the user can adjust it; the robot can then carry out the geo-fence setting method using the recorded geo-fence information. Alternatively, an external device may be used for input and display: the robot sends the recorded position information over a network (WiFi, Bluetooth, and the like) to the external device, which displays it and executes the geo-fence setting method. The external device may be a mobile phone, a computer, a server, and the like.
In operation S212, in response to the virtual wall detection device detecting the virtual wall, the position information of the detection device is recorded as geo-fence information, or graphic or picture information derived from the position information is used as the geo-fence information.
The robot detects the position of the virtual wall through the virtual wall detection device 304. When the virtual wall is detected, the control unit steers the robot away from it; after a period of time, or once the detection device 304 no longer senses the virtual wall, the robot turns back toward it. Repeating this produces the effect of moving along the outside of the virtual wall.
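The steer-away/steer-back behavior just described can be sketched as a simple bang-bang controller. The sensor and drive interfaces below are hypothetical stand-ins for the detection device 304 and driving unit 303; the step count and turn values are arbitrary assumptions for illustration.

```python
def follow_virtual_wall(senses_wall, drive, record_pose, steps=1000):
    """Bang-bang wall follower.

    senses_wall() -> bool : True while the detection device senses the wall
    drive(turn)           : advance one step, veering away (+1) or back (-1)
    record_pose()         : record the current position as a sensing point Ki
    """
    for _ in range(steps):
        if senses_wall():
            record_pose()   # detection-device position -> geo-fence info
            drive(turn=+1)  # veer away from the wall
        else:
            drive(turn=-1)  # veer back toward the wall

# Toy 1-D simulation: "distance to wall" grows when veering away and shrinks
# when veering back; the wall is sensed while the distance is below 1.0 m.
dist = [0.5]
recorded = []
follow_virtual_wall(
    senses_wall=lambda: dist[0] < 1.0,
    drive=lambda turn: dist.__setitem__(0, dist[0] + 0.3 * turn),
    record_pose=lambda: recorded.append(dist[0]),
    steps=20,
)
```

In the toy run the distance oscillates around the sensing boundary, which is exactly the skirting motion along the outside of the virtual wall; every recorded pose was taken while the wall was being sensed.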
The robot is also equipped with a positioning device 306, which may be an encoder-based odometer, a visual odometer, laser positioning, or the like, and which gives the robot's position relative to a fixed coordinate origin. The positioning device 306 may be fixed near the virtual wall detection device 304; when the device 304 detects the virtual wall, the control unit 302 obtains the position of the detection device from the positioning device 306 and records it as geo-fence information, denoting the recorded point a sensing point Ki. If the accuracy requirement for the geo-fence area is not high, the positioning device can be fixed anywhere on the robot.
The process of acquiring geo-fence information by the robot may further include operation S213:
in operation S213, in response to the turning angle of the robot movement direction exceeding a preset angle threshold, position information of the virtual wall detection apparatus is recorded as geo-fence information.
Specifically, when the boundary of the geo-fence area has a corner, the robot's direction of motion changes there. When the robot's heading changes sharply, that is, when the turning angle of the moving direction exceeds a certain threshold, the control unit 302 records the position of the virtual wall detection device 304 (or of the robot) at that moment as geo-fence information, denoting the point an inflection point Ci.
In embodiments of the disclosure, measuring and recording the robot's turning points captures the turning angles of the geo-fence boundary, so the boundary features of the geo-fence area can be extracted more accurately.
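The inflection-point rule can be sketched as a pass over a recorded trajectory: compare the heading into and out of each point and keep the points where the turn exceeds a threshold. The function name and the 45° threshold are assumptions for illustration; the patent fixes neither.

```python
import math

def inflection_points(path, angle_threshold_deg=45.0):
    """Return points of `path` [(x, y), ...] where the heading turns sharply."""
    thresh = math.radians(angle_threshold_deg)
    points = []
    for i in range(1, len(path) - 1):
        (x0, y0), (x1, y1), (x2, y2) = path[i - 1], path[i], path[i + 1]
        h_in = math.atan2(y1 - y0, x1 - x0)    # heading into point i
        h_out = math.atan2(y2 - y1, x2 - x1)   # heading out of point i
        # Wrap the heading difference into (-pi, pi] before thresholding.
        turn = abs(math.atan2(math.sin(h_out - h_in), math.cos(h_out - h_in)))
        if turn > thresh:
            points.append(path[i])             # record as inflection point Ci
    return points
```

On an L-shaped trajectory such as (0,0) → (2,0) → (2,2), only the corner at (2,0) turns by 90° and is recorded; the collinear points are not.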
Fig. 5 schematically illustrates a display result diagram of sensing points and inflection points according to an embodiment of the present disclosure.
As shown in fig. 5, after recording stops, the recorded sensing points Ki and inflection points Ci are displayed on the display and input unit 305, or sent to another device with input and display functions. The user checks whether the resulting geo-fence area is satisfactory; if not, the process can return to operation S211 to record again, or the user can adjust points with large deviations directly on the display. Once each recorded point is confirmed, the robot or the external device completes the geo-fence setting.
In the embodiment of the present disclosure, the recorded location information may be used as geo-fence information, or graphic information or picture information may be obtained according to a plurality of pieces of location information, and the graphic information or the picture information may be used as the geo-fence information.
The geo-fence information may include: one or more of position information, graphic information, and picture information.
Fig. 6 schematically illustrates geofence information, in accordance with an embodiment of the present disclosure.
As shown in fig. 6, the geo-fence information may be location information, such as the respective sensing points Ki and inflection points Ci obtained through the above steps, such as the points 501 shown in fig. 6.
The geofence information may also be graphic information: for example, features are extracted from the recorded sensing points and inflection points, straight lines and shapes are fitted, and a closed graphic 502 or an open graphic 503 is constructed; or the robot directly records the trajectory of its movement along the virtual wall; or the user draws a graphic directly on the display device.
The geofence information may also be picture information, e.g., a piece of shaded area 504 enclosed by a certain closed figure.
The resulting location points, graphics and pictures can be displayed on the display and input unit 305 of the robot or an external device, and the user can zoom, rotate, fine tune, etc. the geo-fence information.
In the embodiment of the disclosure, different geometric forms (points, lines and surfaces) can be selected as the geo-fence information marks on the electronic map according to the geometric features of the geo-fence area, so that the area setting is more accurate and convenient.
In addition, if the user wishes to treat a certain room as a restricted area that the robot cannot enter, the door of the room can simply be closed before the first work run or map exploration. The door is then marked as an obstacle area, the room is enclosed by obstacle areas, and the robot will not enter the room during subsequent work.
In the embodiment of the disclosure, a robot equipped with a virtual wall detection device measures the position of the geo-fence in the field, so the geo-fence area can be marked on the map more accurately and the robot can make more accurate judgments during subsequent cleaning operations. This method of acquiring geo-fence information through the robot is suitable when the position of the geo-fence area on the map cannot otherwise be determined accurately. In addition, the first time the robot works it needs to build a room map, and this method can mark the geo-fence area during map construction.
According to an embodiment of the disclosure, the virtual wall includes a signal generating device or a contact feedback device, and detecting the virtual wall by the virtual wall detection device includes: the virtual wall detection device sensing a signal sent by the signal generating device; or the magnitude of the signal sensed by the virtual wall detection device exceeding a threshold; or the virtual wall detection device obtaining feedback information from the contact feedback device.
The signal generating device or contact feedback device of the virtual wall may include: an infrared transmitter, a magnetic stripe, a radio wave transmitter, an electronic tag, a color marker, a metal pole piece, and the like. The virtual wall detection device 304 may be a virtual wall detection sensor, including: an infrared receiving tube, a magnetic sensor, a radio wave receiver, a Hall sensor, an image capturing device, a metal contact point, a pressure deformation sensor, and the like. The virtual wall detection device 304 can sense a signal emitted from the virtual wall or sense contact with the virtual wall.
Fig. 7 schematically shows a block diagram of a robot according to an embodiment of the disclosure.
As shown in fig. 7, a plurality of virtual wall detection sensors may be fixed to the robot and arranged uniformly along its edge area; correspondingly, a plurality of positioning devices 306 may also be provided. Each virtual wall detection sensor senses the signal sent by the signal generating device, or receives the feedback signal of the contact feedback device, and converts the received or sensed physical signal into an electrical or digital signal transmitted to the control unit 302. When the control unit 302 determines that a virtual wall detection sensor has sensed the virtual wall, it obtains and records the position information of that sensor from the positioning device 306. When there are multiple virtual wall detection sensors, the first sensor to sense the signal, or the first whose sensed signal exceeds the threshold, is taken as the reference.
For example, if the virtual wall is an infrared transmitter, the virtual wall detection device 304 is an infrared receiving tube. When the infrared receiving tube senses the infrared signal transmitted by the infrared transmitter, indicating that the robot has entered the virtual wall area, it sends an electrical signal to the control unit 302; the control unit 302 records the position of the infrared receiving tube at that moment and controls the driving unit 303 to move the robot away from the virtual wall.
If the virtual wall is a magnetic stripe, the virtual wall detection device 304 is a magnetic sensor that detects the magnetic field intensity of the magnetic stripe. The field intensity indicates the distance between the robot and the virtual wall. The magnetic sensor converts the sensed field intensity into an electrical signal and transmits it to the control unit 302, which compares it with a threshold; when the signal exceeds the threshold, the robot is considered to have entered the virtual wall area. Alternatively, the magnetic sensor itself compares the signal with the threshold and sends a high level (1) or low level (0) to the control unit: high level 1 when the signal exceeds the threshold, low level 0 when it is below. When the signal exceeds the threshold, the control unit 302 records the position of the magnetic sensor and controls the driving unit 303 to move the robot away from the virtual wall; when the signal is below the threshold, the control unit 302 can use the sensor's analog value to control the robot's movement near the virtual wall.
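The threshold comparison described for the magnetic sensor can be sketched in a few lines of Python. This is an illustrative sketch only; the function name and return convention are hypothetical, but the logic follows the text: the sensed value is compared with a threshold, and a high level (1) or low level (0) is produced accordingly.

```python
def magnetic_wall_check(field_strength: float, threshold: float):
    """Return (entered_wall, level): whether the sensed magnetic field
    exceeds the threshold (the robot is then considered to be inside the
    virtual wall area), plus the digital level (1 above threshold, 0 below)."""
    entered = field_strength > threshold
    return entered, 1 if entered else 0
```

A control loop would record the sensor position and steer away from the wall whenever `entered` is true, and could use the raw analog value for close-range motion otherwise.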
In the embodiment of the disclosure, a suitable virtual wall and a virtual wall detection device can be selected according to different site conditions and precision requirements, and the signal generation device or the contact feedback device can be reused without being installed and fixed, so that the operation is simpler and more convenient.
According to an embodiment of the present disclosure, the process of acquiring geo-fence information by a robot may further include operation S214:
in operation S214, stopping recording the position information in response to the robot moving along the virtual wall for one turn; or stopping recording the position information in response to the robot moving to the end point of the virtual wall along the virtual wall; or stopping recording the position information in response to the movement time of the robot reaching a preset time length.
Specifically, in the running process of the robot, it is necessary to determine whether the geofence information is recorded completely, so as to stop the recording function of the robot and start executing the geofence setting method.
If the geofence boundary is a closed figure, then when the robot has moved one full circle along the virtual wall from the starting point and returned to it, the recording of geofence information is considered complete and the recording function is closed. The end condition may also be configured as a predetermined duration after the robot begins moving along the outside of the virtual wall; or, when the virtual wall is a non-closed figure such as a straight line, as reaching the end point of the virtual wall, e.g., the end point of the straight line.
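The three end-of-recording conditions of operation S214 can be combined into one predicate. The following Python sketch is illustrative only (function name, tolerance, and parameters are hypothetical): loop closure back to the start point, arrival at the end point of an open wall, or an elapsed-time limit.

```python
import math

def should_stop_recording(positions, closed_boundary, wall_end=None,
                          elapsed_s=0.0, max_duration_s=None, tol=0.10):
    """Check the three end conditions of operation S214 (illustrative).

    positions       -- recorded (x, y) positions, oldest first
    closed_boundary -- True when the virtual wall is a closed figure
    wall_end        -- end point of an open (e.g. straight-line) wall
    tol             -- distance tolerance in meters (assumed value)
    """
    if max_duration_s is not None and elapsed_s >= max_duration_s:
        return True                                   # preset time length reached
    if not positions:
        return False
    cur = positions[-1]
    if closed_boundary and len(positions) > 2 and math.dist(cur, positions[0]) <= tol:
        return True                                   # moved one full circle, back at start
    if wall_end is not None and math.dist(cur, wall_end) <= tol:
        return True                                   # reached the end point of the wall
    return False
```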
According to an embodiment of the present disclosure, the acquiring of the geo-fence information in operation S200 may further include operation S220:
in operation S220, geo-fence information is acquired from an input module of a first device. I.e., geo-fence information can be obtained from user input.
Specifically, the first device may be the robot described above, provided the robot has an input module such as the display and input unit 305 described above; it may also be any electronic device with an input module, such as a computer with an input device or a mobile phone with an input screen.
The first device includes an input module that can directly collect information input by the user; that is, the user can directly input geofence information at the input module of the first device, for example by entering several feature points or feature lines on a known map, and the input module then transmits this information to other modules of the first device to mark the geofence information. Alternatively, the input module of the first device may transmit the acquired geo-fence information to a connected second device for marking; the second device may be, for example, a server. For instance, after a mobile phone captures the user input, the information is transmitted to a server, which marks the geo-fence information on the map.
The method of acquiring geo-fence information through user input is suitable when a well-determined geo-fence area is visible on the map. For example, if toy stacking points appear on the map as dense obstacle grids, and the user can identify that a portion of those grids represents a toy stacking point, a recording point can be input centered on those grids. In this way the geo-fence area is marked on a known map without the robot having to move around the virtual wall in advance and record sensing points along the way: the user simply inputs points, lines, or similar information on the input module of the device according to what the map displays, which is more convenient and faster.
According to an embodiment of the present disclosure, operation S220 may, for example, acquire a plurality of recording points from user input: the user adds a plurality of recording points on a map, and either the points themselves are used as the geo-fence information, or the points are connected to form a graphic or picture that serves as the geo-fence information. Specifically, operation S220 may include operations S221 to S223:
in operation S221, a known map is acquired.
The input and display unit 305 of the robot displays the map, or transmits it via WiFi or Bluetooth to an external device, such as a tablet computer, for display. The map here is a known, fully constructed map.
The robot converts the map information into grid form. Each grid is expressed as a square cell of a certain resolution (for example, 5 cm × 5 cm), and each cell can store multiple layers of map information, including an obstacle map, a room area segmentation map, a geo-fence map, and the like. The obstacle map includes two different kinds of grid: a. walkable grids; b. obstacle grids. Walkable grids are shown in white and obstacle grids in black, the latter including walls and obstacles in the room, such as a toy dump.
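The multi-layer grid representation just described can be sketched as follows. This is a minimal Python illustration under the stated 5 cm resolution; the class and attribute names are hypothetical, not the patent's implementation.

```python
RESOLUTION_M = 0.05  # each cell is 5 cm x 5 cm, as in the example above

class LayeredGridMap:
    """Minimal sketch of a multi-layer grid map (names are illustrative)."""
    WALKABLE, OBSTACLE = 0, 1

    def __init__(self, width: int, height: int):
        # one 2-D layer per kind of map information
        self.obstacle = [[self.WALKABLE] * width for _ in range(height)]
        self.room = [[0] * width for _ in range(height)]          # room segmentation labels
        self.geofence = [[False] * width for _ in range(height)]  # geo-fence layer

    def world_to_cell(self, x_m: float, y_m: float):
        """Map a metric position onto a grid cell index."""
        return int(x_m / RESOLUTION_M), int(y_m / RESOLUTION_M)
```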
In operation S222, a geo-fence recording point is added.
Fig. 8 schematically illustrates geofence recording points, in accordance with an embodiment of the present disclosure.
As shown in fig. 8, the user can add recording points Ri near an obstacle according to the displayed map. For example, if the user does not want the robot to approach the area where toys are stacked, recording points Ri can be added around that area. As another example, if the user does not want the robot to enter the rightmost room shown in fig. 8, two or more recording points Ri can be placed at the entrance of the room. To add a recording point, the user can click the add-point button on the external device or the robot, then drag the geo-fence recording point into position.
In operation S223, if the addition is to be continued, operation S222 is repeated, and if the addition is completed, confirmation is selected. The recorded individual recording points Ri are displayed by the input and display unit 305 of the robot or an external device, and the user can adjust the points having a large deviation according to the display result.
The plurality of recording points Ri may be marked on the electronic map as the geo-fence information, or the plurality of recording points Ri may be connected to form a graphic or a picture, and the obtained graphic or picture may be marked on the electronic map as the geo-fence information.
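Connecting the recording points Ri into a graphic, as described above, amounts to building a polyline and optionally closing it back to the first point. A minimal Python sketch (function name and conventions are hypothetical):

```python
def connect_recording_points(points, close=True):
    """Connect recording points Ri in input order into a polyline; when
    `close` is True and there are at least 3 points, close the figure by
    appending the first point at the end."""
    poly = list(points)
    if close and len(poly) >= 3 and poly[0] != poly[-1]:
        poly.append(poly[0])
    return poly
```

The resulting closed polyline can then be treated as graphic information, or its interior filled to yield picture information.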
According to an embodiment of the present disclosure, in operation S220 the input module of the first device may also include, for example, a plurality of preset graphics; obtaining geofence information from the input module of the first device then includes: obtaining one or more preset graphics from the input module as the geo-fence information.
In particular, a plurality of selectable graphics or pictures may be provided at the input module of the first device.
Fig. 9 schematically illustrates a schematic diagram of an input module including a preset graphic according to an embodiment of the present disclosure.
As shown in fig. 9, the input module of the first device may preset a plurality of selectable graphics or pictures, and the graphics may be in any shape, such as a circle, a rectangle, a triangle, an ellipse, or an irregular polygon. When the user sets the geofence, the user may select a suitable graphic or picture as the geofence information according to actual situations, and operation S220 may include operations S224 to S225:
in operation S224, a known map is acquired. Referring to operation S221 specifically, details are not repeated here.
In operation S225, a preset graphic is acquired as geo-fence information from an input module of the first device.
Specifically, the user may select a geometric shape on the input module and drag it to the area where it should be placed. To continue adding, the selection steps are repeated; when adding is finished, confirmation is selected. The user can also zoom, rotate, and fine-tune the geofence, and record it on the map after confirmation. The map containing the geofence information can then be transmitted to the robot body, which stores the map in a storage medium, in this example FLASH.

According to an embodiment of the present disclosure, obtaining geofence information from an input module of a first device may also include: acquiring a starting point and an end point from the input module of the first device; and obtaining a graphic or picture as the geo-fence information according to the starting point, the end point, and the grid information on the electronic map.
According to an embodiment of the present disclosure, the user may further obtain the geo-fence information by setting a start point and an end point, and operation S220 may include operations S226 to S228:
in operation S226, a known map is acquired. Referring to operation S221 specifically, details are not repeated here.
In operation S227, a start point and an end point are acquired from an input module of the first device.
Specifically, the user may add a start point and an end point at an input module of the first device.
Fig. 10 schematically shows a first schematic of a starting point and an ending point according to an embodiment of the disclosure.
As shown in fig. 10, if the user does not want the robot to enter the room located at the lower left corner shown in fig. 10, the user can click both ends of the door of the room on the map as the start point Bi and the end point Ei.
Fig. 11 schematically illustrates a second schematic of a start point and an end point according to an embodiment of the disclosure.
As shown in fig. 11, if the geo-fence is to be set in a corner area, a start point Bi and an end point Ei may be set on both side walls of the corner.
Of course, the setting of the start point Bi and the end point Ei is not limited to the two manners, and other setting manners may be adopted.
In operation S228, a graphic or picture is obtained as geo-fence information according to the start point, the end point, and the grid information on the electronic map.
Specifically, a graphic or picture is formed as the geo-fence information according to the positions of the starting point and end point and the grid information on the electronic map, so that the geo-fence and the obstacle grids together form a closed area. As shown in fig. 10, connecting the starting point Bi and the ending point Ei yields a straight-line graphic that can be used as the geo-fence information; this line and the walls of the lower-left corner room form a closed area, so the robot cannot pass into it. As shown in fig. 11, connecting the start point Bi and the end point Ei likewise yields a straight-line graphic usable as geo-fence information; the line forms a closed figure with the two side walls, so the robot cannot enter the corner. The user can then zoom, stretch, and fine-tune the geo-fence, record it on the map after confirmation, and transmit the map containing the geo-fence information to the robot body, which stores it in a storage medium, in this example FLASH.
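Turning the straight line between the start point Bi and the end point Ei into grid cells is a standard line-rasterization step; Bresenham's algorithm is one common choice (the patent does not name a specific algorithm, so this is an illustrative sketch):

```python
def bresenham(p0, p1):
    """Grid cells on the segment from start point Bi to end point Ei,
    using Bresenham's line algorithm on integer cell coordinates."""
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
    err = dx - dy
    cells = []
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
    return cells
```

Marking the returned cells as geo-fence, together with the adjacent obstacle (wall) grids, yields the closed area described above.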
According to an embodiment of the present disclosure, the geofence setting method may further include operation S400: adjusting the geo-fence information;
the acquired information can be modified and adjusted after the geo-fence information is acquired. For example, if the geo-fence information is obtained through robot walking detection, the robot displays the recording result in the display and input unit 305 after or during the recording, and the user can adjust the recording result, or the robot sends the recorded position information to the mobile phone through the network to display after or during the recording, and the user can adjust the recording result. In the case of obtaining the geo-fence information through the user input, if the first device includes the display module and the input module, and can display the obtained geo-fence information, the modification and adjustment may be performed on the first device, for example, the first device is a mobile phone, and after the mobile phone obtains the geo-fence information input by the user, the geo-fence information is displayed on a display screen of the mobile phone, and the user may modify and adjust the geo-fence information on an input area such as the display screen according to a display result.
The adjustment includes moving, zooming, stretching, and rotating points, lines, or planes of the geofence information.
According to an embodiment of the present disclosure, the geo-fence information includes one or more of location information, graphic information, and picture information; marking the geo-fence information on the electronic map in operation S300 may include at least one of operations S310 to S350:
in operation S310, projecting the geo-fence information onto an electronic map, the electronic map being a grid map;
in operation S320: marking a grid located inside the picture projection as a geo-fenced area;
in operation S330: marking a grid located outside the picture projection as a geo-fenced area;
in operation S340: marking the grid occupied by the graphic projection and the grids nearby as the geo-fence area;
in operation S350: the grid occupied by the position point projection and the grids in the vicinity thereof are marked as a geo-fenced area.
As can be seen from the above description, the geo-fence information may be a point, a line, or a plane, and may be, for example, a sensing point Ki and an inflection point Ci acquired by a robot, or a recording point Ri, a figure, or a closed area acquired from an input module of the first device, or a figure, a picture, or the like formed by connecting the sensing point Ki, the inflection point Ci, or the recording point Ri.
In an embodiment of the present disclosure, the geofence may be a motion-restricted area, and its relationship to the robot may be either of two kinds: the robot cannot enter the area to work, or the robot's working area is confined within the geo-fence, i.e., the robot body cannot cross the geo-fence boundary while working. To meet different user requirements, a marking method can be selected according to the characteristics of the geo-fence region.
The geofence information is projected onto a grid map as shown in fig. 3.
If the geo-fence information is picture information representing a slice of area, and the inside of the slice of area is set as a forbidden area, and the robot cannot enter the forbidden area for operation, marking a grid covered by the projection of the picture on a map as the geo-fence area; and if the area outside the area is taken as a forbidden area, marking the grid outside the grid covered by the projection of the picture on the map as a geo-fence area.
If the geo-fence information is graphic information, i.e., line information, the line is widened; the width of the line may be set equal to the radius of the robot. The grids covered by the widened line are marked as the geo-fence area, and the robot cannot cross the graphic while working.
And if the geo-fence information is position point information, marking the grid covered by the position point projection or the adjacent grid as a geo-fence area.
The geo-fence grid may be overlaid on top of the barrier grid, which then has both barrier and geo-fence attributes.
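The marking of point and line information, including the line widening described above, can be sketched as a small dilation over grid cells. This Python snippet is illustrative only (function name and parameters are hypothetical); `widen` would be chosen from the robot radius expressed in cells.

```python
def mark_geofence_cells(width, height, cells, widen=0):
    """Mark the given cells, plus all neighbors within `widen` cells
    (e.g. a line widened to the robot radius), as geo-fence on a
    boolean grid of the given size."""
    fence = [[False] * width for _ in range(height)]
    for (cx, cy) in cells:
        for dy in range(-widen, widen + 1):
            for dx in range(-widen, widen + 1):
                x, y = cx + dx, cy + dy
                if 0 <= x < width and 0 <= y < height:
                    fence[y][x] = True
    return fence
```

For picture information, the same grid would instead be filled inside (or outside) the projected region, and the result can be overlaid on the obstacle layer as described above.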
According to an embodiment of the present disclosure, the method may further include operations S500 to S700.
In operation S500, an electronic map marked with a geo-fenced area is acquired.
When the electronic map marked with the geo-fence information is obtained, the robot reads out the electronic map and the geo-fence information, and starts the operation.
In operation S600, a robot position is located during the movement of the robot.
The robot is controlled by the control unit 302 during the operation by acquiring the position information of the robot in real time.
In operation S700, the robot makes a preset behavior in case that the robot position overlaps the geo-fenced area.
The position information of the robot is projected onto the electronic map in real time, and it is detected whether the projected position overlaps the marked geo-fence area. If it does, the robot is considered to have entered the geo-fence area and reacts according to the preset behavior.
In the embodiment of the disclosure, the robot can plan a path better by using the electronic map marked with the geo-fence area, time overhead caused by response actions on the geo-fence boundary is reduced, more accurate judgment can be made during operation, the operation speed is increased, and the operation efficiency is improved.
According to an embodiment of the present disclosure, the overlapping of the robot position with the geo-fenced area includes: the grid occupied by the projection of the robot position on the map overlaps the grid marked as a geofenced area on the map.
Specifically, the robot projects the position information to an electronic map, acquires a grid covered by the position information, detects whether the grid overlaps with a grid marked as a geo-fenced area, and if so, considers that the robot enters the geo-fenced area.
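The grid-level overlap test described above reduces to checking whether any cell of the robot's projected footprint is marked in the geo-fence layer. An illustrative Python sketch (names are hypothetical; the fence grid is a list of rows as in the earlier sketches):

```python
def robot_overlaps_geofence(fence, robot_cells):
    """True if any grid cell covered by the robot's projected footprint
    is marked as geo-fence; out-of-map cells are ignored."""
    height, width = len(fence), len(fence[0])
    return any(
        0 <= y < height and 0 <= x < width and fence[y][x]
        for (x, y) in robot_cells
    )
```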
In the embodiment of the disclosure, corresponding to the setting method of the geo-fence, the real-time detection process of the robot during operation is also specific to each grid, so that the detection result is more sensitive, and the robot reacts more quickly when encountering the geo-fence area.
According to an embodiment of the present disclosure, the preset action includes: one or more of stop motion, retard motion, transition motion direction, move away from the geofenced area, move along the geofenced boundary.
Fig. 12 schematically illustrates a responsive action of a robot entering a geo-fenced area in accordance with an embodiment of the present disclosure.
As shown in fig. 12, when the robot enters the geo-fence area, it decelerates and turns around, performing avoidance, distancing, and detour movements with respect to the geo-fence. Alternatively, the robot may be set to stop moving upon entering the geofenced area and wait for further instructions before continuing work; to decelerate when approaching the geofence, for example near a toy stacking area, to avoid collisions; or to move along the geofence boundary.
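The preset responses listed above can be sketched as a simple dispatch over a robot state. This Python snippet is purely illustrative; the action names, state keys, and deceleration factor are assumptions, not the patent's implementation.

```python
def apply_preset_behavior(action, state):
    """Apply one of the preset responses to a simple robot state dict
    (keys and values here are illustrative)."""
    if action == "stop":
        state["speed"] = 0.0                      # stop and await instructions
    elif action == "decelerate":
        state["speed"] *= 0.5                     # slow down near the fence
    elif action == "turn":
        state["heading_deg"] = (state["heading_deg"] + 180) % 360  # turn away
    elif action == "follow_boundary":
        state["mode"] = "boundary_following"      # move along the fence boundary
    return state
```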
The user can set the response action of the robot when encountering the geo-fenced area according to specific conditions, so that the robot is more intelligent.
Another embodiment of the present disclosure provides a geo-fence setting system, comprising: a device to which a virtual wall detection device is fixed, used for recording geo-fence information; the geo-fence setting system obtains the geo-fence information from the device and marks a geo-fence region on the electronic map according to that information.
Specifically, the apparatus may be a robot, and the robot is configured to record geofence information and includes a robot body, and a control unit, a driving unit, and a virtual wall detection device installed on the robot body, according to an embodiment of the present disclosure, the robot structure and the system framework to which the virtual wall detection device is fixed may refer to fig. 1 to 12 and the above description about the corresponding drawings, and may perform operations S200 to S700 described above, which are not described herein again.
Fig. 13 schematically illustrates a geofence setting system block diagram, in accordance with an embodiment of the present disclosure.
As shown in fig. 13, the geofence setting system 900 includes a feature module 910, a construction module 920, and a map marking module 930.
The feature module 910 is configured to obtain the geo-fence information, that is, the feature points, graphics, or pictures representing the geometric features of the geo-fence. According to an embodiment of the present disclosure, the geo-fence information may be obtained from the robot as described above, or from information input by a user on an external device. The feature module 910 may perform, for example, operation S200 described above, which is not repeated here.
The construction module 920 is used to construct the position information into graphic or picture information. If the geo-fence information is location point information, it can be constructed into a closed graph or a non-closed graph through the construction module 920, and the graph is widened. The area enclosed by the closed figure can also be constructed as picture information.
The map marking module 930 is configured to mark the geo-fence information on a map, map the geo-fence information on an electronic map according to the location information, the graphic information, and the picture information obtained by the feature module 910 or the construction module 920, and mark a corresponding grid as a geo-fence area. The map marking module 930 may perform the operation S300 described above, for example, and is not described herein again.
It is to be appreciated that the feature module 910, the construction module 920, and the map marking module 930 may be combined into one module, or any one of them may be split into multiple modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the others and implemented in one module. According to an embodiment of the present disclosure, at least one of the feature module 910, the construction module 920, and the map marking module 930 may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system in a package, an Application Specific Integrated Circuit (ASIC), or in hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or in a suitable combination of software, hardware, and firmware implementations. Alternatively, at least one of the feature module 910, the construction module 920, and the map marking module 930 may be implemented at least partially as a computer program module which, when executed by a computer, performs the functions of the respective module.
Fig. 14 schematically illustrates a block diagram of a computer system for geofence settings, in accordance with an embodiment of the present disclosure. The computer system illustrated in FIG. 14 is only one example and should not impose any limitations on the scope of use or functionality of embodiments of the disclosure.
As shown in fig. 14, a computer system 1000 according to an embodiment of the present disclosure includes a processor 1001 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1002 or a program loaded from a storage section 1008 into a Random Access Memory (RAM) 1003. The processor 1001 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)). The processor 1001 may also include on-board memory for caching purposes, and may include a single processing unit or multiple processing units for performing the different actions of the method flow described with reference to fig. 2 in accordance with embodiments of the present disclosure.
In the RAM 1003, various programs and data necessary for the operation of the system 1000 are stored. The processor 1001, ROM 1002, and RAM 1003 are connected to each other by a bus 1004. The processor 1001 performs the various operations of the geo-fence setting method described above with reference to fig. 2 by executing programs in the ROM 1002 and/or the RAM 1003. Note that the programs may also be stored in one or more memories other than the ROM 1002 and the RAM 1003; the processor 1001 can likewise perform the various operations of the geo-fence setting method described above with reference to fig. 2 by executing programs stored in those one or more memories.
According to an embodiment of the present disclosure, the system 1000 may also include an input/output (I/O) interface 1005, which is likewise connected to the bus 1004. The system 1000 may further include one or more of the following components connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output section 1007 including a display such as a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), and a speaker; a storage section 1008 including a hard disk and the like; and a communication section 1009 including a network interface card such as a LAN card or a modem. The communication section 1009 performs communication processing via a network such as the Internet. A drive 1010 is also connected to the I/O interface 1005 as necessary. A removable medium 1011, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1010 as necessary, so that a computer program read from it can be installed into the storage section 1008 as needed.
According to an embodiment of the present disclosure, the method described above with reference to the flow chart may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication part 1009 and/or installed from the removable medium 1011. The computer program performs the above-described functions defined in the system of the embodiment of the present disclosure when executed by the processor 1001. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
It should be noted that the computer-readable media shown in the present disclosure may be computer-readable signal media or computer-readable storage media or any combination of the two. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
According to embodiments of the present disclosure, a computer-readable medium may include the ROM 1002 and/or the RAM 1003 described above, and/or one or more memories other than the ROM 1002 and the RAM 1003.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
As another aspect, the present disclosure also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may be separate and not incorporated into the apparatus. The computer-readable medium carries one or more programs which, when executed by a device, cause the device to perform: acquiring geo-fence information; and marking a geo-fence area on an electronic map according to the geo-fence information, wherein the geo-fence area marked on the electronic map can be used to control the range of motion of the robot. Acquiring the geo-fence information includes acquiring the geo-fence information from the robot, and the robot acquires the geo-fence information as follows: the robot, provided with a virtual wall detection device, moves near a virtual wall, the virtual wall being arranged in an area where a geo-fence needs to be established; and in response to the virtual wall detection device detecting the virtual wall, position information of the virtual wall detection device is recorded as the geo-fence information, or graphic information or picture information is obtained from the position information and taken as the geo-fence information.
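The recording step described above can be sketched in Python. This is not part of the patent; the trajectory, the sensing predicate, and the 0.1-unit sensing distance are illustrative assumptions:

```python
def record_geofence(trajectory, senses_wall):
    """Record the detector's position each time the virtual wall is sensed.

    trajectory: ordered (x, y) positions of the virtual wall detection device.
    senses_wall: hypothetical predicate (x, y) -> bool, True when the device
    detects the virtual wall at that position.
    Returns the ordered recorded positions, i.e. the geo-fence information.
    """
    fence_points = []
    for x, y in trajectory:
        if senses_wall(x, y):
            fence_points.append((x, y))
    return fence_points

# Illustrative wall along the line y = 0, sensed within 0.1 of it.
path = [(0.0, 0.05), (1.0, 0.5), (2.0, 0.02), (3.0, -0.03)]
fence = record_geofence(path, lambda x, y: abs(y) < 0.1)
```

The recorded point list can then be connected into graphic or picture information, as the later paragraphs describe.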
According to an embodiment of the present disclosure, the virtual wall includes a signal generating device or a contact feedback device, and the virtual wall detection device detecting the virtual wall includes: the virtual wall detection device sensing a signal sent by the signal generating device; or the signal magnitude sensed by the virtual wall detection device exceeding a threshold value; or the virtual wall detection device obtaining feedback information from the contact feedback device.
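The three detection cases can be combined in a minimal sketch (the default threshold value is an illustrative assumption, not taken from the patent):

```python
def detects_wall(signal_sensed=False, signal_magnitude=0.0,
                 contact_feedback=False, threshold=0.5):
    """True when the virtual wall detection device detects the virtual wall:
    a signal from the signal generating device is sensed, the sensed signal
    magnitude exceeds a threshold, or contact feedback is received."""
    return signal_sensed or signal_magnitude > threshold or contact_feedback
```

Any one of the three conditions suffices, matching the "or" structure of the text.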
According to an embodiment of the present disclosure, the signal generating device or the contact feedback device includes: an infrared transmitter, a magnetic stripe, a radio wave transmitter, an electronic tag, a color card, or a metal pole piece; the virtual wall detection device includes: an infrared receiving tube, a magnetic sensor, a radio wave receiver, a Hall sensor, an image capturing device, a metal contact point, or a pressure deformation sensor. The virtual wall detection device can sense a signal sent by the virtual wall or sense contact with the virtual wall.
According to an embodiment of the present disclosure, the method further comprises: stopping recording the position information in response to the robot completing one full loop along the virtual wall; or stopping recording the position information in response to the robot moving along the virtual wall to its end point; or stopping recording the position information in response to the movement time of the robot reaching a preset duration.
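The three stop conditions can be sketched as one predicate. The tolerances and the preset duration below are illustrative assumptions, not values from the patent:

```python
def should_stop_recording(points, wall_end, elapsed,
                          loop_tol=0.2, end_tol=0.2, max_duration=600.0):
    """Stop recording when the path closes into one full loop, the detector
    reaches the end point of the virtual wall, or the movement time reaches
    a preset duration. All tolerances and the duration are illustrative."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    if len(points) > 2 and dist(points[0], points[-1]) < loop_tol:
        return True                    # robot completed one loop
    if points and dist(points[-1], wall_end) < end_tol:
        return True                    # robot reached the wall's end point
    return elapsed > max_duration      # preset movement time reached
```

A caller would evaluate this after each newly recorded point and stop appending once it returns True.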
According to an embodiment of the present disclosure, the method further comprises adjusting the geo-fence information. Adjusting the geo-fence information includes performing moving, scaling, stretching, or rotating operations on the position information, the graphic information, or the picture information.
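The four adjustment operations on recorded position points can be expressed as one affine sketch. The scale-then-rotate-then-translate ordering is one reasonable convention; the patent does not prescribe one:

```python
import math

def adjust_fence(points, dx=0.0, dy=0.0, sx=1.0, sy=1.0, angle=0.0):
    """Move, scale (stretch when sx != sy), or rotate recorded fence points.

    angle is in radians; rotation is about the origin.
    """
    c, s = math.cos(angle), math.sin(angle)
    adjusted = []
    for x, y in points:
        x, y = x * sx, y * sy                    # scale / stretch
        x, y = x * c - y * s, x * s + y * c      # rotate
        adjusted.append((x + dx, y + dy))        # move
    return adjusted
```

Graphic or picture information derived from the points would be regenerated after such an adjustment.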
According to an embodiment of the present disclosure, the geo-fence information includes one or more of location information, graphic information, and picture information; marking the geo-fence information on an electronic map includes: projecting the geo-fence information onto the electronic map, the electronic map being a grid map; and marking grids located inside the picture projection as the geo-fence area; or marking grids located outside the picture projection as the geo-fence area; or marking the grids occupied by the graphic projection and nearby grids as the geo-fence area; or marking the grids occupied by the position-point projection and nearby grids as the geo-fence area.
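The inside/outside grid-marking cases can be sketched with a standard point-in-polygon test over cell centers. The grid size and resolution are illustrative; a real occupancy grid would also handle map origin and orientation:

```python
def inside_polygon(px, py, poly):
    """Even-odd ray-casting point-in-polygon test."""
    inside = False
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        # Short-circuit guards against division by zero on horizontal edges.
        if (y1 > py) != (y2 > py) and \
           px < x1 + (py - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

def mark_geofence(width, height, cell, fence_poly, inside_is_fenced=True):
    """Project a fence polygon onto a width x height grid map and mark the
    cells whose centers fall inside (or, if inside_is_fenced is False,
    outside) the projection."""
    fenced = set()
    for gx in range(width):
        for gy in range(height):
            cx, cy = (gx + 0.5) * cell, (gy + 0.5) * cell
            if inside_polygon(cx, cy, fence_poly) == inside_is_fenced:
                fenced.add((gx, gy))
    return fenced
```

Flipping `inside_is_fenced` switches between the "inside is fenced" and "outside is fenced" marking modes in the text.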
According to an embodiment of the present disclosure, the method further comprises: acquiring an electronic map marked with a geo-fence area; locating the position of the robot while the robot is moving; and, when the position of the robot overlaps the geo-fence area, causing the robot to perform a preset behavior.
According to an embodiment of the present disclosure, the robot position overlapping the geo-fence area includes: the grid occupied by the projection of the robot position on the map overlapping a grid marked as a geo-fence area on the map.
According to an embodiment of the present disclosure, the preset behavior includes one or more of: stopping, slowing down, changing the direction of motion, moving away from the geo-fence area, and moving along the geo-fence boundary.
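The grid-overlap check and the behavior trigger can be sketched together. The axis-aligned footprint approximation and the choice of "stop" as the triggered behavior are illustrative simplifications:

```python
def footprint_cells(x, y, radius, cell):
    """Grid cells covered by a simplified, axis-aligned robot footprint."""
    gx0, gx1 = int((x - radius) // cell), int((x + radius) // cell)
    gy0, gy1 = int((y - radius) // cell), int((y + radius) // cell)
    return {(gx, gy) for gx in range(gx0, gx1 + 1)
                     for gy in range(gy0, gy1 + 1)}

def preset_behavior(x, y, radius, cell, fenced_cells):
    """Trigger a preset behavior when the robot's projection overlaps any
    grid marked as a geo-fence area. 'stop' stands in for any of the
    listed behaviors (slow down, turn, move away, follow the boundary)."""
    if footprint_cells(x, y, radius, cell) & fenced_cells:
        return "stop"
    return "continue"
```

The `fenced_cells` set would come from the grid-marking step on the electronic map.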
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (10)

1. A geo-fencing setup method comprising:
acquiring geo-fence information, wherein the geo-fence information comprises one or more of graphic information and picture information;
marking a geo-fence area on an electronic map according to the geo-fence information, wherein the geo-fence area marked on the electronic map can be used for controlling the movement range of the robot;
the obtaining geo-fence information includes obtaining geo-fence information from a robot, the robot obtaining geo-fence information by:
the robot, provided with the virtual wall detection device, moves near a virtual wall, the virtual wall being arranged in an area where a geo-fence needs to be established;
in response to the virtual wall detection device detecting the virtual wall, recording position information of the virtual wall detection device, obtaining graphic information or picture information from the position information, and taking the graphic information or the picture information as the geo-fence information;
the taking the graphic information or the picture information as the geo-fence information includes:
acquiring sensing points obtained when the virtual wall detection device detects the virtual wall, inflection points detected when the change angle of the robot's motion direction exceeds a threshold value, and recording points added by a user on the electronic map; and
connecting the sensing points, the inflection points, and the recording points to form a graphic or a picture.
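The inflection-point step in claim 1 can be illustrated as follows (not part of the claim; the 30-degree threshold and the sample path are illustrative assumptions, since the claim leaves the threshold unspecified):

```python
import math

def inflection_points(path, angle_threshold=math.radians(30)):
    """Select path points where the change in the robot's motion direction
    between consecutive segments exceeds a threshold angle."""
    picks = []
    for i in range(1, len(path) - 1):
        ax, ay = path[i][0] - path[i - 1][0], path[i][1] - path[i - 1][1]
        bx, by = path[i + 1][0] - path[i][0], path[i + 1][1] - path[i][1]
        turn = math.atan2(by, bx) - math.atan2(ay, ax)
        turn = abs((turn + math.pi) % (2 * math.pi) - math.pi)  # wrap to [0, pi]
        if turn > angle_threshold:
            picks.append(path[i])
    return picks
```

Connecting these picks together with the sensing points and user-added recording points, in order, yields the fence graphic.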
2. The method of claim 1, wherein the virtual wall comprises a signal generation device or a contact feedback device, and the virtual wall detection device detecting the virtual wall comprises:
sensing, by the virtual wall detection device, a signal sent by the signal generating device; or
the signal magnitude sensed by the virtual wall detection device exceeding a threshold value; or
the virtual wall detection device obtaining feedback information from the contact feedback device.
3. The method of claim 2, wherein:
the signal generating device or the contact feedback device includes: the system comprises an infrared transmitter, a magnetic stripe, a radio wave transmitter, an electronic tag, a color card and a metal pole piece;
the virtual wall detection device includes: the device comprises an infrared receiving tube, a magnetic sensor, a radio wave receiver, a Hall sensor, an image capturing device, a metal contact point and a pressure deformation sensor;
the virtual wall detection device can sense signals sent by the virtual wall or sense that the virtual wall is contacted.
4. The method of claim 1, further comprising the operations of:
stopping recording the position information in response to the robot completing one full loop along the virtual wall; or
stopping recording the position information in response to the robot moving along the virtual wall to its end point; or
stopping recording the position information in response to the movement time of the robot reaching a preset duration.
5. The method of any of claims 1 to 4, further comprising: adjusting the geo-fence information;
the adjusting the geo-fence information comprises: and performing moving, scaling, stretching or rotating operations on the position information, the graphic information or the picture information.
6. The method of claim 1, wherein,
the geo-fence information comprises one or more of location information, graphical information, and picture information;
marking the geofence information on an electronic map comprises:
projecting the geo-fence information onto an electronic map, the electronic map being a grid map; and
marking grids located inside the picture projection as a geo-fence area; or
marking grids located outside the picture projection as a geo-fence area; or
marking the grids occupied by the graphic projection and nearby grids as a geo-fence area; or
marking the grids occupied by the position-point projection and nearby grids as a geo-fence area.
7. The method of claim 1, further comprising:
acquiring an electronic map marked with a geo-fenced area;
positioning the position of the robot in the moving process of the robot;
and when the position of the robot overlaps the geo-fence area, the robot performs a preset behavior.
8. The method of claim 7, wherein the robot position overlapping with a geo-fence area comprises:
the grid occupied by the projection of the robot position on the map overlapping a grid marked as a geo-fence area on the map.
9. The method of claim 7, wherein the preset behavior comprises one or more of: stopping, slowing down, changing the direction of motion, moving away from the geo-fence area, and moving along the geo-fence boundary.
10. A geo-fencing setup system comprising:
a device to which a virtual wall detection device is fixed, for recording geo-fence information, wherein the geo-fence information comprises one or more of graphic information and picture information; and
a geo-fence setting system for obtaining the geo-fence information from the device and marking a geo-fence area on an electronic map according to the geo-fence information;
the obtaining geo-fence information includes obtaining geo-fence information from a robot, the robot obtaining geo-fence information by:
the robot, provided with the virtual wall detection device, moves near a virtual wall, the virtual wall being arranged in an area where a geo-fence needs to be established;
in response to the virtual wall detection device detecting the virtual wall, recording position information of the virtual wall detection device, obtaining graphic information or picture information from the position information, and taking the graphic information or the picture information as the geo-fence information;
the taking the graphic information or the picture information as the geo-fence information includes:
acquiring sensing points obtained when the virtual wall detection device detects the virtual wall, inflection points detected when the change angle of the robot's motion direction exceeds a threshold value, and recording points added by a user on the electronic map; and
connecting the sensing points, the inflection points, and the recording points to form a graphic or a picture.
CN201810453657.1A 2018-05-11 2018-05-11 Geo-fence setting method and method for limiting robot movement Active CN108829095B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810453657.1A CN108829095B (en) 2018-05-11 2018-05-11 Geo-fence setting method and method for limiting robot movement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810453657.1A CN108829095B (en) 2018-05-11 2018-05-11 Geo-fence setting method and method for limiting robot movement

Publications (2)

Publication Number Publication Date
CN108829095A CN108829095A (en) 2018-11-16
CN108829095B true CN108829095B (en) 2022-02-08

Family

ID=64148025

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810453657.1A Active CN108829095B (en) 2018-05-11 2018-05-11 Geo-fence setting method and method for limiting robot movement

Country Status (1)

Country Link
CN (1) CN108829095B (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109571469B (en) * 2018-11-29 2021-01-08 深圳市优必选科技有限公司 Control circuit for robot obstacle avoidance, robot and robot obstacle avoidance method
CN116907496A (en) * 2018-12-14 2023-10-20 广州极飞科技股份有限公司 Method and control device for setting a working route for a working device
CN110236455B (en) * 2019-01-08 2021-04-16 云鲸智能科技(东莞)有限公司 Control method, device and equipment of mopping robot and storage medium
CN112208415A (en) * 2019-07-09 2021-01-12 深圳市安泽智能机器人有限公司 Robot and AGV trolley based carrying method and robot
CN110597253B (en) * 2019-09-05 2022-12-09 珠海一微半导体股份有限公司 Robot control method, chip and laser type cleaning robot
CN112716392B (en) * 2019-10-28 2022-08-23 深圳拓邦股份有限公司 Control method of cleaning equipment and cleaning equipment
CN111060116B (en) * 2019-12-04 2023-07-18 江西洪都航空工业集团有限责任公司 Independent grassland map building system based on vision
CN111240322B (en) * 2020-01-09 2020-12-29 珠海市一微半导体有限公司 Method for determining working starting point of robot movement limiting frame and motion control method
CN113552866B (en) * 2020-04-17 2023-05-05 苏州科瓴精密机械科技有限公司 Method, system, robot and readable storage medium for improving traversal equilibrium performance
CN112050813B (en) * 2020-08-08 2022-08-02 浙江科聪控制技术有限公司 Laser navigation system for anti-riot one-zone mobile robot
CN112363516A (en) * 2020-10-26 2021-02-12 深圳优地科技有限公司 Virtual wall generation method and device, robot and storage medium
CN112596654B (en) * 2020-12-25 2022-05-17 珠海格力电器股份有限公司 Data processing method, data processing device, electronic equipment control method, device, equipment and electronic equipment
CN112802302B (en) * 2020-12-31 2022-11-18 国网浙江省电力有限公司双创中心 Electronic fence method and system based on multi-source algorithm
US11540084B2 (en) * 2021-03-19 2022-12-27 Ford Global Technologies, Llc Dynamic geofencing hysteresis
CN113465588A (en) * 2021-06-09 2021-10-01 丰疆智能科技股份有限公司 Automatic generation method and device of navigation virtual wall, electronic equipment and storage medium
CN113784289A (en) * 2021-07-21 2021-12-10 中南大学湘雅医院 Hospital patient auxiliary nursing method
CN114253266A (en) * 2021-09-26 2022-03-29 深圳市商汤科技有限公司 Mobile robot, edgewise moving method thereof and computer storage medium
CN113997281B (en) * 2021-09-30 2023-08-08 云鲸智能(深圳)有限公司 Robot, control method and control device thereof, and computer-readable storage medium
CN115239905B (en) * 2022-09-23 2022-12-20 山东新坐标智能装备有限公司 Electronic fence and virtual wall generation method
CN116300960A (en) * 2023-03-31 2023-06-23 深圳森合创新科技有限公司 Robot and map construction and positioning method thereof

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102866706B (en) * 2012-09-13 2015-03-25 深圳市银星智能科技股份有限公司 Cleaning robot adopting smart phone navigation and navigation cleaning method thereof
CN106537169B (en) * 2015-01-22 2018-10-30 广州艾若博机器人科技有限公司 Positioning based on color lump label and map constructing method and its device
CN106774294A (en) * 2015-11-20 2017-05-31 沈阳新松机器人自动化股份有限公司 A kind of mobile robot virtual wall method for designing
CN106272420B (en) * 2016-08-30 2019-07-02 北京小米移动软件有限公司 Robot and robot control method
CN106990779A (en) * 2017-03-24 2017-07-28 上海思岚科技有限公司 Make the implementation method of mobile robot progress virtual wall avoidance by computer client
CN106997202A (en) * 2017-03-24 2017-08-01 上海思岚科技有限公司 The implementation method of virtual wall is set by mobile applications

Also Published As

Publication number Publication date
CN108829095A (en) 2018-11-16

Similar Documents

Publication Publication Date Title
CN108829095B (en) Geo-fence setting method and method for limiting robot movement
US11289192B2 (en) Interfacing with a mobile telepresence robot
CN108247647B (en) Cleaning robot
US11407116B2 (en) Robot and operation method therefor
US11350809B2 (en) Cleaning robot and method of performing task thereof
CN111492403A (en) Lidar to camera calibration for generating high definition maps
WO2021047348A1 (en) Method and apparatus for establishing passable area map, method and apparatus for processing passable area map, and mobile device
US20200257821A1 (en) Video Monitoring Method for Mobile Robot
CN114942638A (en) Robot working area map construction method and device
US10657691B2 (en) System and method of automatic room segmentation for two-dimensional floorplan annotation
JP6636260B2 (en) Travel route teaching system and travel route teaching method for autonomous mobile object
JP2017054475A (en) Teleoperation device, method and program
US10546419B2 (en) System and method of on-site documentation enhancement through augmented reality
CN114391777B (en) Obstacle avoidance method and device for cleaning robot, electronic equipment and medium
US10447991B1 (en) System and method of mapping elements inside walls
JP7160257B2 (en) Information processing device, information processing method, and program
US20220334587A1 (en) Method for processing map of closed space, apparatus, and mobile device
CA3201066A1 (en) Collaborative augmented reality measurement systems and methods
CN112197777A (en) Blind person navigation method, server and computer readable storage medium
US20220329737A1 (en) 3d polygon scanner
WO2023204025A1 (en) Movement management system, movement management method, and program
JP2021131713A (en) Own position estimating system and moving body
JP2022003437A (en) Control system and control method of mobile body
CN117193280A (en) Navigation map creation method, device and readable storage medium
TW202217241A (en) Floor positioning and map construction system and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 508, Unit 1, Building 17, No. 4, Xinzhu Road, Songshan Lake Park, Dongguan City, Guangdong Province

Applicant after: YUNJING INTELLIGENCE TECHNOLOGY (DONGGUAN) Co.,Ltd.

Address before: 523039 Room 501, Area A, 1st Building, 1st Unit, 5th Floor, 17 Xinzhuyuan Building, No. 4 Xinzhu Road, Songshan Lake Hi-tech Industrial Development Zone, Dongguan City, Guangdong Province

Applicant before: YUNJING INTELLIGENCE TECHNOLOGY (DONGGUAN) Co.,Ltd.

GR01 Patent grant
CP03 Change of name, title or address

Address after: 518000, Building 1, Yunzhongcheng A2901, Wanke Yuncheng Phase 6, Dashi Er Road, Xili Community, Xishan District, Shenzhen City, Guangdong Province

Patentee after: Yunjing Intelligent Innovation (Shenzhen) Co.,Ltd.

Address before: 523039 room 508, unit 1, building 17, No. 4, Xinzhu Road, Songshanhu Park, Dongguan City, Guangdong Province

Patentee before: YUNJING INTELLIGENCE TECHNOLOGY (DONGGUAN) Co.,Ltd.