CN111860123A - Method for identifying working area boundary - Google Patents

Method for identifying working area boundary

Info

Publication number
CN111860123A
CN111860123A
Authority
CN
China
Prior art keywords
boundary
working area
neural network
network model
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010499982.9A
Other languages
Chinese (zh)
Other versions
CN111860123B (en)
Inventor
焦新涛
苏霖锋
陈伟钦
付帅
赖金翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Normal University
Original Assignee
South China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Normal University filed Critical South China Normal University
Priority to CN202010499982.9A
Publication of CN111860123A
Application granted
Publication of CN111860123B
Active legal status
Anticipated expiration

Links

Images

Classifications

    • G06V 20/00 Scenes; Scene-specific elements
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The invention discloses a method for identifying the boundary of a working area, which comprises the following steps: acquiring an environment image of the position of the intelligent mobile robot; extracting a boundary contour from the environment image by using an edge detection algorithm; segmenting the boundary contour and determining a sub-region to be detected in each segment, wherein each sub-region to be detected comprises a plurality of basic image units; detecting each sub-region to be detected by using a neural network model, which judges whether each basic image unit in the sub-region belongs to the working area; and sequentially connecting the basic image units that the neural network model judges to belong to the working area along the length direction of the boundary contour, to serve as the current working area boundary of the intelligent mobile robot. Compared with the prior art, the method ensures normal operation of the robot even when its working boundary cannot be constructed artificially in advance, and accurately acquires the working area boundary of the robot.

Description

Method for identifying working area boundary
Technical Field
The invention relates to the field of work area identification of intelligent mobile robots, in particular to a method for identifying work area boundaries.
Background
An intelligent mobile robot is an automated intelligent product that can autonomously complete specific tasks in a designated area as required. To ensure safety during operation, the robot must be confined to a certain working area: if it moves out of the safe area while working, it may be damaged or even accidentally injure the user. Keeping the robot within the designated area is therefore an important condition for its operational safety, and whether the boundary of the working area can be identified accurately and effectively is an important performance index of an intelligent mobile robot.
In the prior art, the boundary identification methods of intelligent mobile robots mainly comprise: sensing an inherent natural boundary of the environment, such as a wall, with a contact or non-contact sensor; determining the boundary by coordinate positioning; and artificially constructing a working area boundary with an energized wire. Determining the working environment through an inherent boundary clearly limits the robot's working environment, since it requires the environment to have an inherent boundary that the robot can identify; in an open scene, however, the environment often has no obvious natural boundary, so the method cannot be applied. Determining the boundary by coordinate positioning cannot describe the boundary accurately, because existing positioning technologies have significant errors, so the safety and effectiveness of the robot's work cannot be guaranteed. Constructing the working area boundary with an energized wire suffers from a large wiring workload and susceptibility to environmental electromagnetic interference. Based on the above analysis, the prior art provides no method that can accurately identify the working area boundary of a robot in an open scene where the boundary cannot be constructed artificially in advance.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a method for identifying the boundary of a working area, which realizes boundary identification based on image recognition and can accurately identify the working area boundary of a robot even when the boundary cannot be constructed manually in advance.
The method for identifying the boundary of the working area comprises the following steps: acquiring an environment image of the position of the intelligent mobile robot; extracting a boundary contour from the environment image by using an edge detection algorithm; segmenting the boundary contour and determining a sub-region to be detected in each segment, wherein each sub-region to be detected comprises a plurality of basic image units; detecting each sub-region to be detected by using a neural network model, and judging whether each basic image unit in the sub-region belongs to the working area; and sequentially connecting the basic image units that the neural network model judges to belong to the working area along the length direction of the boundary contour, to serve as the current working area boundary of the intelligent mobile robot.
Compared with the prior art, the method for identifying the working area boundary identifies the boundary by acquiring images of the working environment. It ensures normal operation of the robot even when the working boundary cannot be constructed artificially in advance, accurately acquires the working area boundary in real time, and guarantees safe operation of the robot.
Further, the method for determining the sub-region to be detected in each segment comprises: covering the basic image unit along the boundary contour to form a minimum area covering the boundary contour; in each segment, the minimum region is subjected to region extension covering along two sides of the normal direction of the boundary contour, and sub-regions to be detected of each segment on the boundary contour are formed.
Further, when the boundary contour is segmented, equal segmentation is performed along the length direction of the boundary contour.
Furthermore, the neural network model sequentially comprises a convolutional layer, a pooling layer, a convolutional layer, a module group and a classification output module; the neural network model is trained by adopting a sample set comprising working area pictures and non-working area pictures of a plurality of robots, and the recognition effect of the neural network model is verified by adopting a verification set comprising the working area pictures and the non-working area pictures of the plurality of robots.
Further, the general convolutions of the fully connected layer of the neural network model are all converted into sparse connections.
Furthermore, the convolution layer of the neural network model adopts 3 × 3 convolution kernels.
Further, the pooling layer of the neural network model adopts a maximum pooling approach.
Further, the edge detection algorithm adopts the Sobel operator.
Further, before extracting a boundary contour from the environment image by using an edge detection algorithm, preprocessing the environment image; the preprocessing comprises Gaussian filtering, binarization and opening operation on the real-time image.
Further, when the sub-regions are formed, the minimum region of each segment extends along the normal direction of the boundary contour within the segment, and the extension distance on each side is 50% of the height of the minimum region.
Drawings
FIG. 1 is a flow chart of a method of identifying boundaries of a work area in accordance with the present invention;
FIG. 2 is an exemplary illustration of an environment image of a boundary location in a robot work area;
FIG. 3 is a schematic diagram of a boundary contour obtained by an edge detection algorithm in an environmental image;
FIG. 4 is a schematic illustration of segmenting a boundary profile;
FIG. 5 is a schematic view of the sub-regions to be detected in the segmentation of the boundary profile;
FIG. 6 is a flow chart of an algorithm for a neural network model employed in the present method;
FIG. 7 is an example of an image of a sub-region to be detected;
FIG. 8 is a schematic diagram of a working area boundary obtained by connecting judgment results based on a neural network model in a sub-area to be detected;
FIG. 9 is a schematic diagram of the working area boundary obtained in the environment image by connecting the judgment results of the neural network model.
Detailed Description
The invention provides a method for identifying a working area boundary. The method acquires an environment image of the position of an intelligent mobile robot, obtains a rough boundary with a conventional edge detection algorithm, divides the region around this rough boundary into sub-regions, identifies each small region with a neural network model to judge whether it belongs to the working area, and finally connects all working area units in sequence to obtain the working area boundary, thereby identifying the boundary while improving its accuracy.
Referring to fig. 1, the method for identifying the boundary of the working area includes the following steps:
step 1: and acquiring an environment image of the position of the intelligent mobile robot.
In this step, because the robot must continuously determine the working area boundary of new environments as it moves, the method dynamically acquires real-time environment images while the robot works. Each time an environment image is acquired, the working area boundary in the image is analyzed to guide the robot's operation. An example environment image is shown in fig. 2. Images acquired at the boundary of the working area exhibit the gray-level difference shown in fig. 2, and this feature indicates that a working area boundary exists in the corresponding image.
Preferably, the method adopts a camera component or a photographing component to acquire the environment image.
Step 2: and extracting a boundary contour from the environment image by using an edge detection algorithm.
As can be seen from fig. 2, in an environment image taken at the boundary of the robot working area, the dividing line between color blocks of different gray levels is the boundary 10. In this step, the boundary contour 11 is obtained by extracting the boundary 10 from the environment image with an edge detection algorithm. Preferably, in the present embodiment, the Sobel operator is adopted for edge detection, so that the boundary contour in the environment image can be acquired simply and quickly.
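To make the edge detection step concrete, the following is a minimal sketch of applying the Sobel operator to a small grayscale image. This is an illustration written for this description, not code from the patent; the image values and helper name are assumptions.

```python
def sobel_magnitude(img):
    """Gradient magnitude of a 2-D grayscale image (list of lists) via the Sobel operator."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-gradient kernel
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical-gradient kernel
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A 5 x 5 image with a sharp vertical edge between columns 1 and 2,
# mimicking the gray-level difference at the working area boundary.
image = [[0, 0, 255, 255, 255]] * 5
mag = sobel_magnitude(image)
# The strongest responses lie on the columns adjacent to the edge.
```

Thresholding `mag` would yield the boundary contour 11 described above; a real implementation would typically call a library routine such as OpenCV's `cv2.Sobel` rather than a hand-written loop.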
However, owing to lighting conditions, complex and variable boundary shapes, and other factors in the actual working environment, conventional edge detection alone cannot identify the boundary accurately and effectively; the identified boundary has a large error and cannot meet the high-precision requirements of the robot's actual work. As shown in fig. 3, the edge detection algorithm still has errors when detecting the boundary in the environment image, and the extracted boundary contour 11 overlaps poorly with the actual boundary 10. Therefore, to obtain the working area boundary more accurately, the present solution further performs steps 3 to 5 to reduce the error between the boundary contour 11 and the boundary 10:
Step 3: segment the boundary contour and determine the sub-region to be detected in each segment.
Since the actual boundary varies little within a limited range, the image region near the boundary contour extracted by the edge detection algorithm is segmented in this step for segment-by-segment analysis.
Specifically, referring to fig. 4 and 5, first, the boundary contour 11 is equally segmented along its length direction. Then, taking the image size used in training the neural network model as the basic image unit (for example, an image block of 5 × 5 pixels), the basic image units are laid along the direction of the boundary contour 11 to form a minimum region covering the boundary contour 11. Further, considering that the boundary contour 11 may deviate from the actual boundary in the environment image, within each segment the minimum region is extended on both sides along the normal direction of the boundary contour, finally forming the sub-region 12 to be detected for each segment on the boundary contour 11. Preferably, the minimum region of each segment extends along the normal direction of the boundary contour 11 within the segment, and the extension distance on each side is 50% of the height of the minimum region.
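The geometry of the sub-region construction can be sketched as follows. This is a hypothetical axis-aligned simplification: real contour segments need not be horizontal, and the function name and coordinate layout are assumptions, not the patent's code.

```python
def sub_region(min_region, extension_ratio=0.5):
    """Extend a segment's minimal covering region on both sides of the contour normal.

    min_region = (y_top, y_bottom, x_left, x_right), with the contour running
    horizontally through it; extension_ratio = 0.5 matches the 50% rule above.
    """
    y_top, y_bottom, x_left, x_right = min_region
    height = y_bottom - y_top
    pad = int(height * extension_ratio)   # 50% of the minimal region's height
    return (y_top - pad, y_bottom + pad, x_left, x_right)

# A minimal region 10 pixels tall becomes a 20-pixel-tall sub-region to be detected.
region = sub_region((100, 110, 40, 90))
print(region)  # (95, 115, 40, 90)
```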
Step 4: detect each sub-region to be detected by using the neural network model, and judge whether each basic image unit in the sub-region belongs to the working area.
Referring to fig. 6, fig. 6 is a flowchart of the algorithm of the neural network model used in the method. The neural network model is only used to judge whether the input image belongs to the working area, and is thus a binary classification model. It sequentially comprises five parts: a convolutional layer, a pooling layer, a convolutional layer, a module group and a classification output, and adopts a depth strategy of repeated convolution and pooling operations to complete feature extraction from the image. Meanwhile, the model's deeper network structure allows the number of parameters during training to be reduced, lowering the computational complexity of the whole network. In addition, to reduce the loss value during training, the general convolutions of the model's fully connected layer are converted into sparse connections.
The convolutional layers of the neural network model adopt two 3 × 3 convolution kernels, convolving the image multiple times and mainly extracting its texture and color features. Using two 3 × 3 kernels reduces overfitting on a limited data set while reducing the amount of computation, for example by about 28% compared with one 5 × 5 kernel. Because each region after image segmentation is small, the model with smaller kernels still maintains a high recognition accuracy on low-pixel regions.
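The roughly 28% figure can be checked by counting weights: two stacked 3 × 3 convolutions cover the same 5 × 5 receptive field as a single 5 × 5 convolution but use fewer parameters. A sketch, assuming the single-channel case:

```python
def conv_weights(kernel_size, n_layers, channels=1):
    """Weight count for n_layers stacked square convolutions (channels in == channels out)."""
    return n_layers * kernel_size * kernel_size * channels * channels

w_stacked_3x3 = conv_weights(3, 2)   # 2 * 9  = 18 weights
w_single_5x5 = conv_weights(5, 1)    # 1 * 25 = 25 weights
reduction = 1 - w_stacked_3x3 / w_single_5x5
print(f"{reduction:.0%}")  # 28%
```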
The pooling layer adopts maximum pooling, selecting the maximum value of the image region, i.e. the region the network is most interested in, as the pooling value.
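As an illustration of the pooling step (a 2 × 2 window with stride 2 is an assumption; the patent does not state the pooling parameters):

```python
def max_pool_2x2(img):
    """2 x 2 max pooling with stride 2 over a 2-D feature map (list of lists)."""
    h, w = len(img), len(img[0])
    return [[max(img[y][x], img[y][x + 1], img[y + 1][x], img[y + 1][x + 1])
             for x in range(0, w - 1, 2)]
            for y in range(0, h - 1, 2)]

feature_map = [[1, 3, 2, 0],
               [4, 2, 1, 1],
               [0, 0, 5, 6],
               [1, 2, 7, 8]]
print(max_pool_2x2(feature_map))  # [[4, 2], [2, 8]]
```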
The module group concatenates the parameters of the convolutional layer and the pooling layer, completing the splicing of their output data.
The classification output layer counts the probability that each pixel of the image belongs to the working area and finally gives the probability that the whole image belongs to the working area. When the model is trained, the classification output is compared with the image labels in the training set, and the neural network parameters are updated through continuous training and learning.
When the neural network model is trained, a training set and a verification set are used, in which working area and non-working area samples each account for half, so as to obtain a better training effect. Because the model has few parameters and uses small kernels, recognition time can be reduced while a high recognition rate is maintained, meeting the robot's need to recognize the boundary in real time in practical applications. After the neural network model is trained on the training set, the verification set is used to test the recognition accuracy and ensure the correctness of the model's judgment.
Step 5: sequentially connect the basic image units judged in step 4 to be working area along the length direction of the boundary contour, to serve as the current working area boundary 13 of the intelligent mobile robot.
Within each sub-region to be detected, taking the sub-region shown in fig. 7 as an example, after every basic image unit of the sub-region has been input into the neural network model and judged, the units judged to be working area are connected, yielding the working area boundary within the sub-region shown in fig. 8. Subsequently, as shown in fig. 9, the working area boundaries in all sub-regions to be detected are connected in sequence along the length direction of the boundary contour 11 to obtain the overall working area boundary 13 of the environment image. The working area boundary 13 obtained after neural network identification fits the boundary of the original environment image more closely, with higher identification precision.
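The connection step can be sketched as follows. The data layout, coordinates and function name are hypothetical; the patent only specifies that units judged to be working area are joined in order along the contour.

```python
def connect_boundary(units):
    """Build the boundary polyline from classified basic image units.

    units: list of (position_along_contour, (x, y), is_work_area) tuples,
    where is_work_area is the neural network model's verdict for that unit.
    """
    kept = [(pos, xy) for pos, xy, is_work in units if is_work]
    kept.sort(key=lambda item: item[0])   # order along the contour length
    return [xy for _, xy in kept]

units = [(2.0, (20, 11), True), (0.0, (0, 10), True),
         (1.0, (10, 12), False), (3.0, (30, 10), True)]
print(connect_boundary(units))  # [(0, 10), (20, 11), (30, 10)]
```

Units rejected by the classifier (here the one at contour position 1.0) are skipped, so the resulting polyline follows only the confirmed working area.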
Further, a step 2A is included between step 1 and step 2: preprocess the environment image to improve image quality. Preferably, the preprocessing comprises Gaussian filtering, binarization and an opening operation on the real-time image. Gaussian filtering enhances the image quality; binarization and the opening operation enhance the contrast between regions of different gray levels and highlight the boundary in the image, facilitating subsequent analysis.
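A minimal sketch of the binarization and opening steps (the threshold value and the 3 × 3 structuring element are assumptions; the patent does not specify them). Opening is erosion followed by dilation, which removes isolated bright speckles smaller than the structuring element while preserving larger regions:

```python
def binarize(img, threshold=128):
    """Map a grayscale image (list of lists) to a 0/1 image."""
    return [[1 if v >= threshold else 0 for v in row] for row in img]

def erode(img):
    """A pixel stays 1 only if its whole 3 x 3 neighborhood is 1."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if all(img[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 1
    return out

def dilate(img):
    """A pixel becomes 1 if any in-bounds 3 x 3 neighbor is 1."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if any(img[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if 0 <= y + dy < h and 0 <= x + dx < w):
                out[y][x] = 1
    return out

def opening(img):
    return dilate(erode(img))

gray = [[200, 0, 0, 0, 0],
        [0, 200, 200, 200, 0],
        [0, 200, 200, 200, 0],
        [0, 200, 200, 200, 0],
        [0, 0, 0, 0, 0]]
opened = opening(binarize(gray))
# The isolated speckle at (0, 0) is removed; the 3 x 3 block survives.
```

In practice, OpenCV's `cv2.GaussianBlur`, `cv2.threshold` and `cv2.morphologyEx` provide the same operations for full-size images.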
Compared with the prior art, the method for identifying the working area boundary provided by the invention first uses an edge detection algorithm to preliminarily extract the boundary contour in the environment image of the robot's position, and expands the image regions on both sides of the contour into sub-regions to be detected. The images within these sub-regions are then divided, and the divided images are input in turn into a neural network model to judge whether they belong to the robot's working area; finally, all image units judged to be working area are connected to form the robot's working area boundary. Because the image is segmented before being input into the neural network model, the precision of the finally obtained boundary is further improved. Meanwhile, because the boundary contour is extracted by an edge detection algorithm and the detection interval of the neural network model is set based on that contour, the area the model needs to examine is greatly reduced, improving identification efficiency and enhancing the real-time performance of boundary identification. In practical applications, when the intelligent mobile robot arrives in an unfamiliar environment, the method helps it acquire the working area boundary in real time by capturing images, without embedding any boundary-marking device in the working environment in advance, which saves cost and expands the range of scenes in which the robot can work.
The present invention is not limited to the above-described embodiments. Various modifications and variations that do not depart from the technical spirit of the invention are intended to fall within the scope of the claims and their technical equivalents.

Claims (10)

1. A method of identifying boundaries of a work area, comprising the steps of:
acquiring an environment image of the position of the intelligent mobile robot;
extracting a boundary contour from the environment image by using an edge detection algorithm;
segmenting the boundary contour, and determining sub-regions to be detected in each segment, wherein the sub-regions to be detected comprise a plurality of basic image units;
successively detecting each subarea to be detected by using a neural network model, and judging whether each basic image unit in the subarea to be detected belongs to a working area;
and sequentially connecting the basic image units of the working area judged by the neural network model along the length direction of the boundary outline to be used as the working area boundary of the intelligent mobile robot.
2. The method of identifying boundaries of a work area of claim 1, wherein: the method for determining the sub-region to be detected in each segment comprises the following steps: covering the basic image unit along the boundary contour to form a minimum area covering the boundary contour; in each segment, the minimum region is subjected to region extension covering along two sides of the normal direction of the boundary contour, and sub-regions to be detected of each segment on the boundary contour are formed.
3. The method of identifying boundaries of a work area of claim 2, wherein: when the boundary contour is segmented, equal segmentation is performed along the length direction of the boundary contour.
4. A method of identifying boundaries of a work area according to claim 3, wherein: the neural network model consists of a convolutional layer, a pooling layer, a convolutional layer, a module group and a classification output module in sequence; the neural network model is trained by adopting a sample set comprising working area pictures and non-working area pictures of a plurality of robots, and the recognition effect of the neural network model is verified by adopting a verification set comprising the working area pictures and the non-working area pictures of the plurality of robots.
5. The method of identifying work area boundaries of claim 4, wherein: the general convolutions of the fully connected layer of the neural network model are converted into sparse connections.
6. The method of identifying work area boundaries of claim 5, wherein: and the convolution layer of the neural network model adopts 3 × 3 convolution kernels.
7. The method of identifying work area boundaries of claim 6, wherein: and the pooling layer of the neural network model adopts a maximum pooling mode.
8. The method of identifying boundaries of a work area of claim 7 wherein: the edge detection algorithm adopts the Sobel operator.
9. The method of identifying boundaries of a work area of claim 8 wherein: preprocessing the environment image before extracting a boundary contour from the environment image by using an edge detection algorithm; the preprocessing comprises Gaussian filtering, binarization and opening operation on the real-time image.
10. The method of identifying boundaries of a work area of claim 9 wherein: when the sub-regions are formed, the minimum region of each segment extends along the normal direction of the boundary contour within the segment, and the extension distance on each side is 50% of the height of the minimum region.
CN202010499982.9A 2020-06-04 2020-06-04 Method for identifying boundary of working area Active CN111860123B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010499982.9A CN111860123B (en) 2020-06-04 2020-06-04 Method for identifying boundary of working area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010499982.9A CN111860123B (en) 2020-06-04 2020-06-04 Method for identifying boundary of working area

Publications (2)

Publication Number Publication Date
CN111860123A true CN111860123A (en) 2020-10-30
CN111860123B CN111860123B (en) 2023-08-08

Family

ID=72985773

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010499982.9A Active CN111860123B (en) 2020-06-04 2020-06-04 Method for identifying boundary of working area

Country Status (1)

Country Link
CN (1) CN111860123B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107230202A (en) * 2017-05-16 2017-10-03 淮阴工学院 The automatic identifying method and system of pavement disease image
CN109859158A (en) * 2018-11-27 2019-06-07 邦鼓思电子科技(上海)有限公司 A kind of detection system, method and the machinery equipment on the working region boundary of view-based access control model
CN110222622A (en) * 2019-05-31 2019-09-10 甘肃省祁连山水源涵养林研究院 A kind of ambient soil detection method and device
CN110297483A (en) * 2018-03-21 2019-10-01 广州极飞科技有限公司 To operating area boundary acquisition methods, device, operation flight course planning method
US20200020105A1 * 2017-09-26 2020-01-16 Tencent Technology (Shenzhen) Company Limited Border detection method, server and storage medium
CN110968110A (en) * 2018-09-29 2020-04-07 广州极飞科技有限公司 Method and device for determining operation area, unmanned aerial vehicle and storage medium

Also Published As

Publication number Publication date
CN111860123B (en) 2023-08-08

Similar Documents

Publication Publication Date Title
CN108492319B (en) Moving target detection method based on deep full convolution neural network
CN111862064B (en) Silver wire surface flaw identification method based on deep learning
CN112199993B (en) Method for identifying transformer substation insulator infrared image detection model in any direction based on artificial intelligence
CN113658132B (en) Computer vision-based structural part weld joint detection method
CN110321933B (en) Fault identification method and device based on deep learning
CN110930357B (en) In-service steel wire rope surface defect detection method and system based on deep learning
CN111402203A (en) Fabric surface defect detection method based on convolutional neural network
CN113436157B (en) Vehicle-mounted image identification method for pantograph fault
CN111860143B (en) Real-time flame detection method for inspection robot
CN115018846B (en) AI intelligent camera-based multi-target crack defect detection method and device
CN112308826A (en) Bridge structure surface defect detection method based on convolutional neural network
CN111539927B (en) Detection method of automobile plastic assembly fastening buckle missing detection device
CN113324864A (en) Pantograph carbon slide plate abrasion detection method based on deep learning target detection
CN112528979B (en) Transformer substation inspection robot obstacle distinguishing method and system
CN110751619A (en) Insulator defect detection method
CN113139528B (en) Unmanned aerial vehicle thermal infrared image dam dangerous case detection method based on fast _ RCNN
CN113240623B (en) Pavement disease detection method and device
CN113435452A (en) Electrical equipment nameplate text detection method based on improved CTPN algorithm
CN112561885B (en) YOLOv 4-tiny-based gate valve opening detection method
CN116508057A (en) Image recognition method, apparatus and computer readable storage medium
CN114331961A (en) Method for defect detection of an object
CN113610052A (en) Tunnel water leakage automatic identification method based on deep learning
CN111738264A (en) Intelligent acquisition method for data of display panel of machine room equipment
CN111860123B (en) Method for identifying boundary of working area
CN104820818A (en) Fast recognition method for moving object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant