CN111860123B - Method for identifying boundary of working area - Google Patents

Method for identifying boundary of working area

Info

Publication number
CN111860123B
CN111860123B (application CN202010499982.9A)
Authority
CN
China
Prior art keywords
boundary
area
neural network
network model
working area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010499982.9A
Other languages
Chinese (zh)
Other versions
CN111860123A (en)
Inventor
焦新涛
苏霖锋
陈伟钦
付帅
赖金翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Normal University
Original Assignee
South China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Normal University filed Critical South China Normal University
Priority to CN202010499982.9A priority Critical patent/CN111860123B/en
Publication of CN111860123A publication Critical patent/CN111860123A/en
Application granted granted Critical
Publication of CN111860123B publication Critical patent/CN111860123B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for identifying the boundary of a working area, comprising the following steps: acquiring an environment image of the position of the intelligent mobile robot; extracting a boundary contour from the environment image using an edge detection algorithm; segmenting the boundary contour and determining a sub-region to be detected in each segment, wherein each sub-region to be detected comprises a plurality of basic image units; detecting each sub-region to be detected with a neural network model, which judges whether each basic image unit in the sub-region belongs to the working area; and sequentially connecting, along the length direction of the boundary contour, the basic image units judged by the neural network model to belong to the working area, to form the boundary of the current working area of the intelligent mobile robot. Compared with the prior art, the method ensures normal operation of the robot when its working boundary cannot be constructed manually in advance, and accurately acquires the boundary of the robot's working area.

Description

Method for identifying boundary of working area
Technical Field
The invention relates to the field of intelligent mobile robot working area identification, in particular to a method for identifying a boundary of a working area.
Background
The intelligent mobile robot is an automated intelligent product that can autonomously complete specified tasks in a designated area. During operation, to ensure safe use, the robot must be confined to a certain working area: if it leaves the safe area, the robot may be damaged and the user may even be accidentally injured. Keeping the robot inside the specified area is therefore an important condition for safe operation, and whether the boundary of the working area can be identified accurately and effectively is an important performance index of an intelligent mobile robot.
In the prior art, boundary recognition methods for intelligent mobile robots mainly include: sensing natural boundaries of the environment, such as walls, with contact or non-contact sensors; determining the boundary through coordinate positioning; and manually constructing the working-area boundary with an energized wire. Determining the working environment from its inherent boundaries imposes clear limits on the working environment, since the environment must possess inherent boundaries that the robot can recognize; in an open scene, however, there is often no obvious natural boundary, and this method does not apply. Determining the boundary by coordinate positioning suffers from the significant errors of conventional positioning technologies, so the boundary cannot be described accurately and the safety and effectiveness of the robot cannot be guaranteed. Constructing the working-area boundary with an energized wire requires a large amount of wiring and is easily disturbed by environmental electromagnetic signals. Based on the above analysis, for open scenes in which the robot's working boundary cannot be constructed manually in advance, no prior-art method accurately identifies the boundary of the working area.
Disclosure of Invention
The invention aims to overcome the defects and shortcomings of the prior art by providing a method for identifying the boundary of a working area that can accurately identify the boundary of the robot's working area even when the working boundary cannot be constructed manually in advance.
The method for identifying the boundary of the working area comprises the following steps: acquiring an environment image of the position of the intelligent mobile robot; extracting a boundary contour from the environment image using an edge detection algorithm; segmenting the boundary contour and determining a sub-region to be detected in each segment, wherein each sub-region to be detected comprises a plurality of basic image units; detecting each sub-region to be detected with a neural network model, and judging whether each basic image unit in the sub-region belongs to the working area; and sequentially connecting, along the length direction of the boundary contour, the basic image units judged by the neural network model to belong to the working area, to form the boundary of the current working area of the intelligent mobile robot.
Compared with the prior art, the method identifies the boundary of the working area from images of the working environment, ensures normal operation of the robot even when its working boundary cannot be constructed manually in advance, and acquires the boundary of the robot's working area accurately and in real time, ensuring safe operation of the robot.
Further, the method for determining the sub-region to be detected in each segment comprises the following steps: tiling basic image units along the boundary contour to form a minimum area covering the boundary contour; and, in each segment, extending the minimum area on both sides along the normal direction of the boundary contour to form the sub-region to be detected of that segment on the boundary contour.
Further, when the boundary contour is segmented, equal segmentation is performed along the length direction of the boundary contour.
Further, the neural network model sequentially comprises a convolution layer, a pooling layer, a convolution layer, a module group and a classification output module; the neural network model is trained by adopting a sample set comprising working area pictures and non-working area pictures of a plurality of robots, and the recognition effect of the neural network model is verified by adopting a verification set comprising working area pictures and non-working area pictures of the plurality of robots.
Further, the general convolution of the fully connected layer of the neural network model is converted into sparse connections.
Further, the convolution layers of the neural network model adopt 3×3 convolution kernels.
Further, the pooling layer of the neural network model adopts a maximum pooling mode.
Further, the edge detection algorithm adopts a sobel operator.
Further, the environment image is preprocessed before the boundary contour is extracted with the edge detection algorithm; the preprocessing includes Gaussian filtering, binarization, and a morphological opening operation on the real-time image.
Further, when the sub-regions are formed, the minimum area of each segment extends along the normal direction of the boundary contour within the segment, and the extension distance on each side is 50% of the height of the minimum area.
Drawings
FIG. 1 is a flow chart of a method of identifying a boundary of a work area in accordance with the present invention;
FIG. 2 is an exemplary diagram of an environmental image of a boundary location in a robot work area;
FIG. 3 is a schematic diagram of a boundary contour obtained in an environmental image using an edge detection algorithm;
FIG. 4 is a schematic illustration of segmenting a boundary contour;
FIG. 5 is a schematic diagram of a sub-region to be detected in a segment of a boundary profile;
FIG. 6 is a flowchart of an algorithm for the neural network model employed in the present method;
FIG. 7 is an example of a sub-region image to be detected;
FIG. 8 is a schematic diagram of a boundary of a working area obtained by connecting judgment results based on a neural network model in a sub-area to be detected;
fig. 9 is a schematic diagram of a boundary of a working area obtained by connecting judgment results based on a neural network model in an environment image.
Detailed Description
The method for identifying the boundary of the working area acquires an environment image of the position of the intelligent mobile robot, obtains a rough boundary with a conventional edge detection algorithm, divides the region around that rough boundary into small regions, uses a neural network model to judge whether each small region belongs to the working area, and finally connects all working-area regions in sequence to obtain the boundary of the working area, thereby identifying the boundary and improving the accuracy of the identified boundary.
Referring to fig. 1, the method for identifying the boundary of the working area includes the following steps:
step 1: and acquiring an environment image of the position of the intelligent mobile robot.
In this step, because the robot must continuously determine the working-area boundary of new environments as it moves, the method dynamically acquires real-time environment images while the robot is working. After each environment image is acquired, the working-area boundaries in the image are analyzed to guide the robot's operation. An example environment image is shown in fig. 2. Images acquired at boundary positions of the working area exhibit the gray-scale difference features shown in fig. 2, which indicate the presence of a working-area boundary in the corresponding image.
Preferably, the method adopts a camera assembly or a photographing assembly to acquire the environment image.
Step 2: and extracting boundary contours from the environment image by using an edge detection algorithm.
As can be seen from fig. 2, in an environment image taken at a boundary position of the robot's working area, the dividing line between color patches of different gray levels is the boundary 10. In this step, extracting the boundary 10 from the environment image with an edge detection algorithm yields the boundary contour 11. Preferably, in this embodiment, the Sobel operator is used for edge detection, which obtains the boundary contour in the environment image simply and rapidly.
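The patent names only the Sobel operator, not an implementation. As an illustrative sketch, Sobel edge extraction on a toy image containing a gray-scale step (like the gray-level patches described for fig. 2) could look like the following; the threshold value and image are invented for this example:

```python
import numpy as np

def sobel_edges(gray, thresh=100.0):
    """Extract a rough edge map from a grayscale image with the Sobel
    operator: convolve with horizontal/vertical gradient kernels and
    threshold the gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # responds to vertical edges
    ky = kx.T                                                         # responds to horizontal edges
    h, w = gray.shape
    gx = np.zeros_like(gray, dtype=float)
    gy = np.zeros_like(gray, dtype=float)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = gray[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    mag = np.hypot(gx, gy)      # gradient magnitude
    return mag > thresh         # boolean edge map

# Toy image: a vertical step between two gray-level patches
img = np.zeros((8, 8))
img[:, 4:] = 255.0
edges = sobel_edges(img)        # edges lie along the step at columns 3-4
```

A real system would more likely call an image-processing library's Sobel routine; the loop form above is only to make the operator explicit.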
However, in a real working environment, conventional edge detection cannot identify the boundary accurately and effectively because of lighting and the complex, variable shape of the boundary; the identified boundary has large errors and cannot meet the high-precision requirements of actual robot operation. As shown in fig. 3, the edge detection result still deviates from the true boundary, and the extracted boundary contour 11 coincides poorly with the actual boundary 10 of the environment image. Therefore, to acquire the working-area boundary more accurately, steps 3-5 reduce the error between the boundary contour 11 and the boundary 10:
step 3: the boundary contour is segmented and sub-regions to be detected in each segment are determined.
Since the actual boundary generally varies little over a limited range, the image region near the boundary contour extracted by the edge detection algorithm is segmented in this step for piecewise analysis.
Specifically, referring to fig. 4 and 5, the boundary contour 11 is first divided into equal segments along its length; then, taking the image size used in training the neural network model as the basic image unit (for example, a 5×5-pixel image block), basic image units are tiled along the boundary contour 11 to form a minimum area covering it. Further, since the boundary contour 11 may deviate from the actual boundary in the environment image, within each segment the minimum area is extended on both sides along the normal direction of the boundary contour, finally forming the sub-region 12 to be detected for each segment on the boundary contour 11. Preferably, when forming the sub-regions, the minimum area of each segment extends along the normal direction of the boundary contour 11 within the segment, and the extension distance is 50% of the height of the minimum area.
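The segmentation-and-extension rule above can be sketched as follows. This is an illustration only: the contour representation, the 5-pixel unit size, and the approximation of the contour normal by the vertical direction are all assumptions of this sketch, not details given by the patent:

```python
import numpy as np

def subregions(contour, n_segments, unit=5, extend_ratio=0.5):
    """Split contour points (array of (x, y)) into n_segments equal pieces
    along their order, snap each piece's bounding box to the `unit`-pixel
    basic-image-unit grid (the minimum covering area), then extend it
    vertically by extend_ratio * height on each side, approximating the
    50% extension along the contour normal."""
    pieces = np.array_split(contour, n_segments)
    boxes = []
    for pts in pieces:
        x0, y0 = pts.min(axis=0)
        x1, y1 = pts.max(axis=0)
        # snap to the basic-image-unit grid so the box tiles into units
        x0, y0 = (x0 // unit) * unit, (y0 // unit) * unit
        x1 = ((x1 // unit) + 1) * unit
        y1 = ((y1 // unit) + 1) * unit
        h = y1 - y0
        pad = int(extend_ratio * h)       # 50% of the minimum-area height
        boxes.append((x0, y0 - pad, x1, y1 + pad))
    return boxes

# A gently sloping contour split into 2 segments
contour = np.array([(x, 10 + x // 8) for x in range(0, 40)])
boxes = subregions(contour, n_segments=2)
```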
Step 4: and detecting each sub-region to be detected by using a neural network model, and judging whether each basic image unit in the sub-region to be detected belongs to a working region.
Referring to fig. 6, fig. 6 is a flowchart of the neural network model adopted in the method. The model only judges whether an input image belongs to the working area, i.e. it is a two-class model, and it comprises five parts in sequence: a convolution layer, a pooling layer, a convolution layer, a module group, and a classification output part. It applies a deep strategy of repeated convolution and pooling to complete the feature extraction of the image. At the same time, because the model uses a deeper stack of network layers, the number of trainable parameters can be reduced, lowering the computational complexity of the whole network. In addition, to reduce the loss value during training, the general convolution of the model's fully connected layer is converted into sparse connections.
The convolution layers of the neural network model use two 3×3 convolution kernels and convolve the image multiple times, mainly extracting texture and color features. Using two 3×3 kernels reduces overfitting when the data set is limited and also reduces computation, by about 28% compared with a 5×5 convolution kernel. Because each region is small after the image is segmented, a model with smaller kernels still maintains high recognition accuracy on low-pixel regions.
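The roughly 28% figure can be checked with simple arithmetic, counting kernel weights per output channel (biases ignored, single input channel assumed):

```python
# Two stacked 3x3 kernels versus one 5x5 kernel, weights per output channel
two_3x3 = 2 * 3 * 3                     # 18 weights
one_5x5 = 5 * 5                         # 25 weights
reduction = 1 - two_3x3 / one_5x5       # fraction of weights saved
print(reduction)                        # 0.28, i.e. the ~28% saving claimed above
```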
The pooling layer adopts max pooling, selecting the maximum value of each image area, i.e. the region of most interest to the network, as the pooled value.
The module group receives the outputs of the convolution layer and the pooling layer and concatenates their output data.
The classification output layer computes the probability that each pixel of the image belongs to the working area and finally gives the probability that the whole image belongs to the working area. During training, its output is compared with the image labels in the training set, training and learning proceed continuously, and the neural network parameters are updated.
The neural network model is trained with a training set and a validation set, in each of which working-area and non-working-area samples make up half of the data, to obtain a better training effect. Because the model has few parameters and uses small kernels, recognition time is reduced while a high recognition rate is maintained, meeting the robot's real-time boundary-recognition requirement in actual applications. After training on the training set, recognition accuracy is checked on the validation set to ensure the correctness of the model's judgments.
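The patent discloses no implementation code. As an illustrative sketch only, and not the patented architecture (the module group and sparse fully connected layer are omitted, and all sizes, weights, and the output head are invented here), a toy numpy forward pass showing the convolution, pooling, convolution, classifier flow of such a two-class model could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2-D convolution (cross-correlation, as in most deep-learning
    frameworks) of a single-channel image with kernel k."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def maxpool2(x):
    """2x2 max pooling: keep the strongest response in each window."""
    h, w = x.shape
    x = x[: h - h % 2, : w - w % 2]
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def forward(img, k1, k2, w_out):
    """conv -> maxpool -> conv -> global average -> sigmoid, echoing the
    layer ordering described above; returns P(image is working area)."""
    a = np.maximum(conv2d(img, k1), 0)   # first 3x3 conv + ReLU
    a = maxpool2(a)                      # max pooling layer
    a = np.maximum(conv2d(a, k2), 0)     # second 3x3 conv + ReLU
    z = a.mean() * w_out                 # stand-in for the output module
    return 1.0 / (1.0 + np.exp(-z))      # two-class probability

img = rng.random((16, 16))               # a toy "basic image unit"
k1, k2 = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
p = forward(img, k1, k2, w_out=0.5)      # probability in (0, 1)
```

In practice such a model would be built and trained in a deep-learning framework; the sketch only makes the data flow between the named layers concrete.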
Step 5: sequentially connect the basic image units judged in step 4 to belong to the working area along the length direction of the boundary contour, forming the current working-area boundary 13 of the intelligent mobile robot.
Within each sub-region to be detected, taking the sub-region in fig. 7 as an example, after every basic image unit has been input to the neural network model and judged, the units judged to belong to the working area are connected to obtain the working-area boundary within that sub-region, as shown in fig. 8. Then, as shown in fig. 9, the working-area boundaries in all sub-regions to be detected are connected in sequence along the length direction of the boundary contour 11 to obtain the overall working-area boundary 13 of the environment image. The working-area boundary 13 obtained after neural-network identification fits the boundary of the original environment image more closely, with higher recognition accuracy.
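A minimal sketch of this connection step, with hypothetical data (the unit centers and per-unit decisions below are invented for illustration; a real pipeline would take them from the sub-region grid and the classifier):

```python
import numpy as np

def connect_boundary(units, labels):
    """Keep the centers of basic image units judged to belong to the
    working area and return them ordered along the contour direction.
    `units` is a list of (x, y) unit centers in contour order; `labels`
    holds the per-unit working-area / non-working-area decisions."""
    kept = [c for c, is_work in zip(units, labels) if is_work]
    return np.array(kept)   # polyline of the working-area boundary

# Units laid out along a contour; the model rejected two of them
units = [(0, 10), (5, 11), (10, 9), (15, 12), (20, 10)]
labels = [True, False, True, True, False]
boundary = connect_boundary(units, labels)   # 3 retained vertices
```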
Further, a step 2A is included between steps 1 and 2: preprocessing the environment image to improve image quality. Preferably, the preprocessing includes Gaussian filtering, binarization, and a morphological opening operation on the real-time image. Gaussian filtering improves image quality; binarization and the opening operation enhance the contrast between areas of different gray levels and highlight the boundary in the image, facilitating subsequent analysis.
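As a sketch of such a preprocessing chain, implemented from scratch in numpy purely for illustration (a real system would likely use an image-processing library; kernel sizes and the threshold are assumptions here):

```python
import numpy as np

def gaussian_blur3(img):
    """3x3 Gaussian smoothing (1-2-1 outer-product kernel, normalized)."""
    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.sum(img[i - 1:i + 2, j - 1:j + 2] * k)
    return out

def binarize(img, thresh):
    return (img > thresh).astype(np.uint8)

def erode(b):
    """3x3 erosion: a pixel survives only if its whole neighbourhood is set."""
    out = np.zeros_like(b)
    for i in range(1, b.shape[0] - 1):
        for j in range(1, b.shape[1] - 1):
            out[i, j] = b[i - 1:i + 2, j - 1:j + 2].min()
    return out

def dilate(b):
    """3x3 dilation: a pixel is set if any neighbour is set."""
    out = np.zeros_like(b)
    for i in range(1, b.shape[0] - 1):
        for j in range(1, b.shape[1] - 1):
            out[i, j] = b[i - 1:i + 2, j - 1:j + 2].max()
    return out

def preprocess(img, thresh=0.5):
    """Gaussian filtering, binarization, then opening (erosion followed
    by dilation), the three operations listed in step 2A."""
    return dilate(erode(binarize(gaussian_blur3(img), thresh)))

# A bright block plus a single-pixel speck: opening removes the speck
img = np.zeros((12, 12))
img[3:9, 3:9] = 1.0    # solid region survives the opening
img[1, 10] = 1.0       # isolated noise pixel is removed
clean = preprocess(img)
```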
Compared with the prior art, the method provided by the invention first extracts a preliminary boundary contour from the environment image with an edge detection algorithm, expands the image regions on both sides of the contour into sub-regions to be detected, divides the images within those sub-regions into units, inputs the units in turn to the neural network model to judge whether they belong to the robot's working area, and finally connects all units judged to belong to the working area to form the robot's working-area boundary. Dividing the image and judging each unit with the neural network model further improves the accuracy of the resulting working-area boundary. At the same time, extracting the boundary contour first and placing the detection regions around it greatly reduces the area the neural network model must examine, improving recognition efficiency and the real-time performance of working-area boundary identification. In practical applications, when the intelligent mobile robot arrives in an unfamiliar environment, the method obtains the working-area boundary in real time from captured images, with no need to bury boundary-marking devices in the robot's working environment in advance, saving cost and enlarging the range of scenes in which the intelligent mobile robot can work.
The present invention is not limited to the above embodiments; modifications or variations that do not depart from the technical idea of the invention, or that fall within the scope of the claims and their equivalents, are intended to be included within the invention.

Claims (5)

1. A method of identifying a boundary of a work area, comprising the steps of:
acquiring an environment image of the position of the intelligent mobile robot;
extracting a boundary contour from the environment image by using an edge detection algorithm;
segmenting the boundary contour, and determining a to-be-detected sub-region in each segment, wherein each to-be-detected sub-region comprises a plurality of basic image units;
detecting each sub-region to be detected in turn using a neural network model, and judging whether each basic image unit in the sub-region to be detected belongs to the working area;
sequentially connecting, along the length direction of the boundary contour, the basic image units judged by the neural network model to belong to the working area, to serve as the boundary of the working area of the intelligent mobile robot;
wherein the method for determining the sub-region to be detected in each segment comprises: tiling basic image units along the boundary contour to form a minimum area covering the boundary contour; and, in each segment, extending the minimum area on both sides along the normal direction of the boundary contour to form the sub-region to be detected of the segment on the boundary contour;
when the boundary contour is segmented, equal segmentation is performed along the length direction of the boundary contour;
the neural network model sequentially comprises a convolution layer, a pooling layer, a convolution layer, a module group and a classification output module; the neural network model is trained by adopting a sample set comprising working area pictures and non-working area pictures of a plurality of robots, and the recognition effect of the neural network model is verified by adopting a verification set comprising working area pictures and non-working area pictures of the plurality of robots;
the general convolution of the fully connected layer of the neural network model is converted into sparse connections;
and when the sub-regions are formed, the minimum area of each segment extends along the normal direction of the boundary contour within the segment, with an extension distance on each side of 50% of the height of the minimum area.
2. A method of identifying a boundary of a work area as claimed in claim 1, wherein: the convolution layers of the neural network model adopt 3×3 convolution kernels.
3. A method of identifying a boundary of a work area as claimed in claim 2, wherein: and the pooling layer of the neural network model adopts a maximum pooling mode.
4. A method of identifying a boundary of a work area as claimed in claim 3, wherein: the edge detection algorithm adopts a sobel operator.
5. The method of identifying a boundary of a work area of claim 4, wherein: the environment image is preprocessed before the boundary contour is extracted with the edge detection algorithm; the preprocessing includes Gaussian filtering, binarization, and a morphological opening operation on the real-time image.
CN202010499982.9A 2020-06-04 2020-06-04 Method for identifying boundary of working area Active CN111860123B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010499982.9A CN111860123B (en) 2020-06-04 2020-06-04 Method for identifying boundary of working area


Publications (2)

Publication Number Publication Date
CN111860123A (en) 2020-10-30
CN111860123B (en) 2023-08-08

Family

ID=72985773

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010499982.9A Active CN111860123B (en) 2020-06-04 2020-06-04 Method for identifying boundary of working area

Country Status (1)

Country Link
CN (1) CN111860123B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107230202A (en) * 2017-05-16 2017-10-03 淮阴工学院 The automatic identifying method and system of pavement disease image
CN109859158A (en) * 2018-11-27 2019-06-07 邦鼓思电子科技(上海)有限公司 A kind of detection system, method and the machinery equipment on the working region boundary of view-based access control model
CN110222622A (en) * 2019-05-31 2019-09-10 甘肃省祁连山水源涵养林研究院 A kind of ambient soil detection method and device
CN110297483A (en) * 2018-03-21 2019-10-01 广州极飞科技有限公司 To operating area boundary acquisition methods, device, operation flight course planning method
CN110968110A (en) * 2018-09-29 2020-04-07 广州极飞科技有限公司 Method and device for determining operation area, unmanned aerial vehicle and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109559344B (en) * 2017-09-26 2023-10-13 腾讯科技(上海)有限公司 Frame detection method, device and storage medium


Also Published As

Publication number Publication date
CN111860123A (en) 2020-10-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant