CN113761970B - Method, system, robot and storage medium for identifying working position based on image - Google Patents


Info

Publication number
CN113761970B
Authority
CN
China
Prior art keywords
image
contour
working
robot
working position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010490409.1A
Other languages
Chinese (zh)
Other versions
CN113761970A (en)
Inventor
朱绍明
任雪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Cleva Electric Appliance Co Ltd
Suzhou Cleva Precision Machinery and Technology Co Ltd
Original Assignee
Suzhou Cleva Electric Appliance Co Ltd
Suzhou Cleva Precision Machinery and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Cleva Electric Appliance Co Ltd, Suzhou Cleva Precision Machinery and Technology Co Ltd filed Critical Suzhou Cleva Electric Appliance Co Ltd
Priority to CN202010490409.1A priority Critical patent/CN113761970B/en
Priority to PCT/CN2020/118390 priority patent/WO2021243895A1/en
Publication of CN113761970A publication Critical patent/CN113761970A/en
Application granted granted Critical
Publication of CN113761970B publication Critical patent/CN113761970B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means


Abstract

The invention provides a method, a system, a robot and a storage medium for identifying a working position based on an image. The method comprises the following steps: acquiring an original image; performing feature extraction on the original image to form a texture feature image, and converting the texture feature image into a binarized image through threshold matching; performing contour detection on the binarized image to obtain each contour in the binarized image; and confirming the working position of the robot that captured the original image according to the acquired contours, the working position being either a working area or a non-working area. The invention can distinguish the working area from the non-working area using images captured by a camera on the robot.

Description

Method, system, robot and storage medium for identifying working position based on image
Technical Field
The invention relates to the field of intelligent control, in particular to a method, a system, a robot and a storage medium for identifying a working position based on an image.
Background
A low repetition rate and high coverage are the goals pursued by autonomous walking robots, such as mobile robots for dust collection, mowing, pool cleaning, and the like.
Taking an intelligent mowing robot as an example: the mowing robot mows the lawn enclosed by a boundary as its working area, and the region outside the lawn is defined as the non-working area.
In the prior art, a boundary wire is usually buried to mark the border of the lawn working area. This not only increases cost, since considerable manpower and material resources are required, but also imposes requirements on the wiring, for example that corner angles cannot be smaller than 90 degrees, which limits the possible shapes of the lawn working area to a certain extent.
Therefore, a new way to distinguish between the working area and the non-working area of the robot is needed.
Disclosure of Invention
In order to solve the technical problems, the invention aims to provide a method, a system, a robot and a storage medium for identifying a working position based on an image.
In order to achieve one of the above objects, an embodiment of the present invention provides a method for identifying a working position based on an image, the method comprising: acquiring an original image;
performing feature extraction on the original image to form a texture feature image, and converting the texture feature image into a binarized image through threshold matching;
performing contour detection on the binarized image to obtain each contour in the binarized image;
confirming the working position of the robot that captured the original image according to the acquired contours, the working position being either a working area or a non-working area.
The method exploits the periodicity of the image's texture features to identify whether the location lies in the working area.
As a further improvement of an embodiment of the present invention, performing feature extraction on the original image to form a texture feature image and converting the texture feature image into a binarized image through threshold matching comprises:
converting the original image into a gray-scale image;
smoothing and filtering the gray-scale image to form a denoised image;
performing feature extraction on the denoised image using an LBP algorithm to form a texture feature image;
converting the texture feature image into a binarized image through threshold matching.
This facilitates extracting contours from the texture feature image.
As a further improvement of an embodiment of the present invention, performing contour detection on the binarized image to obtain each contour in the binarized image comprises:
performing contour detection on the binarized image to obtain each contour in the binarized image;
traversing each contour and determining whether one of the size parameters of the current contour is larger than the corresponding preset size-parameter threshold; if so, storing the current contour in a contour set, and if not, discarding the current contour;
or traversing each contour and determining whether all size parameters of the current contour are larger than their corresponding preset size-parameter thresholds; if so, storing the current contour in the contour set, and if not, discarding the current contour;
the size parameters include at least one of: the perimeter of the contour, the area of the contour, the number of pixel points contained in the contour, and the number of pixel points contained on the contour line of the contour.
This helps retain only valid contour information.
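The any/all size-parameter filter described above can be sketched in plain Python. This is an illustrative sketch, not the patent's implementation; `keep_contour` and `filter_contours` are names of our own choosing, and a contour is assumed to be any object whose size parameters a caller-supplied function can measure.

```python
def keep_contour(size_params, thresholds, mode="any"):
    """Decide whether a contour passes the size filter.

    size_params: measured values, e.g. (perimeter, area, pixel count)
    thresholds:  the matching preset size-parameter thresholds
    mode:        "any" keeps the contour if one parameter exceeds its
                 threshold; "all" requires every parameter to exceed it
    """
    checks = [p > t for p, t in zip(size_params, thresholds)]
    return any(checks) if mode == "any" else all(checks)


def filter_contours(contours, measure, thresholds, mode="any"):
    """Traverse every contour, keeping passers and discarding the rest."""
    return [c for c in contours if keep_contour(measure(c), thresholds, mode)]
```

For example, with `measure=lambda c: (len(c),)` and a threshold of 30, only contours whose contour line contains more than 30 pixel points survive, matching the example given later in the description.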
As a further improvement of an embodiment of the present invention, confirming the working position of the robot that captured the original image according to the acquired contours, the working position being either a working area or a non-working area, comprises:
traversing the contours in turn, in their order in the contour set, taking each contour as a basis; comparing, via MatchShapes, the Hu-moment distance between the basis contour and the currently traversed contour; if the Hu-moment distance is smaller than a preset moment threshold, placing the basis contour and all contours whose Hu-moment distance to it is smaller than the preset moment threshold into the same similar sub-contour set; if no Hu-moment distance obtained with the basis contour is smaller than the preset moment threshold, placing the basis contour into its own similar sub-contour set; during the traversal, contours already assigned to a similar sub-contour set are not traversed again;
obtaining the number m[k] of contours in each similar sub-contour set, where k is the index of the similar sub-contour set;
judging whether at least one m[k] is larger than a preset count threshold; if so, confirming that the working position of the robot that captured the original image is a non-working area; if not, confirming that it is a working area;
or judging whether the maximum value among the m[k] is larger than the preset count threshold; if so, confirming that the working position of the robot that captured the original image is a non-working area; if not, confirming that it is a working area.
Counting sets of periodically repeating, mutually similar contours in this way helps determine the working position.
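The greedy grouping of contours into similar sub-contour sets can be sketched as follows. The function names are ours, and `dist` stands in for a shape-distance function such as the Hu-moment comparison performed by OpenCV's `cv2.matchShapes`; any symmetric distance works for illustration.

```python
def group_by_similarity(contours, dist, moment_threshold):
    """Greedily split contours into 'similar sub-contour sets'.

    Each not-yet-assigned contour seeds a new set and absorbs every later
    unassigned contour whose distance to it (e.g. a Hu-moment distance) is
    below `moment_threshold`.  Contours already placed in a set are never
    traversed again, matching the claim's traversal rule.
    """
    groups, assigned = [], [False] * len(contours)
    for i, base in enumerate(contours):
        if assigned[i]:
            continue
        group = [i]
        assigned[i] = True
        for j in range(i + 1, len(contours)):
            if not assigned[j] and dist(base, contours[j]) < moment_threshold:
                group.append(j)
                assigned[j] = True
        groups.append(group)
    return groups


def is_non_working(groups, count_threshold):
    """Non-working area iff the largest similar set exceeds the threshold."""
    return max(len(g) for g in groups) > count_threshold
```

Intuitively: a lawn produces many small, mutually dissimilar texture contours, while a periodic man-made surface produces one large set of near-identical contours, so a large m[k] signals a non-working area.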
To achieve the above object, according to another embodiment of the present invention, there is provided a system for identifying a working position based on an image, the system including:
the image acquisition module is used for acquiring an original image;
the image conversion module is used for extracting features of the original image to form a texture feature image, and converting the texture feature image into a binarized image through threshold matching;
the analysis module is used for carrying out contour detection on the binarized image to obtain each contour in the binarized image; and confirming the working position of the robot for shooting the original image according to the acquired outlines, wherein the working position is a working area or a non-working area.
As a further improvement of an embodiment of the present invention, the image conversion module is further configured to:
converting the original image into a gray scale image;
smoothing and filtering the gray-scale image to form a denoised image;
performing feature extraction on the denoised image using an LBP algorithm to form a texture feature image;
converting the texture feature image into a binarized image through threshold matching.
As a further improvement of an embodiment of the present invention, the parsing module is further configured to:
performing contour detection on the binarized image to obtain each contour in the binarized image; traversing each contour and determining whether one of the size parameters of the current contour is larger than the corresponding preset size-parameter threshold; if so, storing the current contour in a contour set, and if not, discarding the current contour;
or traversing each contour and determining whether all size parameters of the current contour are larger than their corresponding preset size-parameter thresholds; if so, storing the current contour in the contour set, and if not, discarding the current contour;
the size parameters include at least one of: the perimeter of the contour, the area of the contour, the number of pixel points contained in the contour, and the number of pixel points contained on the contour line of the contour.
As a further improvement of an embodiment of the present invention, the parsing module is further configured to:
traversing the contours in turn, in their order in the contour set, taking each contour as a basis; comparing, via MatchShapes, the Hu-moment distance between the basis contour and the currently traversed contour; if the Hu-moment distance is smaller than a preset moment threshold, placing the basis contour and all contours whose Hu-moment distance to it is smaller than the preset moment threshold into the same similar sub-contour set; if no Hu-moment distance obtained with the basis contour is smaller than the preset moment threshold, placing the basis contour into its own similar sub-contour set; during the traversal, contours already assigned to a similar sub-contour set are not traversed again;
obtaining the number m[k] of contours in each similar sub-contour set, where k is the index of the similar sub-contour set; judging whether at least one m[k] is larger than a preset count threshold; if so, confirming that the working position of the robot that captured the original image is a non-working area; if not, confirming that it is a working area;
or judging whether the maximum value among the m[k] is larger than the preset count threshold; if so, confirming that the working position of the robot that captured the original image is a non-working area; if not, confirming that it is a working area.
In order to achieve one of the above objects, an embodiment of the present invention provides a robot including a memory storing a computer program and a processor implementing the steps of the method of recognizing a working position based on an image as described above when the processor executes the computer program.
In order to achieve one of the above objects, an embodiment of the present invention provides a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of identifying a working position based on an image as described above.
Compared with the prior art, the method, the system, the robot and the storage medium for identifying the working position based on the image can distinguish the working area from the non-working area through the image shot by the camera on the robot.
Drawings
FIG. 1 is a schematic structural view of a specific example of a robot lawnmower system of the present invention;
FIG. 2 is a flow chart of a method for identifying a working position based on an image according to a first embodiment of the present invention;
FIGS. 3 and 4 are flow diagrams illustrating a specific implementation procedure of one of the steps in FIG. 2;
FIGS. 5A, 5B, 5C, 5D, 5E, 5F, 5G are schematic illustrations of various examples of the present invention, respectively;
FIG. 6 is a flowchart of a method for identifying a working position based on an image according to a second embodiment of the present invention;
FIGS. 7 and 8 are flow diagrams illustrating a specific implementation procedure of one of the steps in FIG. 6;
FIG. 9 is a flowchart of a method for identifying a working position based on an image according to a third embodiment of the present invention;
FIGS. 10, 11 and 12 are flow diagrams illustrating the implementation of one of the steps in FIG. 9;
fig. 13 is a schematic block diagram of a system for identifying a working position based on an image according to the present invention.
Detailed Description
The present invention will be described in detail below with reference to the embodiments shown in the drawings. These embodiments are not intended to limit the invention and structural, methodological, or functional modifications of these embodiments that may be made by one of ordinary skill in the art are included within the scope of the invention.
The robot system of the invention can be a mowing robot system, a sweeping robot system, a snowplow system, a leaf suction machine system, a golf course ball picking machine system and the like, and each system can automatically walk in a working area and perform corresponding work.
As shown in fig. 1, the robot lawnmower system of the present invention includes: mowing Robot (RM).
The mowing robot includes: a body 10, and a walking unit, an image acquisition unit and a control unit arranged on the body 10. The walking unit includes: a driving wheel 111, a driven wheel 113, and a motor for driving the driving wheel 111. The motor may be a brushless motor with a reduction gearbox and a Hall sensor. After the motor starts, it drives the driving wheel 111 through the reduction gearbox, and by controlling the speed and direction of the two wheels, travel actions such as forward and backward straight-line running, in-place turning and arc running can be realized. The driven wheel 113 may be a universal wheel; typically 1 or 2 are provided, mainly serving to support and balance the body.
The image acquisition unit is used for capturing the scene within its viewing-angle range. In this specific embodiment of the invention, the camera 12 is arranged at the upper part of the body 10 at a certain angle to the horizontal, so that it can capture the scene within a certain range around the mowing robot; the camera 12 typically captures the scene in front of the mowing robot.
The control unit is a main controller 13 that performs image processing, for example: MCU or DSP, etc.
Further, the mowing robot also includes: a working mechanism for performing the work, and a power supply 14. In this embodiment, the working mechanism is a grass-cutting blade head. The robot also carries various sensors for sensing its walking state, such as tip-over, lift-off and collision sensors, a geomagnetic sensor, a gyroscope, etc., which are not described in detail herein.
As shown in fig. 2, a first embodiment of the present invention provides a method for identifying a working position based on an image, the method including the following steps:
a1, acquiring an original image;
a2, converting the original image into a binarized image, wherein the binarized image comprises zero-value pixel points and non-zero-value pixel points;
a3, calculating, for each non-zero pixel point P(X,Y) in the binarized image, the distance L_P(X,Y) to its nearest zero-valued pixel point, where (X, Y) are the coordinates of the non-zero pixel point; counting the number m of pixels whose distance L_P(X,Y) is greater than a preset distance threshold; and confirming the working position of the robot that captured the original image according to the number m, the working position being either a working area or a non-working area.
In the specific embodiment of the invention, for the step A1, a scene in front of a mowing robot is photographed in real time by a camera arranged on the mowing robot to form an original image; the scene is a ground image in the advancing direction of the robot; further, after the main controller receives the original image, the original image is analyzed; the working position of the robot for photographing the original image, which is acquired by the robot, can thus be determined from the original image, as will be described in detail below. In this particular example, the original image is typically a color image in RGB format; it should be noted that, at different moments, the original images obtained by the camera are the same in size and rectangular.
For step A2, in implementations of the present invention there are various ways to convert the original RGB image into a binarized image. A binarized image is an image whose pixels take only two gray values, 0 or 255; binarization therefore renders the whole original image with an unambiguous black-and-white effect.
In a preferred embodiment of the present invention, referring to fig. 3, step A2 specifically includes: A21, converting the original image into an HSV image in the HSV color space, and converting the HSV image into a first binarized image through threshold matching; separating the channels of the original image, extracting its R-channel image, and smoothing and filtering the R-channel image to form a denoised image; converting the denoised image into a second binarized image through threshold matching; extracting the edges of the denoised image with an edge detection algorithm to form an edge image, and inverting the edge image to form a third binarized image; A22, performing an AND operation on the first binarized image, the second binarized image and the third binarized image to form the final binarized image.
It should be noted that, in forming the first binarized image, the purpose of converting the original image into an HSV image is to facilitate color detection; correspondingly, this step may also convert the RGB color space into another color space suited to color detection, such as HSI or LAB, and then convert the resulting image into the first binarized image through threshold matching.
Threshold matching transforms each pixel according to a preset threshold range: when the pixel value lies within the range, the pixel is set to 0 or 255, and when it lies outside the range it is set to 255 or 0 correspondingly; the threshold range can be adjusted as needed. For example, if the H, S and V values of a pixel in the HSV image all lie within their corresponding preset ranges, the gray value of that pixel is set to one of 0 and 255, and pixels whose H, S or V value lies outside the preset range are set to the other of 0 and 255. In this specific example, when setting the threshold range for the first binarized image, it is preferable to set pixels of the lawn color (i.e. the working area) to 255, i.e. white.
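The per-channel threshold matching just described can be sketched in plain Python without OpenCV. `threshold_match` and `binarize` are illustrative names of ours, and the HSV range used in the test is only an assumed green-ish range, not a value taken from the patent.

```python
def threshold_match(pixel, lo, hi, inside=255):
    """Map a multi-channel pixel to 255 or 0 by a per-channel range test.

    `pixel`, `lo`, `hi` are channel tuples (e.g. H, S, V); the pixel becomes
    `inside` (255 by default) when every channel lies inside [lo, hi], and
    the opposite value (0) otherwise.
    """
    hit = all(l <= p <= h for p, l, h in zip(pixel, lo, hi))
    return inside if hit else 255 - inside


def binarize(image, lo, hi, inside=255):
    """Apply threshold matching to every pixel of an HSV-like image."""
    return [[threshold_match(px, lo, hi, inside) for px in row]
            for row in image]
```

With OpenCV one would typically use `cv2.inRange` for the same per-channel range test.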
The smoothing and filtering in forming the second binarized image aims to remove noise from the image; examples include median filtering, mean filtering and Gaussian filtering.
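Of the smoothing filters listed, the median filter is the simplest to sketch in plain Python; this is an illustrative 3x3 version (function name ours) that leaves border pixels unchanged.

```python
def median_filter3(img):
    """3x3 median filter for a grey image (list of rows); borders unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # middle of the 9 sorted values
    return out
```

A single impulse-noise pixel is replaced by the median of its neighbourhood, which is exactly the denoising effect the description relies on.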
The threshold matching used to form the second binarized image can use a fixed threshold, or the threshold can be chosen by the Otsu method.
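As an illustration of the Otsu method mentioned above, here is a minimal pure-Python sketch that picks the threshold maximising between-class variance over a 256-bin grey-level histogram; the function name is ours, and a real implementation would usually call `cv2.threshold` with the `THRESH_OTSU` flag instead.

```python
def otsu_threshold(hist):
    """Otsu's method: choose the grey level maximising between-class
    variance.  `hist` is a 256-bin histogram of grey values."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(256):
        w0 += hist[t]              # weight of the "background" class
        if w0 == 0:
            continue
        w1 = total - w0            # weight of the "foreground" class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a cleanly bimodal histogram the returned threshold separates the two modes, which is what makes the method attractive when no fixed threshold is known in advance.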
Preferably, forming the second binarized image in step A2 further includes: performing dilation and erosion on the second binarized image to form a new second binarized image. Dilation and erosion are morphological operations whose purpose is to remove noise from the image. Correspondingly, step A22 then specifically includes: performing an AND operation on the first binarized image, the new second binarized image and the third binarized image to form the final binarized image.
The edge detection method used to form the third binarized image is the Canny algorithm.
In the preferred embodiment of the present invention, step A22 performs an AND operation on the first binarized image, the new second binarized image and the third binarized image to form the final binarized image. Since these three images are the same size and every pixel value is either 0 or 255, the AND operation works as follows: if the pixels at the same position in all 3 images have the value 255, the pixel at the corresponding position of the resulting binarized image is set to 255; in every other case it is set to 0.
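The AND operation of step A22 reduces to a pixel-wise all-255 test over equal-sized masks. A minimal sketch with plain nested lists (function name ours; OpenCV's `cv2.bitwise_and` does the same on arrays):

```python
def and_masks(*masks):
    """Pixel-wise AND of equal-sized binary masks: the result is 255 only
    where every input mask is 255, and 0 everywhere else."""
    h, w = len(masks[0]), len(masks[0][0])
    return [[255 if all(m[y][x] == 255 for m in masks) else 0
             for x in range(w)]
            for y in range(h)]
```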
For step A3, referring to fig. 4, in a preferred embodiment of the present invention, step A3 specifically includes: calculating the distance between each non-zero pixel point in the binarized image and its nearest zero-valued pixel point, and counting the total number m of non-zero pixels whose distance to the nearest zero-valued pixel exceeds a preset distance threshold; judging whether m is larger than a preset count threshold; if so, confirming that the working position of the robot that captured the original image is a non-working area; if not, confirming that it is a working area.
In step A3, the preset distance threshold is a value preset by the system and representing the distance between two pixel points on the image, and the size of the value can be specifically adjusted according to the requirement; for example: the value is set to be 5; when the distance difference between any non-zero value pixel point and the nearest zero value pixel point is larger than 5, the value of the total number m of the non-zero pixel points is added by 1.
The preset count threshold is a fixed number whose size can be adjusted as needed; for example, when m is greater than 400, the working position of the robot that captured the original image is confirmed to be a non-working area.
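The counting rule of step A3 can be sketched with a brute-force nearest-zero search. This is an illustrative O(n^2) sketch (function name ours); a real implementation would use a fast distance transform such as OpenCV's `cv2.distanceTransform`.

```python
def count_far_pixels(binary, dist_threshold):
    """Count non-zero pixels whose Euclidean distance to the nearest
    zero-valued pixel exceeds `dist_threshold`.  Brute force: for each
    non-zero pixel, scan all zero pixels and take the minimum distance."""
    zeros = [(y, x) for y, row in enumerate(binary)
             for x, v in enumerate(row) if v == 0]
    m = 0
    for y, row in enumerate(binary):
        for x, v in enumerate(row):
            if v and zeros:
                d = min(((y - zy) ** 2 + (x - zx) ** 2) ** 0.5
                        for zy, zx in zeros)
                if d > dist_threshold:
                    m += 1
    return m
```

The final decision is then just `m > count_threshold` (e.g. 400 in the example above): large connected white regions, far from any black pixel, indicate a non-working area.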
In the preferred embodiment of the present invention, for step A3, a distance-transform image can be formed from the binarized image according to the distance value matched to each non-zero pixel; in the distance-transform image, the larger the distance, the higher the brightness. By observing the distance-transform image, the distribution of the distances matched to the non-zero pixels can therefore be seen.
For ease of understanding, the present invention describes 2 specific examples for the image recognition working position-based method provided in the first embodiment, for reference: embodiment one and embodiment two.
Referring to fig. 5A, in the first embodiment, the original image, the binarized image, and the distance transformed image are sequentially shown from left to right; the original image is a color image in RGB format; the binarized image is a black-and-white binary image; the distance conversion image only displays the conversion result of the lower half part of the binarized image; the preset distance threshold is 5, and the preset number threshold is 400; by calculation, m=962 > 400, and thus, it can be confirmed that the working position of the robot for photographing the original image is the non-working area. In this example, the higher the luminance of the non-zero pixel point, the greater the distance difference representing its closest zero value pixel point, as seen by observation: in this example, the bright spot area is large and has a certain continuity.
Referring to fig. 5B, in the second embodiment, the original image, the binarized image, and the distance transformed image are sequentially shown from left to right; the original image is a color image in RGB format; the binarized image is a black-and-white binary image; the distance conversion image only displays the conversion result of the lower half part of the binarized image; the preset distance threshold is 5, and the preset number threshold is 400; through calculation, m=0 < 400, so that the working position of the robot for shooting the original image can be confirmed as a working area. In this example, the higher the luminance of the non-zero pixel point, the greater the distance difference representing its closest zero value pixel point, as seen by observation: in this example, the bright spots are discrete and relatively few.
As shown in fig. 6, a method for identifying a working position based on an image according to a second embodiment of the present invention includes: b1, acquiring an original image;
b2, extracting features of the original image to form a texture feature image, and converting the texture feature image into a binarization image through threshold matching;
b3, carrying out contour detection on the binarized image to obtain each contour in the binarized image;
and B4, confirming the working position of the robot for shooting the original image according to the acquired outlines, wherein the working position is a working area or a non-working area.
In the specific embodiment of the invention, for the step B1, a scene in front of a mowing robot is photographed in real time by a camera arranged on the mowing robot to form an original image; the scene is a ground image in the advancing direction of the robot; further, after the main controller receives the original image, the original image is analyzed; the working position of the robot for photographing the original image, which is acquired by the robot, can thus be determined from the original image, as will be described in detail below. In this particular example, the original image is typically a color image in RGB format; it should be noted that, at different moments, the original images obtained by the camera are the same in size and rectangular.
In a preferred embodiment of the present invention, referring to fig. 7, step B2 specifically includes: b21, converting the original image into a gray image; b22, performing smoothing filter treatment on the gray level image to form a denoising image; b23, performing feature extraction on the denoising image by adopting an LBP algorithm to form a texture feature image; b24, converting the texture feature image into a binarized image through threshold matching.
It should be noted that, in step B21, converting the RGB original image into a gray image facilitates texture detection. In step B22, the smoothing and filtering aims to remove noise from the image, e.g. by median filtering, mean filtering or Gaussian filtering. In step B23, the feature operator is not limited to the basic LBP operator and also includes improved LBP operators such as CLBP and ELBP. In step B24, threshold matching transforms each pixel according to a preset threshold range: when the pixel value lies within the range, the pixel is set to 0 or 255, and when it lies outside the range it is set to 255 or 0 correspondingly; the threshold range can be adjusted as needed. In this specific example, if the value of any pixel in the texture feature image lies within the corresponding preset threshold range, the gray value of that pixel is set to one of 0 and 255, and pixels outside the preset range are set to the other of 0 and 255.
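The basic LBP operator used in step B23 can be sketched in a few lines of plain Python. This is only an illustrative sketch of the classic 8-neighbour LBP (function name ours); the patent also allows improved operators such as CLBP or ELBP, and libraries like scikit-image provide `local_binary_pattern` for real use.

```python
def lbp_image(img):
    """Basic 8-neighbour LBP: each neighbour >= centre sets one bit of an
    8-bit code; border pixels are skipped for simplicity."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    h, w = len(img), len(img[0])
    out = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            c, code = img[y][x], 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= c:
                    code |= 1 << bit
            row.append(code)
        out.append(row)
    return out
```

A uniform patch maps to code 255 (all neighbours equal the centre) while a local maximum maps to 0, which is what gives grass its characteristic spread of LBP codes compared with smooth non-working surfaces.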
Further, in a preferred embodiment of the present invention, step B3 specifically includes: B31, performing contour detection on the binarized image to obtain each contour in the binarized image; B32, traversing each contour and calculating whether one of the size parameters of the current contour is larger than its preset size parameter threshold: if so, the current contour is stored in a contour set; if not, it is discarded; or traversing each contour and calculating whether all size parameters of the current contour are larger than their corresponding preset size parameter thresholds: if so, the current contour is stored in the contour set; if not, it is discarded. The size parameters include at least one of: the perimeter of the contour, the area of the contour, the number of pixels contained in the contour, and the number of pixels contained on the contour line of the contour.
Taking the case where the size parameters include only the number of pixels on the contour line of the contour, the corresponding size parameter threshold represents a pixel count; for example, with a size parameter threshold of 30, any contour whose contour line contains more than 30 pixels is stored in the contour set, while any contour whose contour line contains no more than 30 pixels is discarded, i.e., it is not counted in the subsequent calculation.
It can be understood that when the profile parameters include a plurality of profile parameters, a size parameter threshold is set corresponding to each profile parameter; in addition, when calculating the contour set, the calculation result of one contour parameter can be counted, and the calculation result of a plurality of contour parameters can be combined for counting; for example: the number of the set contour parameters is three, correspondingly, the current contour can be stored in the contour set when the result of any contour parameter is larger than the preset size threshold value, the current contour can be stored in the contour set when the result of both contour parameters is larger than the preset size threshold value, the current contour can be stored in the contour set when the result of all contour parameters is larger than the preset size threshold value,
Preferably, as shown in fig. 8, step B4 specifically includes: B41, according to the arrangement order of the contours in the contour set, taking each contour in turn as a reference and traversing the remaining contours, comparing the Hu moment distance between the reference contour and the contour currently traversed using matchShapes; if the obtained Hu moment distance is smaller than a preset moment threshold, the reference contour and all contours whose Hu moment distance to it is smaller than the preset moment threshold are placed in the same similar sub-contour set; if no Hu moment distance obtained with the reference contour is smaller than the preset moment threshold, the reference contour is placed in a similar sub-contour set of its own; during the traversal, contours that have already been assigned to a similar sub-contour set are not traversed again. B42, obtaining the number m[k] of contours in each similar sub-contour set, where k is the index of the similar sub-contour set; judging whether at least one m[k] is larger than a preset quantity comparison threshold: if so, the working position at which the robot captured the original image is confirmed to be a non-working area; if not, it is confirmed to be a working area. Alternatively, judging whether the maximum value of m[k] is larger than the preset quantity comparison threshold: if so, the working position at which the robot captured the original image is confirmed to be a non-working area; if not, it is confirmed to be a working area.
In step B41, matchShapes is a function provided by OpenCV that compares two shapes by computing the distance between their Hu invariant moments; the return value represents the similarity, with identical shapes returning 0 and larger return values indicating lower similarity; the function itself is prior art and is not described further here. Correspondingly, the preset moment threshold is a Hu moment distance preset by the system for comparing the similarity of two contours, and its value can be adjusted as needed; if the Hu moment distance between two contours is smaller than the preset moment threshold, the two contours are considered similar and are placed in the same similar sub-contour set.
Further, the preset quantity comparison threshold is a contour count preset on the system for any similar sub-contour set, and its value can be adjusted as needed; for example, it may be set to 10: if the total number of contours in one similar sub-contour set is greater than 10, this indirectly indicates that the original image contains periodic texture, and the working position at which the robot captured the original image is judged to be a non-working area; otherwise, it is judged to be a working area.
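The grouping and counting of steps B41 and B42 can be sketched as below; `hu_distance` is a pluggable stand-in for OpenCV's `cv2.matchShapes` Hu-moment comparison, and the greedy first-fit strategy is one plausible reading of the traversal described above:

```python
def group_similar(contours, hu_distance, moment_threshold):
    """Step B41: each contour joins the first similar sub-contour set whose
    reference (first member) lies within `moment_threshold` under
    `hu_distance`; otherwise it starts a new set of its own."""
    sets = []
    for c in contours:
        for s in sets:
            if hu_distance(s[0], c) < moment_threshold:
                s.append(c)
                break
        else:  # no similar set found: new independent similar sub-contour set
            sets.append([c])
    return sets

def is_non_working(similar_sets, quantity_threshold=10):
    """Step B42: periodic texture (non-working area) when the largest
    similar sub-contour set exceeds the quantity comparison threshold."""
    return max(len(s) for s in similar_sets) > quantity_threshold
```

With real contours, `hu_distance` would be `lambda a, b: cv2.matchShapes(a, b, cv2.CONTOURS_MATCH_I1, 0)`; any metric with the same "smaller is more similar" convention works for the sketch.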
For ease of understanding, the present invention provides 2 specific examples of the method for identifying a working position based on an image according to the second embodiment, for reference: embodiment three and embodiment four.
Referring to fig. 5C, in embodiment three, from left to right are the original image, the texture feature image, and the binarized image; the original image is a color image in RGB format; the binarized image is a black-and-white binary image; the preset quantity comparison threshold is set to 10. The calculation shows that the maximum number of similar contours in any similar sub-contour set of the binarized image is 7, which is smaller than the preset quantity comparison threshold; it is therefore judged that the current image contains no periodic texture, and the working position at which the robot captured the original image is determined to be a working area.
Referring to fig. 5D, in embodiment four, from left to right are the original image, the texture feature image, and the binarized image; the original image is a color image in RGB format; the binarized image is a black-and-white binary image; the preset quantity comparison threshold is set to 10. The calculation shows that the maximum number of similar contours in the similar sub-contour sets of the binarized image is 13, which is larger than the preset quantity comparison threshold; it is therefore judged that the current image contains periodic texture, and the working position at which the robot captured the original image is determined to be a non-working area.
Referring to fig. 9, a method for identifying a working position based on an image according to a third embodiment of the present invention includes: c1, acquiring an original image;
C2, extracting features of the original image to form a texture feature image, performing a Fourier transform on the texture feature image to form a spectrum image, and converting the spectrum image into a binarized image through threshold matching;
and C3, confirming the working position of the robot for shooting the original image according to the positions of the pixel points in the binarized image and the pixel values, and/or confirming the working position of the robot for shooting the original image according to the positions of the pixel points in the binarized image and the total number of the pixel points with the same pixel values, wherein the working position is a working area or a non-working area.
In a specific embodiment of the invention, for step C1, a camera mounted on the mowing robot photographs the scene in front of the robot in real time to form an original image; the scene is the ground in the robot's direction of travel. After the main controller receives the original image, it analyzes the original image; the working position at which the robot captured the original image can thus be determined from the original image, as described in detail below. In this specific example, the original image is typically a color image in RGB format; it should be noted that the original images obtained by the camera at different moments are rectangular and identical in size.
In a preferred embodiment of the present invention, referring to fig. 10, step C2 specifically includes: C21, converting the original image into a gray image; C22, performing smoothing filtering on the gray image to form a denoised image; C23, performing feature extraction on the denoised image by using the LBP algorithm to form a texture feature image; C24, performing a Fourier transform on the texture feature image to form a spectrum image; and C25, converting the spectrum image into a binarized image through threshold matching.
It should be noted that, in step C21, converting the original image in RGB format into a gray image facilitates texture detection; for C22, the purpose of the smoothing filtering is to remove noise from the image, for example by median filtering, mean filtering, or Gaussian filtering; for step C23, the feature operator in the LBP algorithm is not limited to the basic LBP and also includes improved LBP operators such as CLBP and ELBP; for step C24, the origin of the spectrum image corresponds to the average gray level of the whole original image, where the frequency is 0, and the frequency increases outward from the origin; in the spectrum image, texture is related to the high-frequency components of the image spectrum, so in the following steps the pixels in the binarized image whose distance from the origin is larger than a preset limit threshold are counted, which more accurately reflects whether texture features are present. For step C25, in threshold matching, the threshold may be determined according to the ratio of the number of pixels whose h-channel value lies within the chromaticity range to the total number of pixels in the image.
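Step C24 can be sketched with numpy's FFT; `fftshift` moves the zero-frequency term (the average gray level) to the image centre, so frequency grows with distance from the centre, matching the description above. The function name is illustrative:

```python
import numpy as np

def spectrum_image(texture):
    """Centred log-magnitude spectrum of a texture feature image (step C24).
    After fftshift, the centre pixel carries the zero-frequency term, i.e.
    the average grey level of the image, and higher frequencies lie
    progressively farther from the centre."""
    f = np.fft.fftshift(np.fft.fft2(texture.astype(float)))
    return np.log1p(np.abs(f))  # log scale keeps the dynamic range displayable
```

For a perfectly uniform (texture-free) patch, all spectral energy sits at the centre and the pixels beyond the limit threshold stay near zero, which is exactly the case the counting rules below classify as a working area.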
Further, referring to fig. 11, in one embodiment of the present invention, in step C3, determining, according to the position and the pixel value of the pixel point in the binarized image, the working position of the robot for capturing the original image specifically includes: c31, traversing the whole binary image by taking the center of the binary image as an origin, and acquiring pixel points with the distance from the origin being larger than a preset limit threshold; and C32, determining whether the sum of pixel values of the pixel points with the distance from the original point larger than a preset limiting threshold value is larger than a preset pixel value threshold value, if so, determining that the working position of the robot for shooting the original image is a non-working area, and if not, determining that the working position of the robot for shooting the original image is a working area.
Further, referring to fig. 12, in another embodiment of the present invention, in step C3, determining the working position of the robot to capture the original image according to the positions of the pixels in the binarized image and the total number of pixels having the same pixel value specifically includes: c31', taking the center of the binary image as an origin, traversing the whole binary image to obtain a non-zero value pixel point with the distance from the origin being greater than a preset limit threshold; and C32', counting whether the total number of all the obtained non-zero value pixel points is larger than a preset total number threshold, if so, confirming that the working position of the robot for shooting the original image is a non-working area, and if not, confirming that the working position of the robot for shooting the original image is a working area.
The preset limit threshold is a fixed value preset on the system against which the distance between any pixel and the origin is compared, and its value can be adjusted as needed; for example, it may be set to 9: in C31, when the distance between any pixel and the origin is greater than 9, that pixel is counted in the next step; in C31', when the distance between any non-zero pixel and the origin is greater than 9, that non-zero pixel is counted in the next step.
The preset pixel value threshold is a fixed value preset on the system against which the sum of the pixel values of the counted pixels is compared, and its value can be adjusted as needed; for example, it may be set to 510: if the computed sum of pixel values is greater than 510, this indirectly indicates that the original image contains periodic texture, and the working position at which the robot captured the original image is judged to be a non-working area; otherwise, it is judged to be a working area.
The preset total number threshold is a fixed value preset on the system against which the number of non-zero pixels is compared, and its value can be adjusted as needed; for example, it may be set to 2: if the computed total number of non-zero pixels is greater than 2, this indirectly indicates that the original image contains periodic texture, and the working position at which the robot captured the original image is judged to be a non-working area; otherwise, it is judged to be a working area.
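The two decision rules (C31/C32 and C31'/C32') with the example thresholds above can be sketched as follows; the function name is illustrative, and the origin is taken at the image centre as described:

```python
import numpy as np

def decide(binary, limit=9, pixel_sum_threshold=510, total_threshold=2):
    """Apply both decision rules to a binarized spectrum image.
    Returns (by_sum, by_count); True means periodic texture was detected,
    i.e. the capture position is a non-working area."""
    h, w = binary.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # pixels whose distance from the centre exceeds the limit threshold
    far = binary[np.hypot(yy - h // 2, xx - w // 2) > limit]
    by_sum = int(far.sum()) > pixel_sum_threshold        # rule C32
    by_count = int(np.count_nonzero(far)) > total_threshold  # rule C32'
    return by_sum, by_count
```

With the example values (limit 9, thresholds 510 and 2), three 255-valued pixels far from the centre give a sum of 765 > 510 and a count of 3 > 2, so both rules report a non-working area, while an all-zero far region reports a working area under both rules.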
For ease of understanding, the present invention provides 3 specific examples of the method for identifying a working position based on an image according to the third embodiment based on fig. 11, for reference: embodiment five, embodiment six, and embodiment seven.
Referring to fig. 5E, in embodiment five, from left to right are the original image, the gray image, the texture feature image, the spectrum image, and the binarized image; the original image is a color image in RGB format; the binarized image is a black-and-white binary image; the preset limit threshold is set to 9 and the preset pixel value threshold to 510. The calculation gives a sum of 1530 for the pixel values of the pixels whose distance from the origin is greater than 9; since 1530 > 510, it is judged that the current image contains periodic texture, and the working position at which the robot captured the original image is determined to be a non-working area.
Referring to fig. 5F, in embodiment six, from left to right are the original image, the gray image, the texture feature image, the spectrum image, and the binarized image; the original image is a color image in RGB format; the binarized image is a black-and-white binary image; the preset limit threshold is set to 9 and the preset pixel value threshold to 510. The calculation gives a sum of 0 for the pixel values of the pixels whose distance from the origin is greater than 9; since 0 < 510, it is judged that the current image contains no periodic texture, and the working position at which the robot captured the original image is determined to be a working area.
Referring to fig. 5G, in embodiment seven, from left to right are the original image, the gray image, the texture feature image, the spectrum image, and the binarized image; the original image is a color image in RGB format; the binarized image is a black-and-white binary image; the preset limit threshold is set to 9 and the preset pixel value threshold to 510. The calculation gives a sum of 1020 for the pixel values of the pixels whose distance from the origin is greater than 9; since 1020 > 510, it is judged that the current image contains periodic texture, and the working position at which the robot captured the original image is determined to be a non-working area.
It should be noted that, in the determination processes of embodiments five, six, and seven, the determination may also be performed using the method for identifying a working position based on an image according to the third embodiment provided with reference to fig. 12; in a specific embodiment, the preset limit threshold is likewise set to 9 and the preset total number threshold to 2. The determination process is similar to that of embodiments five, six, and seven, and the determination result is the same, so it is not described further here.
In an embodiment of the present invention, a robot is further provided, including a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the method for identifying a working position based on an image according to any one of the above embodiments when executing the computer program.
In an embodiment of the present invention, there is also provided a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method for identifying a working position based on an image according to any of the above embodiments.
Referring to fig. 13, there is provided a system for identifying a working position based on an image, the system comprising: an image acquisition module 100, an image conversion module 200, and a parsing module 300.
For the system based on image recognition working position according to the first embodiment of the present invention, the image acquisition module 100 is configured to acquire an original image; the image conversion module 200 is configured to convert an original image into a binary image, where the binary image includes zero-value pixel points and non-zero-value pixel points; the analysis module 300 confirms the working position of the robot for shooting the original image according to the distance difference between each non-zero value pixel point and the nearest zero value pixel point in the binarized image, wherein the working position is a working area or a non-working area.
Further, the image acquisition module 100 in the system based on image recognition working position of the first embodiment is configured to implement step A1; the image conversion module 200 is used for implementing step A2; the parsing module 300 is used for implementing the step A3; it will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding process in the foregoing method embodiment for the specific working process of the above-described system, which is not repeated here.
For the system based on image recognition working position according to the second embodiment of the present invention, the image acquisition module 100 is configured to acquire an original image; the image conversion module 200 is used for extracting features of an original image to form a texture feature image, and converting the texture feature image into a binary image through threshold matching; the analysis module 300 is configured to perform contour detection on the binarized image to obtain each contour in the binarized image; and confirming the working position of the robot for shooting the original image according to the acquired outlines, wherein the working position is a working area or a non-working area.
Further, the image acquisition module 100 in the system based on image recognition working position according to the second embodiment is configured to implement step B1; the image conversion module 200 is used for implementing the step B2; the parsing module 300 is used for implementing the steps B3 and B4; it will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding process in the foregoing method embodiment for the specific working process of the above-described system, which is not repeated here.
For the system based on image recognition working position according to the third embodiment of the present invention, the image acquisition module 100 is configured to acquire an original image; the image conversion module 200 is configured to perform feature extraction on an original image to form a texture feature image, perform fourier transformation on the texture image to form a spectrum image, and convert the spectrum image into a binary image through threshold matching; the parsing module 300 is configured to confirm a working position of the robot for capturing an original image according to positions and pixel values of pixels in the binarized image, and/or confirm a working position of the robot for capturing the original image according to positions of pixels in the binarized image and a total number of pixels having the same pixel value, where the working position is a working area or a non-working area.
Further, the image acquisition module 100 in the system based on image recognition working position according to the third embodiment is configured to implement step C1; the image conversion module 200 is used for implementing step C2; the parsing module 300 is used for implementing step C3; it will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding process in the foregoing method embodiment for the specific working process of the above-described system, which is not repeated here.
In summary, according to the method, the system, the robot and the storage medium for identifying the working position based on the image, the working area and the non-working area can be distinguished through the image shot by the camera on the robot.
In the several embodiments provided in this application, it should be understood that the disclosed modules, systems, and methods may be implemented in other manners. The system embodiments described above are merely illustrative, and the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed.
The modules described as separate components may or may not be physically separate, and components displayed as modules may or may not be physical modules, that is, may be located in one place, or may be distributed on a plurality of network modules, and some or all of the modules may be selected according to actual needs to achieve the purpose of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in hardware plus software functional modules.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and are not limiting thereof; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (8)

1. A method for identifying a working position based on an image is characterized in that,
the method comprises the following steps:
acquiring an original image;
extracting features of the original image to form a texture feature image, and converting the texture feature image into a binarization image through threshold matching;
performing contour detection on the binarized image to obtain each contour in the binarized image;
confirming working positions of the robot for shooting original images according to the acquired outlines, wherein the working positions are working areas or non-working areas;
the step of performing contour detection on the binarized image to obtain each contour in the binarized image includes:
performing contour detection on the binarized image to obtain each contour in the binarized image;
traversing each contour, calculating whether one of the size parameters of the current contour is larger than a preset size parameter threshold, if so, storing the current contour in a contour set, and if not, discarding the current contour;
or traversing each contour, calculating whether all the size parameters of the current contour are larger than the corresponding preset size parameter threshold value, if so, storing the current contour in a contour set, and if not, discarding the current contour;
the dimensional parameters include: at least one of perimeter of the contour, area of the contour, number of pixels contained in the contour, and number of pixels contained in a contour line of the contour.
2. The method for identifying a working location based on an image according to claim 1, wherein,
the feature extraction of the original image to form a texture feature image, and the conversion of the texture feature image into a binary image through the matching of a threshold value comprises the following steps:
converting the original image into a gray scale image;
smoothing and filtering the gray level image to form a denoising image;
performing feature extraction on the denoising image by adopting an LBP algorithm to form a texture feature image;
the texture feature image is converted into a binarized image by matching of the threshold values.
3. The method for identifying a working location based on an image according to claim 1, wherein,
and confirming the working position of the robot for shooting the original image according to the acquired profiles, wherein the working position is a working area or a non-working area, and the method comprises the following steps:
traversing each contour in turn by taking any contour as a reference according to the arrangement order of the contours in the contour set, comparing the Hu moment distance between the reference contour and the contour currently traversed using matchShapes; if the obtained Hu moment distance is smaller than a preset moment threshold, placing the reference contour and all contours whose Hu moment distance to it is smaller than the preset moment threshold in the same similar sub-contour set; if no Hu moment distance obtained with the reference contour is smaller than the preset moment threshold, placing the reference contour in a similar sub-contour set of its own; during the traversal, contours that have already been assigned to a similar sub-contour set are not traversed again;
Obtaining the number m [ k ] of the contours in each similar sub-contour set, wherein k is the sequence number of the similar sub-contour set;
judging whether at least one m[k] is larger than a preset quantity comparison threshold, if so, confirming that the working position of the robot for shooting the original image is a non-working area; if not, confirming that the working position of the robot for shooting the original image is a working area;
or judging whether the maximum value in m [ k ] is greater than a preset quantity comparison threshold value, if so, confirming that the working position of the robot for shooting the original image is a non-working area; if not, confirming that the working position of the robot for shooting the original image is a working area.
4. A system for identifying a working position based on an image is characterized in that,
the system comprises:
the image acquisition module is used for acquiring an original image;
the image conversion module is used for extracting features of the original image to form a texture feature image, and converting the texture feature image into a binarized image through threshold matching;
the analysis module is used for carrying out contour detection on the binarized image to obtain each contour in the binarized image; confirming working positions of the robot for shooting original images according to the acquired outlines, wherein the working positions are working areas or non-working areas;
Wherein, the parsing module is further configured to:
performing contour detection on the binarized image to obtain each contour in the binarized image;
traversing each contour, calculating whether one of the size parameters of the current contour is larger than a preset size parameter threshold, if so, storing the current contour in a contour set, and if not, discarding the current contour;
or traversing each contour, calculating whether all the size parameters of the current contour are larger than the corresponding preset size parameter threshold value, if so, storing the current contour in a contour set, and if not, discarding the current contour;
the dimensional parameters include: at least one of perimeter of the contour, area of the contour, number of pixels contained in the contour, and number of pixels contained in a contour line of the contour.
5. The system for identifying a working location based on an image of claim 4, wherein,
the image conversion module is further configured to:
converting the original image into a gray scale image;
smoothing and filtering the gray level image to form a denoising image;
performing feature extraction on the denoising image by adopting an LBP algorithm to form a texture feature image;
the texture feature image is converted into a binarized image by matching of the threshold values.
6. The system for identifying a working location based on an image of claim 4, wherein,
the parsing module is further configured to:
traversing each contour in turn by taking any contour as a reference according to the arrangement order of the contours in the contour set, comparing the Hu moment distance between the reference contour and the contour currently traversed using matchShapes; if the obtained Hu moment distance is smaller than a preset moment threshold, placing the reference contour and all contours whose Hu moment distance to it is smaller than the preset moment threshold in the same similar sub-contour set; if no Hu moment distance obtained with the reference contour is smaller than the preset moment threshold, placing the reference contour in a similar sub-contour set of its own; during the traversal, contours that have already been assigned to a similar sub-contour set are not traversed again;
obtaining the number m [ k ] of the contours in each similar sub-contour set, wherein k is the sequence number of the similar sub-contour set; judging whether at least one m k is larger than a preset quantity comparison threshold, if yes, confirming that the working position of the robot for shooting the original image is a non-working area; if not, confirming that the working position of the robot for shooting the original image is a working area;
Or judging whether the maximum value in m [ k ] is greater than a preset quantity comparison threshold value, if so, confirming that the working position of the robot for shooting the original image is a working area; if not, confirming that the working position of the robot for shooting the original image is a working area.
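The traversal, grouping, and counting logic of claim 6 can be sketched as follows. This is an illustrative reading of the claim, not the patented code: `dist` stands in for the Hu invariant-moment distance (in practice `cv2.matchShapes`), and all names (`group_similar`, `is_working_area`, `thresh`, `count_thresh`) are hypothetical:

```python
def group_similar(contours, dist, thresh):
    """Greedy grouping as described in the claim: walk the contours in
    order; each not-yet-assigned contour seeds a similar sub-contour set,
    and every later unassigned contour whose distance to that seed is
    below `thresh` joins the set. Assigned contours are never revisited."""
    group_of = [None] * len(contours)
    groups = []
    for i, base in enumerate(contours):
        if group_of[i] is not None:
            continue  # already divided into a similar sub-contour set
        k = len(groups)
        groups.append([i])
        group_of[i] = k
        for j in range(i + 1, len(contours)):
            if group_of[j] is None and dist(base, contours[j]) < thresh:
                groups[k].append(j)
                group_of[j] = k
    return groups

def is_working_area(groups, count_thresh):
    """Decision rule: many mutually similar contours (a regular, repeated
    texture) indicates a non-working area; otherwise a working area."""
    m = [len(g) for g in groups]
    return max(m) <= count_thresh

# Toy stand-in for Hu-moment distances: contours as scalars, absolute
# difference as the distance.
contours = [1.0, 1.1, 5.0, 5.05, 9.0]
groups = group_similar(contours, lambda a, b: abs(a - b), 0.2)
```

With the toy data above the traversal yields three sets, `[[0, 1], [2, 3], [4]]`, so `m[k]` is `[2, 2, 1]`; whether the position counts as a working area then depends only on the chosen quantity comparison threshold.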
7. A robot comprising a memory and a processor, said memory storing a computer program, characterized in that,
the processor, when executing the computer program, implements the steps of the method of identifying a working position based on an image as claimed in any one of claims 1-3.
8. A readable storage medium having a computer program stored thereon, characterized in that,
the computer program, when executed by a processor, implements the steps of the method of identifying a working position based on an image as claimed in any one of claims 1-3.
CN202010490409.1A 2020-06-02 2020-06-02 Method, system, robot and storage medium for identifying working position based on image Active CN113761970B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010490409.1A CN113761970B (en) 2020-06-02 2020-06-02 Method, system, robot and storage medium for identifying working position based on image
PCT/CN2020/118390 WO2021243895A1 (en) 2020-06-02 2020-09-28 Image-based working position identification method and system, robot, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010490409.1A CN113761970B (en) 2020-06-02 2020-06-02 Method, system, robot and storage medium for identifying working position based on image

Publications (2)

Publication Number Publication Date
CN113761970A CN113761970A (en) 2021-12-07
CN113761970B true CN113761970B (en) 2023-12-26

Family

ID=78782830

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010490409.1A Active CN113761970B (en) 2020-06-02 2020-06-02 Method, system, robot and storage medium for identifying working position based on image

Country Status (2)

Country Link
CN (1) CN113761970B (en)
WO (1) WO2021243895A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4312187A1 (en) * 2022-07-19 2024-01-31 Suzhou Cleva Precision Machinery & Technology Co., Ltd. Image analysis method and apparatus, computer device, and readable storage medium
CN115063055A (en) * 2022-08-17 2022-09-16 山东宇翔车桥制造有限公司 Semitrailer equipment operation performance supervisory systems based on data analysis
CN115797876B (en) * 2023-02-08 2023-04-07 华至云链科技(苏州)有限公司 Equipment monitoring processing method and system
CN116051681B (en) * 2023-03-02 2023-06-09 深圳市光速时代科技有限公司 Processing method and system for generating image data based on intelligent watch
CN116128877B (en) * 2023-04-12 2023-06-30 山东鸿安食品科技有限公司 Intelligent exhaust steam recovery monitoring system based on temperature detection
CN116228849B (en) * 2023-05-08 2023-07-25 深圳市思傲拓科技有限公司 Navigation mapping method for constructing machine external image
CN116687576B (en) * 2023-07-28 2024-01-16 北京万思医疗器械有限公司 Interventional consumable control method for vascular interventional operation robot
CN116935286B (en) * 2023-08-03 2024-01-09 广州城市职业学院 Short video identification system
CN116781824B (en) * 2023-08-24 2023-11-17 半糖去冰科技(北京)有限公司 File scanning method and system of android mobile phone
CN117647295B (en) * 2024-01-30 2024-05-14 合肥金星智控科技股份有限公司 Machine vision-based molten pool liquid level measurement method, electronic equipment and storage medium

Citations (3)

Publication number Priority date Publication date Assignee Title
CN105513072A (en) * 2015-12-05 2016-04-20 中国航空工业集团公司洛阳电光设备研究所 PTZ correction method
CN107231521A (en) * 2017-04-29 2017-10-03 安徽慧视金瞳科技有限公司 Camera automatic positioning method is used in a kind of meter reading identification
CN109102004A (en) * 2018-07-23 2018-12-28 鲁东大学 Cotton-plant pest-insects method for identifying and classifying and device

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN103839069B (en) * 2014-03-11 2017-04-12 浙江理工大学 Lawn miss cutting recognition method based on image analysis
CN107463166A (en) * 2016-06-03 2017-12-12 苏州宝时得电动工具有限公司 Automatic running device and its control traveling method
WO2017206950A1 (en) * 2016-06-03 2017-12-07 苏州宝时得电动工具有限公司 Automatic walking device and method for controlling the walking thereof
CN109658432A (en) * 2018-12-27 2019-04-19 南京苏美达智能技术有限公司 A kind of the boundary generation method and system of mobile robot


Also Published As

Publication number Publication date
WO2021243895A1 (en) 2021-12-09
CN113761970A (en) 2021-12-07

Similar Documents

Publication Publication Date Title
CN113761970B (en) Method, system, robot and storage medium for identifying working position based on image
CN113822094B (en) Method, system, robot and storage medium for identifying working position based on image
CN113822095B (en) Method, system, robot and storage medium for identifying working position based on image
CN103839069B (en) Lawn miss cutting recognition method based on image analysis
CN111353431B (en) Automatic working system, automatic walking equipment, control method thereof and computer readable storage medium
EP4242964A1 (en) Obstacle recognition method applied to automatic traveling device and automatic traveling device
CN113805571B (en) Robot walking control method, system, robot and readable storage medium
CN110084177B (en) Positioning system, method, control system, air conditioner and storage medium
CN113628202B (en) Determination method, cleaning robot and computer storage medium
US11354794B2 (en) Deposit detection device and deposit detection method
WO2022099755A1 (en) Image-based working area identification method and system, and robot
CN114494839A (en) Obstacle identification method, device, equipment, medium and weeding robot
CN115147713A (en) Method, system, device and medium for identifying non-working area based on image
WO2021238000A1 (en) Boundary-following working method and system of robot, robot, and readable storage medium
CN113223034A (en) Road edge detection and tracking method
CN115147714A (en) Method and system for identifying non-working area based on image
WO2022095170A1 (en) Obstacle recognition method and apparatus, and device, medium and weeding robot
CN107680083A (en) Parallax determines method and parallax determining device
CN115147712A (en) Method and system for identifying non-working area based on image
CN113096145B (en) Target boundary detection method and device based on Hough transformation and linear regression
EP4310790A1 (en) Image analysis method and apparatus, computer device, and readable storage medium
EP4312187A1 (en) Image analysis method and apparatus, computer device, and readable storage medium
US20240013540A1 (en) Obstacle recognition method and apparatus, device, medium and weeding robot
CN117094935A (en) Image analysis method, device, computer equipment and readable storage medium
CN117274949A (en) Obstacle based on camera, boundary detection method and automatic walking equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230525

Address after: 215000 No. 8 Ting Rong Street, Suzhou Industrial Park, Jiangsu, China

Applicant after: Suzhou Cleva Precision Machinery & Technology Co.,Ltd.

Applicant after: SKYBEST ELECTRIC APPLIANCE (SUZHOU) Co.,Ltd.

Address before: 215000 Huahong street, Suzhou Industrial Park, Jiangsu 18

Applicant before: Suzhou Cleva Precision Machinery & Technology Co.,Ltd.

GR01 Patent grant