CN114677581A - Obstacle identification method, device, equipment, medium and weeding robot - Google Patents

Obstacle identification method, device, equipment, medium and weeding robot Download PDF

Info

Publication number
CN114677581A
Authority
CN
China
Prior art keywords: information, peak, chromaticity, point, value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011553859.7A
Other languages
Chinese (zh)
Inventor
朱绍明
任雪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Cleva Electric Appliance Co Ltd
Suzhou Cleva Precision Machinery and Technology Co Ltd
Original Assignee
Suzhou Cleva Precision Machinery and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Cleva Precision Machinery and Technology Co Ltd filed Critical Suzhou Cleva Precision Machinery and Technology Co Ltd
Priority to CN202011553859.7A priority Critical patent/CN114677581A/en
Priority to PCT/CN2021/140291 priority patent/WO2022135434A1/en
Publication of CN114677581A publication Critical patent/CN114677581A/en

Classifications

    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01D HARVESTING; MOWING
    • A01D34/00 Mowers; Mowing apparatus of harvesters
    • A01D34/006 Control or measuring arrangements
    • A01D34/008 Control or measuring arrangements for automated or remotely controlled operation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/56 Extraction of image or video features relating to colour
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Environmental Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses an obstacle identification method, device, equipment, medium and weeding robot. The method comprises the following steps: obtaining chromaticity information of a candidate weeding area image and generating a to-be-processed chromaticity histogram to obtain first peak value information; generating a target chromaticity histogram, and determining second peak value information and chromaticity segmentation information; determining brightness information and roughness information; and determining whether an obstacle exists according to the first peak value information, the second peak value information, the brightness information and the roughness information. The technical scheme provided by the embodiment of the invention solves the problems in the prior art that the boundary of the weeding area of the weeding robot is usually calibrated by embedding a boundary line, which consumes a large amount of manpower and material resources, increases the cost, and, owing to the constraints of embedding the boundary line, limits the shape of the weeding area to a certain extent, thereby achieving the effect of improving the efficiency and accuracy of identifying obstacles in the candidate weeding area of the weeding robot.

Description

Obstacle identification method, device, equipment, medium and weeding robot
Technical Field
The embodiment of the invention relates to a computer technology, in particular to a method, a device, equipment, a medium and a weeding robot for identifying obstacles.
Background
With the improvement of living standards, people pay increasing attention to the environment, so the construction of urban landscaped gardens is receiving more and more emphasis. At the same time, efficient greening maintenance, such as daily weeding, is gradually becoming a demand. Since conventional weeding machines require manual operation, weeding robots with an autonomous operation function are gradually emerging.
In the prior art, the boundary of the weeding area of the weeding robot is usually calibrated by embedding a boundary line, which consumes a large amount of manpower and material resources and increases the cost. Moreover, the shape of the weeding area is limited to a certain extent by the constraints of embedding the boundary line; for example, the angle of a corner cannot be less than 90 degrees.
Disclosure of Invention
The embodiment of the invention provides an obstacle identification method, device, equipment, medium and weeding robot, so as to improve the efficiency and accuracy of identifying obstacles in a candidate weeding area of the weeding robot.
In a first aspect, an embodiment of the present invention provides an obstacle identification method, where the method includes:
obtaining chromaticity information of a candidate weeding area image, and generating a to-be-processed chromaticity histogram of the candidate weeding area image according to the chromaticity information to obtain first peak value information of the to-be-processed chromaticity histogram;
generating a target chromaticity histogram according to the to-be-processed chromaticity histogram, and determining second peak value information and chromaticity segmentation information of the target chromaticity histogram;
determining brightness information of the candidate weeding area image according to the chrominance segmentation information and the brightness image of the candidate weeding area image;
determining roughness information of the candidate weeding area image according to the chrominance segmentation information and the edge image of the lightness image;
determining whether a shadow area exists in the candidate weeding area image according to the first peak value information, the second peak value information, the brightness information and the roughness information to determine whether an obstacle exists in the candidate weeding area image.
In a second aspect, an embodiment of the present invention further provides an obstacle identification device, where the device includes:
the histogram generation module is used for acquiring chromaticity information of a candidate weeding area image and generating a to-be-processed chromaticity histogram of the candidate weeding area image according to the chromaticity information so as to acquire first peak value information of the to-be-processed chromaticity histogram;
the information determining module is used for generating a target chromaticity histogram according to the chromaticity histogram to be processed and determining second peak value information and chromaticity segmentation information of the target chromaticity histogram;
a brightness information determination module for determining brightness information of the candidate weeding area image according to the chrominance segmentation information and a brightness image of the candidate weeding area image;
a roughness information determination module for determining roughness information of the candidate weeding area image according to the chrominance segmentation information and the edge image of the lightness image;
an obstacle determination module configured to determine whether a shadow area exists in the candidate weeding area image according to the first peak information, the second peak information, the brightness information, and the roughness information, so as to determine whether an obstacle exists in the candidate weeding area image.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device to store one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the obstacle identification method as described above.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the obstacle identification method as described above.
In a fifth aspect, an embodiment of the present invention further provides a weeding robot, which includes a robot body and the electronic device described above.
According to the embodiment of the invention, the chromaticity information of the candidate weeding area image is obtained, and the to-be-processed chromaticity histogram of the candidate weeding area image is generated according to the chromaticity information, so as to obtain the first peak value information of the to-be-processed chromaticity histogram; a target chromaticity histogram is generated according to the to-be-processed chromaticity histogram, and second peak value information and chromaticity segmentation information of the target chromaticity histogram are determined; the brightness information of the candidate weeding area image is determined according to the chromaticity segmentation information and the lightness image of the candidate weeding area image; the roughness information of the candidate weeding area image is determined according to the chromaticity segmentation information and the edge image of the lightness image; and whether a shadow area exists in the candidate weeding area image is determined according to the first peak value information, the second peak value information, the brightness information and the roughness information, so as to determine whether an obstacle exists in the candidate weeding area image. This solves the problems in the prior art that the boundary of the weeding area of the weeding robot is usually calibrated by embedding a boundary line, which consumes a large amount of manpower and material resources, increases the cost, and, owing to the constraints of embedding the boundary line, limits the shape of the weeding area to a certain extent, thereby achieving the effect of improving the efficiency and accuracy of identifying obstacles in the candidate weeding area of the weeding robot.
Drawings
Fig. 1 is a flowchart of an obstacle identification method according to an embodiment of the present invention;
fig. 2 is a flowchart of an obstacle identification method according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of an obstacle identification apparatus according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of an obstacle identification method according to an embodiment of the present invention. This embodiment is applicable to the situation where a weeding robot identifies obstacles in a candidate weeding area. The method can be executed by the obstacle identification device provided in an embodiment of the present invention, and the device can be implemented in software and/or hardware. Referring to fig. 1, the obstacle identification method provided in this embodiment includes:
step 110, obtaining chromaticity information of a candidate weeding area image, and generating a to-be-processed chromaticity histogram of the candidate weeding area image according to the chromaticity information to obtain first peak value information of the to-be-processed chromaticity histogram.
The candidate weeding area is a possible working area of the weeding robot. It may consist entirely of weeds to be removed, namely a weeding area; or it may contain an obstacle or border that is similar in color to the grass but reflects light less strongly, has a smoother surface, and forms a certain degree of brightness contrast with the surrounding grass.
The candidate weeding area image may be captured by a camera mounted on the weeding robot, which is not limited in this embodiment. The chromaticity information of the candidate weeding area image is a characteristic of the candidate weeding area image in terms of chromaticity as a whole, and information related to chromaticity in the image, such as a chromaticity value of each pixel point in the image, can be obtained by obtaining a chromaticity channel image of the candidate weeding area image, which is not limited in this embodiment.
Histogram statistics may be performed on the chromaticity information to generate the to-be-processed chromaticity histogram of the candidate weeding area image, which represents specified chromaticity information of the pixel points in the candidate weeding area image; for example, it may be the distribution statistics of the chromaticity values of all pixel points in the candidate weeding area image, which is not limited in this embodiment. The abscissa of the to-be-processed chromaticity histogram may be the chromaticity value and the ordinate may be the frequency, that is, the number of pixel points with that chromaticity value in the candidate weeding area image, so as to reflect the chromaticity distribution of the pixel points in the candidate weeding area image.
The first peak information is specified peak information in the to-be-processed chromaticity histogram; for example, it may be the information of the peak point whose frequency differs most from those of the adjacent chromaticity values on its left and right, which is not limited in this embodiment.
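Purely as an illustration (not part of the patent disclosure), a minimal sketch of this step might look as follows, assuming the HSV hue channel serves as the chromaticity channel; the function and variable names are hypothetical:

```python
import cv2
import numpy as np

def chroma_histogram_and_abrupt_peak(image_bgr):
    """Sketch: build the to-be-processed chromaticity histogram and locate the
    abrupt peak point (the bin whose frequency differs most from its left and
    right neighbours).  Assumes HSV hue is used as the chromaticity channel."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    chroma = hsv[:, :, 0]                                   # hue channel, 0..179 in OpenCV
    hist = np.bincount(chroma.ravel(), minlength=180).astype(np.int64)

    # combined frequency difference to left and right neighbours (one possible reading)
    diff = np.zeros_like(hist)
    diff[1:-1] = np.abs(hist[1:-1] - hist[:-2]) + np.abs(hist[1:-1] - hist[2:])
    single_p = int(np.argmax(diff))                         # abrupt peak point chromaticity value
    single_v = int(hist[single_p])                          # abrupt peak point peak value
    return hist, single_p, single_v
```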
Step 120, generating a target chromaticity histogram according to the to-be-processed chromaticity histogram, and determining second peak value information and chromaticity segmentation information of the target chromaticity histogram.
The target chromaticity histogram is used to intuitively represent the chromaticity distribution information in the candidate weeding area image. The process of generating the target chromaticity histogram from the to-be-processed chromaticity histogram may be as follows: smoothing the to-be-processed chromaticity histogram to obtain a smoothed chromaticity histogram; determining a target peak point set and a target valley point set in a preset chromaticity interval of the smoothed chromaticity histogram according to a preset screening rule; and generating the target chromaticity histogram according to the target peak point set and the target valley point set.
The to-be-processed chromaticity histogram is smoothed to remove noise and improve the accuracy of the peak information obtained from the target chromaticity histogram generated from the smoothed chromaticity histogram. The smoothing process may be a filtering process, which is not limited in this embodiment.
The data in the smoothed chromaticity histogram are screened according to a preset screening rule: from all peak points and valley points within the preset chromaticity interval of the smoothed chromaticity histogram, a target peak point set and a target valley point set closely related to obstacle identification are selected. Optionally, the preset chromaticity interval is 15-95, which is not limited in this embodiment. The target chromaticity histogram is then generated according to the target peak point set and the target valley point set.
The preset screening rule may include: each peak value in the target peak point set is greater than a preset multiple of each valley value in the target valley point set; the distance between target peak points is greater than a preset distance threshold; and the peak value of each target peak point is greater than a preset peak threshold, which is not limited in this embodiment.
Illustratively, the peak value of the target peak point is greater than K × the valley value of the target valley point, where K is a preset multiple. The distance between the target peak points is greater than a preset distance threshold D, and the peak value of the target peak points is greater than a preset peak threshold C.
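As an illustrative sketch only, the smoothing and screening described above could be implemented along the following lines; the kernel size and the values of K, D and C are placeholders, and find_peaks is only one possible way to apply the distance and peak-value rules:

```python
import numpy as np
from scipy.signal import find_peaks

def screen_peaks_and_valleys(hist, lo=15, hi=95, K=3.0, D=5, C=100):
    """Sketch: smooth the to-be-processed histogram and screen target peak and
    valley points inside the preset chromaticity interval [lo, hi]."""
    # simple moving-average filter as the (unspecified) smoothing step
    smooth = np.convolve(hist, np.ones(5) / 5.0, mode="same")

    seg = smooth[lo:hi + 1]
    peaks, _ = find_peaks(seg, distance=D, height=C)        # distance and peak-threshold rules
    valleys, _ = find_peaks(-seg)

    # each target peak must exceed K times every candidate valley value
    if len(valleys):
        peaks = peaks[seg[peaks] > K * seg[valleys].max()]
    return peaks + lo, valleys + lo, smooth
```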
The second peak information of the target chromaticity histogram is information related to a peak in the target chromaticity histogram, and may be a specific numerical value of a specified peak in the target chromaticity histogram, which is not limited in this embodiment. The chroma division information is used to divide the target chroma histogram into left and right regions according to the chroma value to obtain respective information in the two regions.
In this embodiment, optionally, the determining the second peak information and the chroma division information of the target chroma histogram includes:
acquiring peak point information and valley point information in a preset chromaticity interval of the target chromaticity histogram; the peak point information includes: the number of peak points, the peak values of the peak points and the chromaticity values of the peak points; the valley point information includes: the valley values of the valley points and the chromaticity values of the valley points;
and determining second peak value information and chroma segmentation information of the target chroma histogram according to the peak value point information and/or the valley value point information.
The number of peak points, the peak values and chromaticity values of the peak points, and the valley values and chromaticity values of the valley points within the preset chromaticity interval of the target chromaticity histogram are acquired. The number of peak points is the number of peak points in the preset chromaticity interval of the target chromaticity histogram; the peak value of a peak point is the frequency corresponding to that peak point, and the chromaticity value of a peak point is the chromaticity value at that peak point; the valley value of a valley point is the frequency corresponding to that valley point, and the chromaticity value of a valley point is the chromaticity value at that valley point.
The second peak information and the chroma division information of the target chroma histogram are determined according to the peak point information and/or the valley point information, and different second peak information and chroma division information can be determined according to different peak point numbers and by combining with other peak point information and/or valley point information.
And determining second peak value information and chroma segmentation information of the target chroma histogram according to the peak value point information and/or the valley value point information, so that the accuracy and pertinence of determining the second peak value information and the chroma segmentation information are improved.
In this embodiment, optionally, determining second peak information and chroma division information of the target chroma histogram according to the peak point information and/or the valley point information includes:
if the number of peak points is less than two, determining the second peak information and the chromaticity segmentation information of the target chromaticity histogram according to whether a minimum demarcation point exists and the number of peak points;
and if the number of peak points is greater than or equal to two, acquiring the second peak information and the chromaticity segmentation information according to the peak values of the peak points, the chromaticity values of the peak points, the valley values of the valley points and the chromaticity values of the valley points.
The second peak information may include a maximum peak point chroma value of the first region and a maximum peak point chroma value of the second region, which is not limited in this embodiment, and the first region and the second region of the target chroma histogram may be determined by the chroma segmentation information and the preset chroma interval.
When the number of peak points is less than two, the segmentation threshold of the candidate weeding area image may be determined by an Otsu thresholding method or the like, and this segmentation threshold is used as the chromaticity segmentation value of the target chromaticity histogram.
It is judged whether a minimum demarcation point exists in the preset chromaticity interval. The minimum demarcation point may be the point with the smallest chromaticity value among the points whose frequency is greater than the frequencies of the two adjacent points on their right. For example, starting from the left end point of the preset chromaticity interval and checking points to the right in turn, if the frequency corresponding to chromaticity value mi is greater than the frequencies corresponding to mi+1 and mi+2, the point corresponding to the smallest such mi is determined as the minimum demarcation point.
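As a brief illustrative sketch of this search (assuming the histogram is indexed directly by chromaticity value; the function name is made up for illustration):

```python
def find_min_demarcation_point(hist, lo=15, hi=95):
    """Sketch: return the smallest chromaticity value mi in [lo, hi] whose
    frequency exceeds the frequencies of the two points to its right, or None."""
    for mi in range(lo, hi - 1):            # mi + 2 must stay inside the interval
        if hist[mi] > hist[mi + 1] and hist[mi] > hist[mi + 2]:
            return mi
    return None
```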
If the minimum demarcation point exists and the number of peak points is 0, the chromaticity value corresponding to the minimum demarcation point is determined as the first region maximum peak point chromaticity value, and the chromaticity value of the right end point of the preset chromaticity interval is determined as the second region maximum peak point chromaticity value.
If the minimum demarcation point exists and the number of peak points is 1, the chromaticity value corresponding to the minimum demarcation point is determined as the first region maximum peak point chromaticity value, and the chromaticity value of the peak point is determined as the second region maximum peak point chromaticity value.
If the minimum demarcation point does not exist and the number of peak points is 0, the chromaticity value of the left end point of the preset chromaticity interval is determined as the first region maximum peak point chromaticity value, and the chromaticity value of the right end point of the preset chromaticity interval is determined as the second region maximum peak point chromaticity value.
If the minimum demarcation point does not exist and the number of peak points is 1, the chromaticity value of the peak point is determined as both the first region maximum peak point chromaticity value and the second region maximum peak point chromaticity value.
If the number of peak points is greater than or equal to 2, the second peak information and the chromaticity segmentation information are obtained according to the peak values of the peak points, the chromaticity values of the peak points, the valley values of the valley points and the chromaticity values of the valley points. The specific steps may include: 1) determining the pair of adjacent peak point and valley point with the maximum peak-to-valley ratio, according to the peak value of each peak point and the valley value of its adjacent valley point, and marking the valley point as m; 2) finding the maximum peak value to the left of point m and taking the chromaticity value corresponding to that peak as the chromaticity value of the maximum peak point of the first region, and taking the chromaticity value corresponding to the maximum peak value to the right of point m as the chromaticity value of the maximum peak point of the second region; 3) taking the chromaticity value corresponding to the minimum frequency between the chromaticity values of the maximum peak point of the first region and the maximum peak point of the second region as the chromaticity segmentation value.
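A sketch of steps 1) to 3) for the case of two or more peak points; pairing each peak with its nearest valley is an assumption about what "adjacent" means here, and the names are illustrative:

```python
import numpy as np

def segment_by_peak_valley(hist, peak_points, valley_points):
    """Sketch: locate the adjacent peak/valley pair with the largest peak-to-valley
    ratio, then derive the first/second region maximum peak chromaticity values
    and the chromaticity segmentation value."""
    best_ratio, m = -1.0, None
    for p in peak_points:
        v = min(valley_points, key=lambda x: abs(x - p))    # nearest valley as the adjacent one
        ratio = hist[p] / max(hist[v], 1)
        if ratio > best_ratio:
            best_ratio, m = ratio, v

    left = [p for p in peak_points if p < m]
    right = [p for p in peak_points if p > m]
    h1 = max(left, key=lambda p: hist[p])                   # first region maximum peak point
    h2 = max(right, key=lambda p: hist[p])                  # second region maximum peak point
    li = h1 + int(np.argmin(hist[h1:h2 + 1]))               # chromaticity segmentation value
    return h1, h2, li
```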
And determining different second peak information and chroma segmentation information according to the number of the peak points so as to obtain the corresponding second peak information and chroma segmentation information under all conditions and improve the accuracy of obstacle identification under each subsequent condition.
Step 130, determining brightness information of the candidate weeding area image according to the chroma segmentation information and the brightness image of the candidate weeding area image.
The lightness image may be a lightness channel image obtained by performing channel separation on the candidate weeding area image, and the lightness channel image may be preprocessed; the preprocessing may include filtering, normalization and the like, which is not limited in this embodiment. From the lightness channel image, the lightness value of each pixel point in the candidate weeding area image can be obtained, and according to the chromaticity segmentation information, the lightness information of the pixel points in the different chromaticity regions can be obtained and used as the brightness information of the candidate weeding area image.
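For illustration only, the region brightness statistics might be computed from the lightness (V) channel and a chromaticity segmentation value li as sketched below; the HSV colour space, the Gaussian filter and the names are assumptions rather than requirements of the method:

```python
import cv2
import numpy as np

def region_brightness(image_bgr, li, lo=15, hi=95):
    """Sketch: average lightness of first-region ([lo, li]) and second-region
    ((li, hi]) pixels, plus the absolute difference of the two averages."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    chroma = hsv[:, :, 0]
    value = cv2.GaussianBlur(hsv[:, :, 2], (5, 5), 0)        # preprocessed lightness channel

    mask1 = (chroma >= lo) & (chroma <= li)
    mask2 = (chroma > li) & (chroma <= hi)
    va = float(value[mask1].mean()) if mask1.any() else 0.0  # first region average lightness
    vb = float(value[mask2].mean()) if mask2.any() else 0.0  # second region average lightness
    return va, vb, abs(va - vb)
```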
Step 140, determining roughness information of the candidate weeding area image according to the chrominance segmentation information and the edge image of the lightness image.
Edge extraction may be performed on the preprocessed lightness image to obtain an edge image; a Canny operator may be used for the edge extraction to improve the accuracy of the edge information obtained in the edge image.
The roughness information is determined from the edge information in the edge image, where the edge information includes the gray values of the edge pixel points. The roughness information may be an average roughness; in that case, the average roughness of the edge may be obtained by dividing the number of pixels in the edge image whose gray value equals 255 by the total number of pixels in the edge image.
According to the chroma segmentation information, roughness information of pixel points in different chroma areas can be obtained and used as roughness information of candidate weeding area images.
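An illustrative sketch of this roughness computation using a Canny edge image of the lightness channel; the Canny thresholds and the per-region split are assumptions made for the example:

```python
import cv2
import numpy as np

def region_roughness(value_channel, chroma, li, lo=15, hi=95):
    """Sketch: average roughness = edge pixels (gray value 255) / total pixels,
    computed separately for the first and second chromaticity regions."""
    edges = cv2.Canny(value_channel, 50, 150)                # edge image with values 0 or 255

    mask1 = (chroma >= lo) & (chroma <= li)
    mask2 = (chroma > li) & (chroma <= hi)
    sa = float((edges[mask1] == 255).mean()) if mask1.any() else 0.0  # first region roughness
    sb = float((edges[mask2] == 255).mean()) if mask2.any() else 0.0  # second region roughness
    return sa, sb
```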
Determining the roughness information from the gray values of the pixels in the edge information of the edge image improves the accuracy of obtaining the roughness information, and thus the accuracy of obstacle identification.
Step 150, determining whether a shadow area exists in the candidate weeding area image according to the first peak value information, the second peak value information, the brightness information and the roughness information, so as to determine whether an obstacle exists in the candidate weeding area image.
The first peak value information of the to-be-processed chromaticity histogram, the second peak value information of the target chromaticity histogram, the brightness information of the candidate weeding area image and the roughness information of the candidate weeding area image are compared against a preset information judgment condition. The preset information judgment condition relates to the first peak value information, the second peak value information, the brightness information and the roughness information. If the preset information judgment condition is met, it is determined that no shadow area exists in the candidate weeding area image; in that case, if a suspected obstacle area is identified in the candidate weeding area, it is determined that an obstacle exists in the candidate weeding area, so that the weeding robot can perform subsequent obstacle processing.
For example, when it is determined that the candidate weeding area image does not currently have a shadow area, the candidate weeding area image may be segmented by a dynamic segmentation method, and in this case, a dark-colored obstacle may be represented by a black area and a non-shadow area may be represented by a white area in the segmented image.
Illustratively, the identified obstacles are those that are similar in color to grass, but have a relatively low reflection and a relatively smooth surface that provides some degree of brightness contrast with the surrounding grass.
According to the technical scheme provided by this embodiment, the chromaticity information of the candidate weeding area image is obtained, and the to-be-processed chromaticity histogram of the candidate weeding area image is generated according to the chromaticity information, so as to obtain the first peak value information of the to-be-processed chromaticity histogram; a target chromaticity histogram is generated according to the to-be-processed chromaticity histogram, and second peak value information and chromaticity segmentation information of the target chromaticity histogram are determined; the brightness information of the candidate weeding area image is determined according to the chromaticity segmentation information and the lightness image of the candidate weeding area image; the roughness information of the candidate weeding area image is determined according to the chromaticity segmentation information and the edge image of the lightness image; and whether a shadow area exists in the candidate weeding area image is determined according to the first peak value information, the second peak value information, the brightness information and the roughness information, so as to determine whether an obstacle exists in the candidate weeding area image. This solves the problems in the prior art that the boundary of the weeding area of the weeding robot is usually calibrated by embedding a boundary line, which consumes a large amount of manpower and material resources, increases the cost, and, owing to the constraints of embedding the boundary line, limits the shape of the weeding area to a certain extent, thereby achieving the effect of improving the efficiency and accuracy of identifying obstacles in the candidate weeding area of the weeding robot.
Example two
Fig. 2 is a flowchart of an obstacle identification method according to a second embodiment of the present invention. This technical solution supplements the process of determining whether a shadow area exists in the candidate weeding area image according to the first peak information, the second peak information, the brightness information and the roughness information, so as to determine whether an obstacle exists in the candidate weeding area image. Compared with the above solution, this solution is specifically optimized in that, before determining whether a shadow area exists in the candidate weeding area image according to the first peak information, the second peak information, the brightness information and the roughness information to determine whether an obstacle exists in the candidate weeding area image, the method further includes:
determining a first peak point chromaticity difference according to the abrupt peak point chromaticity value and the first region maximum peak point chromaticity value;
and determining a second peak point chromaticity difference according to the abrupt peak point chromaticity value and the second region maximum peak point chromaticity value.
And determining whether a shadow area exists in the candidate weeding area image according to the first peak information, the second peak information, the brightness information and the roughness information to determine whether an obstacle exists in the candidate weeding area image, including:
and if the average lightness of the first area is in a preset first average lightness interval, the average lightness of the second area is in a preset second average lightness interval, the average lightness difference is in a preset average lightness difference interval, the roughness of the first area or the roughness of the second area is smaller than a preset roughness threshold, the peak value of the abrupt peak point is larger than a preset abrupt peak point peak value threshold, the chromaticity difference of the first peak point is larger than a preset first peak point chromaticity difference threshold, and the chromaticity difference of the second peak point is larger than a preset second peak point chromaticity difference threshold, determining that no shadow area exists in the candidate weeding area image. Specifically, the flow chart of the obstacle identification method is shown in fig. 2:
step 210, obtaining chromaticity information of a candidate weeding area image, and generating a to-be-processed chromaticity histogram of the candidate weeding area image according to the chromaticity information to obtain first peak value information of the to-be-processed chromaticity histogram; the first peak information includes: the peak value of the abrupt peak point and the chromatic value of the abrupt peak point.
The abrupt peak point is the point in the to-be-processed chromaticity histogram whose frequency differs most from those of its left and right adjacent points. The peak value of the abrupt peak point is the frequency corresponding to the abrupt peak point, and the chromaticity value of the abrupt peak point is the chromaticity value corresponding to the abrupt peak point.
Step 220, generating a target chromaticity histogram according to the to-be-processed chromaticity histogram, and determining second peak value information and chromaticity segmentation information of the target chromaticity histogram; the second peak information includes a first-area maximum peak point colorimetric value and a second-area maximum peak point colorimetric value.
The first region maximum peak point chromaticity value is the chromaticity value corresponding to the maximum peak point of the first chromaticity region in the target chromaticity histogram, and the second region maximum peak point chromaticity value is the chromaticity value corresponding to the maximum peak point of the second chromaticity region in the target chromaticity histogram; the first chromaticity region and the second chromaticity region of the target chromaticity histogram can be determined jointly by the chromaticity segmentation information and the preset chromaticity interval. For example, if the preset chromaticity interval is [15, 95] and the chromaticity segmentation value is li, the first chromaticity interval corresponding to the first chromaticity region may be [15, li], and the pixel points of the candidate weeding area image whose chromaticity values fall in the first chromaticity interval are the first region pixel points; the chromaticity interval corresponding to the second chromaticity region may be (li, 95), and the pixel points whose chromaticity values fall in the second chromaticity interval are the second region pixel points.
Step 230, determining brightness information of the candidate weeding area image according to the chromaticity segmentation information and the brightness image of the candidate weeding area image; the brightness information includes a first region average brightness, a second region average brightness, and an average brightness difference of the first region average brightness and the second region average brightness.
The average brightness of the first area is the average value of all pixel brightness in the first area, the average brightness of the second area is the average value of all pixel brightness in the second area, and the average brightness difference is the absolute value of the average brightness difference between the first area and the second area.
Step 240, determining roughness information of the candidate weeding area image according to the chrominance segmentation information and the edge image of the lightness image; the roughness information includes a first region roughness and a second region roughness.
The first region roughness is the roughness information of the pixels in the first region, and may be the average roughness of the pixels in the first region; the second region roughness is the roughness information of the pixels in the second region, and may be the average roughness of the pixels in the second region.
Step 250, determining a first peak point chromaticity difference according to the abrupt peak point chromaticity value and the first region maximum peak point chromaticity value; and determining a second peak point chromaticity difference according to the abrupt peak point chromaticity value and the second region maximum peak point chromaticity value.
The first peak point chromaticity difference is determined according to the abrupt peak point chromaticity value and the first region maximum peak point chromaticity value. For example, if the abrupt peak point chromaticity value is singleP and the first region maximum peak point chromaticity value is h1, then, because the abrupt peak point usually lies between the maximum peak point of the first region and the maximum peak point of the second region, the first peak point chromaticity difference is sh1 = singleP - h1. The second peak point chromaticity difference is determined according to the abrupt peak point chromaticity value and the second region maximum peak point chromaticity value: if the second region maximum peak point chromaticity value is h2, the second peak point chromaticity difference is sh2 = h2 - singleP.
Step 260, if the average brightness of the first area is in a preset first average brightness interval, the average brightness of the second area is in a preset second average brightness interval, the average brightness difference is in a preset average brightness difference interval, the roughness of the first area or the roughness of the second area is smaller than a preset roughness threshold, the peak value of the abrupt peak point is larger than a preset abrupt peak point peak threshold, the chromaticity difference of the first peak point is larger than a preset first peak point chromaticity difference threshold, and the chromaticity difference of the second peak point is larger than a preset second peak point chromaticity difference threshold, determining that no shadow area exists in the candidate weeding area image, and determining whether an obstacle exists in the candidate weeding area image.
For example, let the first region average lightness be Va, the second region average lightness be Vb, the average lightness difference be Dv, the first region roughness be Sa, the second region roughness be Sb, the abrupt peak point peak value be singleV, the first peak point chromaticity difference be sh1 and the second peak point chromaticity difference be sh2. The preset first average lightness interval may be (100, 140), the second average lightness interval (37, 72), the average lightness difference interval [0, 80), the roughness threshold 0.25, the abrupt peak point peak threshold 133, the first peak point chromaticity difference threshold 10 and the second peak point chromaticity difference threshold 10.
The preset information judgment condition is then: 100 < Va < 140 and 37 < Vb < 72 and 0 ≤ Dv < 80 and Sa < 0.25 and singleV > 133 and sh1 > 10 and sh2 > 10, or 100 < Va < 140 and 37 < Vb < 72 and 0 ≤ Dv < 80 and Sb < 0.25 and singleV > 133 and sh1 > 10 and sh2 > 10. The preset information judgment condition may be adjusted according to the specific judgment scenario, which is not limited in this embodiment. When the condition is met, it is determined that no shadow area exists in the candidate weeding area image; if an obstacle exists, it may be a dark object that is similar in color to the lawn and has a smooth surface.
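Written out as code, a sketch of this judgment (with the illustrative threshold values above; the function name is made up):

```python
def no_shadow_area(va, vb, dv, sa, sb, single_v, sh1, sh2):
    """Sketch of the preset information judgment condition: True means no shadow
    area is considered to exist in the candidate weeding area image."""
    common = (100 < va < 140) and (37 < vb < 72) and (0 <= dv < 80) \
             and (single_v > 133) and (sh1 > 10) and (sh2 > 10)
    return common and (sa < 0.25 or sb < 0.25)
```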
In the embodiment of the invention, whether an obstacle exists in the candidate weeding area image is determined from the first region average lightness, the second region average lightness, the average lightness difference, the first region roughness or the second region roughness, the abrupt peak point peak value, the first peak point chromaticity difference and the second peak point chromaticity difference. This addresses the situation in which the lawn sometimes contains obstacles that are close to the grass in color, reflect light weakly and have a smooth surface, forming a certain degree of brightness contrast with the surrounding grass; such obstacles or borders have characteristics similar to those of shadowed grass, which easily causes shadowed grass to be mistakenly identified as an obstacle. The efficiency and accuracy of identifying obstacles that are close to the grass in color and have a smooth surface in the candidate weeding area of the weeding robot are thereby improved.
EXAMPLE III
Fig. 3 is a schematic structural diagram of an obstacle identification device according to a third embodiment of the present invention. The device can be implemented in hardware and/or software, can execute the obstacle identification method provided by any embodiment of the present invention, and has the functional modules corresponding to the executed method and its beneficial effects. As shown in fig. 3, the apparatus includes:
the histogram generating module 310 is configured to obtain chrominance information of a candidate weeding area image, and generate a to-be-processed chrominance histogram of the candidate weeding area image according to the chrominance information to obtain first peak value information of the to-be-processed chrominance histogram;
the information determining module 320 is configured to generate a target chromaticity histogram according to the to-be-processed chromaticity histogram, and determine second peak information and chromaticity segmentation information of the target chromaticity histogram;
a brightness information determining module 330 for determining brightness information of the candidate herbicidal region image from the chromaticity division information and a brightness image of the candidate herbicidal region image;
a roughness information determining module 340 for determining roughness information of the candidate weeding area image according to the chrominance segmentation information and the edge image of the lightness image;
an obstacle determination module 350, configured to determine whether a shadow area exists in the candidate weeding area image according to the first peak information, the second peak information, the brightness information, and the roughness information, so as to determine whether an obstacle exists in the candidate weeding area image.
According to the technical scheme provided by this embodiment, the chromaticity information of the candidate weeding area image is obtained, and the to-be-processed chromaticity histogram of the candidate weeding area image is generated according to the chromaticity information, so as to obtain the first peak value information of the to-be-processed chromaticity histogram; a target chromaticity histogram is generated according to the to-be-processed chromaticity histogram, and second peak value information and chromaticity segmentation information of the target chromaticity histogram are determined; the brightness information of the candidate weeding area image is determined according to the chromaticity segmentation information and the lightness image of the candidate weeding area image; the roughness information of the candidate weeding area image is determined according to the chromaticity segmentation information and the edge image of the lightness image; and whether a shadow area exists in the candidate weeding area image is determined according to the first peak value information, the second peak value information, the brightness information and the roughness information, so as to determine whether an obstacle exists in the candidate weeding area image. This solves the problems in the prior art that the boundary of the weeding area of the weeding robot is usually calibrated by embedding a boundary line, which consumes a large amount of manpower and material resources, increases the cost, and, owing to the constraints of embedding the boundary line, limits the shape of the weeding area to a certain extent, thereby achieving the effect of improving the efficiency and accuracy of identifying obstacles in the candidate weeding area of the weeding robot.
On the basis of the above technical solutions, optionally, the information determining module includes:
the information acquisition unit is used for acquiring peak point information and valley point information in a preset chromaticity interval of the target chromaticity histogram; the peak point information includes: the number of peak points, the peak values of the peak points and the chromaticity values of the peak points; the valley point information includes: the valley values of the valley points and the chromaticity values of the valley points;
an information determining unit, configured to determine second peak information and chroma division information of the target chroma histogram according to the peak point information and/or the valley point information.
On the basis of the foregoing technical solutions, optionally, the information determining unit includes:
a first information determining subunit, configured to determine, if the number of peak points is less than two, second peak information and chromaticity segmentation information of the target chromaticity histogram according to whether a minimum demarcation point exists and the number of peak points;
and the second information determining subunit is configured to, if the number of the peak points is greater than or equal to two, obtain the second peak information and the chroma division information according to the peak values of the peak points, the chroma values of the peak points, the valley values of the valley points, and the chroma values of the valley points.
On the basis of the above technical solutions, optionally, the first peak information includes: the peak value of the abrupt peak point and the chromaticity value of the abrupt peak point; the second peak information comprises a first region maximum peak point chromaticity value and a second region maximum peak point chromaticity value; the brightness information comprises a first region average brightness, a second region average brightness, and an average brightness difference between the first region average brightness and the second region average brightness; the roughness information comprises a first region roughness and a second region roughness;
the device further comprises:
a chromaticity difference determining module, configured to, before the obstacle determination module, determine a first peak point chromaticity difference according to the abrupt peak point chromaticity value and the first region maximum peak point chromaticity value, and determine a second peak point chromaticity difference according to the abrupt peak point chromaticity value and the second region maximum peak point chromaticity value.
On the basis of the above technical solutions, optionally, the obstacle determining module includes:
an obstacle determining unit, configured to determine that a shadow area does not exist in the candidate weeding area image if the first area average brightness is in a preset first average brightness interval, the second area average brightness is in a preset second average brightness interval, the average brightness difference is in a preset average brightness difference interval, the first area roughness or the second area roughness is smaller than a preset roughness threshold, the abrupt peak point peak is larger than a preset abrupt peak point peak threshold, the first peak point chromaticity difference is larger than a preset first peak point chromaticity difference threshold, and the second peak point chromaticity difference is larger than a preset second peak point chromaticity difference threshold, so as to determine whether an obstacle exists in the candidate weeding area image.
Example four
Fig. 4 is a schematic structural diagram of an electronic apparatus according to a fourth embodiment of the present invention, as shown in fig. 4, the electronic apparatus includes a processor 40, a memory 41, an input device 42, and an output device 43; the number of the processors 40 in the electronic device may be one or more, and one processor 40 is taken as an example in fig. 4; the processor 40, the memory 41, the input device 42 and the output device 43 in the electronic apparatus may be connected by a bus or other means, and the bus connection is exemplified in fig. 4.
The memory 41 is a computer-readable storage medium, and can be used for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the obstacle identification method in the embodiment of the present invention. The processor 40 executes various functional applications and data processing of the electronic device by executing software programs, instructions and modules stored in the memory 41, that is, implements the obstacle identification method described above.
The memory 41 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 41 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, memory 41 may further include memory located remotely from processor 40, which may be connected to electronic devices over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
EXAMPLE five
An embodiment of the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, perform a method for obstacle identification, the method including:
obtaining chromaticity information of a candidate weeding area image, and generating a to-be-processed chromaticity histogram of the candidate weeding area image according to the chromaticity information to obtain first peak value information of the to-be-processed chromaticity histogram;
generating a target chromaticity histogram according to the to-be-processed chromaticity histogram, and determining second peak value information and chromaticity segmentation information of the target chromaticity histogram;
determining brightness information of the candidate weeding area image according to the chrominance segmentation information and the brightness image of the candidate weeding area image;
determining roughness information of the candidate weeding area image according to the chrominance segmentation information and the edge image of the lightness image;
determining whether a shadow area exists in the candidate weeding area image according to the first peak value information, the second peak value information, the brightness information and the roughness information to determine whether an obstacle exists in the candidate weeding area image.
Of course, the storage medium provided by the embodiment of the present invention contains computer-executable instructions, and the computer-executable instructions are not limited to the operations of the method described above, and may also perform related operations in the obstacle identification method provided by any embodiment of the present invention.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It should be noted that, in the embodiment of the obstacle identification device, the included units and modules are merely divided according to functional logic, but are not limited to the above division, as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
EXAMPLE six
The sixth embodiment of the invention provides a weeding robot, which comprises a robot body and further comprises the electronic equipment disclosed by any embodiment of the invention.
Specifically, the electronic device mounted on the weeding robot may perform operations related to an obstacle recognition method according to any of the embodiments of the present invention.
The robot body can comprise a left driving wheel and a right driving wheel which can be respectively driven by a motor, and the motor can be a brushless motor with a reduction gearbox and a Hall sensor. The robot body realizes the running operations of forward, backward, turning, arc and the like by controlling the speed and the direction of the two driving wheels. The robot body further comprises universal wheels, a camera and a rechargeable battery, wherein the universal wheels play a role in supporting and balancing. The camera is arranged at a designated position of the robot and forms a preset included angle with the horizontal direction so as to shoot candidate weeding area images. The rechargeable battery is used for providing power supply for the robot to work.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. An obstacle identification method, comprising:
acquiring chromaticity information of a candidate weeding area image, and generating a to-be-processed chromaticity histogram of the candidate weeding area image according to the chromaticity information to acquire first peak value information of the to-be-processed chromaticity histogram;
generating a target chromaticity histogram according to the to-be-processed chromaticity histogram, and determining second peak value information and chromaticity segmentation information of the target chromaticity histogram;
determining brightness information of the candidate weeding area image according to the chromaticity segmentation information and a lightness image of the candidate weeding area image;
determining roughness information of the candidate weeding area image according to the chromaticity segmentation information and an edge image of the lightness image;
determining whether a shadow area exists in the candidate weeding area image according to the first peak value information, the second peak value information, the brightness information and the roughness information to determine whether an obstacle exists in the candidate weeding area image.
2. The method of claim 1, wherein determining second peak value information and chromaticity segmentation information of the target chromaticity histogram comprises:
acquiring peak point information and valley point information in a preset chromaticity interval of the target chromaticity histogram; the peak point information includes: the number of peak points, the peak values of the peak points and the chromaticity values of the peak points; the valley point information includes: the valley values of the valley points and the chromaticity values of the valley points;
and determining the second peak value information and the chromaticity segmentation information of the target chromaticity histogram according to the peak point information and/or the valley point information.
3. The method of claim 2, wherein determining second peak value information and chromaticity segmentation information of the target chromaticity histogram according to the peak point information and/or the valley point information comprises:
if the number of the peak points is less than two, determining the second peak value information and the chromaticity segmentation information of the target chromaticity histogram according to whether a minimum demarcation point exists and the number of the peak points;
and if the number of the peak points is greater than or equal to two, acquiring the second peak value information and the chromaticity segmentation information according to the peak values of the peak points, the chromaticity values of the peak points, the valley values of the valley points and the chromaticity values of the valley points.
4. The method according to any one of claims 1-3, wherein the first peak value information comprises: a peak value of an abrupt peak point and a chromaticity value of the abrupt peak point; the second peak value information comprises a chromaticity value of a maximum peak point of a first region and a chromaticity value of a maximum peak point of a second region; the brightness information comprises an average brightness of the first region, an average brightness of the second region, and an average brightness difference between them; and the roughness information comprises a roughness of the first region and a roughness of the second region;
before determining whether a shadow area exists in the candidate weeding area image according to the first peak value information, the second peak value information, the brightness information and the roughness information to determine whether an obstacle exists in the candidate weeding area image, the method further comprises:
determining a first peak point chromaticity difference according to the chromaticity value of the abrupt peak point and the chromaticity value of the maximum peak point of the first region;
and determining a second peak point chromaticity difference according to the chromaticity value of the abrupt peak point and the chromaticity value of the maximum peak point of the second region.
5. The method according to claim 4, wherein determining whether a shadow area exists in the candidate weeding area image according to the first peak value information, the second peak value information, the brightness information and the roughness information comprises:
if the average brightness of the first region is within a preset first average brightness interval, the average brightness of the second region is within a preset second average brightness interval, the average brightness difference is within a preset average brightness difference interval, the roughness of the first region or the roughness of the second region is smaller than a preset roughness threshold, the peak value of the abrupt peak point is larger than a preset abrupt peak point peak value threshold, the first peak point chromaticity difference is larger than a preset first peak point chromaticity difference threshold, and the second peak point chromaticity difference is larger than a preset second peak point chromaticity difference threshold, determining that no shadow area exists in the candidate weeding area image, and determining whether an obstacle exists in the candidate weeding area image.
6. An obstacle identification device, comprising:
a histogram generation module, configured to acquire chromaticity information of a candidate weeding area image and generate a to-be-processed chromaticity histogram of the candidate weeding area image according to the chromaticity information, so as to acquire first peak value information of the to-be-processed chromaticity histogram;
an information determining module, configured to generate a target chromaticity histogram according to the to-be-processed chromaticity histogram and determine second peak value information and chromaticity segmentation information of the target chromaticity histogram;
a brightness information determination module, configured to determine brightness information of the candidate weeding area image according to the chromaticity segmentation information and a lightness image of the candidate weeding area image;
a roughness information determination module, configured to determine roughness information of the candidate weeding area image according to the chromaticity segmentation information and an edge image of the lightness image;
and an obstacle determination module, configured to determine whether a shadow area exists in the candidate weeding area image according to the first peak value information, the second peak value information, the brightness information and the roughness information, so as to determine whether an obstacle exists in the candidate weeding area image.
7. The apparatus of claim 6, wherein the information determining module comprises:
an information acquisition unit, configured to acquire peak point information and valley point information in a preset chromaticity interval of the target chromaticity histogram; the peak point information includes: the number of peak points, the peak values of the peak points and the chromaticity values of the peak points; the valley point information includes: the valley values of the valley points and the chromaticity values of the valley points;
and an information determining unit, configured to determine the second peak value information and the chromaticity segmentation information of the target chromaticity histogram according to the peak point information and/or the valley point information.
8. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the obstacle identification method according to any one of claims 1-5.
9. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the obstacle identification method according to any one of claims 1-5.
10. A weeding robot, comprising a robot body, characterized by further comprising the electronic device according to claim 8.
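For readers less familiar with patent claim language, the threshold test recited in claim 5 can be illustrated with a short sketch. All interval bounds, thresholds and feature values below are hypothetical placeholders chosen only for illustration; they are not values disclosed by this application, and the sketch is not the patented implementation.

from dataclasses import dataclass

@dataclass
class RegionFeatures:
    avg_brightness_1: float   # average brightness of the first region
    avg_brightness_2: float   # average brightness of the second region
    roughness_1: float        # roughness of the first region
    roughness_2: float        # roughness of the second region
    abrupt_peak_value: float  # peak value of the abrupt peak point
    chroma_diff_1: float      # first peak point chromaticity difference
    chroma_diff_2: float      # second peak point chromaticity difference

def no_shadow_region(f: RegionFeatures) -> bool:
    """Return True when every condition of claim 5 holds, i.e. no shadow area is detected."""
    avg_diff = abs(f.avg_brightness_1 - f.avg_brightness_2)
    return (
        100 <= f.avg_brightness_1 <= 200                  # first average brightness interval (placeholder)
        and 100 <= f.avg_brightness_2 <= 200              # second average brightness interval (placeholder)
        and avg_diff <= 30                                # average brightness difference interval (placeholder)
        and (f.roughness_1 < 0.2 or f.roughness_2 < 0.2)  # roughness threshold (placeholder)
        and f.abrupt_peak_value > 500                     # abrupt peak point peak value threshold (placeholder)
        and f.chroma_diff_1 > 10                          # first peak point chromaticity difference threshold (placeholder)
        and f.chroma_diff_2 > 10                          # second peak point chromaticity difference threshold (placeholder)
    )

# Example with made-up feature values: all conditions hold, so no shadow area is reported.
print(no_shadow_region(RegionFeatures(150, 160, 0.1, 0.3, 800, 15, 12)))  # True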
CN202011553859.7A 2020-12-24 2020-12-24 Obstacle identification method, device, equipment, medium and weeding robot Pending CN114677581A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011553859.7A CN114677581A (en) 2020-12-24 2020-12-24 Obstacle identification method, device, equipment, medium and weeding robot
PCT/CN2021/140291 WO2022135434A1 (en) 2020-12-24 2021-12-22 Obstacle identification method, apparatus and device, and medium and weeding robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011553859.7A CN114677581A (en) 2020-12-24 2020-12-24 Obstacle identification method, device, equipment, medium and weeding robot

Publications (1)

Publication Number Publication Date
CN114677581A (en) 2022-06-28

Family

ID=82070683

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011553859.7A Pending CN114677581A (en) 2020-12-24 2020-12-24 Obstacle identification method, device, equipment, medium and weeding robot

Country Status (2)

Country Link
CN (1) CN114677581A (en)
WO (1) WO2022135434A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8848978B2 (en) * 2011-09-16 2014-09-30 Harman International (China) Holdings Co., Ltd. Fast obstacle detection
CN104537373B (en) * 2015-01-13 2017-08-25 哈尔滨工业大学 Sublingual vessel is diagnosed with multispectral sublingual image characteristic extracting method
CN109360239B (en) * 2018-10-24 2021-01-15 长沙智能驾驶研究院有限公司 Obstacle detection method, obstacle detection device, computer device, and storage medium
CN110930321A (en) * 2019-11-06 2020-03-27 杭州恩玖软件有限公司 Blue/green screen digital image matting method capable of automatically selecting target area
CN111830988A (en) * 2020-07-29 2020-10-27 苏州科瓴精密机械科技有限公司 Automatic walking equipment, control method and system thereof and readable storage medium

Also Published As

Publication number Publication date
WO2022135434A1 (en) 2022-06-30

Similar Documents

Publication Publication Date Title
CN109584258B (en) Grassland boundary identification method and intelligent mowing device applying same
CN103577828B (en) A kind of Approach for road detection based on edge feature
CN110188640B (en) Face recognition method, face recognition device, server and computer readable medium
CN111612797B (en) Rice image information processing system
CN111161219B (en) Robust monocular vision SLAM method suitable for shadow environment
WO2022095170A1 (en) Obstacle recognition method and apparatus, and device, medium and weeding robot
CN114677581A (en) Obstacle identification method, device, equipment, medium and weeding robot
WO2022095171A1 (en) Obstacle recognition method, apparatus and device, medium, and weeding robot
CN111079530A (en) Mature strawberry identification method
CN114677318A (en) Obstacle identification method, device, equipment, medium and weeding robot
WO2022095161A1 (en) Obstacle recognition method and apparatus, device, medium and weeding robot
WO2022135444A1 (en) Obstacle recognition method, apparatus, device, medium and weeding robot
EP4266207A1 (en) Method and apparatus for obstacle recognition, device, medium, and robot lawn mower
CN110532892B (en) Method for detecting road vanishing point of single image of unstructured road
EP4266206A1 (en) Obstacle recognition method and apparatus, device, medium and weeding robot
CN113435287A (en) Lawn obstacle recognition method and device, mowing robot and readable storage medium
CN114494842A (en) Method and system for identifying working area based on image and robot
US20240029268A1 (en) Image Analysis Method and Apparatus, Computer Device, and Readable Storage Medium
EP4312187A1 (en) Image analysis method and apparatus, computer device, and readable storage medium
EP4276755A1 (en) Image segmentation method and apparatus, computer device, and readable storage medium
WO2022062048A1 (en) Roughness compensation method and system, image processing device, and readable storage medium
CN117094934A (en) Image analysis method, device, computer equipment and readable storage medium
CN115147713A (en) Method, system, device and medium for identifying non-working area based on image
CN117893987A (en) Lane line detection and fitting method based on visual features and mathematical model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230525

Address after: 215000 No. 8 Ting Rong Street, Suzhou Industrial Park, Jiangsu, China

Applicant after: Suzhou Cleva Precision Machinery & Technology Co.,Ltd.

Applicant after: SKYBEST ELECTRIC APPLIANCE (SUZHOU) Co.,Ltd.

Address before: 215000 Huahong street, Suzhou Industrial Park, Jiangsu 18

Applicant before: Suzhou Cleva Precision Machinery & Technology Co.,Ltd.