CN112418003B - Work platform obstacle recognition method and system and anti-collision method and system - Google Patents


Info

Publication number
CN112418003B
CN112418003B (Application CN202011223503.7A)
Authority
CN
China
Prior art keywords
obstacle
image data
feature
environment
environmental
Prior art date
Legal status
Active
Application number
CN202011223503.7A
Other languages
Chinese (zh)
Other versions
CN112418003A (en)
Inventor
沈裕强
邓超
熊路
朱后
岳泽擎
Current Assignee
Hunan Zoomlion Intelligent Aerial Work Machinery Co Ltd
Original Assignee
Hunan Zoomlion Intelligent Aerial Work Machinery Co Ltd
Priority date
Filing date
Publication date
Application filed by Hunan Zoomlion Intelligent Aerial Work Machinery Co Ltd
Priority claimed from CN202011223503.7A
Publication of CN112418003A
Application granted
Publication of CN112418003B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/86Combinations of sonar systems with lidar systems; Combinations of sonar systems with systems not using wave reflection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S15/00Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S15/88Sonar systems specially adapted for specific applications
    • G01S15/93Sonar systems specially adapted for specific applications for anti-collision purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods


Abstract

The invention provides a work platform obstacle recognition method and system and an anti-collision method and system. The obstacle recognition method comprises the following steps: acquiring blind-area-free environment image data around a work platform transmitted by each channel; calculating a feature cluster matrix of the environment image data; judging the obstacle category to which the feature cluster matrix of the environment image data belongs according to that matrix and the feature cluster matrices of obstacles in an environment sample database; calculating the feature recognition distance between an object in the environment image data and an obstacle in the environment sample database according to the obstacle category; and judging whether the object in the environment image data is an obstacle according to the feature recognition distance. The work platform anti-collision system applies the obstacle recognition method, judges the distance between an obstacle and the work platform from ultrasonic radar data, and generates an anti-collision alarm according to that distance. The scheme of the invention realizes obstacle recognition and eliminates misjudged objects.

Description

Work platform obstacle recognition method and system and anti-collision method and system
Technical Field
The invention relates to the field of high-altitude operation equipment, in particular to a working platform obstacle recognition method, a working platform obstacle recognition system, a working platform anti-collision method, a working platform anti-collision system and a high-altitude operation platform.
Background
An aerial work platform is a product serving movable aerial operations in various industries, such as aerial construction, equipment installation and overhaul. Conventional aerial-work-platform products mainly include scissor-type, vehicle-mounted, crank-arm, self-propelled, aluminum-alloy and sleeve-type aerial work platforms.
Whatever the type of aerial work platform, operators board close to the ground and are then lifted or moved to the desired position to carry out aerial work conveniently. Aerial-work scenes often contain obstacles, which must be avoided when the platform is controlled to lift or move, so as to prevent collisions between the platform and the obstacles, avoid equipment damage and prevent casualties.
The existing anti-collision approach generally installs an ultrasonic-radar anti-collision system on the aerial work platform: viewed from above, ultrasonic ranging units are mounted on the left, right and rear sides. These units, however, can only cover most of the left, right and rear areas; not all areas can be monitored, so a monitoring blind area exists, and once an obstacle approaches from the blind area it cannot be detected and the anti-collision system fails. In addition, an ultrasonic ranging system can only detect whether an object exists within the monitored range and its current distance from the platform; it cannot judge whether that object is actually an obstacle, so the probability of misjudgment is large. In practice, when a worker holding a tool enters the anti-collision area, or under other similar working conditions, the system mistakes the worker for an obstacle and issues an alarm or suppresses motion, which inconveniences the operator.
Disclosure of Invention
The object of the embodiments of the invention is to provide a work platform obstacle recognition method and system and an anti-collision method and system. The obstacle recognition method and system can identify, from environment image data of the work platform, whether an object appearing in the image data is an obstacle and judge its category. The anti-collision method combines the obstacles identified by the recognition method with ultrasonic radar data to issue anti-collision warnings; obstacle samples are collected and trained through a deep learning algorithm, so that obstacles are recognized in practical applications, misjudged objects are removed, and the direction and distance of surrounding obstacles are judged accurately for the user in real time.
To achieve the above object, a first aspect of the present invention provides an image-based work platform obstacle recognition method, the method including:
acquiring environment image data without blind areas around a working platform transmitted by each channel;
calculating a feature cluster matrix of the environmental image data;
judging the category of the obstacle to which the feature cluster matrix of the environmental image data belongs according to the feature cluster matrix of the environmental image data and the feature cluster matrix of the obstacle in the environmental sample database;
Calculating the characteristic recognition distance between the object in the environment image data and the obstacle in the environment sample database according to the obstacle category;
and judging whether the object in the environment image data is an obstacle according to the feature recognition distance. Environment image data of the work platform are acquired for image matching and recognition to determine whether objects around the platform are obstacles, so that non-obstacles such as an operator's body parts are not misjudged as obstacles.
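The five steps above can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the names (`compute_feature_cluster_matrix`, `recognize_obstacle`, `sample_db`) are hypothetical, the cluster matrix is simplified to a per-channel mean vector, and category matching is reduced to a nearest-cluster lookup.

```python
import numpy as np

def compute_feature_cluster_matrix(image):
    # Simplified stand-in for the Eva{L, H, P, delta} cluster matrix:
    # reduce an H x W x P image to a per-channel mean vector.
    return image.reshape(-1, image.shape[-1]).mean(axis=0)

def recognize_obstacle(image, sample_db, threshold):
    """Return (is_obstacle, category) following the five steps above.

    sample_db maps an obstacle category to its feature cluster matrix.
    """
    m = compute_feature_cluster_matrix(image)                                  # step 2
    category = min(sample_db, key=lambda c: np.linalg.norm(m - sample_db[c]))  # step 3
    d = np.linalg.norm(m - sample_db[category])                                # step 4
    return bool(d > threshold), category                                       # step 5
```

Per the judgment rule given later in the text, a feature recognition distance above the set threshold marks the object as an obstacle.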
Further, the environmental image data includes:
environment image data of a bottom view area above the work platform, a platform surface head-up area, a right left view area, a left right view area, a front rear view area and a rear front view area. By acquiring environment image data of these multiple areas, the data of the bottom view area above the platform and of the right left view and left right view areas cover the longitudinal viewing range in the working plane; the data of the platform head-up area cover the working plane of the platform itself; and the data of the right left view, left right view and rear front view areas cover the transverse viewing range in the working plane, forming multi-azimuth detection and reducing detection blind areas.
Further, the neural network algorithm is a deep convolutional neural network algorithm.
Further, the determining, according to the feature cluster matrix of the environment image data and the feature cluster matrices of obstacles in the environment sample database, of the obstacle category to which the feature cluster matrix of the environment image data belongs includes:
judging whether the feature cluster matrix of the environment image data belongs to the feature cluster matrix of any obstacle in the environment sample database:
if it does not belong, judging that the object in the environment image data is an obstacle;
if it does belong, determining the obstacle category to which the feature cluster matrix of the environment image data belongs. By comparing the feature cluster matrix of the environment image data with the feature cluster matrices of obstacles in the environment sample database, the types of objects in the environment can be classified quickly.
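A minimal sketch of this membership test follows. Since the patent does not specify how "belongs to" is decided, membership is assumed here to mean lying within a fixed Euclidean radius of an obstacle's cluster matrix; all names are illustrative.

```python
import numpy as np

def obstacle_category(m, sample_db, radius):
    """Return the obstacle category whose cluster m belongs to, else None.

    None corresponds to the "does not belong" branch above, in which the
    object is judged an obstacle of no known category.
    """
    for category, cluster in sample_db.items():
        if np.linalg.norm(m - cluster) <= radius:
            return category
    return None
```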
Further, the calculating of the feature recognition distance between the object in the environment image data and the obstacle in the environment sample database according to the obstacle category includes:
performing convolution processing on the feature cluster matrix to obtain convolved features;
performing a maximum pooling operation on the convolved features to obtain max-pooled feature values;
exciting the max-pooled feature values to obtain excited feature values;
calculating a training error of the environmental image;
normalizing the training error;
and calculating the feature recognition distance between the feature cluster matrix and the excited feature values. A deep convolutional neural network model is adopted for image feature value extraction and image matching. When the deep convolutional neural network model performs intelligent recognition training and classification, training parameters are reduced through local connection and weight sharing. Obstacle recognition through the deep convolutional neural network model can automatically learn obstacle features from training samples, reducing manual design and intervention, and its multi-layer learning mode can discover complex structures in the data, improving the success rate of obstacle recognition in images. After the feature cluster matrix comparison and judgment, the feature recognition distance only needs to be calculated once, reducing the amount of computation.
Further, the judging whether the object in the environment image data is an obstacle according to the feature recognition distance includes:
if the feature recognition distance is larger than a set threshold, the object in the environment image data is an obstacle;
if the feature recognition distance is smaller than the set threshold, the object in the environment image data is not an obstacle, and the environment image data are defined as self-learning data. Obstacles are judged according to the feature recognition distance, and image data with smaller feature distances are collected as self-learning data, so that the feature-distance calculation parameters can be optimized according to the obstacle-recognition performance during actual use, improving recognition accuracy.
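The threshold rule and the self-learning-data collection can be sketched as below (illustrative names; the patent does not give a concrete threshold value):

```python
def judge_and_collect(feature_distance, threshold, self_learning_pool, image):
    """Apply the threshold rule above and collect below-threshold images."""
    if feature_distance > threshold:
        return True                       # obstacle
    self_learning_pool.append(image)      # not an obstacle: keep for retraining
    return False
```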
Further, the calculating of the feature cluster matrix of the environment image data includes:
dividing the environment image data into a plurality of image modules of a set pixel size;
calculating the feature cluster matrix M of each image module, wherein M = Eva{L, H, P, δ};
Eva{ } is the clustering calculation function; L is the image width; H is the image height; P is the channel; δ is the product.
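The thinning of the environment image into fixed-size pixel modules can be sketched as follows (a hypothetical helper; edge remainders smaller than the module size are dropped here, a detail the patent does not specify):

```python
import numpy as np

def split_into_modules(image, size):
    """Split an H x W x P image into non-overlapping size x size modules,
    each of which would then get its own cluster matrix M = Eva{L, H, P, delta}."""
    h, w = image.shape[:2]
    return [image[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]
```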
Further, the performing of convolution processing on the feature cluster matrix to obtain convolved features includes:
inputting the feature cluster matrix into the input layer;
performing convolution processing on the input feature map, wherein, when the j-th convolution layer operates, the feature map of the j-th layer is expressed as:

x_j^out = f( Σ_i x_i^in * k_ij + b_j )

wherein x_j^out is the output map of the j-th layer feature map; f( ) is the excitation function; * is the 2-dimensional convolution operator; x_i^in is the input map of the i-th layer feature map; k_ij is the convolution kernel; b_j is the bias value;

the maximum pooling operation is calculated by the following formula:

y_i,(j,k) = max_{0 ≤ m, n < s} x_i,(j·s+m, k·s+n)

wherein max takes the maximum value; s is the size of the pooling region; y_i,(j,k) is the neuron mapped at (j, k) of the i-th feature; x_i,ϑ is the neuron of the i-th feature map located at position ϑ = (j·s+m, k·s+n), where m, n are the positional offsets of the neuron within the pooling region;

the exciting of the max-pooled feature values to obtain excited feature values includes: adopting the ReLU function R(x) = max(0, x) as the excitation function, which gives excitation when the feature value x exceeds the preset mark and otherwise clears it;

the calculating of the distance between the feature cluster matrix M and the excited feature values includes:
calculating the feature recognition distance D between the feature cluster matrix M and the sample according to a formula with initialization correction parameters α, β, γ, τ, wherein x_i ∈ M.
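The three per-layer operations above (convolution, maximum pooling, ReLU excitation) can be sketched for a single feature map as follows. The convolution is written as cross-correlation, as in most CNN implementations, and all names are illustrative:

```python
import numpy as np

def conv2d(x, k, b):
    """Valid 2-D convolution of one input map x with kernel k plus bias b
    (implemented as cross-correlation, without kernel flipping)."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, w))
    for r in range(h):
        for c in range(w):
            out[r, c] = np.sum(x[r:r + kh, c:c + kw] * k) + b
    return out

def max_pool(x, s):
    """Non-overlapping s x s maximum pooling over each block of the map."""
    h, w = x.shape[0] // s, x.shape[1] // s
    return x[:h * s, :w * s].reshape(h, s, w, s).max(axis=(1, 3))

def relu(x):
    """Excitation R(x) = max(0, x): pass positive feature values, clear the rest."""
    return np.maximum(x, 0.0)
```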
Further, the calculating of training errors of the environment image includes:
calculating the training error of the environment image by the squared-error method, wherein the training error E_e of the e-th sample is expressed as:

E_e = (1/2) Σ_{f=1..c} ( t_f^e − y_f^e )²

wherein c is the number of output nodes; f is a cyclic variable, f = 1, 2, …, c; t_f^e is the gradient true value of the e-th sample at the f-th node; y_f^e is the training output-layer value of the e-th sample at the f-th node;

the normalizing of the training error includes:
normalizing the training error by the following formula to obtain the normalized error Ê_e:

Ê_e = ( E_e² − μ_e ) / σ_e

wherein E_e² is the square value of E_e; μ_e is the mean of the squared errors of the previous e samples; σ_e is the standard deviation.
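A sketch of the squared-error and normalization steps above (illustrative names; per the text, the mean and standard deviation are taken over the squared errors of the earlier samples):

```python
import numpy as np

def sample_error(true_vals, outputs):
    """Training error E_e = 1/2 * sum over the c output nodes of (t - y)^2."""
    t, y = np.asarray(true_vals, float), np.asarray(outputs, float)
    return 0.5 * np.sum((t - y) ** 2)

def normalize_error(err, error_history):
    """Normalized error: (E_e^2 - mean of squared history) / their std."""
    sq = np.asarray(error_history, float) ** 2
    sigma = sq.std()
    return (err ** 2 - sq.mean()) / sigma if sigma > 0 else 0.0
```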
A second aspect of the present invention provides an image-based work platform obstacle recognition system, the obstacle recognition system comprising:
the environment image data acquisition unit is used for acquiring environment image data without blind areas around the working platform transmitted by each channel;
the feature cluster matrix calculation unit is used for calculating a feature cluster matrix of the environment image data;
the obstacle type judging unit is used for judging the type of the obstacle to which the characteristic clustering matrix of the environment image data belongs according to the characteristic clustering matrix of the environment image data and the characteristic clustering matrix of the obstacle in the environment sample database;
a feature recognition distance calculation unit, configured to calculate a feature recognition distance between an object in the environmental image data and an obstacle in an environmental sample database according to the obstacle category;
And the obstacle judging unit is used for judging whether the object in the environment image data is an obstacle or not according to the characteristic recognition distance. The obstacle recognition system acquires environment image data of the working platform to perform image matching recognition, determines whether objects around the working platform are obstacles or not, recognizes the types of the obstacles, and avoids that non-obstacles such as body parts of operators are misjudged as the obstacles.
Further, the feature recognition distance calculation unit includes:
the image characteristic value calculation unit is used for calculating the characteristic value of the input characteristic cluster matrix based on the deep convolutional neural network;
the training error calculation unit is used for calculating the training error of the environment image and normalizing the training error;
and the distance calculation unit is used for calculating the feature recognition distance between the feature cluster matrix and the feature values of the environment image data. A deep convolutional neural network model is adopted for image feature value extraction and image matching. When the deep convolutional neural network model performs intelligent recognition training and classification, training parameters are reduced through local connection and weight sharing. Obstacle recognition through the deep convolutional neural network model can automatically learn obstacle features from training samples, reducing manual design and intervention, and its multi-layer learning mode can discover complex structures in the data, improving the success rate of obstacle recognition in images.
The third aspect of the present invention provides an image-based work platform collision avoidance system, to which the image-based work platform obstacle recognition method is applied, the work platform collision avoidance system comprising:
the environment data acquisition component is used for acquiring environment image data and ultrasonic radar data of non-blind areas around the working platform; and
the processor is used for acquiring the environment image data and the ultrasonic radar data and calculating a characteristic clustering matrix of the environment image data; judging the category of the obstacle to which the feature cluster matrix of the environmental image data belongs according to the feature cluster matrix of the environmental image data and the feature cluster matrix of the obstacle in the environmental sample database; calculating the characteristic recognition distance between the object in the environment image data and the obstacle in the environment sample database according to the obstacle category; judging whether an object in the environment image data is an obstacle or not according to the feature recognition distance; and judging the distance between the obstacle and the working platform through the ultrasonic radar data, and generating an anti-collision alarm according to the distance. The anti-collision system collects environment image data and ultrasonic radar data of the working platform through the environment data collection component, performs obstacle sample collection and training through a deep learning algorithm, achieves obstacle recognition in the environment image data, and generates anti-collision warning when the distance between the working platform and an obstacle is smaller than a set distance by combining the ultrasonic radar data.
Further, the environmental data collection assembly includes:
the camera is used for collecting environment image data of a head-up area of a platform surface of the working platform; and
the radar and vision composite sensor is used for collecting environment image data and ultrasonic radar data of a bottom viewing area above the working platform, a right left viewing area, a front rear viewing area, a rear front viewing area and a left right viewing area of the working platform. The radar, the vision composite sensor and the camera are used for collecting environmental data together, so that an environmental data basis is provided for obstacle recognition and anti-collision warning.
Further, the radar ranging angle of the radar and vision composite sensor is larger than 110°, with a measuring range of 100-2000 mm; the detection angle of its vision sensor is larger than 180°, with a measuring range of 20-5000 mm. To acquire environment image data and ultrasonic radar data for the bottom view area above the work platform and for the right left view, rear front view and left right view areas, a radar and vision composite sensor must be arranged in each direction. Because the detection angle of the vision sensor is larger than 180°, at least a 180° range around each installation direction (taken as the normal) lies within the sensor's field of view. The environment image data of the bottom view area above the platform and of the right left view and left right view areas therefore cover a blind-spot-free viewing angle over a 270° longitudinal range in the working plane; combined with the environment image data of the platform-surface head-up area detected by the camera, blind-spot-free detection is achieved in the longitudinal plane. Likewise, the environment image data of the right left view, left right view and rear front view areas cover a blind-spot-free viewing angle over a 270° transverse range in the working plane; combined with the camera's head-up-area data, blind-spot-free detection is achieved in the transverse plane.
The fourth aspect of the present invention provides an image-based platform collision avoidance method, based on the image-based platform collision avoidance system, the method comprising:
the environment data acquisition component acquires environment image data and ultrasonic radar data of non-blind areas around the working platform;
the processor acquires the environment image data and the ultrasonic radar data, and calculates a feature cluster matrix of the environment image data; judging the category of the obstacle to which the feature cluster matrix of the environmental image data belongs according to the feature cluster matrix of the environmental image data and the feature cluster matrix of the obstacle in the environmental sample database; calculating the characteristic recognition distance between the object in the environment image data and the obstacle in the environment sample database according to the obstacle category; judging whether an object in the environment image data is an obstacle or not according to the feature recognition distance; and carrying out fusion processing on the obstacle recognition result and the ultrasonic radar data, judging the distance between the obstacle and the working platform, and generating an anti-collision alarm according to the distance. The method collects environmental image data and ultrasonic radar data of a working platform. The obstacle sample collection and training are carried out through a deep learning algorithm, the obstacle recognition in the environment image data is realized, and an anti-collision alarm is generated when the distance between the working platform and the obstacle is smaller than the set distance by combining the ultrasonic radar data.
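The fusion step can be reduced to the following decision (a minimal sketch; the default alarm distance of 500 mm is an illustrative value, not one from the patent):

```python
def collision_alarm(is_obstacle, radar_distance_mm, alarm_distance_mm=500):
    """Alarm only when a confirmed obstacle lies within the alarm distance,
    so objects misjudged by radar alone are filtered out by the image result."""
    return bool(is_obstacle) and radar_distance_mm < alarm_distance_mm
```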
The fifth aspect of the invention provides an aerial work platform comprising the image-based work platform collision avoidance system. Arranging the anti-collision system on the aerial work platform realizes anti-collision warning while the platform moves.
In another aspect, the present invention provides a machine-readable storage medium having stored thereon instructions for causing a machine to perform the image-based work platform collision avoidance method.
According to the technical scheme, the environment data acquisition component is used for acquiring the environment image data and the ultrasonic radar data of the working platform, the obstacle in the environment image data is identified through the deep convolutional neural network algorithm, the type of the obstacle is judged, the distance between the obstacle and the working platform is judged by combining the ultrasonic radar data, and an anti-collision alarm is generated when the distance is smaller than a preset value, so that objects are identified in the working operation process, misjudgment objects are removed, and the direction and the distance of surrounding obstacles are accurately judged for a user in real time.
Additional features and advantages of embodiments of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings are included to provide a further understanding of embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain, without limitation, the embodiments of the invention. In the drawings:
FIG. 1 is a flowchart of a method for identifying an obstacle on an image-based work platform according to an embodiment of the present invention;
FIG. 2 is a block diagram of an image-based work platform obstacle recognition system provided by an embodiment of the present invention;
FIG. 3 is a block diagram of an image-based work platform collision avoidance system provided by one embodiment of the present invention;
FIG. 4 is a flowchart of an image-based work platform anti-collision method provided by an embodiment of the present invention;
FIG. 5 is a schematic view of a crank arm aerial platform according to an embodiment of the present invention;
FIG. 6 is a schematic view of an aerial platform orientation definition used in one embodiment of the present invention;
fig. 7 is a schematic view of an anti-collision system of an aerial working platform provided by an embodiment of the present invention.
Description of the reference numerals
1-chassis, 2-revolving stage, 3-folding arm, 4-telescopic arm, 5-flying arm, 6-platform, 7-first radar and vision compound sensor, 8-camera, 9-third radar and vision compound sensor, 10-fourth radar and vision compound sensor, 11-second radar and vision compound sensor.
Detailed Description
The following describes specific embodiments of the present invention in detail with reference to the drawings. It should be understood that the detailed description and specific examples, while indicating and illustrating the invention, are not intended to limit the invention.
Fig. 1 is a flowchart of a method for identifying an obstacle on an image-based working platform according to an embodiment of the present invention. As shown in fig. 1, the method includes:
acquiring environment image data without blind areas around a working platform transmitted by each channel;
calculating a feature cluster matrix of the environmental image data;
judging the category of the obstacle to which the feature cluster matrix of the environmental image data belongs according to the feature cluster matrix of the environmental image data and the feature cluster matrix of the obstacle in the environmental sample database;
calculating the characteristic recognition distance between the object in the environment image data and the obstacle in the environment sample database according to the obstacle category;
and judging whether the object in the environment image data is an obstacle according to the feature recognition distance. Environment image data of the work platform are acquired for image matching and recognition to determine whether objects around the platform are obstacles, so that non-obstacles such as an operator's body parts are not misjudged as obstacles.
Further, the environmental image data includes:
and environment image data of the bottom view area above the working platform, the platform surface head-up area, the left view area on the right side, the right view area on the left side, the rear view area at the front and the front view area at the rear. Environment image data of multiple areas are obtained: the bottom view area above the platform, the left view area on the right side and the right view area on the left side cover the viewing angles of the longitudinal range in the working plane; the head-up area covers the range of the working plane of the working platform; and the left view area on the right side, the right view area on the left side and the front view area at the rear cover the viewing angles of the transverse range in the working plane. Multi-azimuth detection is thereby formed and detection blind areas are reduced.
Further, the neural network algorithm is a deep convolutional neural network algorithm.
Further, the determining, according to the feature cluster matrix of the environmental image data and the feature cluster matrix of the obstacle in the environmental sample database, the category of the obstacle to which the feature cluster matrix of the environmental image data belongs includes:
judging whether the feature cluster matrix of the environment image data belongs to the feature cluster matrix of the obstacle in any environment sample database or not:
if it does not belong to any of them, judging that the object in the environment image data is an obstacle;
and if it does belong, determining the obstacle category to which the feature cluster matrix of the environment image data belongs. The types of objects in the environment can be classified quickly by comparing the feature cluster matrix of the environment image data with the feature cluster matrices of obstacles in the environment sample database.
Further, the calculating the feature cluster matrix of the environmental image data includes:
inputting the environmental image data into a deep convolutional neural network;
dividing the environmental image data into a plurality of image modules of a set pixel size; in one embodiment of the invention, the size of the image modules is 50 × 50 pixels;
calculating a feature cluster matrix M of each image module, wherein M covers 4 feature values and can be expressed as: M = Eva{L, H, P, δ};
wherein Eva{ } is a clustering calculation function; L is the image width; H is the image height; P is the channel; δ is the product.
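The division of a frame into fixed-size modules can be sketched as plain index arithmetic. The 50 × 50 module size is the one named in this embodiment; the function name and the edge-handling policy are illustrative assumptions:

```python
def split_into_modules(image, module=50):
    """Split a 2-D image (a list of rows) into module x module tiles, row-major.
    Edge tiles smaller than the module size are simply dropped in this sketch."""
    h, w = len(image), len(image[0])
    tiles = []
    for top in range(0, h - module + 1, module):
        for left in range(0, w - module + 1, module):
            tiles.append([row[left:left + module]
                          for row in image[top:top + module]])
    return tiles
```

A 100 × 150 pixel frame, for instance, yields 2 × 3 = 6 modules, each of which is then clustered into its own feature matrix M.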
Further, the calculating the feature recognition distance between the object in the environmental image data and the obstacle in the environmental sample database according to the obstacle category includes:
1) Performing convolution processing on the feature clustering matrix to obtain convolved features, wherein the method specifically comprises the following steps of: inputting the feature cluster matrix into an input layer;
And carrying out convolution processing on the input feature map to make the image features more obvious. In the operation of the j-th convolutional layer, the feature map of the j-th layer may be expressed as:
X_j = f( Σ_i X_i * K_ij + b_j )
wherein X_j is the feature map of the j-th layer, representing the output map; f(·) is the excitation function; * is the 2-dimensional convolution operator; X_i is the feature map of the i-th layer, representing the input map; K_ij is the convolution kernel; b_j is the bias value.
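A minimal pure-Python version of the layer formula (single input map, single kernel, 'valid' output region, identity excitation) might look like the sketch below; the sum over input maps and the excitation function are left out for brevity:

```python
def conv2d_valid(x, k, bias=0.0):
    """2-D 'valid' convolution of feature map x with kernel k, plus a bias.
    Corresponds to one term X_i * K_ij + b_j of the layer formula. Note that,
    like most CNN implementations, this computes cross-correlation (the
    kernel is not flipped)."""
    kh, kw = len(k), len(k[0])
    out = []
    for j in range(len(x) - kh + 1):
        row = []
        for i in range(len(x[0]) - kw + 1):
            acc = bias
            for m in range(kh):
                for n in range(kw):
                    acc += x[j + m][i + n] * k[m][n]
            row.append(acc)
        out.append(row)
    return out
```

Sliding a 2 × 2 identity-diagonal kernel over a 3 × 3 map produces a 2 × 2 output map, each entry summing the two diagonal pixels under the kernel.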
2) And carrying out the maximum pooling operation on the convolved features to obtain the feature value after maximum pooling. The max pooling operation is calculated by the following formula:
y_{i,(j,k)} = max_{0≤m,n<s} x_{i,(j·s+m, k·s+n)}
wherein max denotes taking the maximum value; s is the size of the pooling layer area; y_{i,(j,k)} is the neuron at position (j, k) of the i-th feature map; x_{i,(j·s+m, k·s+n)} is the neuron of the i-th feature map located at position (j·s+m, k·s+n), where m, n are the positional offsets of the neuron within the pooling area.
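The max pooling step can be sketched in a few lines; the non-overlapping stride (equal to the pooling size s) is an assumption of this sketch:

```python
def max_pool(x, s=2):
    """Non-overlapping s x s max pooling of a 2-D feature map:
    y[j][k] = max over offsets m, n < s of x[j*s + m][k*s + n]."""
    out = []
    for j in range(len(x) // s):
        row = []
        for k in range(len(x[0]) // s):
            row.append(max(x[j * s + m][k * s + n]
                           for m in range(s) for n in range(s)))
        out.append(row)
    return out
```

Pooling a 4 × 4 map with s = 2 halves each dimension, keeping the strongest response in each 2 × 2 area.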
3) And exciting the feature value after the maximum pooling to obtain the excited feature value, highlighting the features of the input image again through excitation. The invention adopts the ReLU function R(x) as the excitation function: when the feature value x is higher than a preset threshold, excitation is given; otherwise, the value is cleared to zero;
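The excitation described here is a thresholded variant of the usual ReLU (which clears at zero); a one-line sketch, with the threshold parameter as an illustrative assumption:

```python
def thresholded_relu(x, threshold=0.0):
    """Pass x through when it exceeds the preset threshold, else clear to zero.
    With threshold=0 this reduces to the standard ReLU R(x) = max(0, x)."""
    return x if x > threshold else 0.0
```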
4) Calculating the training error of the environment image. The training error is calculated by the squared error method, the training error E_e of the e-th sample being expressed as:
E_e = (1/2) Σ_{f=1}^{c} (t_f^e − y_f^e)²
wherein c is the number of output nodes; f is a cyclic variable, f = 1, 2, …, c; t_f^e is the gradient true value of the e-th sample at the f-th node; y_f^e is the trained output of the output layer for the e-th sample at the f-th node.
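The squared-error computation over the c output nodes is a short fold; this sketch assumes targets and outputs are given as parallel sequences:

```python
def squared_error(targets, outputs):
    """Training error E_e = 1/2 * sum over the c output nodes of
    (t_f - y_f)^2, per the squared error method described above."""
    return 0.5 * sum((t - y) ** 2 for t, y in zip(targets, outputs))
```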
5) Normalizing the training error, which specifically includes normalizing the training error according to the following formula to obtain the normalized error Ê_e:
Ê_e = (E_e² − μ) / σ
wherein E_e² is the square value of E_e; μ is the mean of the squared errors of the previous e samples; σ is the standard deviation.
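Read as a standardization of the squared errors, the normalization step might be sketched as follows; treating `history` as the list of earlier per-sample errors, and the zero-deviation guard, are assumptions of this sketch:

```python
def normalize_error(e, history):
    """Standardize the squared training error of the current sample against
    the mean and (population) standard deviation of the previous samples'
    squared errors, per the normalization formula above."""
    squares = [h ** 2 for h in history]
    mean = sum(squares) / len(squares)
    var = sum((s - mean) ** 2 for s in squares) / len(squares)
    std = var ** 0.5 or 1.0          # guard against zero standard deviation
    return (e ** 2 - mean) / std
```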
6) Calculating the feature recognition distance between the feature cluster matrix M and the excited feature value, that is, the feature recognition distance between the feature cluster matrix M and the sample,
wherein α, β, γ, τ are initialization correction parameters, and x i ∈ M.
The larger the feature recognition distance, the closer the features of the matrix M are to the sample. If the feature recognition distance is larger than the set threshold value, the object in the environment image data is an obstacle;
and if the feature recognition distance is smaller than the set threshold value, the object in the environment image data is not an obstacle, and the environment image data is defined as self-learning data. The obstacle is judged according to the feature recognition distance, and image data with smaller feature distances are collected as self-learning data, which facilitates optimizing the feature distance calculation parameters according to the obstacle recognition performance in actual use and improves the obstacle recognition accuracy.
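The threshold decision and the collection of self-learning data can be sketched together; the pool structure is an illustrative assumption:

```python
def judge(distance, threshold, self_learning_pool):
    """Decide obstacle vs. non-obstacle from the feature recognition distance.
    Non-obstacle frames are appended to a self-learning pool so the distance
    calculation parameters can be re-tuned later, as described above."""
    if distance > threshold:
        return True                      # object is an obstacle
    self_learning_pool.append(distance)  # keep for later parameter optimization
    return False
```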
And (3) carrying out image characteristic value extraction and image matching by adopting a deep convolutional neural network model. When the deep convolutional neural network model performs intelligent recognition training and classification, training parameters are reduced through local connection and weight sharing. The obstacle recognition is carried out through the deep convolutional neural network model, so that the characteristics of the obstacle can be automatically learned from the training sample, the manual design and the intervention are reduced, the data complex structure can be found from the multi-layer learning mode, and the success rate of the obstacle recognition in the image is improved.
Fig. 2 is a block diagram of an image-based work platform obstacle recognition system according to one embodiment of the present invention. As shown in fig. 2, the obstacle recognition system includes:
the environment image data acquisition unit is used for acquiring environment image data without blind areas around the working platform transmitted by each channel;
the feature cluster matrix calculation unit is used for calculating a feature cluster matrix of the environment image data;
the obstacle type judging unit is used for judging the type of the obstacle to which the characteristic clustering matrix of the environment image data belongs according to the characteristic clustering matrix of the environment image data and the characteristic clustering matrix of the obstacle in the environment sample database;
A feature recognition distance calculation unit, configured to calculate a feature recognition distance between an object in the environmental image data and an obstacle in an environmental sample database according to the obstacle category;
and the obstacle judging unit is used for judging whether the object in the environment image data is an obstacle or not according to the characteristic recognition distance. The obstacle recognition system acquires environment image data of the working platform to perform image matching recognition, determines whether objects around the working platform are obstacles or not, recognizes the types of the obstacles, and avoids that non-obstacles such as body parts of operators are misjudged as the obstacles.
Further, the feature recognition distance calculation unit includes:
the image characteristic value calculation unit is used for calculating the characteristic value of the input characteristic cluster matrix based on the deep convolutional neural network;
the training error calculation unit is used for calculating the training error of the environment image and normalizing the training error;
and the distance calculation unit is used for calculating the feature recognition distance between the feature cluster matrix and the feature value of the environment image data. And (3) carrying out image characteristic value extraction and image matching by adopting a deep convolutional neural network model. When the deep convolutional neural network model performs intelligent recognition training and classification, training parameters are reduced through local connection and weight sharing. The obstacle recognition is carried out through the deep convolutional neural network model, so that the characteristics of the obstacle can be automatically learned from the training sample, the manual design and the intervention are reduced, the data complex structure can be found from the multi-layer learning mode, and the success rate of the obstacle recognition in the image is improved.
Fig. 3 is a block diagram of an image-based work platform collision avoidance system provided by an embodiment of the present invention. The anti-collision system applies the image-based working platform obstacle recognition method, as shown in fig. 3, and the working platform anti-collision system comprises:
the environment data acquisition component is used for acquiring environment image data and ultrasonic radar data of non-blind areas around the working platform; and
the processor is used for acquiring the environment image data and the ultrasonic radar data and calculating a characteristic clustering matrix of the environment image data; judging the category of the obstacle to which the feature cluster matrix of the environmental image data belongs according to the feature cluster matrix of the environmental image data and the feature cluster matrix of the obstacle in the environmental sample database; calculating the characteristic recognition distance between the object in the environment image data and the obstacle in the environment sample database according to the obstacle category; judging whether an object in the environment image data is an obstacle or not according to the feature recognition distance; and judging the distance between the obstacle and the working platform through the ultrasonic radar data, and generating an anti-collision alarm according to the distance. The anti-collision system collects environment image data and ultrasonic radar data of the working platform through the environment data collection component, performs obstacle sample collection and training through a deep learning algorithm, achieves obstacle recognition in the environment image data, and generates anti-collision warning when the distance between the working platform and an obstacle is smaller than a set distance by combining the ultrasonic radar data.
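The fusion of the image-based obstacle decision with the ultrasonic radar range reduces to a guarded comparison. The 1000 mm default below is an illustrative assumption, not a value stated by the invention:

```python
def collision_alarm(is_obstacle, radar_distance_mm, alarm_distance_mm=1000):
    """Raise an anti-collision alarm only when an object recognized as an
    obstacle is closer than the set distance, per the processor logic above."""
    return is_obstacle and radar_distance_mm < alarm_distance_mm
```

In practice the processor would evaluate this per detection direction, so an alarm can be raised on the specific side where the obstacle approaches.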
Further, the environmental data collection assembly includes:
the camera is used for collecting environment image data of a head-up area of a platform surface of the working platform; and
the radar and vision composite sensor is used for collecting environment image data and ultrasonic radar data of a bottom viewing area above the working platform, a right left viewing area, a front rear viewing area, a rear front viewing area and a left right viewing area of the working platform. The radar, the vision composite sensor and the camera are used for collecting environmental data together, so that an environmental data basis is provided for obstacle recognition and anti-collision warning.
Further, the radar ranging angle of the radar and vision composite sensor is larger than 110°, with a measurement range of 100-2000 mm; the detection angle of the vision sensor in the composite sensor is larger than 180°, with a measurement range of 20-5000 mm. To acquire environment image data and ultrasonic radar data of the bottom view area above the working platform, the left view area on the right side, the front view area at the rear and the right view area on the left side, a radar and vision composite sensor must be arranged in each direction. Since the detection angle of the adopted vision sensor is larger than 180°, at least a 180° range in each direction, taking the installation direction as the normal, lies within the detection viewing angle of the vision sensor. The environment image data of the bottom view area above the platform, the left view area on the right side and the right view area on the left side therefore cover a blind-spot-free viewing angle of the longitudinal 270° range in the working plane; combined with the environment image data of the platform surface head-up area detected by the camera, detection without blind spots can be achieved in the longitudinal plane. Likewise, the environment image data of the left view area on the right side of the platform, the right view area on the left side and the front view area at the rear cover a blind-spot-free viewing angle of the transverse 270° range in the working plane; combined with the environment image data of the platform surface head-up area detected by the camera, detection without blind spots can be achieved in the transverse plane.
Fig. 4 is a flowchart of an image-based platform collision avoidance method according to an embodiment of the present invention. The anti-collision method is based on the image-based working platform anti-collision system, as shown in fig. 4, and comprises the following steps:
the environment data acquisition component acquires environment image data and ultrasonic radar data of non-blind areas around the working platform;
the processor acquires the environment image data and the ultrasonic radar data, and calculates a feature cluster matrix of the environment image data; judging the category of the obstacle to which the feature cluster matrix of the environmental image data belongs according to the feature cluster matrix of the environmental image data and the feature cluster matrix of the obstacle in the environmental sample database; calculating the characteristic recognition distance between the object in the environment image data and the obstacle in the environment sample database according to the obstacle category; judging whether an object in the environment image data is an obstacle or not according to the feature recognition distance; and carrying out fusion processing on the obstacle recognition result and the ultrasonic radar data, judging the distance between the obstacle and the working platform, and generating an anti-collision alarm according to the distance. The method collects environmental image data and ultrasonic radar data of a working platform. The obstacle sample collection and training are carried out through a deep learning algorithm, the obstacle recognition in the environment image data is realized, and an anti-collision alarm is generated when the distance between the working platform and the obstacle is smaller than the set distance by combining the ultrasonic radar data.
In a first aspect, the present application provides an aerial work platform comprising the image-based work platform collision avoidance system. And arranging the anti-collision system on the aerial working platform to realize anti-collision warning in the moving process of the aerial working platform.
The application is described in detail by taking a crank arm type aerial working platform as an example, as shown in fig. 5, the crank arm type aerial working machine comprises a chassis 1, a turntable 2, a folding arm 3, a telescopic arm 4, a fly arm 5 and a platform 6 which are sequentially connected, wherein the platform 6 corresponds to the working platform in the application. On the crank arm type aerial work platform, one side face of the work platform is connected with the fly arm 5, the aerial work platform is lifted or translated through the turntable 2, the folding arm 3, the telescopic arm 4 and the fly arm 5, and in this case, the connection side of the work platform and the fly arm 5 is not easy to collide, so that monitoring on the fly arm side can be omitted when the environment data acquisition assembly is arranged. As shown in fig. 6, the connection side of the working platform and the fly arm 5 is defined as the front, the left, right and rear positions can be correspondingly determined, and the upper and lower positions are determined according to common knowledge.
As shown in fig. 7, when the environmental data collection assembly is arranged, a camera 8 facing backward and a first radar and vision composite sensor 7 facing upward are arranged on the front fence of the working platform, the camera 8 is used for collecting environmental image data of a head-up area of the surface of the working platform, and the first radar and vision composite sensor 7 is used for collecting environmental image data and ultrasonic radar data of a head-up area above the working platform; a second radar and vision composite sensor 11 facing to the left is arranged on the left fence of the working platform and is used for collecting environment image data and ultrasonic radar data of a left-right vision area of the working platform; a third radar and vision composite sensor 9 facing to the right is arranged on the right fence of the working platform and is used for collecting environment image data and ultrasonic radar data of a left-looking area on the right side of the working platform; a fourth radar and vision compound sensor 10 is arranged on the rear fence of the work platform, directed rearward, for acquiring environmental image data and ultrasonic radar data of the forward looking area behind the work platform. The processor is arranged on the working platform. In this embodiment, the work platform is provided with a control assembly, and the processor may be implemented using a processor in the work platform control assembly.
With the above arrangement, the data of the first radar and vision composite sensor 7, the second radar and vision composite sensor 11, the third radar and vision composite sensor 9 cover blind spot-free viewing angles and distance detection of 270 ° of the longitudinal range of the working platform, and the environmental image data collected by the camera 8 can be detected without dead angles in the longitudinal plane. The data of the second radar and vision composite sensor 11, the third radar and vision composite sensor 9 and the fourth radar and vision composite sensor 10 cover the blind spot-free visual angle and distance detection of 270 degrees of the transverse range of the working platform, and the environmental image data collected by the camera 8 can be detected in a transverse plane without dead angles.
The camera 8 and the first to fourth radar and vision composite sensors acquire environment image data and ultrasonic radar data, which are transmitted to the processor for obstacle identification. The processor calculates the feature cluster matrix of the environment image data; judges the obstacle category to which the feature cluster matrix of the environment image data belongs according to the feature cluster matrix of the environment image data and the feature cluster matrices of obstacles in the environment sample database; calculates the feature recognition distance between objects in the environment image data and obstacles in the environment sample database according to the obstacle category; judges whether an object in the environment image data is an obstacle according to the feature recognition distance; and, combining the ultrasonic radar data, judges the distance of the obstacle and generates an anti-collision alarm when the distance is smaller than a set value.
In other embodiments, the work platform collision avoidance system may further include a display screen, wherein when the processor generates the collision avoidance alert, a red alert prompt is generated in a corresponding direction on the display screen, and for an object determined to be not an obstacle, the object is identified on the display screen as a green acceptable portion.
In other embodiments, the processor may also control the work platform to cease action while generating the anti-collision alert.
In other embodiments, the environment data acquisition component can be additionally provided with a radar and vision composite sensor so as to acquire environment image data and ultrasonic radar data within the range of 360 degrees and 270 degrees of the horizontal direction and the longitudinal direction of the working platform, thereby forming the non-blind area obstacle monitoring of the working platform.
In other embodiments, other neural network algorithms with the same function may be used to perform image matching recognition to identify whether an object in the environmental image belongs to an obstacle in the sample database.
According to the invention, a working platform environment sample database is established based on a deep learning algorithm, and surrounding environment objects can be intelligently identified. Image features can be automatically learned from the training sample set, manual design and intervention are reduced, complex data structures can be discovered through multi-layer learning, and the success rate of image recognition is improved. Whether an object is an obstacle is distinguished through intelligent detection: when objects such as an operator or a tool are detected, the system calls the contents of the aerial work environment database for comparison and judgment; when the system judges the object to be a non-obstacle, the object is marked as a green acceptable part, thereby achieving intelligent identification and providing the user with anti-collision protection more efficiently and conveniently. By using the fisheye effect of the camera and performing data fusion with the radar ranging system (both implemented by the radar and vision composite sensors), detection and alarming without dead angles or blind areas can be achieved around the working platform.
In another aspect, the present invention provides a machine-readable storage medium having stored thereon instructions for causing a machine to perform the image-based work platform collision avoidance method.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program stored in a storage medium, the program including several instructions for causing a single-chip microcomputer, chip or processor to perform all or part of the steps of the methods according to the embodiments of the invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program code.
The alternative embodiments of the present invention have been described in detail above with reference to the accompanying drawings, but the embodiments of the present invention are not limited to the specific details of the above embodiments, and various simple modifications may be made to the technical solutions of the embodiments of the present invention within the scope of the technical concept of the embodiments of the present invention, and all the simple modifications belong to the protection scope of the embodiments of the present invention. In addition, the specific features described in the above embodiments may be combined in any suitable manner without contradiction. In order to avoid unnecessary repetition, the various possible combinations of embodiments of the invention are not described in detail.
In addition, any combination of the various embodiments of the present invention may be made, so long as it does not deviate from the idea of the embodiments of the present invention, and it should also be regarded as what is disclosed in the embodiments of the present invention.

Claims (15)

1. An image-based work platform obstacle recognition method, the method comprising:
acquiring environment image data without blind areas around a working platform transmitted by each channel;
calculating a feature cluster matrix M of the environmental image data, wherein M = Eva{L, H, P, δ};
wherein Eva{ } is a clustering calculation function; L is the image width; H is the image height; P is the channel; δ is the product;
judging the category of the obstacle to which the feature cluster matrix of the environmental image data belongs according to the feature cluster matrix of the environmental image data and the feature cluster matrix of the obstacle in the environmental sample database;
calculating the feature recognition distance between the object in the environment image data and the obstacle in the environment sample database according to the obstacle category, wherein the feature recognition distance comprises the following steps:
carrying out convolution treatment on the feature clustering matrix to obtain convolved features;
performing maximum pooling operation on the convolved features to obtain a feature value after maximum pooling;
Exciting the characteristic value after the maximum pooling to obtain an excited characteristic value;
calculating a feature recognition distance between the feature cluster matrix and the excited feature value, including:
calculating the feature recognition distance between the feature cluster matrix M and the sample,
wherein α, β, γ, τ are initialization correction parameters, and x i ∈ M;
And judging whether the object in the environment image data is an obstacle or not according to the characteristic recognition distance.
2. The image-based work platform obstacle recognition method of claim 1, wherein the environmental image data comprises:
and environment image data of a bottom view area, a platform surface head-up area, a right left view area, a left right view area, a front rear view area and a rear front view area above the working platform.
3. The image-based working platform obstacle recognition method according to claim 1, wherein the determining the obstacle category to which the feature cluster matrix of the environmental image data belongs according to the feature cluster matrix of the environmental image data and the feature cluster matrix of the obstacle in the environmental sample database includes:
judging whether the feature cluster matrix of the environment image data belongs to the feature cluster matrix of the obstacle in any environment sample database or not:
if it does not belong to any of them, judging that the object in the environment image data is an obstacle;
and if it does belong, determining the obstacle category to which the feature cluster matrix of the environment image data belongs.
4. The image-based work platform obstacle recognition method according to claim 1, wherein the calculating the feature recognition distance of the object in the environmental image data and the obstacle in the environmental sample database according to the obstacle class further comprises:
calculating a training error of the environmental image;
and normalizing the training error.
5. The image-based work platform obstacle recognition method according to claim 1, wherein the determining whether an object in the environmental image data is an obstacle according to the feature recognition distance comprises:
if the feature recognition distance is larger than a set threshold value, the object in the environment image data is an obstacle;
and if the feature recognition distance is smaller than a set threshold value, the object in the environment image data is not an obstacle, and the environment image data is defined as self-learning data.
6. The image-based work platform obstacle recognition method of claim 1, wherein the computing the feature cluster matrix of the environmental image data comprises:
dividing the environmental image data into a plurality of image modules of a set pixel size;
and calculating a characteristic cluster matrix M of the image module.
7. The image-based working platform obstacle recognition method according to claim 1, wherein the performing convolution processing on the feature cluster matrix to obtain a convolved feature comprises:
inputting the feature cluster matrix into an input layer;
and carrying out convolution processing on the input feature map, wherein, in the operation of the j-th convolutional layer, the feature map of the j-th layer is expressed as:
X_j = f( Σ_i X_i * K_ij + b_j )
wherein X_j is the feature map of the j-th layer, representing the output map; f(·) is the excitation function; * is the 2-dimensional convolution operator; X_i is the feature map of the i-th layer, representing the input map; K_ij is the convolution kernel; b_j is the bias value;
the max pooling operation is calculated by the following formula:
y_{i,(j,k)} = max_{0≤m,n<s} x_{i,(j·s+m, k·s+n)}
wherein max denotes taking the maximum value; s is the size of the pooling layer area; y_{i,(j,k)} is the neuron at position (j, k) of the i-th feature map; x_{i,(j·s+m, k·s+n)} is the neuron of the i-th feature map located at position (j·s+m, k·s+n), where m, n are the positional offsets of the neuron within the pooling area;
exciting the feature value after the maximum pooling to obtain the excited feature value, including: adopting the ReLU function R(x) as the excitation function, giving excitation when the feature value x is higher than a preset threshold, and otherwise clearing the value to zero;
8. The image-based work platform obstacle recognition method of claim 4, wherein the calculating the training error of the environmental image comprises:
calculating the training error of the environment image by adopting the squared error method, the training error E_e of the e-th sample being expressed as:
E_e = (1/2) Σ_{f=1}^{c} (t_f^e − y_f^e)²
wherein c is the number of output nodes; f is a cyclic variable, f = 1, 2, …, c; t_f^e is the gradient true value of the e-th sample at the f-th node; y_f^e is the trained output of the output layer for the e-th sample at the f-th node;
the normalizing process for the training error comprises the following steps:
normalizing the training error according to the following formula to obtain the normalized error Ê_e:
Ê_e = (E_e² − μ) / σ
wherein E_e² is the square value of E_e; μ is the mean of the squared errors of the previous e samples; σ is the standard deviation.
9. An image-based work platform obstacle recognition system, the obstacle recognition system comprising:
the environment image data acquisition unit is used for acquiring environment image data without blind areas around the working platform transmitted by each channel;
the feature cluster matrix calculation unit is used for calculating a feature cluster matrix of the environment image data;
the obstacle type judging unit is used for judging the type of the obstacle to which the characteristic clustering matrix of the environment image data belongs according to the characteristic clustering matrix of the environment image data and the characteristic clustering matrix of the obstacle in the environment sample database;
the feature recognition distance calculating unit is used for calculating the feature recognition distance between an object in the environmental image data and an obstacle in the environmental sample database according to the obstacle category, and comprises:
the image characteristic value calculation unit is used for calculating the characteristic value of the input characteristic cluster matrix based on the deep convolutional neural network;
the training error calculation unit is used for calculating the training error of the environment image and normalizing the training error;
a distance calculating unit for calculating a feature recognition distance between the feature cluster matrix and the feature value of the environmental image data;
and the obstacle judging unit is used for judging whether the object in the environment image data is an obstacle or not according to the characteristic recognition distance.
10. An image-based work platform collision avoidance system applying the image-based work platform obstacle recognition method of any one of claims 1-8, wherein the work platform collision avoidance system comprises:
the environment data acquisition component is used for acquiring environment image data and ultrasonic radar data of non-blind areas around the working platform; and
the processor is used for acquiring the environment image data and the ultrasonic radar data and calculating a characteristic clustering matrix of the environment image data; judging the category of the obstacle to which the feature cluster matrix of the environmental image data belongs according to the feature cluster matrix of the environmental image data and the feature cluster matrix of the obstacle in the environmental sample database; calculating the characteristic recognition distance between the object in the environment image data and the obstacle in the environment sample database according to the obstacle category; judging whether an object in the environment image data is an obstacle or not according to the feature recognition distance; and judging the distance between the obstacle and the working platform through the ultrasonic radar data, and generating an anti-collision alarm according to the distance.
11. The image-based work platform collision avoidance system of claim 10 wherein the environmental data acquisition component comprises:
the camera is used for collecting environment image data of a head-up area of a platform surface of the working platform; and
the radar and vision composite sensor is used for collecting environment image data and ultrasonic radar data of a downward-viewing area above the working platform, a leftward-viewing area on the right side, a rearward-viewing area at the front, a forward-viewing area at the rear, and a rightward-viewing area on the left side of the working platform.
12. The image-based work platform collision avoidance system of claim 10, wherein the radar ranging angle of the radar and vision composite sensor is greater than 110°, with a measurement range of 100-2000 mm; the detection angle of the vision sensor of the radar and vision composite sensor is greater than 180°, with a measurement range of 20-5000 mm.
13. An image-based work platform collision avoidance method based on the image-based work platform collision avoidance system of any of claims 10 to 12, the method comprising:
the environment data acquisition component acquires environment image data and ultrasonic radar data of non-blind areas around the working platform;
The processor acquires the environment image data and the ultrasonic radar data, and calculates a feature cluster matrix of the environment image data; judging the category of the obstacle to which the feature cluster matrix of the environmental image data belongs according to the feature cluster matrix of the environmental image data and the feature cluster matrix of the obstacle in the environmental sample database; calculating the characteristic recognition distance between the object in the environment image data and the obstacle in the environment sample database according to the obstacle category; judging whether an object in the environment image data is an obstacle or not according to the feature recognition distance; and carrying out fusion processing on the obstacle recognition result and the ultrasonic radar data, judging the distance between the obstacle and the working platform, and generating an anti-collision alarm according to the distance.
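The fusion and alarm step of this method can be sketched as follows. This is an illustrative sketch only: the claims do not specify the alarm thresholds, so the distance thresholds, alarm levels, and function name below are all assumptions:

```python
def collision_alarm(recognized_obstacle, radar_distance_mm,
                    warn_mm=2000, stop_mm=500):
    """Fuse the image-based recognition result with the ultrasonic radar
    distance and map the distance to an alarm level (thresholds assumed)."""
    if not recognized_obstacle:
        return "none"          # nothing recognized: no alarm
    if radar_distance_mm <= stop_mm:
        return "stop"          # obstacle dangerously close: halt motion
    if radar_distance_mm <= warn_mm:
        return "warn"          # obstacle approaching: raise a warning
    return "none"              # obstacle recognized but still far away

# Example: a recognized obstacle 300 mm away triggers the "stop" alarm,
# while the same radar reading without a recognized obstacle does not.
```

The key point of the fusion is that the radar distance only raises an alarm when the vision pipeline has confirmed an obstacle, which suppresses false alarms from radar noise alone.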
14. An aerial work platform comprising the image-based work platform collision avoidance system of any of claims 10 to 12.
15. A machine-readable storage medium having instructions stored thereon for causing a machine to perform the image-based work platform collision avoidance method of claim 13.
CN202011223503.7A 2020-11-05 2020-11-05 Work platform obstacle recognition method and system and anti-collision method and system Active CN112418003B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011223503.7A CN112418003B (en) 2020-11-05 2020-11-05 Work platform obstacle recognition method and system and anti-collision method and system


Publications (2)

Publication Number Publication Date
CN112418003A CN112418003A (en) 2021-02-26
CN112418003B true CN112418003B (en) 2023-09-29

Family

ID=74827600

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011223503.7A Active CN112418003B (en) 2020-11-05 2020-11-05 Work platform obstacle recognition method and system and anti-collision method and system

Country Status (1)

Country Link
CN (1) CN112418003B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108596009A (en) * 2017-12-29 2018-09-28 西安智加科技有限公司 A kind of obstacle detection method and system for agricultural machinery automatic Pilot
CN111144279A (en) * 2019-12-25 2020-05-12 苏州奥易克斯汽车电子有限公司 Method for identifying obstacle in intelligent auxiliary driving
CN111160302A (en) * 2019-12-31 2020-05-15 深圳一清创新科技有限公司 Obstacle information identification method and device based on automatic driving environment
CN111191559A (en) * 2019-12-25 2020-05-22 国网浙江省电力有限公司泰顺县供电公司 Overhead line early warning system obstacle identification method based on time convolution neural network
CN111507145A (en) * 2019-01-31 2020-08-07 上海欧菲智能车联科技有限公司 Method, system and device for detecting barrier at storage position of embedded vehicle-mounted all-round looking system



Similar Documents

Publication Publication Date Title
CN110689761B (en) Automatic parking method
CN108596058A (en) Running disorder object distance measuring method based on computer vision
CN109460709B (en) RTG visual barrier detection method based on RGB and D information fusion
US20230014874A1 (en) Obstacle detection method and apparatus, computer device, and storage medium
CN107031623B (en) A kind of road method for early warning based on vehicle-mounted blind area camera
CN113255481B (en) Crowd state detection method based on unmanned patrol car
US10878288B2 (en) Database construction system for machine-learning
CN113370977B (en) Intelligent vehicle forward collision early warning method and system based on vision
WO2022188663A1 (en) Target detection method and apparatus
CN112967283B (en) Target identification method, system, equipment and storage medium based on binocular camera
CN112836633A (en) Parking space detection method and parking space detection system
CN103279741A (en) Pedestrian early warning system based on vehicle-mounted infrared image and working method thereof
Bai et al. Stereovision based obstacle detection approach for mobile robot navigation
CN107796373A (en) A kind of distance-finding method of the front vehicles monocular vision based on track plane geometry model-driven
CN112598066A (en) Lightweight road pavement detection method and system based on machine vision
CN112861631A (en) Wagon balance human body intrusion detection method based on Mask Rcnn and SSD
CN113034378A (en) Method for distinguishing electric automobile from fuel automobile
CN116486287A (en) Target detection method and system based on environment self-adaptive robot vision system
CN115100741A (en) Point cloud pedestrian distance risk detection method, system, equipment and medium
CN114359865A (en) Obstacle detection method and related device
KR20160081190A (en) Method and recording medium for pedestrian recognition using camera
CN112418003B (en) Work platform obstacle recognition method and system and anti-collision method and system
CN113971697A (en) Air-ground cooperative vehicle positioning and orienting method
Álvarez et al. Perception advances in outdoor vehicle detection for automatic cruise control
CN116805234A (en) Warehouse material control method based on laser radar and camera fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 410010 No. 701 Xulong South Road, Xiangjiang New District, Changsha City, Hunan Province

Applicant after: Hunan Zoomlion intelligent aerial work machinery Co.,Ltd.

Address before: 410010 room 4110, 4th floor, office building, 677 Lugu Avenue, high tech Development Zone, Changsha City, Hunan Province

Applicant before: Hunan Zoomlion intelligent aerial work machinery Co.,Ltd.

GR01 Patent grant