CN112395961B - Vision active pedestrian avoidance and water pressure self-adaptive control method for sprinkler - Google Patents


Info

Publication number
CN112395961B
CN112395961B (application CN202011198244.7A)
Authority
CN
China
Prior art keywords
image
pedestrian
pixel
value
water pressure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011198244.7A
Other languages
Chinese (zh)
Other versions
CN112395961A (en)
Inventor
续欣莹
杨斌超
谢刚
程兰
张喆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taiyuan University of Technology
Original Assignee
Taiyuan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taiyuan University of Technology filed Critical Taiyuan University of Technology
Priority to CN202011198244.7A priority Critical patent/CN112395961B/en
Publication of CN112395961A publication Critical patent/CN112395961A/en
Application granted granted Critical
Publication of CN112395961B publication Critical patent/CN112395961B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0259Control of position or course in two dimensions specially adapted to land vehicles using magnetic or electromagnetic means
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
    • Y02A40/22Improving land use; Improving water use or availability; Controlling erosion

Abstract

A visual active pedestrian avoidance and water-pressure self-adaptive control method for a sprinkler comprises the following steps: firstly, a binocular camera acquires real-time video, which is processed to obtain an acquired image L and an acquired image R; secondly, binocular stereo matching is performed on the acquired images L and R to obtain a visual difference (disparity) image; thirdly, the acquired images L and R are each fed into a neural network for pedestrian detection; fourthly, a pedestrian-target consistency check is performed on the detection results, and pedestrian depth information is obtained from the visual difference; fifthly, the pedestrian depth information is fed into a PID algorithm for real-time self-adaptive control of the water pressure; and sixthly, the relevant data are sent to the monitoring unit. The technical scheme of the embodiments of the invention realizes real-time video processing, pedestrian detection and localization, self-adaptive water-pressure control, and active pedestrian avoidance; it solves the problems of active pedestrian avoidance and self-adaptive water-pressure control for current sprinklers and is convenient to popularize and use.

Description

Vision active pedestrian avoidance and water pressure self-adaptive control method for sprinkler
Technical Field
The invention belongs to the technical field of intelligent sprinkler spraying systems, and particularly relates to a visual active pedestrian avoidance and water pressure self-adaptive control method for a sprinkler.
Background
Existing sprinkler trucks are used for road-surface cleaning, dust suppression, and similar work on city streets and in factory parks, controlling pollution and keeping the environment clean; they are an indispensable road-cleaning tool in today's society. However, while driving, a sprinkler often sprays pedestrians who appear suddenly, causing disputes and even legal problems. Moreover, because sprinkler control has not yet been made intelligent, water waste is severe.
There are many technologies for detecting and locating targets, such as lidar and binocular stereo vision. Binocular stereo vision offers low cost, strong practicality, high precision, and wide applicability; target detection and localization with binocular vision have developed greatly, with face unlocking on mobile phones being the most widespread application. The camera detects and recognizes a target, then locates it using the visual difference to obtain specific target coordinates; however, due to the large amount of computation and low efficiency, this approach has not been widely applied in industry. In image-based pedestrian detection, early feature-point-matching methods suffer from low recognition rates because they are affected by lighting and by the visual distortion produced by human motion. The neural network models later developed through deep learning solve the problems of low recognition rate, low efficiency, and dependence on manual parameter tuning, but such models are usually too complex, have large parameter counts, and are prone to overfitting, which hinders low-cost, real-time use. Traditional control of a sprinkler water pump is usually a simple on/off mode: the control mode is single, the cab control end is not intelligent, and it is inconvenient to operate and to observe the surrounding environment and the pump's working state in real time.
Therefore, what is currently lacking is a sprinkler visual active pedestrian avoidance and water-pressure self-adaptive control method that is simple in structure, small in size, low in cost, reasonable in design, intelligent, responsive, and real-time, in which a binocular camera, cab equipment, water-pump control equipment, and the like are installed on the sprinkler to realize visual active pedestrian avoidance and water-pressure self-adaptive control.
Disclosure of Invention
The technical problem to be solved by the invention is to provide, in view of the above deficiencies in the prior art, a visual active pedestrian avoidance and water-pressure self-adaptive control method for a sprinkler, designed to solve the problems of active pedestrian avoidance and water-pressure self-adaptive control for existing sprinklers, with good real-time performance and quick response, and convenient to popularize and use.
In order to solve the above technical problems, the invention adopts the following technical scheme, running on embedded equipment (a device composed of hardware and software that can operate independently):
A visual active pedestrian avoidance and water-pressure self-adaptive control method for a sprinkler comprises: performing epipolar constraint correction (rectification) on the binocular camera, uniformly acquiring video image frames for image analysis, and respectively normalizing an acquired image L and an acquired image R, where L and R are the left and right original images of the binocular camera;
converting the normalized acquired images L and R into R, G, B three channels (red, green, and blue color channels respectively), and calculating the absolute difference of each pixel between the three channels of image L and image R to obtain the mean absolute difference;
converting the acquired image L and the acquired image R which are subjected to normalization processing into gray level images, dividing the gray level images into a plurality of local images according to a rectangle, carrying out comparison coding on other pixel values in each local image by taking a pixel value of a central point of the rectangle as a threshold value to obtain a bit string of a local area, and then calculating a bit string difference value;
carrying out weighted summation on the obtained absolute difference mean value and the obtained bit string difference value to obtain binocular image matching cost;
converting the acquired image L and the acquired image R after normalization processing into five-dimensional characteristic vectors under a CIELAB space and a two-dimensional coordinate, establishing a distance measurement range for the five-dimensional characteristic vectors, carrying out local clustering on pixel points and establishing constraint conditions for clustering optimization to obtain superpixel segmentation blocks of the acquired image L and the acquired image R;
optimizing the matching cost of the obtained binocular image, and obtaining the optimal matching effect by minimizing a global optimization function, wherein the weight of the global optimization function is determined by whether pixel points belong to the same super-pixel segmentation block;
when two pixels belong to the same pixel block, the weight W2 takes the value W1; when they do not belong to the same block, W2 takes the value W3, where W3 is dynamically adjusted according to the gray-level difference of the superpixel-segmented pixels; W1, W2, and W3 are weight coefficients for different pixel blocks and are real numbers;
a pedestrian detection method based on a neural network, the neural network comprising a backbone network, a fusion network, and a decision unit;
the backbone network is a feature extraction network and obtains feature maps with a plurality of sizes;
the fusion network adds the obtained multi-scale characteristic image tensors with a plurality of sizes to obtain a multi-scale fusion characteristic image;
the decision unit comprises target bounding-box prediction, coordinate prediction, and pedestrian prediction: at least 2 bounding boxes are predicted through logistic regression, the predicted box with the largest intersection-over-union with the ground-truth annotation box is taken as the final predicted box, the coordinate prediction is obtained by calculating the center-point coordinates of the final predicted box, and the pedestrian class prediction is obtained through binary cross-entropy loss;
the loss function of the neural network is the sum of the coordinate-prediction variance, the bounding-box-prediction variance, the pedestrian-present confidence variance, the pedestrian-absent confidence variance, and the pedestrian class-probability variance;
the output is several feature maps of size T × T (T a positive integer); each cell is responsible for detecting pedestrian targets falling in its grid, and the maps are obtained by down-sampling the original image at different rates; the number of predicted bounding boxes per layer is 2;
tensor size transformation of images in the neural network is realized by changing the step length of a convolution kernel of the neural network;
the neural network convolution kernel is in the form of 1 × 3, 3 × 1 or 1 × 5, 5 × 1;
performing consistency checking on the predicted pedestrian boxes in acquired images L and R: when the intersection-over-union of the predicted boxes exceeds 0.8 the prediction is taken as correct, otherwise it is discarded as wrong; from a correct predicted box, its two-dimensional coordinates and visual difference are calculated and, together with the binocular camera's intrinsic parameters, give the pedestrian's three-dimensional coordinates.
Based on the obtained three-dimensional coordinate values, a median filtering algorithm is applied to the coordinates, and distances collected over several frames are sorted and filtered to obtain accurate depth information;
based on the obtained depth information, the depth is fed into a PID algorithm to produce the input for controlling the frequency converter, and the frequency converter drives a permanent magnet synchronous motor water pump to control the water pressure.
Meanwhile, the invention also discloses an electronic device with simple method steps and reasonable design that can monitor the environment around the sprinkler, comprising:
one or more processors, one or more sets of binocular cameras, one or more machine readable media having instructions stored thereon, one or more hydraulic control units, one or more supervisory control devices;
the one or more sets of binocular cameras, the one or more machine readable media having instructions stored thereon, the one or more hydraulic control units, and the one or more supervisory control devices are each coupled to the one or more processors.
The one or more processors enable the electronic device to execute a visual active pedestrian avoidance and water pressure adaptive control method for a sprinkler when the one or more processors are in operation;
the one or more groups of binocular cameras are video image acquisition units and acquire an acquired image L and an acquired image R, wherein the L and the R are respectively a left original image and a right original image of the binocular cameras;
the one or more machine-readable media stores instructions to perform a visual active pedestrian avoidance and water pressure adaptive control method for a sprinkler;
the one or more water pressure control units comprise a synchronous permanent magnet motor water pump, a frequency converter, an electromagnetic valve and the like to control water pressure;
the one or more monitoring control devices comprise a control panel, a display interface, and the like; the control panel controls functions such as the water-pump switch and the pedestrian-detection working switch, and the display interface displays real-time video transmitted over a serial port and the related pedestrian-detection control parameters.
Compared with the prior art, the invention has the following advantages:
1. The invention adopts a sprinkler visual active pedestrian avoidance and water-pressure self-adaptive control method that integrates a real-time binocular stereo matching and localization algorithm with neural-network pedestrian detection: it computes the pedestrians' three-dimensional coordinates and depth information, filters them, controls the water pressure in real time through a PID algorithm, and transmits real-time parameters to the control end, filling a gap in the industrial application of active pedestrian avoidance and water-pressure self-adaptive control for sprinklers;
2. The real-time binocular stereo matching and localization algorithm optimizes the matching cost and the aggregation scheme of the binocular stereo matching algorithm; it has a low computational load and good real-time performance and is convenient to use on an industrial embedded platform.
3. The pedestrian-detection neural network improves detection precision with residual-structure feature fusion, reduces trainable parameters by adjusting convolution-kernel sizes, simplifies the model by optimizing the structure and the number of predicted bounding boxes, and reduces the false-detection rate through intersection-over-union consistency checking while improving running speed; the network has a small computational load and high accuracy and is convenient to popularize and use.
4. The invention filters the pedestrians' three-dimensional coordinates and depth information and then controls the water pressure in real time through a PID algorithm, adaptively adjusting the spray distance to the pedestrian distance, so that pedestrians are actively avoided while the sprayed area is maximized; the design is novel and reasonable.
5. The electronic equipment adopted by the invention provides a monitoring system, so that a driver can conveniently monitor and control the electronic equipment, and the working efficiency is improved.
In conclusion, the invention has reasonable design innovation and strong practicability, fills the blank of the industrial application field and is convenient for popularization and use.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
Fig. 1 is a schematic general flow chart of a visual active pedestrian avoidance and water pressure adaptive control method for a sprinkler according to the present invention.
Fig. 2 is a schematic diagram of an embodiment of a method for adaptive control of pedestrian avoidance and water pressure in a visual sense of a sprinkler according to an embodiment of the present invention.
Fig. 3 is a schematic view of the electronic device installation of the present invention.
Fig. 4 is a schematic diagram of how the local-area bit string is obtained in the third case according to the present invention.
FIG. 5 is a diagram illustrating the structure of a convolution kernel according to the present invention.
Fig. 6 is a schematic diagram of a backbone network according to the present invention.
FIG. 7 is a schematic diagram of input-to-output sampling according to the present invention.
Description of reference numerals:
1-binocular vision equipment; 3-a monitoring unit; 4-a spraying unit;
10, a water storage tank; 11-a loader; 14-sprinkling nozzle.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention.
Fig. 1 is a schematic general flow chart of a visual active pedestrian avoidance and water pressure adaptive control method for a sprinkler of the present invention;
as shown in fig. 2, a visual active pedestrian avoidance and water pressure self-adaptive control method for a sprinkler includes the following steps:
step S1: respectively carrying out normalization processing on the collected image L and the collected image R to obtain images with sizes of 416 × 416; the acquired image L and the acquired image R are images which are acquired uniformly according to a certain number of frames from videos shot by a binocular camera in the same scene;
step S2: calculating pixel point absolute difference mean values of RGB three channels of the collected image L and the collected image R and bit string difference values of local gray value codes of the collected image L and the collected image R, and weighting and summing the absolute difference mean values and the bit string difference values to obtain matching cost;
step S3: performing cost aggregation on the minimized global optimization function to obtain the optimal visual difference, and obtaining a visual difference image; wherein the weight of the global optimization function is determined by the super-pixel segmentation result under the constraint condition;
step S4: the images obtained through normalization processing are respectively sent to a neural network model for pedestrian prediction; the neural network model is an optimal pedestrian detection model obtained under the training of a relevant data set;
step S5: the method comprises the steps that a backbone network extracts a plurality of feature layers with certain sizes through a plurality of convolution kernels with certain sizes, a fusion network fuses the feature layers with certain sizes through upsampling to obtain a plurality of fusion feature maps, and a decision unit predicts a pedestrian target, a target boundary frame and coordinates according to the fusion feature maps;
step S6: respectively carrying out consistency check on the pedestrian target boundary frames predicted by the collected image L and the collected image R, and obtaining a two-dimensional coordinate of the pedestrian if the intersection ratio is greater than a certain threshold value;
step S7: the depth information is sent into a PID algorithm to obtain parameters for controlling the water pressure, and the water pressure is controlled in real time to realize active pedestrian avoidance and water pressure self-adaptive control;
step S8: the pedestrian detection result, the real-time video and the related control parameters are sent to the monitoring equipment of the cab in a serial port transmission mode, and the related control operation is sent to the water pressure control unit in the serial port transmission mode through the monitoring equipment;
the sprinkler vision active pedestrian avoidance and water pressure self-adaptive control method described in the example is suitable for performing binocular stereo matching and pedestrian detection by using images acquired by binocular vision equipment in real time and controlling water pressure by using synchronous permanent magnet motor water pump equipment, and the synchronous permanent magnet motor water pump can linearly control the water pressure through a frequency converter so as to achieve self-adaptive control of actively avoiding pedestrians. The type of binocular vision equipment is not limited, and the acquired images are normalized into uniform pixel size in a subsequent algorithm; the type of the synchronous permanent magnet motor water pump is not limited, and the related parameters of the control relationship between the water cost and the water pressure of different types of synchronous permanent magnet motors can be set to adapt to different environments.
In step S1, normalizing the collected image L and the collected image R to obtain images of 416 × 416 size, where the collected image L and the collected image R used subsequently are both normalized images; the collected image L and the collected image R are images which are uniformly collected according to a certain number of frames from videos shot by the binocular camera in the same scene.
In this embodiment, the relevant devices are first installed on the sprinkler in the manner shown in fig. 3; the installation position and the number of devices on the vehicle body are not limited, and devices can be installed at the front, middle, or rear of the body, or on both sides.
In this embodiment, the electronic equipment is installed at the rear of the vehicle body: the binocular vision equipment 1 is mounted 1.2 meters above the ground, the monitoring unit 3 is installed in the loader 11, and the spraying unit 4 sprays water from the water storage tank 10 through the sprinkling nozzles 14.
The acquired images L and R are frames collected uniformly at a certain frame interval from the videos shot by the left and right cameras of the same binocular camera in the same scene; before the frames are collected, the cameras undergo epipolar rectification. The acquired image L serves as the reference image and the acquired image R as the target image.
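The rectification and normalization just described can be sketched with OpenCV as follows; this is a minimal illustration, and every calibration value in it (intrinsic matrices, distortion vectors, inter-camera rotation and translation, baseline, resolution) is an assumed placeholder that would in practice come from calibrating the actual rig, e.g. with cv2.stereoCalibrate:

```python
import cv2
import numpy as np

# Placeholder calibration of the binocular rig (assumed values, not from the
# patent): K_* intrinsics, dist_* distortion, R/T right-camera pose w.r.t. left.
K_L = np.array([[700.0, 0, 640], [0, 700.0, 360], [0, 0, 1]])
K_R = K_L.copy()
dist_L = np.zeros(5); dist_R = np.zeros(5)
R = np.eye(3)
T = np.array([0.12, 0.0, 0.0])      # ~12 cm baseline, assumed
size = (1280, 720)                  # sensor resolution, assumed

# Epipolar rectification: after remapping, corresponding points lie on the
# same image row, so stereo matching becomes a 1-D search along x.
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K_L, dist_L, K_R, dist_R, size, R, T)
map_lx, map_ly = cv2.initUndistortRectifyMap(K_L, dist_L, R1, P1, size, cv2.CV_32FC1)
map_rx, map_ry = cv2.initUndistortRectifyMap(K_R, dist_R, R2, P2, size, cv2.CV_32FC1)

def rectify_and_normalize(frame_l, frame_r):
    """Rectify one frame pair, then normalize to the 416 x 416 input size."""
    img_l = cv2.remap(frame_l, map_lx, map_ly, cv2.INTER_LINEAR)
    img_r = cv2.remap(frame_r, map_rx, map_ry, cv2.INTER_LINEAR)
    return cv2.resize(img_l, (416, 416)), cv2.resize(img_r, (416, 416))
```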
Step S2: calculating the pixel point absolute difference mean value of RGB three channels of the collected image L and the collected image R and the bit string difference value of local gray value coding of the collected image L and the collected image R, and weighting and summing the absolute difference mean value and the bit string difference value to obtain the matching cost.
In this example, the purpose of calculating the matching cost is to find the best correlation between the pixel to be matched and the candidate pixels. Assuming the visual difference takes values in the range [0, D], the correlation between the pixel to be matched and each candidate pixel over this range is computed by the matching-cost formula.
Specifically, the method comprises the following steps:
Step S21: convert the acquired images L and R into layers of the three RGB channels, compute the absolute value of the layer pixel difference for each channel, and then average. C_AV is the mean absolute difference of the pixels of the three RGB channels of images L and R, M is the pixel value of the current pixel, p is the current pixel, i indexes the three channels, and d is the current value in the visual difference range [0, D]. The calculation formula is:

$$C_{AV}(p,d)=\frac{1}{3}\sum_{i=1}^{3}\left|M_i^{L}(p)-M_i^{R}(p-d)\right|$$
Step S22: convert the acquired images L and R into grayscale images and segment each into local regions; within each rectangular local region of size n × m, perform comparison coding of the gray value of every pixel to obtain the bit strings of the local regions of images L and R; XOR the two bit strings and count the number of 1s in the result to obtain the difference value C_BIT. Here u and v are the pixel coordinates of the current local region, n' is the largest integer not greater than n/2, m' is the largest integer not greater than m/2, ⊗ denotes the bit-wise concatenation of bits, C_bit is the local-region bit string, and Xor(x, y) is the number of positions equal to 1 after the bit strings x and y are XORed. The calculation formulas are:

$$C_{bit}(u,v)=\bigotimes_{i=-n'}^{n'}\bigotimes_{j=-m'}^{m'}\xi\big(M(u,v),\,M(u+i,v+j)\big)$$

$$C_{BIT}(p,d)=\mathrm{Xor}\big(C_{bit}^{L}(p),\,C_{bit}^{R}(p-d)\big)$$

The ξ operation is then defined by the following equation:

$$\xi(x,y)=\begin{cases}0, & y>x\\ 1, & y\le x\end{cases}$$
In practical applications, the local-area bit string can be obtained in several ways depending on requirements; this embodiment provides the following three cases.
The first method comprises the following steps:
When n and m of the local region n × m are both odd, the center pixel of the region is selected as the reference pixel, and its gray value is used as the gray threshold. The gray value of every pixel in the region is compared by traversal: a pixel whose gray value exceeds the threshold codes as 0, otherwise as 1, which yields an (n × m − 1)-bit binary string for a local region of size n × m.
And the second method comprises the following steps:
When n and m of the local region n × m are both even, the four center pixels of the region are selected as the reference pixel block, and the mean of their gray values is used as the gray threshold. The gray value of every pixel in the region is compared by traversal: a pixel whose gray value exceeds the threshold codes as 0, otherwise as 1, which yields an (n × m − 4)-bit binary string for a local region of size n × m.
And the third is that:
When one of n and m of the local region n × m is even and the other is odd, the two center pixels of the region are selected as the reference pixel block, and the mean of their gray values is used as the gray threshold. The gray value of every pixel in the region is compared by traversal: a pixel whose gray value exceeds the threshold codes as 0, otherwise as 1, which yields an (n × m − 2)-bit binary string.
In this embodiment, referring to fig. 4, the third way of obtaining the local-area bit string is shown: the local region is of size 4 × 4, the dashed box marks the reference pixel block, and the resulting difference value C_BIT is 8.
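The three reference-pixel cases can be captured in a short helper; the following is a minimal numpy sketch (the names census_bitstring and hamming are ours, not the patent's), which also implements the Xor(x, y) count used in step S22:

```python
import numpy as np

def census_bitstring(patch):
    """Local-area bit string of an n x m grayscale patch (cases one to three).

    Reference gray threshold:
      n, m both odd  -> the single center pixel      (n*m - 1 bits)
      n, m both even -> mean of the 4 center pixels  (n*m - 4 bits)
      one odd/even   -> mean of the 2 center pixels  (n*m - 2 bits)
    A bit is 0 where the pixel gray value exceeds the threshold, 1 otherwise;
    the reference pixels themselves are excluded from the string.
    """
    n, m = patch.shape
    ci, cj = (n - 1) // 2, (m - 1) // 2
    ri = 1 if n % 2 else 2                          # reference block height
    rj = 1 if m % 2 else 2                          # reference block width
    threshold = patch[ci:ci + ri, cj:cj + rj].mean()
    bits = (patch <= threshold).astype(np.uint8)    # > threshold -> 0, else 1
    keep = np.ones(patch.shape, dtype=bool)
    keep[ci:ci + ri, cj:cj + rj] = False            # drop reference pixels
    return bits[keep]

def hamming(bits_l, bits_r):
    """Xor(x, y): count of positions where the two bit strings differ."""
    return int(np.count_nonzero(bits_l != bits_r))
```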
Step S23: pass weight value V AV And V BIT Calculating to obtain binocular image matching cost C SUM The calculation formula is as follows.
C SUM (p,d)=ρ(C BIT (p,d),V BIT )+ρ(C AV (p,d),V AV )
Figure BDA0002754620580000101
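Putting steps S21 to S23 together for a single pixel/disparity pair, the sketch below reuses the census_bitstring and hamming helpers above; the window size and the weights v_av, v_bit are illustrative values, and the exponential form of ρ follows the reconstruction above rather than a value given in the patent:

```python
import numpy as np

def rho(c, v):
    """Robust normalization of a raw cost into [0, 1); the exponential form
    1 - exp(-c/v) is an assumption in the style of AD-Census."""
    return 1.0 - np.exp(-c / v)

def matching_cost(img_l, img_r, gray_l, gray_r, p, d,
                  n=5, m=5, v_av=10.0, v_bit=30.0):
    """C_SUM(p, d) for one pixel p = (y, x) and disparity d (steps S21-S23)."""
    y, x = p
    # C_AV: mean absolute difference over the three RGB channels
    c_av = float(np.mean(np.abs(img_l[y, x].astype(np.float32)
                                - img_r[y, x - d].astype(np.float32))))
    # C_BIT: Hamming distance between the two local-region bit strings
    hn, hm = n // 2, m // 2
    win_l = gray_l[y - hn:y + hn + 1, x - hm:x + hm + 1]
    win_r = gray_r[y - hn:y + hn + 1, x - d - hm:x - d + hm + 1]
    c_bit = hamming(census_bitstring(win_l), census_bitstring(win_r))
    return rho(c_bit, v_bit) + rho(c_av, v_av)
```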
In this embodiment, the collected image L is used as a reference image, and the matching cost is calculated by sequentially traversing the pixels of the image according to the methods in steps S21 to S23.
By adopting the method, the matching cost of each parallax response in the visual difference range can be obtained, so that a series of matching cost values can be obtained.
However, the matching costs computed so far do not by themselves identify the optimal match: several candidate disparities may score similarly, the result is inaccurate, and the definitive visual difference image cannot be determined. The optimal matching cost must be found through matching-cost aggregation in order to obtain a unique visual difference image.
As shown in step S3, performing cost aggregation on the minimized global optimization function to obtain the optimal visual difference, and obtaining a visual difference image; wherein the weight of the global optimization function is determined by the super-pixel segmentation result under the constraint condition.
Specifically, the method comprises the following steps:
Step S31: obtain the optimized visual difference for each pixel of the global image. The global optimization function is the binocular image matching cost C_SUM plus a first term and a second term: the first term applies weight W_1 when the visual difference of the current pixel and that of a surrounding pixel differ by exactly 1, and the second term applies weight W_2 when they differ by more than 1. The formula is:

$$F(D)=\sum_{p}\left(C_{SUM}(p,D_p)+\sum_{q\in N_p}W_1\,\alpha\big[|D_p-D_q|=1\big]+\sum_{q\in N_p}W_2\,\alpha\big[|D_p-D_q|>1\big]\right)$$

where F(D) is the global optimization function, D_p is the visual difference of the current pixel p of acquired image L, α[·] is the Boolean operation (1 if the condition holds, 0 otherwise), and N_p is the set of pixels surrounding the current pixel p of acquired image L.
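For illustration, the energy F(D) of a candidate visual difference map can be evaluated directly; the sketch below uses a 4-neighborhood for brevity where the text uses the 8 surrounding pixels, and it only evaluates the energy, since the actual minimization is done approximately (e.g. by scanline dynamic programming):

```python
import numpy as np

def global_energy(cost_volume, disp, w1, w2):
    """Evaluate F(D) for a candidate integer disparity map (step S31).

    cost_volume[y, x, d] holds C_SUM(p, d). The smoothness terms charge w1
    where neighboring disparities differ by exactly 1 and w2 where they
    differ by more than 1.
    """
    h, w = disp.shape
    ys, xs = np.mgrid[0:h, 0:w]
    energy = cost_volume[ys, xs, disp].sum()        # data term
    for a, b in ((disp[:, 1:], disp[:, :-1]),       # horizontal neighbors
                 (disp[1:, :], disp[:-1, :])):      # vertical neighbors
        diff = np.abs(a.astype(int) - b.astype(int))
        energy += w1 * np.count_nonzero(diff == 1)
        energy += w2 * np.count_nonzero(diff > 1)
    return energy
```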
Step S32: the global optimization function obtains prior weight values through superpixel segmentation, and superpixel segmentation is carried out on the collected image L and the collected image R respectively, wherein the W belongs to the same pixel block 2 The weight is W 1 W not belonging to the same block 2 The weight is W 3 ,W 3 And dynamically adjusting the gray difference value of the pixels divided by the super pixels.
Specifically, the superpixel segmentation is a clustering search under constraint conditions: image pixels are clustered locally, adjacent pixels with similar characteristics form a pixel block, and the size of each block is limited by the constraints. The specific steps are as follows:
Step S321: convert the acquired images L and R into five-dimensional feature vectors in the CIELAB color space and XY coordinates, initialize the number of clusters and the cluster center points, and move each initial cluster center to the point with the smallest gradient value among the surrounding pixels, so that cluster centers avoid texture edges.
Step S322: constructing distance measurement standards for the five-dimensional feature vectors, wherein the distance measurement standards comprise color distances and space distances, and calculating the distance between each searched pixel point and a cluster center point, wherein the calculation formula is as follows:
Figure BDA0002754620580000111
Figure BDA0002754620580000112
Figure BDA0002754620580000113
wherein D is C Representing the distance of the color, D S Represents the spatial distance, N S Is the maximum spatial distance within a cluster, which is the square root of the quotient of the total number of pixels of the image and the number of initialized clusters, N C The maximum color distance within a cluster is set to a constant.
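This distance measure matches the SLIC superpixel formulation; the following is a minimal sketch, with N_C = 10 assumed as a typical constant:

```python
import numpy as np

def slic_distance(pixel, center, n_s, n_c=10.0):
    """Joint color/space distance of step S322. pixel and center are
    5-vectors (l, a, b, x, y); n_s = sqrt(H*W / K) is the in-cluster spatial
    scale; n_c, the maximum in-cluster color distance, is a constant."""
    d_c = np.linalg.norm(pixel[:3] - center[:3])    # CIELAB color distance
    d_s = np.linalg.norm(pixel[3:] - center[3:])    # XY spatial distance
    return np.sqrt((d_c / n_c) ** 2 + (d_s / n_s) ** 2)
```

Since steps S321 to S323 coincide with the standard SLIC algorithm, an off-the-shelf implementation such as skimage.segmentation.slic could stand in for them, leaving only the constrained merging of step S324 to implement.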
Step S323: each pixel point can be searched by a plurality of seed points, each pixel point has a plurality of distances between the pixel point and the seed points, the seed point corresponding to the minimum distance value is taken as the clustering center point of the pixel point, and the clustering result is obtained through iterative computation.
However, since the obtained clustering result includes situations such as too small pixel blocks and multi-connectivity, adaptive clustering optimization is required to obtain reasonable clustering pixel blocks.
Step S324: and carrying out cluster search under constraint conditions on pixel points in a plurality of directions around the center of each superpixel cluster. And searching pixel points in a plurality of directions around the pixel point of each cluster for the central point of each cluster, and then clustering the pixel points meeting the constraint conditions.
Taking the left side of the pixel point as an example, whether the spatial distance between the current super pixel cluster center point d1 and the left side center point d2 is smaller than a threshold s1 is judged, if so, the pixel points with the color difference values in a plurality of directions of the current center point lower than a threshold c1 are searched and aggregated, but the constraint condition is that the spatial distance is not greater than the threshold s2, and the calculation formula is as follows.
D space (d1,d2)<s1,
D colour (d1,d2)<c1,(D space <s2)
Wherein D is space Is the spatial distance of two pixels, D colour The color difference value of the two pixel points.
And obtaining the super-pixel segmentation result of the collected image L and the collected image R according to the steps S321 to S324.
Step S33: and based on a global optimization function, selecting the visual difference with the minimum matching cost aggregation of each pixel point, optimizing the visual difference to obtain the optimal visual difference value, and forming an optimal visual difference image.
In step S4, the normalized images are sent to the neural network model for pedestrian prediction; the neural network model is an optimal pedestrian detection model obtained under the training of a relevant data set.
Specifically, the method comprises the following steps:
step S41: an image data set is collected and labeled.
The training set is formed from normalized pictures of pedestrians in road, park, and similar environments. The pictures are annotated with a labeling tool; the annotation marks out the pedestrian region, the width and height of the region, the two-dimensional coordinates of the pedestrian center point, and so on.
Step S42: and pre-training the neural network for detecting the pedestrian to obtain the neural network for detecting the pedestrian.
The neural network comprises a backbone network, two or more fusion networks, and a decision unit. When the neural network is trained, its loss function is given by:
$$\begin{aligned}
Loss={}&\lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}I_{ij}^{obj}\left[(x_i-\hat{x}_i)^2+(y_i-\hat{y}_i)^2\right]\\
&+\lambda_{coord}\sum_{i=0}^{S^2}\sum_{j=0}^{B}I_{ij}^{obj}\left[\left(\sqrt{w_i}-\sqrt{\hat{w}_i}\right)^2+\left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2\right]\\
&+\sum_{i=0}^{S^2}\sum_{j=0}^{B}I_{ij}^{obj}\left(C_i-\hat{C}_i\right)^2+\lambda_{noobj}\sum_{i=0}^{S^2}\sum_{j=0}^{B}I_{ij}^{noobj}\left(C_i-\hat{C}_i\right)^2\\
&+\sum_{i=0}^{S^2}I_i^{obj}\sum_{c}\left(p_i(c)-\hat{p}_i(c)\right)^2
\end{aligned}$$

where S² is the number of grid cells T × T (13 × 13 or 26 × 26 here), B is the number of prediction boxes per cell (2 here), and I_ij^obj indicates whether a pedestrian is assigned to the box, taking the values 0 and 1.
These five terms are, respectively, the losses for position prediction, for width and height, for the two confidences, and for the class probability; that is, the sum of the coordinate-prediction variance, the bounding-box-prediction variance, the pedestrian-present confidence variance, the pedestrian-absent confidence variance, and the pedestrian class-probability variance.
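A PyTorch sketch of this five-term sum-of-squares loss follows; the tensor layout and the λ weights are our assumptions in the usual YOLO style, not values stated in the patent:

```python
import torch

def detection_loss(pred, target, obj_mask,
                   lambda_coord=5.0, lambda_noobj=0.5):
    """Sum-of-squared-error loss with the five terms listed above.

    pred, target: (N, S, S, B, 6) tensors holding (x, y, w, h, conf, cls);
    obj_mask: (N, S, S, B) bool, True where a pedestrian is assigned to the
    box (the I_ij indicator). The lambda weights are assumed defaults.
    """
    noobj = ~obj_mask
    xy = ((pred[..., 0:2] - target[..., 0:2]) ** 2).sum(-1)
    wh = ((pred[..., 2:4].clamp(min=0).sqrt()
           - target[..., 2:4].sqrt()) ** 2).sum(-1)
    conf = (pred[..., 4] - target[..., 4]) ** 2
    cls = (pred[..., 5] - target[..., 5]) ** 2
    return (lambda_coord * (xy[obj_mask].sum() + wh[obj_mask].sum())
            + conf[obj_mask].sum()
            + lambda_noobj * conf[noobj].sum()
            + cls[obj_mask].sum())
```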
In step S5, the backbone network extracts a plurality of feature layers with certain sizes through a plurality of convolution kernels with certain sizes, the fusion network performs upsampling fusion on the plurality of feature layers with certain sizes to obtain a plurality of fusion feature maps, and the decision unit predicts a pedestrian target, a target bounding box, and coordinates according to the fusion feature maps.
Specifically, the method comprises the following steps:
step S51: the pixel values of the input image are converted into R, G, B three-channel data.
Step S52: the backbone network extracts a plurality of feature layers with certain sizes through convolution kernels with certain sizes, wherein the sizes of the convolution kernels are 1 × n and n × 1, as shown in fig. 5, the feature layers are schematic diagrams of convolution kernel structures, and the convolution kernel structures can effectively increase the depth of a model, reduce the size and shorten the training time under the condition that parameter variables are unchanged or reduced.
The backbone network comprises 104 convolutional layers, 1 average pooling layer, 1 fully connected layer, and one Softmax layer, as shown in fig. 6.
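The 1 × n / n × 1 factorization of fig. 5 can be sketched as a PyTorch module; the normalization and activation choices below are assumptions, and down-sampling via the convolution stride corresponds to the tensor-size transformation described earlier:

```python
import torch.nn as nn

class FactorizedConv(nn.Module):
    """1 x n followed by n x 1 convolution: roughly the receptive field of an
    n x n kernel with about 2/n of its parameters. stride > 1 shrinks the
    tensor (width in the first conv, height in the second)."""
    def __init__(self, c_in, c_out, n=3, stride=1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_out, (1, n), (1, stride), (0, n // 2), bias=False),
            nn.BatchNorm2d(c_out),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(c_out, c_out, (n, 1), (stride, 1), (n // 2, 0), bias=False),
            nn.BatchNorm2d(c_out),
            nn.LeakyReLU(0.1, inplace=True),
        )

    def forward(self, x):
        return self.block(x)
```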
Step S53: and extracting a series of image features through convolution operation to obtain a multi-scale feature map. As Scale1 and Scale2 in fig. 5 are feature maps with different scales, a 2-layer feature map is output in the network.
Step S54: and outputting the feature maps based on different scales to a fusion network for processing to obtain a multi-scale fusion feature map. As shown in fig. 5, two Scale feature maps Scale1 and Scale2 are obtained, the feature map Scale1 is subjected to feature fusion with Scale2 by tensor addition after upsampling through a convolution layer, a regularization layer, an activation function layer and a double matrix, and then a second feature image is obtained through the convolution layer, the regularization layer and the activation function layer, and the feature map Scale1 is subjected to a convolution layer, a regularization layer and an activation function layer to obtain a first feature image.
Step S55: and the decision unit predicts the obtained multi-scale fusion characteristic graph to finally obtain a prediction result.
Specifically, the first and second feature images each pass through a convolution layer, a regularization layer, and an activation-function layer to give the first and second output feature images. Their scales are obtained by down-sampling the original image by factors of 32 and 16, giving sizes 13 × 13 and 26 × 26; fig. 7 is a schematic diagram of the input-to-output sampling. The prediction tensors of the first and second output feature images have sizes 13 × 13 × [2 × (4+1+1)] and 26 × 26 × [2 × (4+1+1)], where 2 is the number of predicted bounding boxes per layer, 4 the bounding-box parameters (x, y, w, h), 1 the confidence, and 1 the pedestrian class.
The decision unit comprises target boundary box prediction, coordinate prediction and pedestrian prediction. The boundary frame prediction predicts two boundary frames through logistic regression, a prediction boundary frame with the maximum intersection ratio of the prediction boundary frame and a real marking frame is calculated to serve as a final prediction boundary frame, a coordinate prediction value is obtained through calculating the coordinate value of the center point of the final prediction boundary frame, and pedestrian category prediction is obtained through binary cross entropy loss.
And step S6, performing consistency check on the predicted pedestrian target bounding boxes of the collected image L and the collected image R respectively, and obtaining a two-dimensional coordinate of the pedestrian if the intersection ratio is greater than a certain threshold.
Specifically, after the pedestrian detection of step S4 is performed on the acquired images L and R, pedestrian prediction boxes are obtained in both images. The two-dimensional coordinates of the prediction box in image L are shifted by the corresponding value from the visual difference image to obtain a pedestrian comparison box; this comparison box is compared with the prediction box in image R and the intersection-over-union is calculated. If the intersection ratio exceeds 0.8, the pedestrian prediction is considered accurate; otherwise the prediction result is discarded.
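The check reduces to an intersection-over-union test after shifting the left box by the disparity; a minimal sketch, with boxes as (x1, y1, x2, y2) tuples:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def consistent(box_l, box_r, disparity):
    """Step S6: shift the left-image box by the disparity at its location and
    accept the detection only if it overlaps the right-image box well."""
    shifted = (box_l[0] - disparity, box_l[1], box_l[2] - disparity, box_l[3])
    return iou(shifted, box_r) > 0.8
```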
In step S7, the depth information is sent to the PID algorithm to obtain parameters for controlling the water pressure, and the water pressure is controlled in real time to realize active pedestrian avoidance and adaptive control of water pressure.
Specifically, the method comprises the following steps:
Step S71: convert the pedestrians' two-dimensional coordinates and the parallax from the visual difference image into three-dimensional pedestrian coordinates, extract the pedestrian depth, apply a median filtering algorithm, and sort and filter distances collected over several frames to obtain accurate depth information.
Step S72: and sending the depth information into a PID algorithm to adaptively control the water pressure in real time, and actively avoiding pedestrians by controlling the water pressure so as to realize the maximization of the spraying area.
The depth information is fed into the PID algorithm, whose output value serves as the input to the frequency converter; the converter's output in turn drives the permanent magnet synchronous motor water pump, so the water pressure is controlled through the pump. The relations between the converter's output frequency and the pump speed, between the pump speed and the water pressure, and between the water pressure and the jet distance are adjusted according to the characteristic curves of the particular devices.
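A minimal sketch of steps S71 and S72: a median filter over recent depth samples feeding a positional-form PID whose output is clipped to the inverter's frequency range; all gains and limits are placeholders to be tuned against the characteristic curves just mentioned:

```python
from collections import deque
import statistics

class PID:
    """Positional-form PID mapping pedestrian distance to an inverter
    frequency command (step S72). Gains and output limits are assumed."""
    def __init__(self, kp=1.0, ki=0.1, kd=0.05, out_min=0.0, out_max=50.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured, dt):
        err = setpoint - measured           # e.g. spray reach vs. pedestrian distance
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        return min(max(out, self.out_min), self.out_max)

depth_window = deque(maxlen=9)

def filtered_depth(raw_depth):
    """Step S71: median filter over the last several depth samples."""
    depth_window.append(raw_depth)
    return statistics.median(depth_window)
```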
In step S8, the pedestrian detection result, the real-time video and the related control parameters are transmitted to the monitoring device of the cab via serial port transmission, and the related control operation is transmitted to the hydraulic control unit via the monitoring device via serial port transmission.
Specifically, the control parameters include real-time video signals, pedestrian depth information, water pump working signals, power signals and the like, and the monitoring devices include image display devices, key control and the like.
In conclusion, the sprinkler vision active pedestrian avoidance and water pressure self-adaptive control method provided by the disclosure can complete road surface cleaning under the environments of urban and rural roads, factory parks and the like according to user requirements, and realizes the functions of active pedestrian avoidance, maximum spraying area, water resource saving and the like.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and all simple modifications, changes and equivalent structural changes made to the above embodiment according to the technical spirit of the present invention still fall within the protection scope of the technical solution of the present invention.

Claims (2)

1. A self-adaptive control method for visual active pedestrian avoidance and water pressure of a sprinkler is characterized by comprising the following steps of:
s1: respectively carrying out normalization processing on an acquired image L of a left camera and an acquired image R of a right camera of binocular vision equipment to obtain an image with the size of Q1 × Q1 pixels, wherein Q1 is a real number; the collected image L of the left camera and the collected image R of the right camera are images which are obtained by uniformly collecting more than 30 frames in a video shot by the binocular camera in the same scene;
s2: calculate the binocular image matching cost: compute the mean absolute difference C_AV of the pixels of the RGB (red, green, blue) three channels of the left-camera acquired image L and the right-camera acquired image R, and the bit-string difference C_BIT computed by coding the n × m local-region gray values of images L and R; the weighted sum of the mean absolute difference and the bit-string difference gives the binocular image matching cost C_SUM:

$$C_{AV}(p,d)=\frac{1}{3}\sum_{i=1}^{3}\left|M_i^{L}(p)-M_i^{R}(p-d)\right|$$

$$C_{bit}(u,v)=\bigotimes_{i=-n'}^{n'}\bigotimes_{j=-m'}^{m'}\xi\big(M(u,v),\,M(u+i,v+j)\big)$$

$$C_{BIT}(p,d)=\mathrm{Xor}\big(C_{bit}^{L}(p),\,C_{bit}^{R}(p-d)\big)$$

$$C_{SUM}(p,d)=\rho\big(C_{BIT}(p,d),V_{BIT}\big)+\rho\big(C_{AV}(p,d),V_{AV}\big)$$

$$\rho(c,V)=1-\exp\left(-\frac{c}{V}\right)$$

where V_AV and V_BIT are weight values, and n, m, V_AV, V_BIT are all real numbers; p is the current pixel of the left-camera acquired image L, d is the visual difference, M is the pixel value of the current pixel, Xor(x, y) is the number of 1s obtained after XORing x and y, u and v are the pixel coordinates of the current local region, n' is n/2 rounded down, m' is m/2 rounded down, C_bit is the local-region bit string computed by coding the n × m local-region gray values of the acquired images L and R, ⊗ is the bit-wise concatenation of bits, and the ξ operation is defined by the following equation:

$$\xi(x,y)=\begin{cases}0, & y>x\\ 1, & y\le x\end{cases}$$
s3: calculating binocular matching cost aggregation, and performing cost aggregation through a minimized global optimization function to obtain the optimal visual difference to obtain a visual difference image; wherein the weight of the global optimization function is determined by the super-pixel segmentation result under the constraint condition; calculating binocular matching cost aggregation:
i. for the minimized global optimization function F(D), the visual difference of each pixel obtains its optimized value. The global optimization function obtains prior weights through superpixel segmentation, which is performed on the left-camera acquired image L and the right-camera acquired image R respectively: for pixels belonging to the same pixel block, W_2 takes the weight W_1; for pixels not belonging to the same block, W_2 takes the weight W_3, where W_3 is dynamically adjusted according to the gray-level difference of the superpixel-segmented pixels; W_1, W_2, W_3 are the weight coefficients of different pixel blocks and are real numbers;

$$F(D)=\sum_{p}\left(C_{SUM}(p,D_p)+\sum_{q\in N_p}W_1\,\alpha\big[|D_p-D_q|=1\big]+\sum_{q\in N_p}W_2\,\alpha\big[|D_p-D_q|>1\big]\right)$$

where F(D) is the global optimization function, D_p is the visual difference of the current pixel p of the acquired image L, α is the Boolean operation, and N_p is the set of 8 pixels around the current pixel p of the left-camera acquired image L;
ii, carrying out cluster search under constraint conditions on pixel points in a plurality of directions around the center of each super pixel cluster; for the central point of each cluster, searching pixel points in a plurality of directions around the pixel point of each cluster, and then clustering the pixel points meeting constraint conditions; taking the left side of a pixel point as an example, judging whether the spatial distance between a current super pixel cluster central point d1 and a left side central point d2 is smaller than a threshold value s1, wherein s1 is a real number which is a set pixel spatial distance threshold value, the color difference values of K directions of the central points are lower than a threshold value c1, and the pixel points with the spatial distances smaller than s2 are aggregated, and c1 is a real number;
D_space(d1, d2) < s1,
D_color(d1, d2) < c1,
D_space < s2
wherein D_space(d1, d2) and D_color(d1, d2) are respectively the spatial distance and the color difference between the two pixels d1 and d2;
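As an illustration of step S3 only, the following NumPy sketch evaluates F(D) with the superpixel-dependent weight W_2 and tests the step-ii merging constraints; the penalty values, thresholds and function names are assumptions:

```python
import numpy as np

def energy_F(D, cost_volume, labels, W1=1.0, W3_base=8.0):
    """Evaluate the step-S3 global optimization function F(D).
    D: H x W integer disparity map of image L.
    cost_volume: cost_volume[d, y, x] = C_SUM(p, d).
    labels: H x W superpixel labels of image L.
    For each pixel p and each of its 8 neighbors q, a penalty W1 is paid
    when |D_p - D_q| = 1 and W2 when |D_p - D_q| > 1, with W2 = W1 inside
    one superpixel block and W2 = W3 across blocks (W3 is fixed here; the
    claim adjusts it dynamically from the gray difference)."""
    h, w = D.shape
    ys, xs = np.mgrid[0:h, 0:w]
    total = cost_volume[D, ys, xs].sum()       # data term C_SUM(p, D_p)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue                        # skip p itself
            Dq = np.roll(np.roll(D, dy, axis=0), dx, axis=1)
            Lq = np.roll(np.roll(labels, dy, axis=0), dx, axis=1)
            diff = np.abs(D - Dq)
            W2 = np.where(labels == Lq, W1, W3_base)
            total += (W1 * (diff == 1)).sum()   # alpha(|D_p - D_q| = 1)
            total += (W2 * (diff > 1)).sum()    # alpha(|D_p - D_q| > 1)
    return total

def can_merge(d1_xy, d2_xy, colors1, colors2, s1=20.0, c1=15.0):
    """Step-ii constraint test for two cluster centers d1 and d2:
    D_space(d1, d2) < s1 and D_color(d1, d2) < c1.  colors1/colors2 are
    length-K arrays of mean colors along the K search directions; the
    threshold values are illustrative."""
    d_space = float(np.hypot(d1_xy[0] - d2_xy[0], d1_xy[1] - d2_xy[1]))
    d_color = float(np.abs(np.asarray(colors1) - np.asarray(colors2)).max())
    return d_space < s1 and d_color < c1
```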
S4: the images obtained through normalization processing are respectively sent to the neural network model for pedestrian prediction, the neural network model being the optimal pedestrian detection neural network model obtained by training on a relevant data set;
S5: the backbone network of the pedestrian detection neural network extracts feature layers of a plurality of sizes through a plurality of convolution kernels of certain sizes; the fusion network obtains a plurality of fused feature maps by upsampling and fusing the feature layers of the plurality of sizes; and the decision unit predicts the pedestrian target, the target bounding box and the coordinates from the fused feature maps (a minimal fusion sketch follows);
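A minimal PyTorch sketch of the S5 upsample-and-fuse path, assuming two feature scales and illustrative channel counts:

```python
import torch
import torch.nn as nn

class FusionNetwork(nn.Module):
    """Illustrative S5 fusion: upsample the deep 13 x 13 feature layer
    and fuse it with the shallower 26 x 26 layer, giving one fused
    prediction map per scale."""
    def __init__(self, c_deep=256, c_shallow=128):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.fuse = nn.Conv2d(c_deep + c_shallow, c_shallow, kernel_size=1)

    def forward(self, feat13, feat26):
        # feat13: (B, c_deep, 13, 13) from 32x downsampling
        # feat26: (B, c_shallow, 26, 26) from 16x downsampling
        up = self.up(feat13)                         # -> (B, c_deep, 26, 26)
        fused26 = self.fuse(torch.cat([up, feat26], dim=1))
        return feat13, fused26                       # one map per scale
```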
S6: consistency checks are respectively carried out on the pedestrian target bounding boxes predicted from the image L collected by the left camera and the image R collected by the right camera; if the intersection-over-union is greater than a certain threshold, the two-dimensional coordinates of the pedestrian are obtained;
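A minimal sketch of the S6 consistency check, assuming boxes in (x1, y1, x2, y2) form and an illustrative IoU threshold of 0.5:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def consistent_pedestrians(boxes_L, boxes_R, thresh=0.5):
    """S6 check: keep a left-image detection only when some right-image
    detection overlaps it with IoU above the threshold, and return the
    two-dimensional (center) coordinate of each kept pedestrian."""
    coords = []
    for bl in boxes_L:
        if any(iou(bl, br) > thresh for br in boxes_R):
            coords.append(((bl[0] + bl[2]) / 2, (bl[1] + bl[3]) / 2))
    return coords
```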
S7: the two-dimensional coordinates of the pedestrian and the disparity taken from the disparity image are converted into three-dimensional coordinate information of the pedestrian, and the depth information of the pedestrian is extracted; a median filtering algorithm is applied, in which a plurality of distances are collected and sort-filtered to obtain accurate depth information; the depth information is fed into a PID algorithm to adaptively control the water pressure in real time;
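A hedged sketch of the S7 chain, assuming a pinhole triangulation Z = f·B/d, a sorting (median) filter over the collected distances, and illustrative PID gains; the claim does not fix any of these values:

```python
def depth_from_disparity(d, focal_px, baseline_m):
    """Pinhole triangulation: depth Z = f * B / d (d in pixels)."""
    return focal_px * baseline_m / max(d, 1e-6)

def median_depth(samples):
    """S7 sorting filter: median of several collected distance samples."""
    s = sorted(samples)
    return s[len(s) // 2]

class PressurePID:
    """Illustrative PID that lowers the water pressure as a pedestrian
    gets closer than the setpoint distance; all gains are assumptions."""
    def __init__(self, kp=0.8, ki=0.05, kd=0.1, setpoint_m=8.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint_m
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, depth_m, dt=0.05):
        err = self.setpoint - depth_m        # positive when pedestrian is close
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        # positive output = command to reduce the water pressure
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```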
S8: the pedestrian detection result, the real-time video and the related control parameters are sent to the monitoring equipment in the cab by serial port transmission, and the related control operations are sent by the monitoring equipment to the water pressure control unit by serial port transmission.
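A possible shape of the S8 serial transmission, assuming the pyserial package and a JSON message framing that the claim does not specify:

```python
import json
import serial  # pyserial

def send_to_monitor(port_name, detections, pressure_cmd):
    """S8 sketch: push the detection results and the water-pressure
    command over a serial link to the cab monitoring equipment."""
    msg = json.dumps({"pedestrians": detections, "pressure": pressure_cmd})
    with serial.Serial(port_name, baudrate=115200, timeout=1) as link:
        link.write((msg + "\n").encode("utf-8"))
```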
2. The vision active pedestrian avoidance and water pressure self-adaptive control method for a sprinkler according to claim 1, wherein the neural network model comprises:
A. the pedestrian detection neural network is an optimal pedestrian detection model obtained under the training of a relevant data set; the neural network structure comprises a backbone network, more than two fusion networks and a decision unit, wherein the parameters of each fusion network are independent;
B. The backbone network extracts the pixel values of the input image and converts them respectively into data of the three color channels R, G, B (red, green and blue); a series of image features is extracted through convolution operations to obtain multi-scale feature images; the feature maps of different scales are output to the fusion network for processing to obtain multi-scale fused feature images; and the decision unit performs prediction on the obtained multi-scale fused feature images to produce the final prediction result;
C. The decision unit comprises target bounding box prediction, coordinate prediction and pedestrian prediction: the bounding box prediction predicts at least 2 bounding boxes through logistic regression, and the predicted box whose intersection-over-union with the ground-truth annotation box is largest is taken as the final predicted bounding box; the coordinate prediction value is obtained by calculating the coordinates of the center point of the final predicted bounding box; and the pedestrian category prediction is obtained through a binary cross-entropy loss;
D. The loss function of the neural network is the sum of the variance of the coordinate predictions, the variance of the bounding box predictions, the variance of the confidence for cells containing pedestrians, the variance of the confidence for cells not containing pedestrians, and the variance of the pedestrian category probability;
E. The input image is divided into T × T grids, where T takes the value 13 or 26, and each grid cell is responsible for detecting the pedestrian targets within it; 2 layers of feature maps are output, obtained by downsampling the original image by factors of 32 and 16 respectively, with sizes 13 × 13 and 26 × 26; the 2 predicted feature maps are feature-fused through a matrix upsampling algorithm, and the prediction of each layer is a tensor of size T × T × 2 × (4 + 1 + 1), where 2 is the number of bounding boxes predicted per layer (the bounding box sizes being obtained through the K-means clustering algorithm), 4 refers to the box parameters x, y, w and h (all real numbers), 1 refers to the confidence, and 1 refers to the pedestrian category (a shape sketch follows item F);
F. The convolution kernels of the neural network take the factorized forms 1 × 3 with 3 × 1, or 1 × 5 with 5 × 1, to speed up training and reduce the parameter variables while increasing the number of network layers (sketched below).
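As an illustration of item E only, the following NumPy sketch decodes one prediction tensor of the stated shape; the array contents and function name are assumptions:

```python
import numpy as np

def decode_layer(pred, T):
    """Split one T x T x 2 x (4+1+1) prediction tensor (item E) into
    boxes, confidences and pedestrian-category scores."""
    pred = pred.reshape(T, T, 2, 6)
    boxes = pred[..., 0:4]   # x, y, w, h for each of the 2 boxes per cell
    conf = pred[..., 4]      # confidence that the box contains a pedestrian
    cls = pred[..., 5]       # pedestrian category score
    return boxes, conf, cls

# the two output scales of item E: 13 x 13 (32x down) and 26 x 26 (16x down)
for T in (13, 26):
    boxes, conf, cls = decode_layer(np.zeros((T, T, 2, 6)), T)
    print(T, boxes.shape, conf.shape, cls.shape)
```

And a hedged PyTorch sketch of the factorized convolutions of item F; the channel counts are illustrative:

```python
import torch.nn as nn

class FactorizedConv(nn.Module):
    """Item F: replace a k x k convolution with a 1 x k convolution
    followed by a k x 1 convolution (k = 3 or 5), which deepens the
    network while reducing the parameter count."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=(1, k), padding=(0, k // 2)),
            nn.Conv2d(c_out, c_out, kernel_size=(k, 1), padding=(k // 2, 0)),
        )

    def forward(self, x):
        return self.block(x)
```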
CN202011198244.7A 2020-10-31 2020-10-31 Vision active pedestrian avoidance and water pressure self-adaptive control method for sprinkler Active CN112395961B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011198244.7A CN112395961B (en) 2020-10-31 2020-10-31 Vision active pedestrian avoidance and water pressure self-adaptive control method for sprinkler

Publications (2)

Publication Number Publication Date
CN112395961A CN112395961A (en) 2021-02-23
CN112395961B true CN112395961B (en) 2022-08-09

Family

ID=74597818

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011198244.7A Active CN112395961B (en) 2020-10-31 2020-10-31 Vision active pedestrian avoidance and water pressure self-adaptive control method for sprinkler

Country Status (1)

Country Link
CN (1) CN112395961B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113298869B (en) * 2021-04-23 2023-08-04 南方电网数字电网科技(广东)有限公司 Distance measuring method, distance measuring device, computer device, and storage medium
CN113188000B (en) * 2021-05-14 2022-07-01 太原理工大学 System and method for identifying and rescuing people falling into water beside lake
CN113263859A (en) * 2021-06-23 2021-08-17 四川国鼎建筑设计有限公司 Ecological energy-saving sculpture tree
CN114394100B (en) * 2022-01-12 2024-04-05 深圳力维智联技术有限公司 Unmanned patrol car control system and unmanned car
CN117252926B (en) * 2023-11-20 2024-02-02 南昌工控机器人有限公司 Mobile phone shell auxiliary material intelligent assembly control system based on visual positioning

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101741433B1 (en) * 2015-06-09 2017-05-30 엘지전자 주식회사 Driver assistance apparatus and control method for the same
US11493348B2 (en) * 2017-06-23 2022-11-08 Direct Current Capital LLC Methods for executing autonomous rideshare requests
CN108049355A (en) * 2017-12-13 2018-05-18 徐剑霞 A kind of environmental-protection sprinkler of automatic avoidance pedestrian
CN109507923A (en) * 2018-11-29 2019-03-22 南宁思飞电子科技有限公司 One kind automatically controlling sprinkling truck system and its control method based on ambient temperature and pedestrian position
CN109457654A (en) * 2018-12-21 2019-03-12 吉林大学 A kind of sprinkling truck preventing from mistake in jetting system based on pedestrian detection
CN110136202A (en) * 2019-05-21 2019-08-16 杭州电子科技大学 A kind of multi-targets recognition and localization method based on SSD and dual camera
CN211210874U (en) * 2019-07-31 2020-08-11 中科云谷科技有限公司 Intelligent watering system and intelligent watering cart
CN211773246U (en) * 2019-11-28 2020-10-27 南京信息工程大学 Automatic watering device of solar energy road
CN111553252B (en) * 2020-04-24 2022-06-07 福建农林大学 Road pedestrian automatic identification and positioning method based on deep learning and U-V parallax algorithm
CN111833393A (en) * 2020-07-05 2020-10-27 桂林电子科技大学 Binocular stereo matching method based on edge information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant