CN110067274B - Equipment control method and excavator - Google Patents

Equipment control method and excavator

Info

Publication number
CN110067274B
Authority
CN
China
Prior art keywords
target
image data
equipment
target object
distance
Prior art date
Legal status
Active
Application number
CN201910358054.8A
Other languages
Chinese (zh)
Other versions
CN110067274A (en)
Inventor
殷铭
隋少龙
王天娇
Current Assignee
Beijing Builder Intelligent Technology Co ltd
Original Assignee
Beijing Builder Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Builder Intelligent Technology Co ltd
Priority to CN201910358054.8A
Publication of CN110067274A
Application granted
Publication of CN110067274B
Legal status: Active (current)

Classifications

    • E FIXED CONSTRUCTIONS
    • E02 HYDRAULIC ENGINEERING; FOUNDATIONS; SOIL SHIFTING
    • E02F DREDGING; SOIL-SHIFTING
    • E02F3/00 Dredgers; Soil-shifting machines
    • E02F9/00 Component parts of dredgers or soil-shifting machines, not restricted to one of the kinds covered by groups E02F3/00 - E02F7/00
    • E02F9/20 Drives; Control devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Mining & Mineral Resources (AREA)
  • Civil Engineering (AREA)
  • General Engineering & Computer Science (AREA)
  • Structural Engineering (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application provides an equipment control method and an excavator. The equipment control method comprises the following steps: acquiring first image data of a defined area corresponding to target equipment; acquiring second image data of the defined area of the target equipment; when a target object is detected in the first image data, matching the target image area where the target object is located with each point in the second image data to obtain a target three-dimensional coordinate range of the target image area; obtaining a first distance between the target object and the target equipment according to the target three-dimensional coordinate range; and when the first distance is smaller than a set value, controlling the target equipment to stop working or reduce its working speed. By recognizing the environment around the equipment and slowing or stopping the equipment when a target object may be present, the operating safety of the equipment can be improved.

Description

Equipment control method and excavator
Technical Field
The application relates to the technical field of mechanical equipment control, in particular to an equipment control method and an excavator.
Background
In the field of industrial construction machinery, various types of construction equipment and construction personnel may share the same working environment, and construction equipment in operation poses a potential danger to the personnel around it. In the prior art, specific tool features are identified in acquired images, and the equipment is stopped whenever a constructor's specific tool is recognized; with this approach, the working efficiency of the equipment is relatively low.
Disclosure of Invention
In view of the above, an object of the embodiments of the present application is to provide an equipment control method and an excavator.
In a first aspect, an embodiment of the present application provides an apparatus control method, including:
acquiring first image data of a defined area corresponding to target equipment;
acquiring second image data of the defined area of the target equipment;
when a target object is detected to exist in the first image data, matching a target image area where the target object is located with each point in the second image data to obtain a target three-dimensional coordinate range of the target image area;
acquiring a first distance between the target object and the target equipment according to the target three-dimensional coordinate range;
and when the first distance is smaller than a set value, controlling the target equipment to stop working or reducing the working speed.
With the method provided by the embodiment of the application, the first image data is detected; if a target object is detected, the first distance between the target object and the equipment is evaluated, and if the first distance is smaller than a set value, the target object is considered to be in potential danger. Through these two rounds of recognition, detection of a dangerous state of the target object can be improved. Furthermore, because the working state of the target equipment is changed only after judging whether the target object is within the dangerous area, compared with stopping the equipment whenever a constructor's specific tool is recognized, the safety of every object in the environment during operation of the target equipment can be improved while blind stoppages are avoided, so the working efficiency of the target equipment can also be improved.
With reference to the first aspect, an embodiment of the present application provides a first possible implementation manner of the first aspect, where: the second image data comprises three-dimensional point cloud data; the step of matching the target image area where the target object is located with each point in the second image data to obtain the target three-dimensional coordinate range of the target image area includes:
projecting the three-dimensional point cloud data into a coordinate system corresponding to the first image data through coordinate conversion to obtain a projection point set;
and mapping the corresponding sub-projection point set in the target image area back to the three-dimensional coordinate system corresponding to the three-dimensional point cloud data to obtain a target three-dimensional coordinate range of the target image area.
Further, the second image data comprises three-dimensional point cloud data; the step of matching the target image area where the target object is located with each point in the second image data to obtain the target three-dimensional coordinate range of the target image area includes:
performing coordinate conversion on the pixel points in the target image area in the first image data to obtain three-dimensional coordinates of the pixel points in the target image area, wherein the three-dimensional coordinates are based on the coordinates of each point in the second image data in a three-dimensional coordinate system;
and matching the three-dimensional coordinates of the pixel points in the target image area with the three-dimensional point cloud data in the second image data to determine the target three-dimensional coordinate range of the pixel points in the target image area.
In the above embodiment, the target three-dimensional coordinate range of the target object is obtained by performing coordinate conversion on the pixel points in the target image area and thereby projecting the target image area into the second image data. The coverage of the target object in space can thus be obtained with relatively little computation, realizing a mapping between objects of different dimensions.
With reference to the first possible implementation manner of the first aspect, an embodiment of the present application provides a second possible implementation manner of the first aspect, where: the step of performing coordinate conversion on the pixel point in the target image region in the first image data to obtain the three-dimensional coordinate of the pixel point in the target image region includes:
coordinate conversion is carried out on the three-dimensional point cloud data by using a conversion matrix between images acquired by a first acquisition device and a second acquisition device to obtain the projection point set, wherein the first acquisition device is a device for acquiring the first image data, and the second acquisition device is a device for acquiring the second image data;
the transformation matrix is determined from a first set of calibration points acquired using the first acquisition device and a second set of calibration points acquired using a second acquisition device.
In the above embodiment, the coordinate conversion is performed using the conversion matrix corresponding to the first acquisition device and the second acquisition device, so that the converted three-dimensional coordinates better match the coordinate system used by the points in the second image data. In addition, the conversion matrix is determined from the first set of calibration points acquired by the first acquisition device and the second set of calibration points acquired by the second acquisition device, so that the determined conversion matrix is correlated with the two acquisition devices.
With reference to the second possible implementation manner of the first aspect, an embodiment of the present application provides a third possible implementation manner of the first aspect, where: the method further comprises the following steps:
setting the same time system for the first acquisition equipment and the second acquisition equipment;
configuring a first control thread for the first acquisition equipment;
configuring a second control thread for the second acquisition equipment;
and controlling, through the first control thread and the second control thread, the first acquisition equipment and the second acquisition equipment to use the same time interval between two adjacent data acquisitions, so that the first image data acquired by the first acquisition equipment and the second image data acquired by the second acquisition equipment are time-synchronized.
In the above embodiment, the first control thread and the second control thread control the two acquisition devices to acquire data at the same time interval, so that the first image data and the second image data are time-synchronized. By controlling the two acquisition devices through separate threads and setting the same time system for them, the image data acquired by the two devices can be matched in time, which makes the detection result for the target object more accurate.
With reference to the first aspect, an embodiment of the present application provides a fourth possible implementation manner of the first aspect, where: the step of obtaining a first distance between the target object and the target device according to the target three-dimensional coordinate range includes:
calculating to obtain a central point coordinate of the target object according to the target three-dimensional coordinate range, wherein the central point coordinate represents a coordinate closest to the target equipment in the target three-dimensional coordinate range;
and calculating the first distance between the target object and the target equipment according to the central point coordinates.
In the above embodiment, the center point of the target object well represents the position of the target object, so obtaining the distance between the target object and the target device from the distance between the center point coordinate and the target device represents the relative position of the target object and the target device more accurately.
With reference to the fourth possible implementation manner of the first aspect, an embodiment of the present application provides a fifth possible implementation manner of the first aspect, where: the step of calculating the first distance between the target object and the target device according to the center point coordinates includes:
acquiring two coordinate values in the horizontal direction in the center point coordinate;
and calculating the horizontal distance between the target object and the target equipment according to the two coordinate values in the horizontal direction to serve as the first distance.
In the above embodiment, the safety of an object affected by the device is mainly determined by the distance in the horizontal direction. Using the horizontal distance as the distance between the target object and the target device avoids the influence of the vertical distance, so that objects within the set-value range can be better identified and the safety of personnel sharing the working environment with the device is improved.
With reference to the fourth possible implementation manner of the first aspect, an embodiment of the present application provides a sixth possible implementation manner of the first aspect, where: the step of calculating the coordinates of the central point of the target object according to the target three-dimensional coordinate range comprises the following steps:
performing frame fitting on the target three-dimensional coordinate range to obtain a space cube where the target object is located;
and calculating the central coordinate of the space cube, and taking the central coordinate of the space cube as the central point coordinate of the target object.
In the above embodiment, the frame fitting maps the target three-dimensional coordinate range into a geometric shape for which the center point is easy to calculate, and the center coordinate is then calculated from the obtained geometric shape. The center coordinate calculated from the fitted shape also better represents the center of the target object.
With reference to the first aspect, an embodiment of the present application provides a seventh possible implementation manner of the first aspect, where: the method further comprises the following steps: and when the first distance is smaller than a set value, generating an alarm message.
In the above embodiment, the potential danger that may exist in the operator or the target object may be reminded through the alarm message, thereby effectively reducing the danger.
With reference to the seventh implementation manner of the first aspect, this application provides an eighth possible implementation manner of the first aspect, where: the target equipment is an excavator, the excavator comprises an excavator arm, and the step of generating the alarm message when the first distance is smaller than the set value comprises the following steps:
generating an alarm message when the first distance is less than a longest extension of the excavator arm.
In a second aspect, an embodiment of the present application further provides an apparatus control device, including:
the first acquisition module is used for acquiring first image data of a defined area corresponding to the target equipment;
the second acquisition module is used for acquiring second image data of the defined area of the target equipment;
the matching module is used for matching a target image area where the target object is located with each point in the second image data when the target object is detected to exist in the first image data, so as to obtain a target three-dimensional coordinate range of the target image area;
the acquisition module is used for acquiring a first distance between the target object and the target equipment according to the target three-dimensional coordinate range;
and the control module is used for controlling the target equipment to stop working or reduce the working speed when the first distance is smaller than a set value.
In a third aspect, an embodiment of the present application further provides an excavator, including: a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor. When the excavator is running, the processor and the memory communicate via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the method in the first aspect or in any possible implementation of the first aspect.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a block schematic diagram of an excavator provided in an embodiment of the present application.
Fig. 2 is a flowchart of an apparatus control method according to an embodiment of the present application.
Fig. 3 is a detailed flowchart of step S203 of the apparatus control method according to the embodiment of the present application.
Fig. 4 is a functional block diagram of an apparatus control device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
In the industrial construction field, much of the equipment is large, and some parts with a relatively high risk factor may be present on it, for example the bucket of an excavator used for excavating earth: such parts facilitate construction work, but a person who is inadvertently struck may be placed in a dangerous state. Some equipment has an operator who can stop the equipment when he sees a person nearby, but the equipment is sometimes so large that the operator cannot see the surrounding environment well. Based on these existing problems, the inventors have studied various kinds of equipment in the industrial field.
Firstly, because constructors wear clothing with specific features, the surroundings of the equipment can be recognized by acquiring images of the surrounding environment and identifying the acquired images. When the presence of a constructor is recognized, the equipment is stopped, so that it does not collide with the constructor. However, with this approach a constructor may be present around the equipment yet far away from it, so that normal operation does not threaten the constructor's safety; stopping the equipment in this case greatly reduces its operating efficiency. In addition, persons not wearing clothing with the specific features may fail to be identified, so their safety may still be endangered.
Further, the prior art also proposes having constructors wear mobile tags and using the propagation speed of a wireless signal in the air to determine the distance between an on-vehicle terminal and a mobile tag. However, not every person on site carries a mobile tag, so some people may not be detected.
In view of the above research, the present embodiment provides an apparatus control method that determines, from images acquired of the surrounding environment of the equipment, whether a target object (e.g., a constructor) exists in the images, calculates the distance between the identified target object and the target equipment, and controls the target equipment according to that distance, thereby controlling the operating environment of the equipment.
To facilitate understanding of the present embodiment, a mechanical apparatus for performing an apparatus control method disclosed in the embodiments of the present application will be described in detail first.
Example one
As shown in fig. 1, fig. 1 is a block schematic diagram of an excavator provided in an embodiment of the present application. The excavator may include: a memory 111, a memory controller 112, a processor 113, a peripheral interface 114, an input/output unit 115, a collection device 116, a bucket 117, and a body 118. It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 1 is merely exemplary and is not intended to limit the configuration of the excavator 100. For example, the excavator 100 may also include more or fewer components than shown in fig. 1, or have a different configuration from that shown in fig. 1.
The memory 111, the memory controller 112, the processor 113, the peripheral interface 114, the input/output unit 115 and the acquisition device 116 are electrically connected to each other directly or indirectly to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines.
The memory 111 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 111 is used for storing a program; the processor 113 executes the program after receiving an execution instruction, and the method executed by the excavator 100 defined by the process disclosed in any embodiment of the present application may be applied to, or implemented by, the processor 113.
The processor 113 may be an integrated circuit chip having signal processing capability. The processor 113 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed by it. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The peripheral interface 114 couples various input/output devices to the processor 113 and to the memory 111. In some embodiments, the peripheral interface 114, the processor 113, and the memory controller 112 may be implemented in a single chip. In other examples, they may each be implemented by a separate chip.
The input/output unit 115 is used for the user to provide input data. The input/output unit may be, but is not limited to, a key on the device, a mobile remote controller connected to the excavator, and the like.
The collection device 116 is used to collect environmental data around the excavator. The collection device 116 may be an industrial camera, a laser radar, a binocular camera, or the like. In one embodiment, the excavator may comprise two collection devices: a first collection device and a second collection device. The first collection device and the second collection device can be installed on the same horizontal line or the same vertical line; the installation manner may be chosen according to the type of the collection device 116.
Further, the excavator may include more units, for example a display unit. The display unit provides an interactive interface (e.g., a user interface) between the excavator 100 and a user, or is used for displaying image data for the user's reference. In this embodiment, the display unit may be a liquid crystal display or a touch display. In the case of a touch display, it can be a capacitive or resistive touch screen supporting single-point and multi-point touch operations, meaning that the touch display can sense touch operations produced simultaneously at one or more locations on the touch display and send the sensed touch operations to the processor 113 for calculation and processing.
Example two
Please refer to fig. 2, which is a flowchart illustrating an apparatus control method according to an embodiment of the present disclosure. In some embodiments, the equipment control method in the present embodiment is applied to the excavator shown in fig. 1. The specific process shown in fig. 2 will be described in detail below.
Step S201, first image data of a defined area corresponding to the target device is acquired.
The first image data may be two-dimensional picture data; or may be video data.
In an alternative embodiment, the first image data may be acquired by a camera. For example, it may be an industrial camera or other common cameras that can achieve image acquisition.
Step S202, second image data of the defined area of the target device is acquired.
The execution sequence of step S201 and step S202 is not limited to the sequence shown in fig. 2, which is only schematic. For example, step S201 may be performed before step S202, after step S202, or simultaneously with it.
The second image data may be two-dimensional picture data, video data, or three-dimensional point cloud data.
Wherein, if the second image data is picture data or video data, the device for acquiring the second image data may be the same device as the device for acquiring the first image data. If the second image data is three-dimensional point cloud data, the device for acquiring the second image data may be a three-dimensional sensor capable of acquiring a distance between a pixel point and the acquisition device, for example, the three-dimensional sensor may be a laser radar, a depth camera, or the like.
Optionally, the first image data and the second image data may both be two-dimensional image data. In this case, the first acquisition device for acquiring the first image data and the second acquisition device for acquiring the second image data may be installed on the same horizontal line, so that the two devices form a binocular camera acquiring two-dimensional images of the surrounding environment, and three-dimensional point cloud data may be obtained by matching the two images. Optionally, the three-dimensional point cloud data may be obtained by performing binocular disparity matching on the first image data and the second image data and using the reprojectImageTo3D function in the OpenCV open-source vision library.
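As an illustrative sketch only (not part of the original disclosure), the binocular pipeline above could be implemented with OpenCV roughly as follows; the SGBM parameters and the disparity-to-depth matrix Q are assumptions that would in practice come from stereo calibration of the two acquisition devices.

```python
import cv2
import numpy as np

def point_cloud_from_stereo(left_img, right_img, Q):
    """Compute 3D points from a rectified stereo pair.

    left_img, right_img: rectified grayscale frames from the two acquisition devices.
    Q: 4x4 disparity-to-depth matrix obtained from cv2.stereoRectify during calibration.
    """
    # Semi-global block matching; the parameter values here are illustrative only.
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=128,
                                    blockSize=5)
    disparity = matcher.compute(left_img, right_img).astype(np.float32) / 16.0
    # Re-project every pixel with a valid disparity into 3D camera coordinates.
    points_3d = cv2.reprojectImageTo3D(disparity, Q)
    mask = disparity > 0
    return points_3d[mask]
```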
Alternatively, the first image data may be two-dimensional image data and the second image data may be three-dimensional point cloud data. In this case, the first acquisition device that acquires the first image data and the second acquisition device that acquires the second image data may be installed on the same vertical line. In an alternative example, the two acquisition devices may be installed inside an industrial waterproof cylinder, for example with the second acquisition device fixed to the upper portion of the cylinder and the first acquisition device fixed to the lower portion, so that the first acquisition device and the second acquisition device lie on the same vertical line.
Before step S203, the first image data may be detected by a neural network model, so as to detect whether there is a target object in the first image data.
The neural network model can be obtained by training through a training data set.
The training data set described above may be acquired using the first acquisition device described above. The training data includes images containing constructors and other persons, and may also include images taken under different conditions. The different conditions may include different weather, such as sunny, rainy, foggy, and snowy days; different time periods, such as morning, midday, and evening; and different lighting. Furthermore, the positions of constructors in the image data acquired by the first acquisition device can be labeled with the labelme labeling tool, so as to obtain coordinate labels of the two-dimensional circumscribed rectangles of objects such as constructors in each image. The training data set may include a training set, a validation set, and a test set, which may have different proportions; optionally, the training set may account for a larger proportion than the validation set and the test set. For example, the ratio of training set, validation set, and test set may be 6:2:2; as another example, it may be 5:3:2.
The model to be trained corresponding to the neural network model can be a model trained with the FastBox algorithm. Relative to the traditional Yolo algorithm, the FastBox algorithm adds a ROI-Pooling (Region of Interest Pooling) structure.
The FastBox network is divided into two parts, an encoder and a decoder. The encoder uses VGG16 to extract constructor features. The decoder first passes the encoded features through 1x1 convolutional layers with a number of filters (for example, five hundred filters) to generate a 39x12x500 tensor, and then outputs 6 channels of size 39x12 through further 1x1 convolutional layers. The first two channels of this tensor generate the bounding box: one channel outputs a two-dimensional frame and the other outputs the classification of the object within the frame. The last four channels of the tensor represent the boundary coordinates of the two-dimensional frame, namely the maximum value on the first axis, the minimum value on the first axis, the maximum value on the second axis, and the minimum value on the second axis.
And inputting the training set, the verification set and the test set into a model to be trained respectively for calculation, and determining parameters to be determined of the model to be trained after each calculation, so as to form a neural network model for identifying whether a target object exists in the first image data.
Constructor model training is performed with the FastBox algorithm; using an Adam optimizer and a dropout rate of 0.5 on all 1x1 convolutions, a constructor recognition model with an accuracy of 94.8% is obtained.
The first image data acquired by the first acquisition device is fed into the neural network model in real time, and the two-dimensional circumscribed rectangle coordinates corresponding to a target object (such as a constructor) in the first image data are obtained.
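A minimal sketch of this real-time detection loop is given below. The detector callable is a hypothetical stand-in for whichever trained model (FastBox, or one of the alternatives mentioned below) is deployed; its output format is an assumption for illustration.

```python
import cv2

def run_detection(camera_index, detector):
    """Feed frames from the first acquisition device to the detector and yield,
    for each frame, the 2D circumscribed rectangles of detected persons."""
    cap = cv2.VideoCapture(camera_index)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # detector(frame) is assumed to return a list of (x_min, y_min, x_max, y_max, score).
        boxes = [b for b in detector(frame) if b[4] > 0.5]
        yield frame, boxes
    cap.release()
```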
Optionally, the initial model to be trained corresponding to the neural network model may also be based on a target detection algorithm such as Yolo, Faster R-CNN, or SSD (Single Shot MultiBox Detector).
Step S203, when a target object is detected to exist in the first image data, matching a target image area where the target object is located with each point in the second image data to obtain a target three-dimensional coordinate range of the target image area.
Whether a target object exists around the target device can be determined through recognition of the first image data. The target object may be a person, such as a constructor or another inspector; it may also be an animal, or a fixed structure, piled building materials, and the like.
In one embodiment, the second image data may be three-dimensional point cloud data. As shown in fig. 3, step S203 may include step S2031 and step S2032.
Step S2031, projecting the three-dimensional point cloud data to a coordinate system corresponding to the first image data through coordinate conversion to obtain a projection point set.
The projection point set is a set of coordinate points projected onto the two-dimensional image space based on the second image data.
Step S2031 may be implemented by: and performing coordinate conversion on the three-dimensional point cloud data by using a conversion matrix between the images acquired by the first acquisition equipment and the second acquisition equipment to obtain the projection point set.
The transformation matrix is determined from a first set of calibration points acquired using the first acquisition device and a second set of calibration points acquired using a second acquisition device.
Specifically, the first acquisition device may be calibrated before the conversion matrix is determined. Optionally, the first acquisition device may be calibrated with a checkerboard calibration method to obtain the intrinsic parameter matrix and distortion matrix of the first acquisition device.
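The checkerboard calibration could, for example, be done with OpenCV as sketched below; the board size and square spacing are assumed values, not part of the original disclosure.

```python
import glob
import cv2
import numpy as np

def calibrate_camera(image_glob, board_size=(9, 6), square_size=0.025):
    """Checkerboard calibration of the first acquisition device.

    Returns the intrinsic parameter matrix and the distortion coefficients.
    board_size is the number of inner corners, square_size their spacing in meters.
    """
    # 3D coordinates of the board corners in the board's own plane (Z = 0).
    object_points = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    object_points[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size

    obj_points, img_points = [], []
    for path in glob.glob(image_glob):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_points.append(object_points)
            img_points.append(corners)
    _, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    return camera_matrix, dist_coeffs
```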
In order to obtain the three-dimensional coordinate range of the target object in the three-dimensional image by using the matching of the first image data and the second image data, the first acquisition device and the second acquisition device need to be fused. The fusion of the two acquisition devices may include spatial fusion and temporal matching.
Spatial fusion means that the pixel points within the target image range of the target object identified in the first image data are matched, in the three-dimensional scene of the three-dimensional point cloud data corresponding to the second image data, to the unique data points corresponding to them. The spatial fusion of the two acquisition devices is described below, taking a laser radar as the second acquisition device.
Assuming that the three-dimensional coordinate of a calibration reference point in the laser radar coordinate system is M(X, Y, Z) and its image coordinate in the coordinate system of the first acquisition device is m(u, v), the transformation relationship between the two coordinate systems can be expressed as:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = P_{3\times 4} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}$$

where $P_{3\times 4}$ is the projective transformation matrix from the laser radar coordinate system to the coordinate system of the first acquisition device and $Z_c$ is an arbitrary scale factor. Eliminating $Z_c$ yields:

$$\begin{cases} p_{11}X + p_{12}Y + p_{13}Z + p_{14} - u\,(p_{31}X + p_{32}Y + p_{33}Z + p_{34}) = 0 \\ p_{21}X + p_{22}Y + p_{23}Z + p_{24} - v\,(p_{31}X + p_{32}Y + p_{33}Z + p_{34}) = 0 \end{cases}$$

where $p = (p_{11}, p_{12}, p_{13}, p_{14}, p_{21}, p_{22}, p_{23}, p_{24}, p_{31}, p_{32}, p_{33}, p_{34})^{T}$. Each calibration point contributes two such equations, so at least 6 calibration points are needed to solve for the projective transformation matrix $P_{3\times 4}$. The transformation matrix $P_{3\times 4}$ can be calculated using a first set of calibration points acquired by the first acquisition device and a second set of calibration points acquired by the second acquisition device; the points of the first set correspond one-to-one with those of the second set, and there are at least six point pairs.
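As a hedged illustration (not code from the original disclosure), the linear system above can be solved for p by stacking two rows per calibration point and taking a least-squares solution, for example via SVD:

```python
import numpy as np

def solve_projection_matrix(lidar_points, image_points):
    """Solve the 3x4 projective transformation matrix P from at least 6
    corresponding calibration points.

    lidar_points: (N, 3) array of (X, Y, Z) in the laser radar frame.
    image_points: (N, 2) array of (u, v) pixels in the first acquisition device.
    """
    assert len(lidar_points) >= 6, "at least 6 calibration points are required"
    rows = []
    for (X, Y, Z), (u, v) in zip(lidar_points, image_points):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows)
    # The right singular vector with the smallest singular value minimizes ||A p|| with ||p|| = 1.
    _, _, vt = np.linalg.svd(A)
    p = vt[-1]
    return p.reshape(3, 4)
```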
In one example, the calibration process may use the auto_camera_lidar_calibration() function in the auto function set to read the calibration point data acquired by the first acquisition device and the laser radar and display them simultaneously in rviz. A checkerboard is placed in front of the laser radar, a point in the image that can be matched to the point cloud is found, the corresponding pixel in the image is clicked, and the corresponding three-dimensional point in the laser radar view is clicked with the Publish Point tool. The operation is repeated with at least nine different points. After the calibration is finished, the conversion matrix between the two-dimensional pixel points in the coordinate system of the first acquisition device and the three-dimensional points in the laser radar coordinate system is obtained.
Data matching in time ensures that the laser radar and the camera are synchronized when data is collected. A thread is created for each of the laser radar and the camera, the two sensors are sampled once per identical time interval, and the same GPS time (or the time of another navigation system) is assigned, so that the laser radar data and the camera data are processed synchronously in time, i.e., matched in time.
Specifically, the data matching of the first acquisition device and the second acquisition device in time may be achieved through the following steps: setting the same time system for the first acquisition device and the second acquisition device; configuring a first control thread for the first acquisition device; configuring a second control thread for the second acquisition device; and controlling, through the first control thread and the second control thread, the first acquisition device and the second acquisition device to use the same time interval between two adjacent data acquisitions, so that the first image data acquired by the first acquisition device and the second image data acquired by the second acquisition device are time-synchronized.
The first control thread and the second control thread acquire image data around the target device according to a set time interval.
By controlling the two acquisition devices through separate threads and setting the same time system for them, the image data acquired by the two devices can be matched in time, which makes the detection result for the target object more accurate.
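A minimal sketch of the two-thread, shared-timebase acquisition described above follows; grab_camera_frame and grab_lidar_scan are hypothetical placeholders for the drivers of the two acquisition devices, and GPS time is approximated here by the system clock.

```python
import threading
import time

def acquisition_thread(grab, buffer, interval, time_source):
    """Acquire data at a fixed interval and stamp it with the shared time system."""
    while True:
        sample = grab()
        # Both threads read the same time source (e.g. GPS time), so the
        # camera frames and lidar scans can later be matched by timestamp.
        buffer.append((time_source(), sample))
        time.sleep(interval)

def start_synchronized_acquisition(grab_camera_frame, grab_lidar_scan,
                                   interval=0.1, time_source=time.time):
    camera_buffer, lidar_buffer = [], []
    t1 = threading.Thread(target=acquisition_thread,
                          args=(grab_camera_frame, camera_buffer, interval, time_source),
                          daemon=True)
    t2 = threading.Thread(target=acquisition_thread,
                          args=(grab_lidar_scan, lidar_buffer, interval, time_source),
                          daemon=True)
    t1.start()
    t2.start()
    return camera_buffer, lidar_buffer
```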
Step S2032, mapping the corresponding sub-projection point set in the target image area back to the three-dimensional coordinate system corresponding to the three-dimensional point cloud data to obtain a target three-dimensional coordinate range of the target image area.
Specifically, the sub-projection point set may be mapped back to the three-dimensional coordinate system corresponding to the three-dimensional point cloud data according to the recovery matrix corresponding to the transformation matrix.
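An illustrative sketch of steps S2031 and S2032 under the assumptions above: the point cloud is projected into the image plane with the matrix P, the projections that fall inside the detected two-dimensional frame are selected, and the corresponding original three-dimensional points form the target three-dimensional coordinate range (so no explicit recovery matrix is needed in this simplified form).

```python
import numpy as np

def target_coordinate_range(lidar_points, P, box):
    """Project the point cloud into the first image and keep the points whose
    projections fall inside the target image area (x_min, y_min, x_max, y_max).

    Returns the 3D points of the target object; their extent is the target
    three-dimensional coordinate range.
    """
    homogeneous = np.hstack([lidar_points, np.ones((len(lidar_points), 1))])
    projected = (P @ homogeneous.T).T          # (N, 3) homogeneous image points
    u = projected[:, 0] / projected[:, 2]
    v = projected[:, 1] / projected[:, 2]
    x_min, y_min, x_max, y_max = box
    in_front = projected[:, 2] > 0             # keep points in front of the camera
    inside = (u >= x_min) & (u <= x_max) & (v >= y_min) & (v <= y_max) & in_front
    # Mapping back: each selected projection corresponds to an original 3D point.
    return lidar_points[inside]
```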
Optionally, the step S203 may be implemented as: performing coordinate conversion on the pixel points in the target image area in the first image data to obtain three-dimensional coordinates of the pixel points in the target image area; and matching the three-dimensional coordinates of the pixel points in the target image area with the three-dimensional point cloud data in the second image data to determine the target three-dimensional coordinate range of the pixel points in the target image area.
And the three-dimensional coordinates are based on the coordinates of each point in the second image data in the three-dimensional coordinate system.
And projecting the target image area into the second image data by performing coordinate conversion on the pixel points in the target image area to obtain a target three-dimensional coordinate range of the target object. The coverage range of the target object in space can be obtained through relatively few calculations, and the mapping of the objects with different dimensions is realized.
And performing coordinate conversion on the pixel points in the target image area by using the conversion matrixes corresponding to the first acquisition equipment and the second acquisition equipment, so that the three-dimensional coordinates obtained by conversion are more matched with a coordinate system used by the pixel points in the second image data. In addition, a transformation matrix is determined based on the first set of calibration points acquired by the first acquisition device and the second set of calibration points acquired by the second acquisition device, so that the determined transformation matrix can be correlated with the two acquisition devices.
In another embodiment, the second image data may be two-dimensional image data. The first image data and the second image data may then be subjected to binocular depth estimation to obtain three-dimensional point cloud data. Optionally, the binocular depth estimation may employ the SAD (Sum of Absolute Differences) algorithm, the BM (Block Matching) algorithm, the SGBM (Semi-Global Block Matching) algorithm, the PSMNet (Pyramid Stereo Matching Network) algorithm, and the like.
Step S204, a first distance between the target object and the target device is obtained according to the target three-dimensional coordinate range.
The target three-dimensional coordinate range may be a coordinate range corresponding to a coordinate system having the position of the second capturing device as an origin. Alternatively, the distance between any one point in the target three-dimensional coordinate range corresponding to the target object and the coordinate origin may be used as the first distance.
In an alternative embodiment, the distance between the closest point to the origin in the target three-dimensional coordinate range and the origin may be calculated as the first distance.
Optionally, the step of obtaining a first distance between the target object and the target device according to the target three-dimensional coordinate range includes:
calculating the center point coordinate of the target object according to the target three-dimensional coordinate range; and calculating the first distance between the target object and the target device according to the center point coordinate.
For example, the center point coordinates may represent the coordinates closest to the target device in the target three-dimensional coordinate range.
The center point of the target object well represents the position of the target object, so obtaining the distance between the target object and the target device from the distance between the center point coordinate and the target device represents their relative position more accurately.
Since the device affects the safety of surrounding objects mainly through the distance in the horizontal direction, the horizontal distance may be used as the distance between the target object and the target device. Step S204 may therefore include: acquiring the two horizontal coordinate values of the center point coordinate; and calculating the horizontal distance between the target object and the target device from these two coordinate values, as the first distance.
In this way, the influence of the vertical distance can be avoided, so that objects within the set-value range can be better identified and the safety of personnel sharing the working environment with the device is improved.
Optionally, the step of calculating the coordinates of the center point of the target object according to the target three-dimensional coordinate range includes: performing frame fitting on the target three-dimensional coordinate range to obtain a space cube where the target object is located; and calculating the central coordinate of the space cube, and taking the central coordinate of the space cube as the central point coordinate of the target object.
The frame fitting can be performed with an L-shape, minimum-area-rectangle method: first the point cloud group of the target three-dimensional coordinate range is obtained; each point in the group is traversed to find the two ground-projection points with the largest distance, and the line between them is taken as a diagonal of the circumscribed rectangle; the perpendicular distance of the other projected points to this diagonal is computed, and the point with the largest perpendicular distance is taken as a frame point of the circumscribed rectangle; the projected rectangle is determined from the diagonal and the frame point, and the three-dimensional frame is fitted by taking the highest Z value of the points within the rectangular projection as the height.
The frame fitting maps the target three-dimensional coordinate range into a geometric shape for which the center point is easy to calculate, and the center coordinate is then calculated from the obtained geometric shape. The center coordinate calculated from the fitted shape also better represents the center of the target object.
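A rough sketch of the fitting and distance computation is shown below, using OpenCV's minimum-area rectangle on the ground projection as a simplified stand-in for the L-shape fitting described above; this is an assumption for illustration, not the exact procedure of the disclosure.

```python
import cv2
import numpy as np

def first_distance_from_target_points(target_points):
    """Fit a box around the target's 3D points and return the horizontal
    distance of its center from the origin of the second acquisition device."""
    ground = target_points[:, :2].astype(np.float32)       # project onto the ground plane
    (cx, cy), (w, h), angle = cv2.minAreaRect(ground)       # minimum-area circumscribed rectangle
    top = float(target_points[:, 2].max())                  # highest Z value gives the box height
    bottom = float(target_points[:, 2].min())
    center = np.array([cx, cy, (top + bottom) / 2.0])
    # Only the two horizontal coordinates are used for the first distance.
    first_distance = float(np.hypot(center[0], center[1]))
    return first_distance, center
```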
Further, the first distance may be calculated directly based on the above-described target object three-dimensional coordinate range.
Considering that environmental data in the environment where the target device is located may affect the identification of the target object, some preprocessing may also be performed on the points within the target three-dimensional coordinate range before the first distance is calculated.
First, ground culling may be performed to separate the background portions that need not be tracked, such as the terrain, from the objects of interest. A specific implementation flow is as follows: first, a coordinate grid map is established; then, points whose coordinate height values exceed a set value are filtered out based on the grid map; Gaussian filtering is applied, or the gradient between points of adjacent cells of the same channel is calculated; discontinuous points are filtered out through the gradient; and the ground points and the points above the ground are separated using median filtering and outlier filtering.
Then, Euclidean distance clustering is performed on the points remaining after the ground culling. Euclidean distance clustering clusters the discrete point cloud within the three-dimensional coordinate range corresponding to the target image area, so that adjacent discrete points that are close together are merged. The three-dimensional coordinate ranges corresponding to the target objects are aggregated through Euclidean distance clustering.
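A simplified sketch of this preprocessing is given below, assuming a grid-based ground filter and DBSCAN as the Euclidean distance clustering step; the library choice and the threshold values are assumptions, not part of the original disclosure.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def preprocess_target_points(points, cell=0.2, ground_margin=0.3, eps=0.5, min_samples=10):
    """Remove near-ground points with a coordinate grid and merge the rest by
    Euclidean distance clustering, keeping the largest cluster as the target."""
    # Grid map: for every (x, y) cell keep the lowest Z as the local ground estimate.
    cells = np.floor(points[:, :2] / cell).astype(np.int64)
    ground = {}
    for c, z in zip(map(tuple, cells), points[:, 2]):
        ground[c] = min(ground.get(c, z), z)
    above = np.array([z - ground[tuple(c)] > ground_margin
                      for c, z in zip(cells, points[:, 2])])
    remaining = points[above]
    if len(remaining) == 0:
        return remaining
    # Euclidean distance clustering: nearby points are merged into one cluster.
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(remaining)
    valid = labels >= 0
    if not valid.any():
        return remaining[:0]
    largest = np.bincount(labels[valid]).argmax()
    return remaining[labels == largest]
```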
And step S205, when the first distance is smaller than a set value, controlling the target device to stop working or reduce the working speed.
The set value may be a safety distance set by a user, or may be a safety value set according to the size and length of the work component of the target device.
In one example, the target device may be an excavator, and the set value is not smaller than the distance between the edge of the excavator bucket, with the bucket extended, and the second collection device. The set value can also take into account the actual tonnage of the excavator, the length of the mechanical arm, and the like.
In one example, the target device may be a multifunctional excavator comprising a decision module and a feedback control module. The decision module can be used for processing and identifying the image data acquired by the acquisition devices and deciding, according to the identification result, whether to stop the excavator or reduce its operation. The feedback control module can be used for controlling the bucket of the excavator according to the result obtained by the decision module. When the calculated first distance is smaller than the set value, the decision module of the excavator sends an emergency bit code 0x0001 to the feedback control module, the electromagnetic valve of the excavator is controlled to open, the excavator stops working, and a buzzing warning is given; the excavator can also be stopped through a switch valve. When the calculated first distance is larger than the set value, or no target object exists in the first image data, the decision module sends the bit code 0x0000 to the feedback control module and the normal work of the excavator is not interfered with. Further, if the excavator is controlled to stop at a first time, it can be controlled to resume work when no target object exists in the first image data acquired at a second time, or when the calculated first distance is again greater than the set value.
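A minimal sketch of such a decision step follows; the bit codes match the example above, while send_to_feedback_controller, sound_buzzer, and the None convention for "no target detected" are hypothetical names and conventions used only for illustration.

```python
EMERGENCY_CODE = 0x0001
NORMAL_CODE = 0x0000

def decision_step(first_distance, set_value, send_to_feedback_controller, sound_buzzer):
    """Decide, from the computed first distance, whether the excavator must stop.

    first_distance is None when no target object was detected in the first image data.
    """
    if first_distance is not None and first_distance < set_value:
        # Target object inside the danger range: open the solenoid valve and stop work.
        send_to_feedback_controller(EMERGENCY_CODE)
        sound_buzzer()
        return "stopped"
    # No target object, or the target object is far enough away: do not interfere.
    send_to_feedback_controller(NORMAL_CODE)
    return "normal"
```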
With the method provided by the embodiment of the application, the first image data is detected; if a target object is detected, the first distance between the target object and the equipment is evaluated, and if the first distance is smaller than the set value, the target object is considered to be in potential danger. Through these two rounds of recognition, detection of a dangerous state of the target object can be improved. Furthermore, because the working state of the target equipment is changed only when the target object may actually be in danger, the safety of every object in the environment during operation of the target equipment can be improved; compared with reducing the operation of the equipment whenever a constructor's clothing is recognized, the working efficiency of the target equipment can also be improved.
On the basis shown in fig. 2, the device control method in this embodiment may further include: and when the first distance is smaller than a set value, generating an alarm message.
The alarm message may include a flashing-light alarm, a buzzer alarm, and the like. Of course, the alarm message may also be a voice alarm, for example a voice announcement such as "you are now in a dangerous location".
Illustratively, the target equipment is an excavator comprising an excavator arm, and the set value may be the longest extension of the excavator arm. Of course, the set value may also be a value larger than the longest extension of the excavator arm. Alternatively, the set value may be set according to the operation state, operation trajectory, operation flexibility, and the like of the target device.
Potential dangers which may exist in the operating personnel or the target object can be reminded through the alarm message, and therefore dangerousness is effectively reduced.
EXAMPLE III
Based on the same application concept, the embodiment of the present application further provides a device control apparatus corresponding to the device control method, and since the principle of the apparatus in the embodiment of the present application for solving the problem is similar to that of the device control method in the embodiment of the present application, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not repeated.
Please refer to fig. 4, which is a schematic diagram of the functional modules of a device control apparatus according to an embodiment of the present disclosure. The device control apparatus in this embodiment is configured to execute the steps of the method in the second embodiment, and includes: a first acquisition module 301, a second acquisition module 302, a matching module 303, an acquisition module 304, and a control module 305, wherein:
a first acquisition module 301, configured to acquire first image data of a defined area corresponding to a target device;
a second acquisition module 302, configured to acquire second image data of the defined area of the target device;
a matching module 303, configured to, when it is detected that a target object exists in the first image data, match a target image area where the target object is located with each point in the second image data, so as to obtain a target three-dimensional coordinate range of the target image area;
an obtaining module 304, configured to obtain a first distance between the target object and the target device according to the target three-dimensional coordinate range;
and the control module 305 is used for controlling the target device to stop working or reduce the working speed when the first distance is smaller than a set value.
In a possible implementation, the matching module 303 is further configured to:
performing coordinate conversion on the pixel points in the target image area in the first image data to obtain three-dimensional coordinates of the pixel points in the target image area, wherein the three-dimensional coordinates are based on the coordinates of each point in the second image data in a three-dimensional coordinate system;
and matching the three-dimensional coordinates of the pixel points in the target image area with the three-dimensional point cloud data in the second image data to determine the target three-dimensional coordinate range of the pixel points in the target image area.
In a possible implementation, the matching module 303 is further configured to:
coordinate conversion is carried out on the three-dimensional point cloud data by using a conversion matrix between images acquired by a first acquisition device and a second acquisition device to obtain the projection point set, wherein the first acquisition device is a device for acquiring the first image data, and the second acquisition device is a device for acquiring the second image data;
the transformation matrix is determined from a first set of calibration points acquired using the first acquisition device and a second set of calibration points acquired using a second acquisition device.
In one possible embodiment, the device control apparatus further includes: a configuration module to:
setting the same time system for the first acquisition equipment and the second acquisition equipment;
configuring a first control thread for the first acquisition equipment;
configuring a second control thread for the second acquisition equipment;
and controlling the time intervals of the two adjacent data acquisition of the first acquisition equipment and the second acquisition equipment to be the same through the first control thread and the second control thread so as to synchronize the time of the first image data acquired by the first acquisition equipment and the time of the second image data acquired by the second acquisition equipment.
In a possible implementation, the obtaining module 304 is further configured to:
calculating a center point coordinate of the target object according to the target three-dimensional coordinate range;
and calculating the first distance between the target object and the target device according to the center point coordinate.
In a possible implementation, the obtaining module 304 is further configured to:
acquiring the two horizontal coordinate values from the center point coordinate;
and calculating the horizontal distance between the target object and the target device according to the two horizontal coordinate values, and using this horizontal distance as the first distance; a sketch of this calculation is given below.
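Assuming, for illustration only, that the three-dimensional coordinate system has its origin at the target device and that x and y are the horizontal axes, the horizontal first distance reduces to:

import math

def horizontal_distance(center_point):
    # center_point: (x, y, z) center point coordinate of the target object, expressed
    # in a coordinate system whose origin lies at the target device (an assumption).
    x, y, _ = center_point                             # keep the two horizontal coordinate values
    return math.hypot(x, y)                            # horizontal distance used as the first distance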
In a possible implementation, the obtaining module 304 is further configured to:
performing frame fitting on the target three-dimensional coordinate range to obtain a space cube enclosing the target object;
and calculating the center coordinate of the space cube, and taking the center coordinate of the space cube as the center point coordinate of the target object; a sketch of this fitting is given below.
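A minimal sketch of such frame fitting, using an axis-aligned bounding box over the matched points (one simple choice; the fitting method is not prescribed by the description above):

import numpy as np

def fit_cube_center(target_points):
    # target_points: (N, 3) points forming the target three-dimensional coordinate range
    lower = target_points.min(axis=0)                  # one corner of the space cube
    upper = target_points.max(axis=0)                  # the opposite corner
    return (lower + upper) / 2.0                       # center coordinate of the space cube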
In one possible embodiment, the device control apparatus further includes an alarm module, configured to generate an alarm message when the first distance is smaller than the set value.
In addition, an embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the steps of the device control method in the above method embodiment.
The computer program product of the device control method provided in the embodiments of the present application includes a computer-readable storage medium storing program code. The instructions contained in the program code may be used to execute the steps of the device control method in the above method embodiment; for details, refer to the above method embodiment, which are not repeated here.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways, and the apparatus embodiments described above are merely illustrative. For example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods, and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.

It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only of preferred embodiments of the present application and is not intended to limit the present application; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the present application shall fall within the protection scope of the present application. It should be noted that like reference numbers and letters refer to like items in the figures; thus, once an item has been defined in one figure, it need not be further defined or explained in subsequent figures.
The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any changes or substitutions that a person skilled in the art can easily conceive within the technical scope disclosed by the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (7)

1. An apparatus control method characterized by comprising:
acquiring first image data in a limited area corresponding to a target device;
acquiring second image data of the limited area of the target device;
when a target object is detected to exist in the first image data, matching a target image area where the target object is located with each point in the second image data to obtain a target three-dimensional coordinate range of the target image area;
acquiring a first distance between the target object and the target device according to the target three-dimensional coordinate range;
when the first distance is smaller than a set value, controlling the target device to stop working or reduce its working speed;
the step of obtaining a first distance between the target object and the target device according to the target three-dimensional coordinate range includes:
calculating a center point coordinate of the target object according to the target three-dimensional coordinate range, wherein the center point coordinate represents the coordinate, within the target three-dimensional coordinate range, that is closest to the target device;
calculating the first distance between the target object and the target device according to the center point coordinate;
the second image data comprises three-dimensional point cloud data; the step of matching the target image area where the target object is located with each point in the second image data to obtain the target three-dimensional coordinate range of the target image area includes:
projecting the three-dimensional point cloud data into a coordinate system corresponding to the first image data through coordinate conversion to obtain a projection point set;
mapping the corresponding sub-projection point set in the target image area back to a three-dimensional coordinate system corresponding to the three-dimensional point cloud data to obtain a target three-dimensional coordinate range of the target image area;
the step of performing coordinate conversion on the pixel point in the target image region in the first image data to obtain the three-dimensional coordinate of the pixel point in the target image region includes:
performing coordinate conversion on the three-dimensional point cloud data by using a conversion matrix between the images acquired by a first acquisition device and a second acquisition device, so as to obtain the projection point set, wherein the first acquisition device is the device that acquires the first image data and the second acquisition device is the device that acquires the second image data;
and the conversion matrix is determined from a first set of calibration points acquired by the first acquisition device and a second set of calibration points acquired by the second acquisition device.
2. The method of claim 1, further comprising:
setting the same time system for the first acquisition equipment and the second acquisition equipment;
configuring a first control thread for the first acquisition equipment;
configuring a second control thread for the second acquisition equipment;
and controlling, through the first control thread and the second control thread, the first acquisition equipment and the second acquisition equipment to use the same time interval between two adjacent data acquisitions, so that the first image data acquired by the first acquisition equipment and the second image data acquired by the second acquisition equipment are synchronized in time.
3. The method of claim 1, wherein the step of calculating the first distance between the target object and the target device according to the center point coordinates comprises:
acquiring the two horizontal coordinate values from the center point coordinate;
and calculating the horizontal distance between the target object and the target device according to the two horizontal coordinate values, and using the horizontal distance as the first distance.
4. The method of claim 1, wherein the step of calculating the coordinates of the center point of the target object according to the target three-dimensional coordinate range comprises:
performing frame fitting on the target three-dimensional coordinate range to obtain a space cube enclosing the target object;
and calculating the center coordinate of the space cube, and taking the center coordinate of the space cube as the center point coordinate of the target object.
5. The method of claim 1, further comprising:
and when the first distance is smaller than a set value, generating an alarm message.
6. The method of claim 5, wherein the target device is an excavator, the excavator includes an excavator arm, and the step of generating an alarm message when the first distance is smaller than a set value comprises:
generating the alarm message when the first distance is smaller than the longest extension of the excavator arm.
7. An excavator, comprising: a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device operates, the processor and the memory communicate via the bus; and the machine-readable instructions, when executed by the processor, perform the steps of the method according to any one of claims 1 to 6.
CN201910358054.8A 2019-04-29 2019-04-29 Equipment control method and excavator Active CN110067274B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910358054.8A CN110067274B (en) 2019-04-29 2019-04-29 Equipment control method and excavator

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910358054.8A CN110067274B (en) 2019-04-29 2019-04-29 Equipment control method and excavator

Publications (2)

Publication Number Publication Date
CN110067274A (en) 2019-07-30
CN110067274B (en) 2021-08-13

Family

ID=67369703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910358054.8A Active CN110067274B (en) 2019-04-29 2019-04-29 Equipment control method and excavator

Country Status (1)

Country Link
CN (1) CN110067274B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110766623A (en) * 2019-10-12 2020-02-07 北京工业大学 Stereo image restoration method based on deep learning
CN114066739A (en) * 2020-08-05 2022-02-18 北京万集科技股份有限公司 Background point cloud filtering method and device, computer equipment and storage medium
CN111968102B (en) * 2020-08-27 2023-04-07 中冶赛迪信息技术(重庆)有限公司 Target equipment detection method, system, medium and electronic terminal
CN113055821B (en) * 2021-03-15 2023-01-31 北京京东乾石科技有限公司 Method and apparatus for transmitting information
CN113463718A (en) * 2021-06-30 2021-10-01 广西柳工机械股份有限公司 Anti-collision control system and control method for loader
CN114710228B (en) * 2022-05-31 2022-09-09 杭州闪马智擎科技有限公司 Time synchronization method and device, storage medium and electronic device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH03228929A (en) * 1990-02-02 1991-10-09 Yutani Heavy Ind Ltd Interference avoidance device for working machine
EP2395764B1 (en) * 2010-06-14 2016-02-17 Nintendo Co., Ltd. Storage medium having stored therein stereoscopic image display program, stereoscopic image display device, stereoscopic image display system, and stereoscopic image display method
CN109252563A (en) * 2017-07-14 2019-01-22 神钢建机株式会社 engineering machinery
CN109472831A (en) * 2018-11-19 2019-03-15 东南大学 Obstacle recognition range-measurement system and method towards road roller work progress

Also Published As

Publication number Publication date
CN110067274A (en) 2019-07-30

Similar Documents

Publication Publication Date Title
CN110067274B (en) Equipment control method and excavator
US11657595B1 (en) Detecting and locating actors in scenes based on degraded or supersaturated depth data
CN110660186B (en) Method and device for identifying target object in video image based on radar signal
US10949995B2 (en) Image capture direction recognition method and server, surveillance method and system and image capture device
Rottensteiner et al. The ISPRS benchmark on urban object classification and 3D building reconstruction
Zhou et al. Seamless fusion of LiDAR and aerial imagery for building extraction
JP6554169B2 (en) Object recognition device and object recognition system
CN104902246A (en) Video monitoring method and device
Xiao et al. Change detection in 3d point clouds acquired by a mobile mapping system
CN111753609A (en) Target identification method and device and camera
CN104966062A (en) Video monitoring method and device
US20160202071A1 (en) A Method of Determining The Location of A Point of Interest and The System Thereof
CN112270253A (en) High-altitude parabolic detection method and device
CN107396037A (en) Video frequency monitoring method and device
Gong et al. Automated road extraction from LiDAR data based on intensity and aerial photo
KR20180092591A (en) Detect algorithm for structure shape change using UAV image matching technology
CN109697428B (en) Unmanned aerial vehicle identification and positioning system based on RGB _ D and depth convolution network
JPH10255057A (en) Mobile object extracting device
CN111194015A (en) Outdoor positioning method and device based on building and mobile equipment
Boerner et al. Brute force matching between camera shots and synthetic images from point clouds
Chen et al. True orthophoto generation using multi-view aerial images
CN111753587A (en) Method and device for detecting falling to ground
CN115407338A (en) Vehicle environment information sensing method and system
WO2015073347A1 (en) Photovoltaic shade impact prediction
CN110910379B (en) Incomplete detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant