CN113156453A - Moving object detection method, apparatus, device and storage medium - Google Patents

Moving object detection method, apparatus, device and storage medium

Info

Publication number
CN113156453A
CN113156453A (application CN202110387061.8A)
Authority
CN
China
Prior art keywords
difference
matrix
projection matrix
determining
moving object
Prior art date
Legal status
Pending
Application number
CN202110387061.8A
Other languages
Chinese (zh)
Inventor
朱仁杰 (Zhu Renjie)
蔡宾 (Cai Bin)
向阳 (Xiang Yang)
Current Assignee
Wuhan Lianyi Heli Technology Co Ltd
Original Assignee
Wuhan Lianyi Heli Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Lianyi Heli Technology Co Ltd
Priority to CN202110387061.8A
Publication of CN113156453A

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention relates to the technical field of detection, and discloses a moving object detection method, apparatus, device and storage medium. The method comprises the following steps: acquiring point cloud data of the current environment through a laser radar, and determining a current projection matrix according to the point cloud data; acquiring a background projection matrix, and determining a difference matrix according to the background projection matrix and the current projection matrix; and acquiring the number of difference elements in the difference matrix, and judging whether a moving object exists in the current environment according to the number of difference elements. Compared with methods that detect moving objects with an additionally installed camera, which are easily affected by observation angle and illumination, the invention improves the accuracy of moving object detection without requiring any additional sensors.

Description

Moving object detection method, apparatus, device and storage medium
Technical Field
The present invention relates to the field of detection technologies, and in particular, to a method, an apparatus, a device, and a storage medium for detecting a moving object.
Background
With the continuous improvement of automation technology, people have ever higher requirements for the automation and intelligence of living facilities. In recent years, robots for various daily services, such as cleaning robots, tour guide robots and epidemic prevention robots, have come onto the market in quick succession. The aim of these robots is to reduce human intervention by performing monotonous, repetitive tasks. For example, before performing a task that may affect surrounding personnel, such as disinfection, or even a task involving toxic or harmful substances, the robot may first detect the surrounding environment to confirm that it is safe and free of unstable factors. To reduce human intervention as much as possible, the robot therefore needs to autonomously detect the environment, judge changes in its surroundings, and decide whether to execute the task.
Traditional moving object detection generally relies on image-based or infrared-based methods. An image-based method requires an additionally installed camera to capture images, has a limited observation angle, cannot cover the environment in all directions, is sensitive to ambient illumination, and cannot work at night or under changing illumination. An infrared-based method requires infrared sensors installed in multiple directions, and infrared sensors are prone to false alarms.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The main object of the present invention is to provide a moving object detection method, apparatus, device and storage medium, aiming to solve the technical problem of low moving object detection accuracy in the prior art.
In order to achieve the above object, the present invention provides a moving object detection method, including the steps of:
acquiring point cloud data of a current environment through a laser radar, and determining a current projection matrix according to the point cloud data;
acquiring a background projection matrix, and determining a difference matrix according to the background projection matrix and the current projection matrix;
and acquiring the quantity of the difference elements in the difference matrix, and judging whether a moving object exists in the current environment according to the quantity of the difference elements.
Optionally, the acquiring point cloud data of a current environment through a laser radar, and determining a current projection matrix according to the point cloud data includes:
acquiring point cloud data of a current environment through a laser radar, and determining a detection point angle and a detection point distance corresponding to each laser point cloud in the point cloud data;
determining detection point coordinates of each laser point cloud according to a preset resolution and the detection point angles;
and determining a current projection matrix corresponding to the point cloud data according to the detection point distance and the detection point coordinates.
Optionally, the obtaining a background projection matrix and determining a difference matrix according to the background projection matrix and the current projection matrix includes:
acquiring a background projection matrix, and acquiring a target element corresponding to any background element in the background projection matrix in the current projection matrix;
when the difference value between the background element value of the background element and the target element value of the target element is greater than a preset difference value, marking the target element as a difference element;
and determining an initial matrix according to the difference elements, and performing erosion and dilation processing on the initial matrix to obtain a difference matrix.
Optionally, the determining the coordinates of the detection point according to the preset resolution and the angle of the detection point includes:
reading a preset abscissa resolution from a preset resolution, and determining an abscissa of a detection point according to the preset abscissa resolution and the detection point angle;
reading a preset vertical coordinate resolution from a preset resolution, and determining a vertical coordinate of a detection point according to the preset vertical coordinate resolution and the detection point angle;
and determining the coordinates of the detection points according to the abscissa and the ordinate of the detection points.
Optionally, the determining a current projection matrix corresponding to the point cloud data according to the detection point distance and the detection point coordinate includes:
determining an initial projection matrix according to the detection point distance and the detection point coordinates;
acquiring a plurality of matrix elements from the initial projection matrix, and selecting the element with the smallest detection point distance from the plurality of matrix elements;
and determining a current projection matrix corresponding to the point cloud data according to the element with the smallest detection point distance.
Optionally, before the step of obtaining point cloud data of a current environment through a laser radar and determining a current projection matrix according to the point cloud data, the method further includes:
acquiring a plurality of background point cloud data, and generating a plurality of initial background projection matrixes according to the plurality of background point cloud data;
and performing outlier removal and average taking processing on the plurality of initial background projection matrixes to obtain a background projection matrix.
Optionally, before the step of obtaining the number of difference elements in the difference matrix and determining whether a moving object exists in the current environment according to the number of difference elements, the method further includes:
continuously acquiring a plurality of difference matrixes, and determining a preset difference threshold according to the number of difference elements in the plurality of difference matrixes.
Further, to achieve the above object, the present invention also proposes a moving object detecting apparatus comprising:
the acquisition module is used for acquiring point cloud data of a current environment through a laser radar and determining a current projection matrix according to the point cloud data;
the determining module is used for acquiring a background projection matrix and determining a difference matrix according to the background projection matrix and the current projection matrix;
and the judging module is used for acquiring the quantity of the difference elements in the difference matrix and judging whether a moving object exists in the current environment according to the quantity of the difference elements.
In addition, in order to achieve the above object, the present invention also provides a moving object detection apparatus, including: a memory, a processor and a moving object detection program stored on the memory and executable on the processor, the moving object detection program being configured to implement the steps of the moving object detection method as described above.
Furthermore, to achieve the above object, the present invention also proposes a storage medium having stored thereon a moving object detection program which, when executed by a processor, implements the steps of the moving object detection method as described above.
The method comprises the steps of obtaining point cloud data of a current environment through a laser radar, and determining a current projection matrix according to the point cloud data; acquiring a background projection matrix, and determining a difference matrix according to the background projection matrix and the current projection matrix; and acquiring the quantity of the difference elements in the difference matrix, and judging whether a moving object exists in the current environment according to the quantity of the difference elements. According to the invention, the point cloud data obtained by laser radar detection is converted into the projection matrix, the projection matrix and the background projection matrix are compared to determine the difference matrix, and whether a moving object exists in the current environment is judged according to the quantity of the difference elements in the difference matrix, so that the accuracy of detecting the moving object is improved and other sensors are not required to be additionally installed.
Drawings
Fig. 1 is a schematic configuration diagram of a moving object detection apparatus in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a moving object detecting method according to a first embodiment of the present invention;
FIG. 3 is a flowchart illustrating a moving object detecting method according to a second embodiment of the present invention;
FIG. 4 is a block diagram of a moving object detecting device according to a first embodiment of the present invention;
FIG. 5 is a schematic view of a point cloud data projection obtained according to a laser radar according to the present invention;
FIG. 6 is a schematic diagram illustrating the determination of the angle of the detection point according to the present invention;
FIG. 7 is a schematic diagram illustrating the determination of a current projection matrix from an initial projection matrix according to the present invention;
FIG. 8 is a schematic view of the erosion processing of an initial matrix according to the present invention;
FIG. 9 is a schematic diagram of the dilation processing of an initial matrix according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a moving object detection apparatus in a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the moving object detecting apparatus may include: a processor 1001, such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and optionally may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless Fidelity (Wi-Fi) interface). The memory 1005 may be a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as a disk memory. The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration shown in figure 1 does not constitute a limitation of the moving object detection apparatus and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a storage medium, may include therein an operating system, a data storage module, a network communication module, a user interface module, and a moving object detection program.
In the moving object detection apparatus shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server, and the user interface 1003 is mainly used for data interaction with a user. In the moving object detection apparatus of the present invention, the processor 1001 calls the moving object detection program stored in the memory 1005 and performs the moving object detection method provided by the embodiments of the present invention.
Referring to fig. 2, fig. 2 is a flowchart illustrating a moving object detecting method according to a first embodiment of the present invention.
In this embodiment, the moving object detection method includes the steps of:
step S10: acquiring point cloud data of a current environment through a laser radar, and determining a current projection matrix according to the point cloud data;
it should be noted that, the executing body of the present embodiment is a mobile robot, such as a cleaning robot, a tour guide robot, an epidemic prevention robot, or other devices with the same or similar functions, which is not limited in this embodiment, and the present embodiment is described by taking the mobile robot as an example.
It can be understood that the laser radar can obtain the point cloud data of the current environment by scanning the surrounding environment, and the point cloud data is projected to a plane coordinate system by a projection method to obtain a current projection matrix.
It should be understood that, referring to fig. 5, the X-axis of the current projection matrix corresponds to the moment at which the rotating laser beam of the laser radar emits a distance measurement, the Y-axis corresponds to the laser beam (channel) of the laser radar, and the distances measured by the laser radar are stored as the matrix elements.
In the specific implementation, a laser radar scans the surrounding environment to obtain point cloud data of the current environment, a mobile robot obtains the point cloud data, and the point cloud data is projected to a plane coordinate system to obtain a current projection matrix.
Further, in order to improve the accuracy of the detection of the moving object, before the step S10, the method further includes:
acquiring a plurality of background point cloud data, and generating a plurality of initial background projection matrixes according to the plurality of background point cloud data; and performing outlier removal and average taking processing on the plurality of initial background projection matrixes to obtain a background projection matrix.
It should be understood that the background point cloud data is environmental point cloud data serving as a reference background when moving object detection is performed.
It can be understood that, before the moving object detection, the background extraction is required, that is, a reference background is required to be selected, in this embodiment, the current environment is detected before the moving object detection is performed, so as to obtain the background point cloud data of the reference background.
It should be understood that, in the background extraction process, several detections may be performed on the current environment to obtain several pieces of background point cloud data, and several corresponding initial background projection matrices are obtained according to the several pieces of background point cloud data by projection.
Further, in order to filter out the interference of uncertain factors during background extraction, outlier removal and averaging operations are performed on the plurality of initial background projection matrices to obtain the final background projection matrix.
It is understood that an outlier refers to one or more values in an initial background projection matrix that deviate noticeably from the other values.
In a specific implementation, before moving object detection, the current environment is scanned for background extraction to obtain a plurality of background point cloud data, and a plurality of corresponding initial background projection matrices are obtained from them; the matrix elements of these initial background projection matrices are subjected to outlier removal and averaging to obtain the final background projection matrix, which is then used in the subsequent moving object detection.
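As an illustration of the outlier removal and averaging described above, the following sketch fuses several initial background projection matrices element by element. It is a minimal example only: the median-absolute-deviation rule used to reject outliers and the function name are assumptions, since the text does not specify how outliers are identified.

```python
import numpy as np

def build_background(initial_matrices, k=3.0):
    """Fuse several initial background projection matrices into one background
    projection matrix by removing outliers element-wise and averaging the rest.

    initial_matrices: array of shape (n, rows, cols), one matrix per background scan.
    k: outlier cut-off in units of the median absolute deviation (assumed rule).
    """
    stack = np.asarray(initial_matrices, dtype=float)        # (n, rows, cols)
    median = np.median(stack, axis=0)                         # per-element median
    mad = np.median(np.abs(stack - median), axis=0) + 1e-6    # robust spread estimate
    keep = np.abs(stack - median) <= k * mad                  # inlier mask
    # average only the inlier samples at every matrix position
    return np.where(keep, stack, 0.0).sum(axis=0) / keep.sum(axis=0)
```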
Step S20: acquiring a background projection matrix, and determining a difference matrix according to the background projection matrix and the current projection matrix;
it will be appreciated that the background projection matrix, the current projection matrix and the difference matrix are identical in rows and columns.
It should be understood that during moving object detection the background projection matrix remains unchanged while the current projection matrix changes with the current environment; by comparing the current projection matrix with the background projection matrix, a difference matrix can be obtained from the differences between them.
Further, in order to improve the accuracy of the detection of the moving object, the step S20 includes:
acquiring a background projection matrix, and acquiring a target element corresponding to any background element in the background projection matrix in the current projection matrix; when the difference value between the background element value of the background element and the target element value of the target element is greater than a preset difference value, marking the target element as a difference element; and determining an initial matrix according to the difference elements, and performing erosion and dilation processing on the initial matrix to obtain a difference matrix.
It can be understood that the columns and rows of the background projection matrix are the same as those of the current projection matrix, and the background elements in the background projection matrix correspond to the elements in the current projection matrix one to one, so that after any background element in the background projection matrix is obtained, the target element corresponding to the background element in the current projection matrix can be determined according to the background element, and the target element is an element compared with the background element.
It should be understood that the background element value is a distance value of a corresponding point stored in the background projection matrix, and the target element value is a distance value of a corresponding point stored in the current projection matrix.
It can be understood that if a moving object exists in the current environment, it occludes the reference background and the laser radar scans the moving object instead; at this time the distance value stored in the current projection matrix is smaller than the distance value stored at the corresponding position of the background projection matrix, that is, the target element value is smaller than the background element value.
It should be understood that in moving object detection, there is interference, so when determining a difference element, a threshold needs to be determined, and when the difference value of the background element value minus the target element value is greater than a preset difference value, the corresponding target element is marked as the difference element.
It can be understood that, referring to fig. 8 and 9, after the difference elements are marked, the initial matrix can be determined according to the difference elements. The initial matrix obtained in this way contains noise and therefore requires further processing. Noise can be effectively eliminated by the erosion and dilation operations used in image processing: the initial matrix is first eroded to remove the scattered difference element points produced by noise, and then dilated, yielding a noise-free difference matrix.
In a specific implementation, a subtraction may be performed between the elements at corresponding positions of the background projection matrix and the current projection matrix; when the difference is greater than the preset difference value, the target element is marked as a difference element and the corresponding position in the difference matrix is set to 1, and when the difference is less than or equal to the preset difference value, the corresponding position in the difference matrix is set to 0. A difference matrix whose elements are only 1 and 0 is thus obtained; other marking schemes may also be adopted, which is not limited in this embodiment.
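A minimal sketch of the comparison and the erosion and dilation processing described above is given below. The 1/0 marking follows this embodiment, while the 3x3 structuring element and the use of scipy are implementation choices assumed for illustration rather than requirements of the method.

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def difference_matrix(current, background, preset_difference):
    """Mark a position 1 when the background distance exceeds the current
    distance by more than the preset difference value (a closer object
    occludes the background), then erode and dilate to suppress noise."""
    initial = (background - current) > preset_difference   # boolean initial matrix
    structure = np.ones((3, 3), dtype=bool)                 # assumed 3x3 neighbourhood
    cleaned = binary_erosion(initial, structure)            # remove scattered noise points
    cleaned = binary_dilation(cleaned, structure)           # restore the remaining regions
    return cleaned.astype(np.uint8)                         # difference matrix of 1s and 0s
```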
Step S30: and acquiring the quantity of the difference elements in the difference matrix, and judging whether a moving object exists in the current environment according to the quantity of the difference elements.
It can be understood that, during moving object detection, the current difference matrix is read continuously and the number of difference elements in it is counted.
It should be understood that if the current environment changes and a moving object exists, the number of difference elements reflects the difference between the current projection matrix and the background projection matrix, i.e., the change of the current environment, and the position and direction of the moving object can be determined from the positions of the difference elements.
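Because the columns of the difference matrix correspond to horizontal angles, the positions of the difference elements also indicate the direction of the moving object, as noted above. The sketch below is a minimal illustration under an assumed rule (mean column of the difference elements); the text does not fix how the direction is computed.

```python
import numpy as np

def moving_object_bearing(diff_matrix, h_res_deg=0.1):
    """Estimate the horizontal bearing of the moving object from the mean
    column index of the difference elements, converted back to an angle."""
    rows, cols = np.nonzero(diff_matrix)
    if cols.size == 0:
        return None                          # no difference elements, no object
    return float(cols.mean()) * h_res_deg    # bearing in degrees
```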
Further, in order to improve the accuracy of the detection of the moving object, before the step S30, the method further includes:
continuously acquiring a plurality of difference matrixes, and determining a preset difference threshold according to the number of difference elements in the plurality of difference matrixes.
It can be understood that interference factors in the environment may also produce difference elements and thereby cause misjudgment. To eliminate such errors, a plurality of difference matrices are acquired continuously, the numbers of difference elements in them are analyzed to determine a preset difference threshold, and a moving object is judged to exist in the current environment when the number of difference elements in the current difference matrix is greater than the preset difference threshold.
In a specific implementation, for example, 5 difference matrices are read continuously and the numbers of difference elements in them are 800, 1000, 950, 1150 and 1020 in sequence. A moving object is actually present in the environment when the counts are 950, 1000, 1150 and 1020 (detection succeeds) and absent when the count is 800 (detection fails); the preset difference threshold may therefore be set to a value of at least 800 and less than 950. More difference matrices may also be read and analyzed to determine the preset difference threshold, which is not limited in this embodiment.
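Following the example above, the threshold selection and the final judgement can be sketched as follows. Taking the threshold as the largest count observed while the scene is known to be static (800 in the example) is one reading of the example and is an assumption, as are the helper function names.

```python
def choose_threshold(counts_without_object):
    """Pick the preset difference threshold from the difference-element counts
    recorded while no moving object is present, e.g. [800] -> 800."""
    return max(counts_without_object)

def moving_object_present(diff_matrix, preset_threshold):
    """A moving object is judged to exist when the number of difference
    elements exceeds the preset difference threshold."""
    return int(diff_matrix.sum()) > preset_threshold
```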
In this embodiment, point cloud data of the current environment is acquired through a laser radar, and a current projection matrix is determined according to the point cloud data; a background projection matrix is acquired, and the target element corresponding to each background element of the background projection matrix is obtained in the current projection matrix; when the difference value between the background element value and the target element value is greater than a preset difference value, the target element is marked as a difference element; an initial matrix is determined according to the difference elements and subjected to erosion and dilation processing to obtain a difference matrix; and the number of difference elements in the difference matrix is obtained, and whether a moving object exists in the current environment is judged according to the number of difference elements. By converting the point cloud data obtained by laser radar detection into a projection matrix, comparing it with the background projection matrix to determine the difference elements, determining the initial matrix from the difference elements, obtaining the difference matrix after erosion and dilation processing, and judging whether a moving object exists in the current environment according to the number of difference elements in the difference matrix, errors caused by interference factors in moving object detection can be eliminated, the accuracy of moving object detection is improved, and no additional sensors need to be installed.
Referring to fig. 3, fig. 3 is a flowchart illustrating a moving object detecting method according to a second embodiment of the present invention.
Based on the first embodiment described above, in the present embodiment, the step S10 includes:
step S101: acquiring point cloud data of a current environment through a laser radar, and determining a detection point angle and a detection point distance corresponding to each laser point cloud in the point cloud data;
it can be understood that, referring to fig. 5, the point cloud data obtained by the laser radar includes the detection point angle and the detection point distance, and in the process of converting the laser point cloud data into the projection matrix, the point cloud data observed by the laser radar needs to be projected into the projection matrix with the length of X and the height of Y, wherein the upper and lower bounds of the Y axis are determined according to the maximum pitch angle of the laser radar, and the X axis is 0-360 degrees.
Step S102: determining detection point coordinates of each laser point cloud according to a preset resolution and the detection point angles;
it can be understood that, in the process of projecting the point cloud data to the projection matrix, the size of the projection matrix can be determined through the preset resolution, and the distance of the corresponding detection point is stored in the matrix.
It should be understood that the detection point coordinates are coordinates of the detection points in the projection matrix.
It should be understood that, referring to fig. 6, by reading point cloud data obtained by the laser radar point by point, the detection point angle can be calculated according to the following formula:
r = √(x² + y² + z²)
a = arctan(y / x)
b = arcsin(z / r)
wherein x, y and z are the space coordinates of the detection point, a is the horizontal angle, b is the vertical angle, and r is the detection point distance.
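A direct transcription of the conversion above is sketched below. Using atan2 so that the horizontal angle covers the full 0-360 degree range, and measuring the vertical angle from the horizontal plane, are the usual spherical-coordinate conventions and are assumed here, since the referenced drawing (fig. 6) is not reproduced in this text.

```python
import math

def detection_point_angles(x, y, z):
    """Return (r, a, b): the detection point distance, horizontal angle and
    vertical angle in degrees for one lidar point (assumed convention)."""
    r = math.sqrt(x * x + y * y + z * z)
    a = math.degrees(math.atan2(y, x)) % 360.0   # horizontal angle in [0, 360)
    b = math.degrees(math.asin(z / r))           # vertical (pitch) angle
    return r, a, b
```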
Further, in order to improve the accuracy of moving object detection, the step S102 includes:
reading a preset abscissa resolution from a preset resolution, and determining an abscissa of a detection point according to the preset abscissa resolution and the detection point angle; reading a preset vertical coordinate resolution from a preset resolution, and determining a vertical coordinate of a detection point according to the preset vertical coordinate resolution and the detection point angle; and determining the coordinates of the detection points according to the abscissa and the ordinate of the detection points.
It can be understood that the preset resolution covers two directions, an abscissa (horizontal) resolution and an ordinate (vertical) resolution. The horizontal resolution is the angle between two laser points scanned side by side; because the laser radar rotates while the laser transmitter fires pulses at a fixed frequency, each pulse marks one point on the target object, so the horizontal resolution depends only on the rotation speed of the radar and can be made very fine as long as the rotation is slow enough. The vertical resolution is the angle between the laser points formed by two vertically adjacent beams and can be adjusted by setting the included angle between those beams. After the preset abscissa resolution is read, the abscissa of the detection point is obtained by dividing the calculated angle a by the preset abscissa resolution, and the ordinate of the detection point is obtained by dividing the calculated angle b by the preset ordinate resolution.
It will be appreciated that after the detection point abscissa and the detection point ordinate have been determined, the position of the detection point in the projection matrix can be determined.
In a specific implementation, for example, the preset abscissa resolution is 0.1°, the preset ordinate resolution is 0.2°, the calculated angle a is 60° and the angle b is 5°; since 60/0.1 = 600 and 5/0.2 = 25, the abscissa of the detection point is 600 and the ordinate is 25. In practical application, a suitable preset resolution may be selected according to the use scene.
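The worked example above amounts to a simple division of each angle by its preset resolution; the sketch below reproduces it. In practice negative pitch angles would additionally need an offset into the matrix, which is omitted here and left as an assumption.

```python
def detection_point_coords(a_deg, b_deg, h_res_deg=0.1, v_res_deg=0.2):
    """Map the detection point angles to (abscissa, ordinate) indices in the
    projection matrix by dividing each angle by its preset resolution."""
    col = int(round(a_deg / h_res_deg))   # 60 / 0.1 -> 600
    row = int(round(b_deg / v_res_deg))   # 5 / 0.2  -> 25
    return col, row

print(detection_point_coords(60.0, 5.0))  # (600, 25)
```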
Step S103: and determining a current projection matrix corresponding to the point cloud data according to the detection point distance and the detection point coordinates.
It can be understood that the projection matrix has length X and height Y and stores the detection point distances; after the coordinates of a detection point have been determined, its position in the projection matrix is known, and the corresponding detection point distance is stored at that position.
Further, in order to reduce the interference of the error in the observation process, the step S103 includes:
determining an initial projection matrix according to the detection point distances and the detection point coordinates; acquiring a plurality of matrix elements from the initial projection matrix, and selecting the element with the smallest detection point distance from the plurality of matrix elements; and determining a current projection matrix corresponding to the point cloud data according to the element with the smallest detection point distance.
It can be understood that, owing to interference factors in the environment, the point cloud data acquired by the laser radar contains errors that affect the accuracy of moving object detection; therefore an initial projection matrix is first determined from the detection point distances and the detection point coordinates.
It should be understood that, referring to fig. 7, the matrix elements of the initial projection matrix are detection point distances. Several adjacent matrix elements are taken from the initial projection matrix, that is, a sub-matrix is selected, and the smallest matrix element in it is chosen, so that each matrix point is set to the minimum distance observed by the corresponding group of laser beams.
It can be understood that after the element with the smallest detection point distance has been selected from the several matrix elements, the other elements can be discarded, and the current projection matrix is thus determined.
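The selection of the smallest detection point distance among adjacent matrix elements behaves like a minimum-pooling step; the sketch below is one possible reading, assuming a 2x2 neighbourhood and that empty cells hold infinity, neither of which is fixed by the text.

```python
import numpy as np

def min_pool(initial_projection, block=2):
    """Replace each block of adjacent matrix elements by the smallest detection
    point distance inside it, discarding the other elements."""
    h, w = initial_projection.shape
    h, w = h - h % block, w - w % block                  # trim to a multiple of the block size
    m = initial_projection[:h, :w]
    m = m.reshape(h // block, block, w // block, block)
    return m.min(axis=(1, 3))                            # current projection matrix
```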
In this embodiment, point cloud data of the current environment is acquired through a laser radar, and the detection point angle and detection point distance corresponding to each laser point in the point cloud data are determined; the detection point coordinates of each laser point are determined according to the preset resolution and the detection point angles; and the current projection matrix corresponding to the point cloud data is determined according to the detection point distances and the detection point coordinates. In this embodiment, the detection point angle and detection point distance are determined from the laser point cloud data, the position of the detection point in the current projection matrix is obtained by dividing the detection point angle by the preset resolution, the detection point distance is stored at the corresponding position to obtain an initial projection matrix, and each matrix point of the initial projection matrix is set to the minimum distance observed by the corresponding group of laser beams to obtain the current projection matrix, so that data errors caused by interference factors in the environment can be filtered out and the accuracy of moving object detection is improved.
Furthermore, an embodiment of the present invention further provides a storage medium having a moving object detection program stored thereon, where the moving object detection program is executed by a processor to implement the steps of the moving object detection method as described above.
Referring to fig. 4, fig. 4 is a block diagram illustrating a structure of the moving object detecting device according to the first embodiment of the present invention.
As shown in fig. 4, the moving object detecting device according to the embodiment of the present invention includes: an acquisition module 10, a determining module 20 and a judging module 30.
The acquisition module 10 is configured to acquire point cloud data of a current environment through a laser radar, and determine a current projection matrix according to the point cloud data;
the determining module 20 is configured to obtain a background projection matrix, and determine a difference matrix according to the background projection matrix and the current projection matrix;
the determining module 30 is configured to obtain the number of difference elements in the difference matrix, and determine whether a moving object exists in the current environment according to the number of difference elements.
In this embodiment, the acquisition module 10 acquires point cloud data of the current environment through a laser radar, and determines a current projection matrix according to the point cloud data; the determining module 20 obtains a background projection matrix, and determines a difference matrix according to the background projection matrix and the current projection matrix; and the judging module 30 obtains the number of difference elements in the difference matrix, and judges whether a moving object exists in the current environment according to the number of difference elements. In this embodiment, the point cloud data obtained by laser radar detection is converted into a projection matrix, the projection matrix is compared with the background projection matrix to determine a difference matrix, and whether a moving object exists in the current environment is judged according to the number of difference elements in the difference matrix, so that the accuracy of moving object detection is improved and no additional sensors need to be installed.
Further, in order to improve the accuracy of detecting the moving object, the acquisition module 10 is further configured to obtain a plurality of background point cloud data, and generate a plurality of initial background projection matrices according to the plurality of background point cloud data; and perform outlier removal and averaging processing on the plurality of initial background projection matrices to obtain a background projection matrix.
Further, in order to improve the accuracy of detecting the moving object, the determining module 20 is further configured to obtain a background projection matrix, and obtain a target element corresponding to any background element in the background projection matrix in the current projection matrix; when the difference value between the background element value of the background element and the target element value of the target element is greater than a preset difference value, mark the target element as a difference element; and determine an initial matrix according to the difference elements, and perform erosion and dilation processing on the initial matrix to obtain a difference matrix.
Further, in order to improve the accuracy of detecting the moving object, the determining module 20 is further configured to continuously obtain a plurality of difference matrices, and determine the preset difference threshold according to the number of difference elements in the plurality of difference matrices.
Further, in order to improve the accuracy of detecting the moving object, the acquisition module 10 is further configured to acquire point cloud data of a current environment through a laser radar, and determine a detection point angle and a detection point distance corresponding to each laser point cloud in the point cloud data; determine detection point coordinates of each laser point cloud according to a preset resolution and the detection point angles; and determine a current projection matrix corresponding to the point cloud data according to the detection point distance and the detection point coordinates.
Further, in order to improve the accuracy of detecting the moving object, the acquisition module 10 is further configured to read a preset abscissa resolution from the preset resolution, and determine the abscissa of the detection point according to the preset abscissa resolution and the detection point angle; read a preset vertical coordinate resolution from the preset resolution, and determine the vertical coordinate of the detection point according to the preset vertical coordinate resolution and the detection point angle; and determine the coordinates of the detection point according to the abscissa and the ordinate of the detection point.
Further, in order to reduce the interference of errors in the observation process, the acquisition module 10 is further configured to determine an initial projection matrix according to the detection point distances and the detection point coordinates; acquire a plurality of matrix elements from the initial projection matrix, and select the element with the smallest detection point distance from the plurality of matrix elements; and determine the current projection matrix corresponding to the point cloud data according to the element with the smallest detection point distance.
Other embodiments or specific implementations of the moving object detection apparatus of the present invention refer to the above method embodiments, and are not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., a rom/ram, a magnetic disk, an optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A moving object detection method, characterized in that the method comprises:
acquiring point cloud data of a current environment through a laser radar, and determining a current projection matrix according to the point cloud data;
acquiring a background projection matrix, and determining a difference matrix according to the background projection matrix and the current projection matrix;
and acquiring the quantity of the difference elements in the difference matrix, and judging whether a moving object exists in the current environment according to the quantity of the difference elements.
2. The method of claim 1, wherein the acquiring point cloud data of a current environment by lidar and determining a current projection matrix from the point cloud data comprises:
acquiring point cloud data of a current environment through a laser radar, and determining a detection point angle and a detection point distance corresponding to each laser point cloud in the point cloud data;
determining detection point coordinates of each laser point cloud according to a preset resolution and the detection point angles;
and determining a current projection matrix corresponding to the point cloud data according to the detection point distance and the detection point coordinates.
3. The method of claim 1, wherein the obtaining a background projection matrix and determining a difference matrix from the background projection matrix and the current projection matrix comprises:
acquiring a background projection matrix, and acquiring a target element corresponding to any background element in the background projection matrix in the current projection matrix;
when the difference value between the background element value of the background element and the target element value of the target element is greater than a preset difference value, marking the target element as a difference element;
and determining an initial matrix according to the difference elements, and performing erosion and dilation processing on the initial matrix to obtain a difference matrix.
4. The method of claim 2, wherein the determining the detection point coordinates based on the preset resolution and the detection point angle comprises:
reading a preset abscissa resolution from a preset resolution, and determining an abscissa of a detection point according to the preset abscissa resolution and the detection point angle;
reading a preset vertical coordinate resolution from a preset resolution, and determining a vertical coordinate of a detection point according to the preset vertical coordinate resolution and the detection point angle;
and determining the coordinates of the detection points according to the abscissa and the ordinate of the detection points.
5. The method of claim 2, wherein the determining a current projection matrix corresponding to the point cloud data from the detection point distances and the detection point coordinates comprises:
determining an initial projection matrix according to the detection point distance and the detection point coordinates;
acquiring a plurality of matrix elements from the initial projection matrix, and selecting the element with the smallest detection point distance from the plurality of matrix elements;
and determining a current projection matrix corresponding to the point cloud data according to the element with the smallest detection point distance.
6. The method of claim 1, wherein prior to the steps of obtaining point cloud data for a current environment via lidar and determining a current projection matrix from the point cloud data, the method further comprises:
acquiring a plurality of background point cloud data, and generating a plurality of initial background projection matrixes according to the plurality of background point cloud data;
and performing outlier removal and average taking processing on the plurality of initial background projection matrixes to obtain a background projection matrix.
7. The method as claimed in claim 1, wherein before the step of obtaining the number of difference elements in the difference matrix and determining whether there is a moving object in the current environment according to the number of difference elements, the method further comprises:
continuously acquiring a plurality of difference matrixes, and determining a preset difference threshold according to the number of difference elements in the plurality of difference matrixes.
8. A moving object detecting apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring point cloud data of a current environment through a laser radar and determining a current projection matrix according to the point cloud data;
the determining module is used for acquiring a background projection matrix and determining a difference matrix according to the background projection matrix and the current projection matrix;
and the judging module is used for acquiring the quantity of the difference elements in the difference matrix and judging whether a moving object exists in the current environment according to the quantity of the difference elements.
9. A moving object detection apparatus, characterized in that the apparatus comprises: a memory, a processor and a moving object detection program stored on the memory and executable on the processor, the moving object detection program being configured to implement the steps of the moving object detection method according to any one of claims 1 to 7.
10. A storage medium having stored thereon a moving object detection program which, when executed by a processor, implements the steps of the moving object detection method according to any one of claims 1 to 7.
CN202110387061.8A 2021-04-09 2021-04-09 Moving object detection method, apparatus, device and storage medium Pending CN113156453A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110387061.8A CN113156453A (en) 2021-04-09 2021-04-09 Moving object detection method, apparatus, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110387061.8A CN113156453A (en) 2021-04-09 2021-04-09 Moving object detection method, apparatus, device and storage medium

Publications (1)

Publication Number Publication Date
CN113156453A true CN113156453A (en) 2021-07-23

Family

ID=76889928

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110387061.8A Pending CN113156453A (en) 2021-04-09 2021-04-09 Moving object detection method, apparatus, device and storage medium

Country Status (1)

Country Link
CN (1) CN113156453A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113887388A (en) * 2021-09-29 2022-01-04 云南特可科技有限公司 Dynamic target recognition and human body behavior analysis system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107845095A (en) * 2017-11-20 2018-03-27 维坤智能科技(上海)有限公司 Mobile object real time detection algorithm based on three-dimensional laser point cloud
CN109444905A (en) * 2018-09-12 2019-03-08 深圳市杉川机器人有限公司 A kind of dynamic object detection method, device and terminal device based on laser
US20190146062A1 (en) * 2017-11-15 2019-05-16 Baidu Online Network Technology (Beijing) Co., Ltd Laser point cloud positioning method and system
CN111753623A (en) * 2020-03-12 2020-10-09 北京京东乾石科技有限公司 Method, device and equipment for detecting moving object and storage medium
CN112070804A (en) * 2020-09-09 2020-12-11 无锡威莱斯电子有限公司 Moving target detection method based on TOF camera

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190146062A1 (en) * 2017-11-15 2019-05-16 Baidu Online Network Technology (Beijing) Co., Ltd Laser point cloud positioning method and system
CN107845095A (en) * 2017-11-20 2018-03-27 维坤智能科技(上海)有限公司 Mobile object real time detection algorithm based on three-dimensional laser point cloud
CN109444905A (en) * 2018-09-12 2019-03-08 深圳市杉川机器人有限公司 A kind of dynamic object detection method, device and terminal device based on laser
CN111753623A (en) * 2020-03-12 2020-10-09 北京京东乾石科技有限公司 Method, device and equipment for detecting moving object and storage medium
CN112070804A (en) * 2020-09-09 2020-12-11 无锡威莱斯电子有限公司 Moving target detection method based on TOF camera

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
范小辉: "基于激光雷达的行人目标检测与识别" (Pedestrian target detection and recognition based on lidar), 中国优秀硕士学位论文全文库信息科技辑 (China Masters' Theses Full-text Database, Information Science and Technology), no. 01, pages 17-18 *
邢亚蒙;钱东海;赵伟;徐慧慧;左万全: "基于改进Hough变换的激光雷达点云特征提取方法研究" (Research on a lidar point cloud feature extraction method based on an improved Hough transform), 制造业自动化 (Manufacturing Automation), no. 01 *
陶谦;熊风光;刘涛;况立群;韩燮;梁振斌;常敏: "多幅点云数据与纹理序列间的自动配准方法" (Automatic registration method between multiple point cloud data sets and texture sequences), 计算机工程 (Computer Engineering), no. 10 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113887388A (en) * 2021-09-29 2022-01-04 云南特可科技有限公司 Dynamic target recognition and human body behavior analysis system
CN113887388B (en) * 2021-09-29 2022-09-02 云南特可科技有限公司 Dynamic target recognition and human body behavior analysis system

Similar Documents

Publication Publication Date Title
US10203402B2 (en) Method of error correction for 3D imaging device
CN109118542B (en) Calibration method, device, equipment and storage medium between laser radar and camera
CN111754578B (en) Combined calibration method for laser radar and camera, system and electronic equipment thereof
CN111627072B (en) Method, device and storage medium for calibrating multiple sensors
CN112146848B (en) Method and device for determining distortion parameter of camera
CN111815707A (en) Point cloud determining method, point cloud screening device and computer equipment
CN112432647B (en) Carriage positioning method, device and system and computer readable storage medium
JP6115214B2 (en) Pattern processing apparatus, pattern processing method, and pattern processing program
CN111383261B (en) Mobile robot, pose estimation method thereof and pose estimation device
JP6601613B2 (en) POSITION ESTIMATION METHOD, POSITION ESTIMATION DEVICE, AND POSITION ESTIMATION PROGRAM
WO2022217988A1 (en) Sensor configuration scheme determination method and apparatus, computer device, storage medium, and program
CN109143167B (en) Obstacle information acquisition device and method
CN112287869A (en) Image data detection method and device
CN116958145B (en) Image processing method and device, visual detection system and electronic equipment
CN112686842B (en) Light spot detection method and device, electronic equipment and readable storage medium
CN113156453A (en) Moving object detection method, apparatus, device and storage medium
CN114511489B (en) Beam divergence angle detection method and system of VCSEL chip and electronic equipment
CN115793893B (en) Touch writing handwriting generation method and device, electronic equipment and storage medium
JP2016206909A (en) Information processor, and information processing method
CN111985266A (en) Scale map determination method, device, equipment and storage medium
CN111260781B (en) Method and device for generating image information and electronic equipment
CN113706508A (en) Method and apparatus for analyzing beam quality, beam analyzing system, and storage medium
CN111742349B (en) Information processing apparatus, information processing method, and information processing storage medium
CN110807423A (en) Method and device for processing fingerprint image under screen and electronic equipment
CN113673284B (en) Depth camera snapshot method, system, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination