CN117011839A - Security check method and device based on point cloud processing and robot - Google Patents
- Publication number
- CN117011839A (application CN202310811511.0A)
- Authority
- CN
- China
- Prior art keywords
- license plate
- robot
- point cloud
- image data
- central axis
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- G06V20/625—License plates
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention is applicable to the technical field of point cloud data processing, and provides a security inspection method and device based on point cloud processing, and a robot. The method comprises the following steps: acquiring true color image data and depth image data; detecting whether a license plate exists in the true color image data; acquiring a detection frame of the license plate when a license plate exists in the true color image data; converting the pixel points of the depth image data in the region corresponding to the detection frame to obtain point cloud data of the license plate; calculating a normal vector of the point cloud data; calculating the distance between the robot and the central axis of the license plate; judging the azimuth of the robot relative to the license plate; calculating the included angle between the robot and the central axis of the license plate; moving to the central axis of the license plate; and moving along the central axis through the space under the vehicle while acquiring a vehicle bottom image, so that a security inspection can be performed through the vehicle bottom image.
Description
Technical Field
The invention belongs to the technical field of point cloud data processing, and particularly relates to a safety inspection method and device based on point cloud processing and a robot.
Background
With economic development, the number of automobiles has increased. Owing to factors such as vehicle height and structure, the underside of an automobile is difficult to inspect, so lawbreakers can hide dangerous prohibited items there. To ensure safety and prevent dangerous prohibited items from entering, the undersides of vehicles entering and leaving important checkpoints such as ports and customs must undergo security inspection.
At present, important checkpoints such as ports and customs mainly rely on manual, naked-eye inspection of vehicles one by one; the inspection efficiency is low and the personal safety of security inspectors is difficult to guarantee.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a security inspection method, apparatus, robot and computer-readable storage medium based on point cloud processing, to solve the technical problems that conventional vehicle-bottom inspection relies mainly on sequential manual inspection by eye, its efficiency is low, and the personal safety of security inspectors is difficult to guarantee.
A first aspect of an embodiment of the present invention provides a security inspection method based on point cloud processing, applied to a robot, including:
a first acquisition step: acquiring true color image data and depth image data;
a detection step: detecting whether a license plate exists in the true color image data;
a second acquisition step: acquiring a detection frame of the license plate when a license plate exists in the true color image data;
a conversion step: converting the pixel points of the depth image data in the region corresponding to the detection frame of the license plate to obtain point cloud data of the license plate;
a first calculation step: calculating the normal vector of the point cloud data;
a second calculation step: calculating the distance between the robot and the central axis of the license plate;
a judging step: judging the azimuth of the robot relative to the license plate;
a third calculation step: calculating the included angle between the robot and the central axis of the license plate;
a moving step: moving to the central axis of the license plate according to the distance between the robot and the central axis of the license plate, the azimuth of the robot relative to the license plate, and the included angle between the robot and the central axis of the license plate;
an acquisition step: moving along the central axis of the license plate through the space under the vehicle, and acquiring a vehicle bottom image while passing underneath, so as to perform a security inspection through the vehicle bottom image.
A second aspect of an embodiment of the present invention provides a security inspection device based on point cloud processing, applied to a robot, including:
the first acquisition module is used for acquiring true color image data and depth image data;
the detection module is used for detecting whether a license plate exists in the true color image data;
the second acquisition module is used for acquiring a detection frame of the license plate under the condition that the license plate exists in the true color image data;
the conversion module is used for carrying out conversion processing according to the pixel points of the depth image data of the corresponding area of the detection frame of the license plate to obtain point cloud data of the license plate;
the first calculation module is used for calculating the normal vector of the point cloud data;
the second calculation module is used for calculating the distance between the robot and the central axis of the license plate;
the judging module is used for judging the azimuth of the robot relative to the license plate;
the third calculation module is used for calculating an included angle between the robot and the central axis of the license plate;
the mobile module is used for moving to the central axis of the license plate according to the distance between the robot and the central axis of the license plate, the orientation of the robot relative to the license plate and the included angle between the robot and the central axis of the license plate;
and the acquisition module is used for moving along the central axis of the license plate through the space under the vehicle and acquiring a vehicle bottom image while passing underneath, so as to perform a security inspection through the vehicle bottom image.
A third aspect of an embodiment of the present invention provides a robot, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the security inspection method based on point cloud processing according to the first aspect.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program, which when executed by a processor, implements the steps of the security inspection method based on point cloud processing described in the first aspect.
Compared with the prior art, the embodiments of the invention have the following beneficial effects: the invention identifies the license plate, precisely controls the robot according to the point cloud data of the license plate, and controls the robot to move along the central axis of the license plate through the space under the vehicle while acquiring vehicle bottom images. This provides more evidence for vehicle-bottom security inspection, assists such inspection at ports and customs, improves inspection efficiency, and protects the personal safety of security inspectors.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments or the related art are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 shows a schematic flow chart of a security inspection method based on point cloud processing provided by the invention;
fig. 2 shows a schematic structural diagram of a security inspection device based on point cloud processing according to the present invention;
fig. 3 shows a schematic structural diagram of a robot according to the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
Firstly, the invention provides a security inspection method based on point cloud processing, which is applied to a robot.
In one possible embodiment, the robot is provided with a camera, a line-scan camera and a supplementary light. The line-scan camera and the supplementary light are mounted on top of the robot and are used to collect vehicle bottom images after the robot enters the space under the vehicle, providing more evidence for vehicle-bottom security inspection and assisting such inspection at ports and customs.
Referring to fig. 1, fig. 1 shows a flow chart of a security inspection method based on point cloud processing according to the present invention. As shown in fig. 1, the security inspection method based on the point cloud processing may include the following steps:
a first acquisition step: acquiring true color image data and depth image data.
True color image data, also called RGB image data, can be captured by an ordinary camera. Depth image data can be acquired by a depth camera, which uses techniques such as structured light, time-of-flight or binocular stereo vision to measure the distance from objects in the scene to the sensor and generate a corresponding depth image.
The true color image data and the depth image data can be acquired simultaneously by the same camera or respectively by different cameras.
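As a concrete illustration of this acquisition step, the following sketch grabs a hardware-aligned pair of true color and depth frames. It assumes an Intel RealSense camera and the pyrealsense2 SDK purely for illustration; the patent does not name any specific hardware, and any RGB-D source exposing color and depth frames would serve equally well.

```python
import numpy as np
import pyrealsense2 as rs  # assumed RGB-D SDK; the patent names no hardware

pipeline = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(cfg)
align = rs.align(rs.stream.color)  # map depth pixels onto the color grid

frames = align.process(pipeline.wait_for_frames())
color = np.asanyarray(frames.get_color_frame().get_data())  # true color (BGR)
depth = np.asanyarray(frames.get_depth_frame().get_data())  # depth, uint16 mm
pipeline.stop()
```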
In one possible embodiment, after the first obtaining step, the method further includes:
and (3) aligning: and aligning the true color image data and the depth image data.
If the true color image data and the depth image data are acquired simultaneously by the same camera, they can be aligned using the camera's intrinsic parameters. Specifically, the true color image and the depth image are aligned in the camera coordinate system using intrinsic parameters such as the focal length, the optical center (principal point) and the distortion coefficients.
If the true color image data and the depth image data are acquired by different cameras, the two images can be aligned in a world coordinate system. First, pixels in the depth image are converted into three-dimensional points in the world coordinate system using their depth values and the depth camera's extrinsic parameters. Then, the pixel coordinates of the true color image and that camera's extrinsic parameters are used to relate points in the true color image to three-dimensional points in the world coordinate system. Finally, the true color image and the depth image can be aligned according to this coordinate correspondence.
By aligning the true color image data and the depth image data, it is further ensured that license plate regions determined in the true color image data can be accurately mapped into the depth image data.
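For the two-camera case described above, a minimal numpy sketch of the re-projection is given below. The intrinsic matrices K_d and K_c and the depth-to-color rotation R and translation t are assumed to come from an offline calibration; these names and the meters-valued float depth image are illustrative assumptions, not part of the patent.

```python
import numpy as np

def align_depth_to_color(depth, K_d, K_c, R, t):
    """Re-project a depth image (float meters, 0 = invalid) into the
    color camera's pixel grid, given both intrinsic matrices and the
    depth-to-color extrinsics (R, t) from an offline calibration."""
    H, W = depth.shape
    j, i = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    z = depth.reshape(-1)
    pix = np.stack([i.reshape(-1), j.reshape(-1), np.ones(H * W)])
    pts_d = np.linalg.inv(K_d) @ (pix * z)          # 3-D in depth frame
    pts_c = R @ pts_d + t.reshape(3, 1)             # 3-D in color frame
    proj = K_c @ pts_c
    zc = np.where(proj[2] > 0, proj[2], np.inf)     # avoid divide-by-zero
    u = np.round(proj[0] / zc).astype(int)
    v = np.round(proj[1] / zc).astype(int)
    ok = (z > 0) & (proj[2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    aligned = np.zeros(depth.shape)
    aligned[v[ok], u[ok]] = pts_c[2][ok]            # depth seen by color cam
    return aligned
```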
a detection step: detecting whether a license plate exists in the true color image data.
Alternatively, the license plate detection algorithm may be implemented using an open-source computer vision library (e.g., OpenCV) or a deep learning framework (e.g., TensorFlow or PyTorch).
Specifically, image processing and computer vision techniques are first used to extract features that aid license plate detection; commonly used features include color information, shape features and edge information. Based on this feature information, image segmentation or object detection is used to extract image regions that may contain a license plate, i.e., regions exhibiting typical plate color, shape or edge characteristics. Within the extracted regions, candidate license plate frames are generated; possible plate boundaries can be found using edge detection, connected-region analysis and similar techniques. The candidate frames are then verified by feature analysis, pattern matching, machine learning or deep learning to determine whether they really contain a license plate.
Further, if no license plate exists in the true color image data, the flow ends.
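As one sketch of the classical candidate-generation stage just described (edge detection plus connected-region analysis), the OpenCV snippet below extracts plate-shaped regions by aspect ratio. The thresholds and minimum sizes are illustrative guesses that would need tuning per deployment; a deep-learning detector, which the patent also allows, would replace this stage entirely.

```python
import cv2

def find_plate_candidates(bgr):
    """Classical candidate search: edges -> contours -> aspect-ratio filter.

    Returns a list of (x, y, w, h) boxes that may contain a license plate;
    the thresholds below are illustrative, not values from the patent.
    """
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.bilateralFilter(gray, 9, 75, 75)      # smooth, keep edges
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        aspect = w / float(h) if h else 0.0
        # Chinese plates are roughly 3.14:1 (440 mm x 140 mm).
        if 2.0 < aspect < 5.0 and w > 60 and h > 15:
            boxes.append((x, y, w, h))
    return boxes
```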
a second acquisition step: acquiring a detection frame of the license plate when a license plate exists in the true color image data.
Specifically, the detection frame of the license plate can be obtained through a deep-learning object detection algorithm such as Faster R-CNN, YOLO, or SSD.
a conversion step: performing conversion processing on the pixel points of the depth image data in the region corresponding to the detection frame of the license plate, to obtain point cloud data of the license plate.
Optionally, the pixel points of the depth image data in the license plate region are converted into point cloud data. The depth value of each pixel becomes the Z coordinate of the corresponding point, while the pixel's column and row coordinates are converted into the X and Y coordinates of the point.
In one possible embodiment, the converting step includes:
a conversion sub-step: traversing the pixel points of the depth image data in the region corresponding to the detection frame of the license plate, and converting each pixel point according to Equation 1 to obtain the point cloud data of the license plate:

Z = Depth(i, j),  X = (i − cx) · Z / fx,  Y = (j − cy) · Z / fy (Equation 1)

where cx, cy, fx and fy denote the intrinsic parameters of the true color image data (the principal point and the focal lengths), Depth denotes the depth image data, and i and j denote the column and row coordinates of the pixel point, respectively.
a statistics sub-step: counting the number of points in the point cloud data, and ending the flow when the number of points is smaller than a preset number.
It should be noted that counting the number of points in the point cloud data and ending the flow when it falls below the preset number identifies insufficient point cloud data in advance, avoids unnecessary computation and analysis, ensures the quality and accuracy of the point cloud data, and improves overall processing efficiency.
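A minimal numpy sketch of the conversion and statistics sub-steps, assuming the depth image is already aligned to the true color frame and expressed in consistent units; the MIN_POINTS threshold stands in for the unspecified preset number.

```python
import numpy as np

MIN_POINTS = 200  # illustrative stand-in for the preset number

def plate_point_cloud(depth, box, fx, fy, cx, cy):
    """Back-project the depth pixels inside the plate box (Equation 1)."""
    x0, y0, w, h = box
    patch = depth[y0:y0 + h, x0:x0 + w].astype(np.float64)
    j, i = np.meshgrid(np.arange(y0, y0 + h),
                       np.arange(x0, x0 + w), indexing="ij")
    z = patch.reshape(-1)
    keep = z > 0                              # drop invalid depth pixels
    x = (i.reshape(-1) - cx) * z / fx
    y = (j.reshape(-1) - cy) * z / fy
    cloud = np.stack([x, y, z], axis=1)[keep]
    if cloud.shape[0] < MIN_POINTS:           # statistics sub-step
        return None                           # too sparse: end the flow
    return cloud
```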
a first calculation step: calculating the normal vector of the point cloud data.
The normal vector of the point cloud data is a basis for calculating the distance between the robot and the central axis of the license plate, the azimuth of the robot relative to the license plate and the included angle between the robot and the central axis of the license plate.
In one possible implementation, the first calculating step includes:
a first calculation sub-step: calculating the normal vector (α, β, γ)^T of the point cloud data by Equation 2:

C = (1/n) · Σ_{i=1}^{n} (P_i − P_center)(P_i − P_center)^T,  C · (α, β, γ)^T = λ · (α, β, γ)^T (Equation 2)

where PointCloud = {P_1, …, P_n} denotes the point cloud data set, C denotes the covariance matrix, P_center denotes the center point of the license plate, n denotes the number of points in the point cloud data, P_i denotes the three-dimensional coordinate information of the i-th point, and λ is the minimum eigenvalue of C; the normal vector is the eigenvector of C corresponding to λ.
In the present invention, the center point of the license plate is taken as the midpoint of the vehicle body in the width direction.
It should be noted that the coordinates of the center point of the license plate can be calculated from the coordinates of the upper-left corner and the lower-right corner of the license plate detection frame.
Further, the calculated normal vector may be smoothed to improve its accuracy and continuity. Common smoothing methods include weighted average or least squares fitting, etc.
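A compact numpy sketch of this first calculation sub-step: build the covariance matrix C of the plate cloud and take the eigenvector of its smallest eigenvalue as the normal (Equation 2). Using the cloud mean as P_center and flipping the normal to face the camera are illustrative conventions, not requirements stated in the patent.

```python
import numpy as np

def plate_normal(cloud):
    """Normal of the plate cloud: eigenvector of the covariance matrix C
    for its smallest eigenvalue (Equation 2)."""
    center = cloud.mean(axis=0)             # approximation of P_center
    d = cloud - center
    C = d.T @ d / cloud.shape[0]            # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)    # eigenvalues in ascending order
    normal = eigvecs[:, 0]                  # smallest-eigenvalue eigenvector
    if normal[2] > 0:                       # sign convention: face the robot
        normal = -normal
    return normal, center
```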
a second calculation step: calculating the distance between the robot and the central axis of the license plate.
In one possible implementation, the second calculating step includes:
a second calculation sub-step: calculating the distance d between the robot and the central axis of the license plate by Equation 3, taking the robot as the coordinate origin and the central axis as the line through P_center along the normal vector projected onto the x–z plane:

d = |γ · x_c − α · z_c| / √(α² + γ²) (Equation 3)

where z_c denotes the z-axis coordinate value of the license plate center point P_center and x_c denotes its x-axis coordinate value.
a judging step: judging the azimuth of the robot relative to the license plate.
In one possible implementation manner, the judging step includes:
a third calculation sub-step: calculating the intersection point x0 of the central axis of the license plate with the x-axis of the world coordinate system by Equation 4:

x0 = x_c − (α / γ) · z_c (Equation 4)

where z_c denotes the z-axis coordinate value of the license plate center point P_center and x_c denotes its x-axis coordinate value.
a judging sub-step: when the intersection point x0 > 0, the robot is judged to be on the right side of the license plate; when x0 < 0, the robot is judged to be on the left side of the license plate.
a third calculation step: calculating the included angle between the robot and the central axis of the license plate.
In one possible implementation manner, the third calculation step includes:
a fourth calculation sub-step: calculating the included angle ang between the robot and the central axis of the license plate by Equation 5:

ang = arctan(α / γ) (Equation 5)
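Under the reconstruction of Equations 3–5 above — robot at the camera origin, central axis taken as the line through P_center along the normal projected onto the x–z plane — the three calculations and the azimuth judgment fit in a few lines; γ ≈ 0 (plate seen edge-on) is assumed not to occur.

```python
import numpy as np

def axis_geometry(normal, center):
    """Distance d, axis/x-axis intersection x0, side, and heading angle."""
    a, _, g = normal                  # (alpha, beta, gamma)
    xc, zc = center[0], center[2]
    d = abs(g * xc - a * zc) / np.hypot(a, g)   # Equation 3
    x0 = xc - a * zc / g                        # Equation 4 (gamma != 0)
    side = "right" if x0 > 0 else "left"        # judging sub-step
    ang = np.arctan(a / g)                      # Equation 5
    return d, x0, side, ang
```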
a moving step: moving the robot to the central axis of the license plate according to the distance d between the robot and the central axis of the license plate, the azimuth of the robot relative to the license plate, and the included angle ang between the robot and the central axis of the license plate.
In practical application, if the robot is on the right side of the central axis of the license plate (x0 > 0), it rotates by π/2 − ang to the right, moves straight ahead a distance d, and finally rotates by π/2 to the left; if x0 < 0, it rotates by π/2 − ang to the left, moves straight ahead a distance d, and finally rotates by π/2 to the right.
In practical application, owing to environmental factors such as illumination intensity and road-surface slip, reaching the central axis of the license plate in a single maneuver is an idealized case; fine adjustment is therefore performed in the actual processing flow, which improves the practicality of the algorithm.
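A sketch of the maneuver described above. The `base` object with `rotate(rad)` (positive = left) and `forward(m)` methods is a hypothetical stand-in for the robot's actual motion API, and the closed-loop fine adjustment is only indicated in a comment.

```python
import math

def move_to_axis(base, d, x0, ang):
    """Maneuver onto the plate's central axis, then fine-tune.

    `base.rotate(rad)` (positive = left) and `base.forward(m)` are
    hypothetical placeholders for the real chassis interface.
    """
    if x0 > 0:                              # robot right of the axis
        base.rotate(-(math.pi / 2 - ang))   # turn right
        base.forward(d)
        base.rotate(math.pi / 2)            # turn left onto the axis
    else:                                   # robot left of the axis
        base.rotate(math.pi / 2 - ang)      # turn left
        base.forward(d)
        base.rotate(-math.pi / 2)           # turn right onto the axis
    # Lighting and wheel slip make one-shot alignment unlikely in practice;
    # re-detect the plate and repeat with small corrections.
```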
an acquisition step: moving along the central axis of the license plate through the space under the vehicle, and acquiring a vehicle bottom image while passing underneath, so as to perform a security inspection through the vehicle bottom image.
Optionally, the vehicle bottom image is acquired by a line-scan camera mounted on top of the robot. Further, a supplementary light can be mounted on top of the robot for use in dark environments. Vehicle bottom image data are recorded while the robot passes under the vehicle; once the robot drives out from under the vehicle, it stops moving, generates the vehicle bottom image and displays it at a terminal, so that security inspectors can judge whether prohibited items are hidden there.
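A sketch of this acquisition step; `camera.grab_line()` and the chassis feedback methods are hypothetical placeholders, since the patent does not specify the line-scan SDK.

```python
import numpy as np

def scan_underside(camera, robot, n_lines=20000):
    """Accumulate line-scan rows into one vehicle-bottom image.

    `camera.grab_line()` (one row of pixels), `robot.is_under_vehicle()`
    and `robot.stop()` are hypothetical stand-ins for the real hardware.
    """
    rows = []
    while robot.is_under_vehicle() and len(rows) < n_lines:
        rows.append(camera.grab_line())
    robot.stop()                          # stop once the underside is exited
    return np.vstack(rows) if rows else None
```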
Compared with the prior art, the embodiments of the invention have the following beneficial effects: the invention identifies the license plate, precisely controls the robot according to the point cloud data of the license plate, and controls the robot to move along the central axis of the license plate through the space under the vehicle while acquiring vehicle bottom images. This provides more evidence for vehicle-bottom security inspection, assists such inspection at ports and customs, improves inspection efficiency, and protects the personal safety of security inspectors.
The invention further provides a security inspection device 20 based on point cloud processing, applied to a robot, as shown in fig. 2.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a security inspection device based on point cloud processing according to the present invention, and as shown in fig. 2, a security inspection device 20 based on point cloud processing includes:
a first acquisition module 201, configured to acquire true color image data and depth image data;
the detection module 202 is configured to detect whether a license plate exists in the true color image data;
the second obtaining module 203 is configured to obtain a detection frame of the license plate when the license plate exists in the true color image data;
the conversion module 204 is configured to perform conversion processing according to pixel points of depth image data of a region corresponding to a detection frame of the license plate, so as to obtain point cloud data of the license plate;
a first calculation module 205, configured to calculate a normal vector of the point cloud data;
the second calculating module 206 is configured to calculate a distance between the robot and a central axis of the license plate;
a judging module 207, configured to judge an azimuth of the robot relative to the license plate;
a third calculation module 208, configured to calculate an included angle between the robot and a central axis of the license plate;
the moving module 209 is configured to move to a central axis of the license plate according to a distance between the robot and the central axis of the license plate, a position of the robot relative to the license plate, and an included angle between the robot and the central axis of the license plate;
the acquisition module 210 is configured to move through the bottom of the vehicle along the central axis of the license plate, and acquire an image of the bottom of the vehicle when passing through the bottom of the vehicle, so as to perform a security check through the image of the bottom of the vehicle.
In one possible implementation, the security inspection device 20 based on the point cloud processing further includes:
and the alignment module is used for aligning the true color image data and the depth image data.
In one possible implementation, the conversion module 204 includes:
the conversion sub-module is used for traversing the pixel points of the depth image data of the corresponding area of the detection frame of the license plate, and converting the pixel points of the depth image data according to a formula 1 to obtain the point cloud data of the license plate:
wherein cx, cy, fx, fy represents an internal parameter of the true color image data, depth represents the depth image data, and i and j represent column coordinates and row coordinates of the pixel point respectively;
and the counting sub-module is used for counting the number of points in the point cloud data, and ending the flow when the number of points is smaller than the preset number.
In one possible implementation, the first computing module 205 includes:
a first calculation sub-module for calculating the normal vector (α, β, γ)^T of the point cloud data by Equation 2:

C = (1/n) · Σ_{i=1}^{n} (P_i − P_center)(P_i − P_center)^T,  C · (α, β, γ)^T = λ · (α, β, γ)^T (Equation 2)

where PointCloud = {P_1, …, P_n} denotes the point cloud data set, C denotes the covariance matrix, P_center denotes the center point of the license plate, n denotes the number of points in the point cloud data, P_i denotes the three-dimensional coordinate information of the i-th point, and λ is the minimum eigenvalue of C.
In one possible implementation, the second computing module 206 includes:
the second calculation submodule is used for calculating the distance d between the robot and the central axis of the license plate through a formula 3:
wherein,representing the center point P of the license plate center Z-axis coordinate value, ">Representing the center point P of the license plate center X-axis coordinate values of (a).
In one possible implementation manner, the determining module 207 includes:
the third calculation sub-module is used for calculating an intersection point x0 of the central axis of the license plate and the x axis of the world coordinate axis through a formula 4:
wherein,representing the center point P of the license plate center Z-axis coordinate value, ">Representing the center point P of the license plate center Is a coordinate value of x-axis of (a);
the judging sub-module is used for judging that the robot is positioned on the right side of the license plate under the condition that the intersection point x0 is more than 0; and under the condition that the intersection point x0 is less than 0, judging that the robot is positioned at the left side of the license plate.
In one possible implementation, the third computing module 208 includes:
the fourth calculation sub-module is used for calculating an included angle ang of the robot and the central axis of the license plate through a formula 5:
ang=arctan (αγ) equation 5.
The security inspection device 20 based on point cloud processing provided by the invention can implement each process of the above method embodiments; to avoid repetition, details are not described here again.
The virtual device provided by the invention can be a terminal, and can also be a component, an integrated circuit or a chip in the terminal.
Compared with the prior art, the embodiments of the invention have the following beneficial effects: the invention identifies the license plate, precisely controls the robot according to the point cloud data of the license plate, and controls the robot to move along the central axis of the license plate through the space under the vehicle while acquiring vehicle bottom images. This provides more evidence for vehicle-bottom security inspection, assists such inspection at ports and customs, improves inspection efficiency, and protects the personal safety of security inspectors.
Fig. 3 is a schematic view of a robot according to an embodiment of the present invention. As shown in fig. 3, a robot 30 of this embodiment includes: a processor 300, a memory 301 and a computer program 302 stored in said memory 301 and executable on said processor 300, for example a security check method program based on point cloud processing. The processor 300, when executing the computer program 302, implements the steps of the various embodiments of the security inspection method based on point cloud processing described above. Alternatively, the processor 300, when executing the computer program 302, performs the functions of the units in the above-described device embodiments.
Illustratively, the computer program 302 may be partitioned into one or more units, which are stored in the memory 301 and executed by the processor 300 to complete the present invention. The one or more units may be a series of computer program instruction segments capable of performing specific functions, used to describe the execution of the computer program 302 in the robot 30. For example, the computer program 302 may be partitioned into modules with the following specific functions:
the first acquisition module is used for acquiring true color image data and depth image data;
the detection module is used for detecting whether a license plate exists in the true color image data;
the second acquisition module is used for acquiring a detection frame of the license plate under the condition that the license plate exists in the true color image data;
the conversion module is used for carrying out conversion processing according to the pixel points of the depth image data of the corresponding area of the detection frame of the license plate to obtain point cloud data of the license plate;
the first calculation module is used for calculating the normal vector of the point cloud data;
the second calculation module is used for calculating the distance between the robot and the central axis of the license plate;
the judging module is used for judging the azimuth of the robot relative to the license plate;
the third calculation module is used for calculating an included angle between the robot and the central axis of the license plate;
the mobile module is used for moving to the central axis of the license plate according to the distance between the robot and the central axis of the license plate, the orientation of the robot relative to the license plate and the included angle between the robot and the central axis of the license plate;
and the acquisition module is used for moving along the central axis of the license plate through the space under the vehicle and acquiring a vehicle bottom image while passing underneath, so as to perform a security inspection through the vehicle bottom image.
The robot 30 may include, but is not limited to, the processor 300 and the memory 301. It will be appreciated by those skilled in the art that fig. 3 is merely an example of the robot 30 and does not limit it; the robot may include more or fewer components than shown, combine certain components, or use different components. For example, the robot may also include input and output devices, network access devices, buses, etc.
The processor 300 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor.
The memory 301 may be an internal storage unit of the robot 30, for example a hard disk or memory of the robot 30. The memory 301 may also be an external storage device of the robot 30, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the robot 30. Further, the memory 301 may include both an internal storage unit and an external storage device of the robot 30. The memory 301 is used to store the computer program and the other programs and data required by the robot. The memory 301 may also be used to temporarily store data that has been output or is to be output.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present invention, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present invention. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Embodiments of the present invention also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps for implementing the various method embodiments described above.
Embodiments of the present invention provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to perform steps that enable the implementation of the method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the flows of the above method embodiments by instructing related hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing device/robot, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunication signals.
The foregoing embodiments are each described with a particular emphasis; for parts that are not detailed or illustrated in a given embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in the present description and the appended claims, the term "if" may be interpreted as "when", "once", "in response to determining" or "in response to detecting", depending on the context. Similarly, the phrase "if it is determined" or "if the [described condition or event] is monitored" may be interpreted as "upon determining", "in response to determining", "upon monitoring the [described condition or event]" or "in response to monitoring the [described condition or event]", depending on the context.
Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims, are used for distinguishing between descriptions and not necessarily for indicating or implying a relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the invention. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.
Claims (10)
1. The security inspection method based on the point cloud processing is characterized by being applied to a robot and comprising the following steps of:
a first acquisition step: acquiring true color image data and depth image data;
a detection step: detecting whether a license plate exists in the true color image data;
a second acquisition step: acquiring a detection frame of the license plate when a license plate exists in the true color image data;
a conversion step: converting the pixel points of the depth image data in the region corresponding to the detection frame of the license plate to obtain point cloud data of the license plate;
a first calculation step: calculating the normal vector of the point cloud data;
a second calculation step: calculating the distance between the robot and the central axis of the license plate;
a judging step: judging the azimuth of the robot relative to the license plate;
a third calculation step: calculating the included angle between the robot and the central axis of the license plate;
a moving step: moving to the central axis of the license plate according to the distance between the robot and the central axis of the license plate, the azimuth of the robot relative to the license plate, and the included angle between the robot and the central axis of the license plate;
an acquisition step: moving along the central axis of the license plate through the space under the vehicle, and acquiring a vehicle bottom image while passing underneath, so as to perform a security inspection through the vehicle bottom image.
2. The security inspection method based on point cloud processing according to claim 1, further comprising, after the first obtaining step:
and (3) aligning: and aligning the true color image data and the depth image data.
3. The security inspection method based on point cloud processing according to claim 1, wherein the converting step includes:
a conversion sub-step: traversing the pixel points of the depth image data in the region corresponding to the detection frame of the license plate, and converting each pixel point according to Equation 1 to obtain the point cloud data of the license plate:

Z = Depth(i, j),  X = (i − cx) · Z / fx,  Y = (j − cy) · Z / fy (Equation 1)

wherein cx, cy, fx and fy denote the intrinsic parameters of the true color image data, Depth denotes the depth image data, and i and j denote the column and row coordinates of the pixel point, respectively;
a statistics sub-step: counting the number of points in the point cloud data, and ending the flow when the number of points is smaller than a preset number.
4. The security inspection method based on point cloud processing according to claim 3, wherein the first calculating step includes:
a first calculation sub-step: calculating the normal vector (α, β, γ)^T of the point cloud data by Equation 2:

C = (1/n) · Σ_{i=1}^{n} (P_i − P_center)(P_i − P_center)^T,  C · (α, β, γ)^T = λ · (α, β, γ)^T (Equation 2)

wherein PointCloud = {P_1, …, P_n} denotes the point cloud data set, C denotes the covariance matrix, P_center denotes the center point of the license plate, n denotes the number of points in the point cloud data, P_i denotes the three-dimensional coordinate information of the i-th point, and λ is the minimum eigenvalue of C.
5. The security inspection method based on point cloud processing according to claim 4, wherein the second calculating step includes:
a second calculation sub-step: calculating the distance d between the robot and the central axis of the license plate by Equation 3:

d = |γ · x_c − α · z_c| / √(α² + γ²) (Equation 3)

wherein z_c denotes the z-axis coordinate value of the license plate center point P_center and x_c denotes its x-axis coordinate value.
6. The security inspection method based on point cloud processing according to claim 4, wherein the judging step includes:
a third calculation sub-step: calculating the intersection point x0 of the central axis of the license plate with the x-axis of the world coordinate system by Equation 4:

x0 = x_c − (α / γ) · z_c (Equation 4)

wherein z_c denotes the z-axis coordinate value of the license plate center point P_center and x_c denotes its x-axis coordinate value;
a judging sub-step: when the intersection point x0 > 0, judging that the robot is located on the right side of the license plate; when x0 < 0, judging that the robot is located on the left side of the license plate.
7. The security inspection method based on point cloud processing according to claim 4, wherein the third calculation step includes:
a fourth calculation sub-step: calculating the included angle ang between the robot and the central axis of the license plate by Equation 5:

ang = arctan(α / γ) (Equation 5).
8. A security inspection device based on point cloud processing, characterized in that it is applied to a robot, comprising:
the first acquisition module is used for acquiring true color image data and depth image data;
the detection module is used for detecting whether a license plate exists in the true color image data;
the second acquisition module is used for acquiring a detection frame of the license plate under the condition that the license plate exists in the true color image data;
the conversion module is used for carrying out conversion processing according to the pixel points of the depth image data of the corresponding area of the detection frame of the license plate to obtain point cloud data of the license plate;
the first calculation module is used for calculating the normal vector of the point cloud data;
the second calculation module is used for calculating the distance between the robot and the central axis of the license plate;
the judging module is used for judging the azimuth of the robot relative to the license plate;
the third calculation module is used for calculating an included angle between the robot and the central axis of the license plate;
the mobile module is used for moving to the central axis of the license plate according to the distance between the robot and the central axis of the license plate, the orientation of the robot relative to the license plate and the included angle between the robot and the central axis of the license plate;
and the acquisition module is used for moving along the central axis of the license plate through the space under the vehicle and acquiring a vehicle bottom image while passing underneath, so as to perform a security inspection through the vehicle bottom image.
9. A robot comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the security inspection method based on point cloud processing according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the point cloud processing based security inspection method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310811511.0A CN117011839A (en) | 2023-07-04 | 2023-07-04 | Security check method and device based on point cloud processing and robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310811511.0A CN117011839A (en) | 2023-07-04 | 2023-07-04 | Security check method and device based on point cloud processing and robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117011839A true CN117011839A (en) | 2023-11-07 |
Family
ID=88571925
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310811511.0A Pending CN117011839A (en) | 2023-07-04 | 2023-07-04 | Security check method and device based on point cloud processing and robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117011839A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118225770A (en) * | 2024-05-22 | 2024-06-21 | 盛视科技股份有限公司 | Vehicle bottom centering checking method and vehicle bottom checking system based on intelligent vision |
CN118226458A (en) * | 2024-05-22 | 2024-06-21 | 盛视科技股份有限公司 | Vehicle bottom centering checking method and vehicle bottom checking system based on laser radar |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |