CN111580128A - Method for automatic detection and modeling of motor vehicle driver examination field - Google Patents

Method for automatic detection and modeling of motor vehicle driver examination field

Info

Publication number
CN111580128A
Authority
CN
China
Prior art keywords
examination
motor vehicle
point cloud
dimensional model
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010245228.2A
Other languages
Chinese (zh)
Inventor
巩建国
赵立波
索子剑
于鹏程
饶众博
刘晓晨
王秋鸿
柴蕊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Road Traffic Safety Research Center Ministry Of Public Security Of People's Republic Of China
Original Assignee
Road Traffic Safety Research Center Ministry Of Public Security Of People's Republic Of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Road Traffic Safety Research Center, Ministry of Public Security of the People's Republic of China
Priority to CN202010245228.2A
Publication of CN111580128A
Legal status: Pending

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G09B19/16 Control of vehicles or other craft
    • G09B19/167 Control of land vehicles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation


Abstract

The application discloses a method for automatic detection and modeling of a motor vehicle driver examination site, comprising the following steps: acquiring image data and laser point cloud data of the examination site to be detected with an unmanned aerial vehicle, and transmitting both to an image processing cloud platform; modeling on the cloud platform from the image data and the laser point cloud data to obtain a three-dimensional model of the examination site; and obtaining a detection result by comparing the quantitative feature information of the examination site in that model with the corresponding quantitative feature information in a pre-established acceptance-standard three-dimensional model. The method detects the examination site and its facilities accurately, overcomes the heavy workload and high time cost of manual on-site inspection, improves detection efficiency, and ensures the accuracy, objectivity and fairness of detection.

Description

Method for automatic detection and modeling of motor vehicle driver examination field
Technical Field
The application relates to the technical field of image processing, in particular to a method for automatically detecting and modeling an examination place of a motor vehicle driver.
Background
With the continued rapid growth in motor vehicle ownership, the automobile has become an everyday means of transport, and the number of people taking the motor vehicle driver examination to obtain a driving licence increases year by year. The examination includes the field driving skills test (the "subject two" examination), and an examination site that has passed acceptance inspection is the basis of a fair and impartial subject-two examination. At present, however, acceptance inspection of driver examination sites in China relies mainly on manual measurement and manual judgement, which involves a heavy workload, high time cost and poor accuracy: inspecting a single site may take 2 to 4 days or longer, and a re-inspection consumes even more time and labour.
Disclosure of Invention
In view of the above, the present application provides a method for automatic detection and modeling of a motor vehicle driver examination site that overcomes, or at least partially solves, the above problems, thereby saving the time and labour spent on site inspection, automating acceptance detection, and ensuring the accuracy, objectivity and fairness of acceptance inspection.
According to one aspect of the application, a method for automatic detection and modeling of a motor vehicle driver examination site is provided, comprising the following steps:
acquiring image data and laser point cloud data of the motor vehicle driver examination site to be detected with an unmanned aerial vehicle carrying a camera and a laser radar (LIDAR), and transmitting the data to an image processing cloud platform;
modeling on the image processing cloud platform from the image data and the laser point cloud data to obtain a three-dimensional model of the examination site, the model containing quantitative feature information of the examination site;
and obtaining a detection result from the quantitative feature information in the three-dimensional model and the corresponding quantitative feature information in a pre-established acceptance-standard three-dimensional model.
According to another aspect of the application, there is provided a computer readable storage medium storing one or more programs which, when executed by a processor, implement a method as described above.
According to the technical solution of the embodiments of the application, image data and laser point cloud data of the examination site to be detected are collected, a three-dimensional model of the site is built from them, and the quantitative feature information in that model is compared with the corresponding information in the acceptance-standard three-dimensional model to obtain a detection result. On the one hand, compared with the traditional manual scheme, detecting the examination site and its facilities through image modeling and feature comparison automates and digitizes the inspection, effectively removes the heavy workload and high time cost of manual on-site detection, and improves detection efficiency. On the other hand, because the image processing platform models from both image data and laser point cloud data, the accuracy, objectivity and fairness of detection are guaranteed, and management, monitoring and later review are facilitated.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 illustrates a schematic flow chart of a method for automatic detection and modeling of a motor vehicle driver examination site according to one embodiment of the present application;
FIG. 2 shows a schematic diagram of an examination room image acquisition process according to an embodiment of the application;
FIG. 3 shows a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
At present, acceptance inspection of motor vehicle driver examination sites in China is performed mainly by manual measurement: the figures, marking lines and facilities of each examination item are measured by hand and compared against the specification for motor vehicle driver examination sites and their facilities (GA1029-2017). This manual method involves a heavy workload and high time cost; inspecting a single site can take 2 to 4 days or longer, and re-inspection consumes even more time and labour. In view of this, the embodiments of the present application provide a method for automatic detection and modeling of a motor vehicle driver examination site: image data and laser point cloud data are acquired, a three-dimensional model of the site is constructed from them, and the quantitative features of that model are matched against the quantitative features of a standard site model to obtain a detection result. This saves the time and labour of site inspection, makes acceptance detection digital and automatic, ensures the precision and accuracy of detection, and facilitates monitoring, management and later review.
Fig. 1 is a schematic flow chart of a method for automatic detection and modeling of an examination site of a driver of a motor vehicle according to an embodiment of the present application, and referring to fig. 1, the method of the embodiment of the present application includes the following steps:
step S101, respectively acquiring image data and laser point cloud data of an examination field of a driver of a motor vehicle to be detected by using an unmanned aerial vehicle, and transmitting the image data and the laser point cloud data to an image processing cloud platform; the unmanned aerial vehicle carries a camera and a laser radar (LIDAR). That is, an unmanned aerial vehicle equipped with a camera dedicated for mapping and a laser radar lidar (light detection and ranging) is used to collect and store an image and a laser point cloud of an examination site of a driver of a vehicle to be detected, and then a 5G (5th-Generation) or 4G (4th-Generation) mobile communication mode is used to transmit the image.
Step S102: modeling on the image processing cloud platform from the image data and the laser point cloud data to obtain a three-dimensional model of the examination site, the model containing quantitative feature information of the site. The image processing cloud platform applies AI (artificial intelligence) algorithms and is mainly used to stitch the image data and laser point cloud data and to identify targets. The three-dimensional model is, for example, a stereo model of the site generated by fusing the 2D planar images with the laser point cloud data of the previous step. The quantitative feature information refers to site features that can be expressed numerically, such as the length and width of a garage and the width of a lane.
Step S103: obtaining a detection result from the quantitative feature information of the examination site in the three-dimensional model and the corresponding quantitative feature information in the pre-established acceptance-standard three-dimensional model.
In this step the site under inspection is judged qualified or unqualified through feature comparison, yielding the detection result. In practice, to further ensure accuracy, part of the comparison may be performed by the image processing platform and then reviewed and confirmed manually; for example, vehicle administration staff may log in to the image processing cloud platform with an account and password to perform the review in real time.
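The comparison of step S103 can be sketched as follows. This is a minimal illustration: the feature names, the standard values and the 5% relative tolerance are assumptions for demonstration, not values taken from the acceptance standard.

```python
# Hypothetical sketch of step S103: compare measured examination-site features
# against standard values within a relative tolerance. All names/values are
# illustrative assumptions, not figures from GA1029-2017.

def compare_features(measured: dict, standard: dict, tolerance: float = 0.05) -> dict:
    """Return a per-feature verdict: 'pass', 'fail', or 'missing'."""
    results = {}
    for name, std_value in standard.items():
        value = measured.get(name)
        if value is None:
            results[name] = "missing"            # feature absent from the model
        elif abs(value - std_value) <= tolerance * std_value:
            results[name] = "pass"               # within tolerance of standard
        else:
            results[name] = "fail"               # deviates beyond tolerance
    return results

standard = {"garage_length_m": 8.0, "garage_width_m": 3.8, "lane_width_m": 3.5}
measured = {"garage_length_m": 8.1, "garage_width_m": 3.2, "lane_width_m": 3.5}
report = compare_features(measured, standard)
```

A site would then be judged qualified only if every feature of every examination item passes; failed features can be flagged for the manual review described above.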
As shown in fig. 1, the method of this embodiment collects image data and laser point cloud data of the driver examination site, builds a three-dimensional model of the site from them, and judges the site qualified or unqualified by comparing the quantitative feature information in the model with the corresponding information in the standard model. Compared with traditional manual on-site inspection, detecting the site and its facilities through data modeling and model comparison removes the heavy workload and high time cost of manual inspection, ensures detection accuracy, and raises the automation level, digitization level and efficiency of the inspection.
Fig. 2 is a schematic diagram of examination site image acquisition and processing according to an embodiment of the present application. With reference to fig. 2, the method generally comprises three stages: first, data acquisition; second, data processing; and third, detection results. Each is described below.
First: data acquisition.
Referring to the left side of fig. 2, data acquisition means collecting image data and laser point cloud data of the examination site to be detected with the unmanned aerial vehicle, storing both on board, and transmitting the image data and LIDAR data (i.e. the laser point cloud data) back synchronously. Specifically, an unmanned aerial vehicle carrying a dedicated mapping camera and a LIDAR collects the images and the laser point cloud of the site. To improve accuracy, a standard scale is placed in the aerial-survey scene (i.e. the physical environment of the examination site) and used as an accuracy reference for judging the aerial survey.
After acquisition, the unmanned aerial vehicle transmits the image data and laser point cloud data to the image processing cloud platform in real time over fifth-generation (5G) or fourth-generation (4G) mobile communication, or delivers them as offline data. In other words, the drone captures the images and laser point clouds of the figures and marking lines of each examination item, and the images are returned in real time over the 5G/4G network with ground monitoring, which avoids unclear or incomplete image capture caused by occlusion from trees and other objects and improves the validity and precision of acquisition.
Collecting the site images with the drone camera and transmitting them in real time over the 5G network solves the problem that existing handheld shooting equipment cannot restore the examination site and its items to scale, raises the automation level of site detection, and ensures detection accuracy.
In practice, acquiring the image data of the examination site with the unmanned aerial vehicle comprises: shooting the site vertically downward with a first camera carried on the drone to obtain nadir images, and shooting the height and gradient of the site obliquely with a second camera tilted at a preset angle to obtain oblique images. For example, the drone carries five fixed-focus cameras: one (the first camera) points vertically downward to capture site details as nadir images, while four (the second cameras) are tilted at 35 degrees to measure site height and slope as oblique images. Three-dimensional image acquisition is then achieved by fusing and stitching the synchronized images of the five cameras.
Second: data processing.
Referring to the middle of fig. 2, the image data acquired in the previous stage are modeled on the image processing cloud platform (mainly used for automatic stitching and identification) to obtain the three-dimensional model of the examination site. The model comprises the figure and marking-line information and the site facility information corresponding to the examination items. In one embodiment, the designated examination items include one or more of: reverse parking, parallel parking, fixed-point stopping and starting on a slope, right-angle turning, curve driving, pile examination, crossing a single-plank bridge, passing continuous obstacles, passing a width-limited gate, driving on an undulating surface, U-turning on a narrow road, simulated expressway driving, and simulated driving on a mountain road with continuous sharp curves. The figure and marking-line information of each item includes one or more of the following quantitative values: garage length, garage width, lane width, distance between the garage and the control line, and distances from the start and stop lines to the outer garage line. The site facility information includes one or more of: lane width, lateral clearance width and curb-belt clearance height.
That is, examination site detection mainly covers items such as reverse parking, parallel parking, fixed-point stopping and starting on a slope, right-angle turning, curve driving, pile examination, crossing a single-plank bridge, passing continuous obstacles, passing a width-limited gate, driving on an undulating surface, U-turning on a narrow road, simulated expressway driving, and simulated driving on a mountain road with continuous sharp curves. The detection data include, for each item, the garage length, garage width, lane width, distance between the garage and the control line, and distances from the start and stop lines to the outer garage line. The road and building boundary of the examination area includes data such as lane width, lateral clearance width and curb-belt clearance height.
It should be emphasized that what is described here is one embodiment applying the method to the inspection of a motor vehicle driver examination site. In other embodiments, more or fewer items than those above may be detected, according to the standards and requirements applicable to driver examination sites. Likewise, the specific items above are given by way of example; the skilled person will understand that the items and contents of detection should be determined by the actual situation of the site and are not limited by the foregoing description. The listed items and contents do not represent all possible items and contents of detection.
These data are the quantitative feature data that are key to site acceptance and influence the acceptance result; they are collected and displayed on the three-dimensional model of the site to facilitate subsequent comparison.
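As a hypothetical illustration of how such quantitative feature data might be attached to the model for later comparison, the record below groups the marking-line values of one examination item; the field names and numeric values are assumptions, not part of the application.

```python
from dataclasses import dataclass, field

# Illustrative container for the "quantitative feature information" carried by
# the 3D model. Field names and values are assumptions for demonstration.

@dataclass
class ExamItemFeatures:
    item: str                                   # examination item name
    garage_length_m: float = 0.0                # library length
    garage_width_m: float = 0.0                 # library width
    lane_width_m: float = 0.0                   # lane width
    extras: dict = field(default_factory=dict)  # e.g. clearance heights, line distances

reverse_parking = ExamItemFeatures(
    item="reverse parking",
    garage_length_m=8.0,
    garage_width_m=3.8,
    lane_width_m=3.5,
    extras={"start_line_to_outer_line_m": 1.5},
)
```

Records of this shape for every item could then be compared field by field against the corresponding records of the acceptance-standard model.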
It should be noted that the construction of the three-dimensional model of the site is the key to model comparison and site detection, so it is described here in detail.
In one embodiment, modeling on the image processing cloud platform from the image data and the laser point cloud data to obtain the three-dimensional model of the examination site comprises: computing the position coordinates on the image of the feature point corresponding to each spatial point from the three-dimensional position of the spatial point, the camera intrinsics, the rotation matrix and the translation vector; reconstructing a sparse point cloud from the feature-point coordinates and constructing a dense point cloud from the sparse point cloud and reference points, the reference points being determined from laser point cloud data collected by a ground laser radar; and constructing a point cloud mesh from the dense point cloud, building a three-dimensional semantic model from the mesh, building a three-dimensional vector model from the semantic model, and taking the vector model as the three-dimensional model of the examination site.
That is, the automatic detection modeling can be divided into the following three steps:
First step: spatial point data acquisition. This comprises two cases: image acquisition and ground laser radar acquisition. Images are captured by five fixed-focus cameras carried on the drone, one shooting site details and four measuring site height and gradient at a 35-degree tilt; three-dimensional image acquisition is achieved by fusing and stitching the five synchronized images. Ground LIDAR acquisition addresses interference factors such as occlusion, reflection and shadow that may be encountered in video image reconstruction, using the laser data of a ground LIDAR for fusion and completion. Accurate acquisition is achieved through the reflection information of geographic physical points, and the data are transmitted to the image processing platform over the mobile communication network. Note: the ground laser radar differs from the airborne laser radar only in mounting position; the airborne unit is installed on the drone while the ground unit is set up on the ground of the examination site to be detected, and both collect laser point cloud data.
Second step: computing image feature points. Using the following formulas (1) and (2), the camera intrinsics, the camera rotation and translation of the drone camera, and the spatial point information of the scene are combined to compute the position coordinates of the feature points on any nadir or oblique image:

$$s\begin{pmatrix}u\\ v\\ 1\end{pmatrix}=K\left(R\begin{pmatrix}X\\ Y\\ Z\end{pmatrix}+t\right)\qquad(1)$$

$$\begin{pmatrix}u\\ v\\ 1\end{pmatrix}=\frac{1}{s}\,K\left(R\begin{pmatrix}X\\ Y\\ Z\end{pmatrix}+t\right)\qquad(2)$$

where formula (2) is obtained by transforming formula (1); the vector $(u, v, 1)^{\mathrm{T}}$ gives the position coordinates of the feature point on the image, $K$ denotes the camera intrinsic matrix, $R$ the camera rotation matrix, $t$ the camera translation vector, $s$ the projective scale factor, and $(X, Y, Z)^{\mathrm{T}}$ the position coordinates of the spatial point.
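The standard pinhole projection described by the formulas above can be sketched numerically as follows; the intrinsic matrix, pose and point coordinates are illustrative values, not calibration data from the application.

```python
import numpy as np

# Sketch of the projection in formulas (1)/(2): map a 3D space point P into
# pixel coordinates using intrinsics K, rotation R and translation t.
# All numeric values below are illustrative assumptions.

def project_point(K, R, t, P):
    """s*[u, v, 1]^T = K (R P + t); returns pixel coordinates (u, v)."""
    p_cam = R @ P + t            # world frame -> camera frame
    uvw = K @ p_cam              # camera frame -> homogeneous pixel coords
    return uvw[0] / uvw[2], uvw[1] / uvw[2]   # divide by scale factor s

K = np.array([[1000.0, 0.0, 640.0],   # fx, skew, cx
              [0.0, 1000.0, 360.0],   # fy, cy
              [0.0,    0.0,   1.0]])
R = np.eye(3)                         # camera aligned with world axes
t = np.zeros(3)                       # camera at the world origin
u, v = project_point(K, R, t, np.array([1.0, 2.0, 10.0]))
```

Inverting this relation per formula (2), with poses recovered for multiple images, is what lets matched feature points be triangulated into the sparse point cloud of the next step.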
The third step:and finishing three-dimensional modeling reconstruction.And performing sparse point cloud reconstruction, dense point cloud reconstruction, point cloud network modeling, three-dimensional semantic modeling and three-dimensional vector modeling on the image feature points calculated in the second step, and finally realizing three-dimensional reconstruction. The method and the device have the advantages that aiming at interference factors such as shielding, reflection, shadow, shielding and the like existing in the actual examination room environment measurement, in the dense point cloud reconstruction process, the laser point cloud data collected by the ground laser radar LIDAR are introduced as reference points, and therefore the reconstruction accuracy of the whole model is greatly improved. In practical application, laser point cloud data collected by the airborne laser radar can be adopted according to different projects and geographic conditions.
After the three-dimensional model of the site is constructed, the image processing platform performs target identification and handles problems such as image distortion and target occlusion, so as to improve the precision and accuracy of the automatic detection and modeling method of the embodiments.
In one embodiment, modeling on the image processing cloud platform from the image data and the laser point cloud data to obtain the three-dimensional model of the examination site comprises: identifying a target examination item in the model by markers placed around it, the markers including color markers and shape markers; or identifying the right-angle-turn and curve-driving items in the model by a curvature threshold; or identifying undulating surfaces and ramps in the model by a gradient threshold.
Specifically, when performing target identification, the embodiment of the application mainly adopts several approaches. The first is to set markers around the examination item, for example adding color markers and shape markers around examination items that are difficult to distinguish, so that in the acquired image the target identification of the examination item can be performed through the markers. The second is curvature feature identification: a curve curvature threshold is preset for the quarter-turn and curve-driving examination items, and whether a curve belongs to quarter turning or curve driving is identified according to the threshold. The third is gradient feature identification: a gradient threshold is preset so as to identify and record the undulating road surface, the slope angle and the like.
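The curvature-threshold identification described above can be sketched as follows. The discrete Menger-curvature estimate and the threshold value of 0.2 are illustrative assumptions, not values from the patent:

```python
import numpy as np

def classify_curve(pts, curvature_threshold=0.2):
    """Classify a sampled painted centerline as 'quarter_turn' (sharp) or
    'curve_driving' (gentle) by the maximum discrete Menger curvature
    over each triple of consecutive points."""
    pts = np.asarray(pts, dtype=float)
    max_curv = 0.0
    for a, b, c in zip(pts[:-2], pts[1:-1], pts[2:]):
        ab, ac = b - a, c - a
        # Twice the triangle area via the 2D cross product
        cross = ab[0] * ac[1] - ab[1] * ac[0]
        area = 0.5 * abs(cross)
        s = (np.linalg.norm(b - a) * np.linalg.norm(c - b)
             * np.linalg.norm(c - a))
        if s > 0:
            # Menger curvature: 4 * area / product of the three side lengths
            max_curv = max(max_curv, 4.0 * area / s)
    return 'quarter_turn' if max_curv > curvature_threshold else 'curve_driving'
```

The same shape of test (maximum local slope against a preset threshold) would serve for the gradient feature identification of undulating roads and ramps.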
To mitigate image distortion, the camera of the embodiment of the application adopts a fixed-focus surveying lens, and the lens distortion parameter values are set in advance to ensure accurate restoration of images at different shooting heights and angles.
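One common way such preset distortion parameters are applied is the two-term radial (Brown-Conrady) model, inverted by fixed-point iteration. This is standard photogrammetric practice assumed here for illustration; the patent does not specify the model:

```python
import numpy as np

def undistort_points(pts_dist, k1, k2):
    """Undistort normalized image coordinates given preset radial
    distortion coefficients k1, k2 by inverting
    x_d = x_u * (1 + k1*r^2 + k2*r^4) with fixed-point iteration."""
    pts = np.asarray(pts_dist, dtype=float)
    out = []
    for xd, yd in pts:
        xu, yu = xd, yd                      # initial guess: no distortion
        for _ in range(20):
            r2 = xu * xu + yu * yu
            scale = 1.0 + k1 * r2 + k2 * r2 * r2
            xu, yu = xd / scale, yd / scale  # refine the undistorted estimate
        out.append((xu, yu))
    return np.array(out)
```

With the coefficients fixed in advance for the surveying lens, every frame can be corrected the same way regardless of shooting height and angle.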
In one embodiment, the method for automatic detection and modeling of the motor vehicle driver examination site further comprises: if the examination item is determined to be partially blocked according to the image data acquired by the unmanned aerial vehicle, adjusting the shooting height and angle of the unmanned aerial vehicle; and if the examination item is determined to be completely blocked according to the image data acquired by the unmanned aerial vehicle, acquiring laser point cloud data through a ground laser radar at the motor vehicle driver examination site to be detected, so that the image processing cloud platform stitches the laser point cloud data with the image data acquired by the unmanned aerial vehicle.
That is to say, to deal with the technical problem of a blocked examination item: when the examination item is blocked by an object of a certain height and small area, the problem is solved by controlling the unmanned aerial vehicle to adjust its shooting height and angle; when the examination item is completely blocked, laser point cloud data are collected by the ground laser radar and then stitched with the image data collected by the unmanned aerial vehicle. The position information of the ground LIDAR is obtained by GPS (Global Positioning System). Note: before actual use, the ground LIDAR and the unmanned aerial vehicle camera can be calibrated so as to unify their coordinate systems.
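Once the calibration has produced an extrinsic rotation R and translation t, unifying the coordinate systems and stitching the two clouds reduces to a rigid transform plus concatenation. The function names below are illustrative assumptions, not an API from the patent:

```python
import numpy as np

def lidar_to_drone_frame(lidar_pts, R, t):
    """Transform ground-LIDAR points into the drone camera's coordinate
    frame using the extrinsic calibration (R, t) obtained beforehand."""
    return np.asarray(lidar_pts, float) @ np.asarray(R, float).T + np.asarray(t, float)

def stitch(photo_pts, lidar_pts, R, t):
    """Concatenate the drone photogrammetric cloud with the transformed
    LIDAR cloud: a minimal stand-in for point-cloud stitching."""
    return np.vstack([np.asarray(photo_pts, float),
                      lidar_to_drone_frame(lidar_pts, R, t)])
```

A production pipeline would typically refine (R, t) with ICP-style registration after this coarse alignment; the sketch shows only the coordinate-system unification step the note describes.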
In order to reduce errors, the method according to the embodiment of the application, after the image processing cloud platform performs modeling according to the image data and the laser point cloud data to obtain a three-dimensional model of the motor vehicle driver examination site, further includes: detecting the reconstruction data error values of all reference poles in the three-dimensional model of the motor vehicle driver examination field; averaging the error values of the reconstructed data to obtain a comprehensive error value of the image data; when the comprehensive error value is not greater than a preset comprehensive error threshold value, a three-dimensional model of the motor vehicle driver examination field is reserved; the number of the reference marker posts is determined according to the area and the terrain of the motor vehicle driver examination site, and the reference marker posts are randomly placed in the range of the motor vehicle driver examination site.
For example, three or more reference poles are randomly placed within the examination site according to its area and terrain; the reconstruction data error values of all reference poles in the three-dimensional model are detected and averaged to obtain the comprehensive error value of the image, and when the comprehensive error value is not greater than a preset comprehensive error threshold, the three-dimensional model of the motor vehicle driver examination site is retained, thereby improving image acquisition accuracy.
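The reference-pole check can be sketched as follows. The three-pole minimum mirrors the text; the 0.05 threshold value is an illustrative assumption, since the patent leaves the threshold to be preset:

```python
def examination_site_error_check(pole_errors, threshold=0.05):
    """Average the per-pole reconstruction error values and decide whether
    to keep the three-dimensional model. Returns (keep_model, combined_error)."""
    if len(pole_errors) < 3:
        raise ValueError("at least three reference poles are required")
    combined = sum(pole_errors) / len(pole_errors)  # comprehensive error value
    return combined <= threshold, combined
```

If the check fails, the acquisition or reconstruction would be repeated rather than accepting an inaccurate model.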
Thirdly: detecting the result.
In the embodiment of the application, the detection result is obtained according to the examination site quantitative characteristic information in the three-dimensional model and the corresponding quantitative characteristic information in the pre-established examination site acceptance-standard three-dimensional model. Specifically, the graphic marking information corresponding to each designated examination item is matched against the graphic marking information of the corresponding examination item in the acceptance-standard three-dimensional model; when the matching result is within the allowable error range, the detection result is that the examination site is qualified, and when the matching result exceeds the allowable error range, the detection result is that the examination site is unqualified. This solves the technical problem that acquired images could not be automatically compared against the motor vehicle driver examination site and facility setting standards, achieving the beneficial effect of automatically detecting whether the examination items and site facilities meet those standards. Moreover, by fusing the image data with the laser point cloud data, the influence of distortion and occlusion is reduced, ensuring detection accuracy, objectivity and fairness.
It should be noted that, due to the influence of factors such as image acquisition, image transmission and matching processing, an error may exist between the detection result of the embodiment of the application and the actual examination site. Therefore, an allowable error range is set in advance according to empirical values; if the difference between a graphic reticle of the examination site to be detected and the corresponding reticle of the standard examination site is within this range, the two are considered to match and the reticle is deemed qualified.
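A minimal sketch of matching measured quantitative features against the acceptance-standard model within an allowable error range. The 5% relative tolerance follows the example error bound given later in the text, and the feature names are illustrative:

```python
def match_examination_item(measured, standard, tolerance=0.05):
    """Compare measured quantitative features of one examination item
    (e.g. garage length, lane width) against the acceptance-standard
    three-dimensional model. Returns (passed, failures), where failures
    maps each out-of-tolerance feature to (measured, standard)."""
    failures = {}
    for name, std_val in standard.items():
        meas_val = measured.get(name)
        if meas_val is None or abs(meas_val - std_val) > tolerance * abs(std_val):
            failures[name] = (meas_val, std_val)
    return len(failures) == 0, failures
```

Running this per designated examination item and aggregating the results yields the qualified/unqualified detection result described above.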
Referring to the right side of FIG. 2, after the model is built, the embodiment of the application can view and plot the ground stitching results locally. After the image processing platform completes part of the comparison functions, manual review and confirmation are performed; for example, vehicle administration staff log in to the image processing cloud platform with an account and password to check the comparison results and then review and confirm them.
Further, the method of the embodiment of the application further includes: when the detection result indicates that the examination site is unqualified, marking the unqualified examination site facilities and/or examination item graphic marking lines and generating a detection report. For example, when the measurement error is within 5%, the item is considered qualified; if the error exceeds this range, the embodiment of the application can automatically identify the white line and automatically measure and plot the relevant data.
That is to say, if comparison determines that the examination site facilities do not meet the requirements, the facilities are marked; if comparison determines that the examination item graphic marking lines do not meet the requirements, those marking lines are marked; and if both the facilities and the marking lines fail the comparison, both are marked. In this way the unqualified information is clear at a glance, facilitating targeted rectification afterwards.
The examination site facilities are, for example, bidirectional two-lane roads, observation points, and energy-absorbing objects or facilities arranged within the lateral clearance range of the roads. The examination item graphic marking lines comprise examination item graphics and examination item marking lines, which differ from item to item; for example, the graphics of the reverse parking item are obviously different from the graphics and marking lines of the curve driving item.
Therefore, according to the method for automatic detection and modeling of the motor vehicle driver examination site of the embodiment of the application, the system program compares the examination site against the acceptance standard, finds the signs, marking lines and site facilities that do not meet the standard, and marks them correspondingly (for example, highlighted in yellow), saving the time needed to examine unqualified site information and improving detection precision.
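The marking-and-report step can be sketched as follows; the report structure and the 'yellow' highlight flag are illustrative stand-ins for the on-screen yellow highlighting mentioned above:

```python
def build_detection_report(item_results):
    """Assemble a detection report from per-item match results.
    `item_results` maps item name -> (passed, failures); failed items
    are marked with a highlight flag for on-screen display."""
    report = {"qualified": [], "unqualified": []}
    for item, (passed, failures) in item_results.items():
        if passed:
            report["qualified"].append(item)
        else:
            report["unqualified"].append(
                {"item": item, "failures": failures, "highlight": "yellow"})
    # The site passes only when no item is unqualified
    report["site_passed"] = not report["unqualified"]
    return report
```

Such a report gives reviewers the at-a-glance view of unqualified facilities and marking lines described in the text.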
In summary, according to the method for automatic detection and modeling of the motor vehicle driver examination site of the embodiment of the application, images and laser point cloud data of the examination site to be detected are collected, a three-dimensional model of the examination site is built from them, and the quantitative characteristic information of the site is compared with the corresponding information in the acceptance-standard three-dimensional model to obtain the detection result. On the one hand, this avoids the heavy workload and high time cost of manual field detection while ensuring detection accuracy; on the other hand, it improves the automation and digitization level of examination site detection as well as the detection efficiency, and facilitates management and supervision.
It should be noted that:
the algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may be used with the teachings herein. The required structure for constructing such a device will be apparent from the description above. In addition, this application is not directed to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present application as described herein, and any descriptions of specific languages are provided above to disclose the best modes of the present application.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the application, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, should not be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the application and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the present application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components in accordance with embodiments of the present application. The present application may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present application may be stored on a computer readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
FIG. 3 shows a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present application. The computer readable storage medium 300, which stores computer readable program code for performing the method according to the present application, is readable by a processor of an electronic device, and when the computer readable program code is executed by the electronic device, causes the electronic device to perform the steps of the method described above, and in particular, the computer readable program code stored by the computer readable storage medium may perform the method shown in the above embodiments. The computer readable program code may be compressed in a suitable form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
While the foregoing is directed to embodiments of the present invention, other modifications and variations of the present invention may be devised by those skilled in the art in light of the above teachings. It should be understood by those skilled in the art that the foregoing detailed description is for the purpose of illustrating rather than limiting the invention, and that the scope of the invention is defined by the claims.

Claims (10)

1. A method for automatically detecting and modeling a motor vehicle driver examination field is characterized by comprising the following steps:
respectively acquiring image data and laser point cloud data of an examination field of a driver of a motor vehicle to be detected by using an unmanned aerial vehicle, and transmitting the image data and the laser point cloud data to an image processing cloud platform; the unmanned aerial vehicle carries a camera and a laser radar (LIDAR);
modeling according to the image data and the laser point cloud data by using the image processing cloud platform to obtain a three-dimensional model of the examination field of the motor vehicle driver, wherein the three-dimensional model comprises examination field quantitative characteristic information;
and obtaining a detection result according to the examination room quantitative characteristic information in the three-dimensional model and the corresponding examination room quantitative characteristic information in the pre-established examination room acceptance standard three-dimensional model.
2. The method of claim 1, wherein the examination room quantification characteristic information comprises graphic marking information corresponding to each designated examination item;
obtaining a detection result according to the examination room quantitative characteristic information in the three-dimensional model and the corresponding examination room quantitative characteristic information in the pre-established examination room acceptance standard three-dimensional model, wherein the detection result comprises the following steps:
matching the graphic marking information corresponding to each designated examination item with the graphic marking information corresponding to the corresponding examination item in the examination room acceptance standard three-dimensional model,
when the matching result is within the allowable error range, obtaining the qualified detection result of the examination room;
and when the matching result exceeds the allowable error range, obtaining the detection result that the examination room is unqualified.
3. The method of claim 2, wherein the examination room quantitative characteristics information further comprises examination room facilities information;
the designated test items include one or more of the following test items:
backing and warehousing, side parking, slope fixed-point parking and starting, quarter turning, curve driving, pile driving, driving through a unilateral bridge, continuous obstacles, a width-limiting door, undulating road driving, narrow road turning, freeway driving simulation and continuous sharp-bend mountain road driving simulation;
the graphic marking information corresponding to each appointed examination item comprises one or more of the following quantitative information: the length of the garage, the width of a lane, the distance between the garage and a control line, and the distance between a start line and a stop line and the outside line of the garage are respectively;
the examination room facility information comprises one or more of the following quantitative information: lane width, lateral clearance width, curb belt clearance height.
4. The method of claim 3, further comprising:
and when the detection result indicates that the examination room is unqualified, marking the unqualified examination room facilities and/or the examination item graphic marked lines and generating a detection report.
5. The method of claim 1, wherein the transmitting the image data and the laser point cloud data to an image processing cloud platform comprises:
transmitting the image data and the laser point cloud data to an image processing cloud platform in real time through fifth-generation mobile communication 5G or fourth-generation mobile communication 4G; or transmitting the image data and the laser point cloud data to an image processing cloud platform in an off-line data mode;
the method for acquiring the image data of the examination field of the driver of the motor vehicle to be detected by using the unmanned aerial vehicle comprises the following steps:
the method comprises the steps of utilizing a first camera carried on an unmanned aerial vehicle to perpendicularly shoot a motor vehicle driver examination field to be detected in a downward direction to obtain a shot image, utilizing a second camera carried on the unmanned aerial vehicle to incline to preset an angle, and laterally shooting the height and gradient information of the motor vehicle driver examination field to be detected to obtain a laterally shot image.
6. The method of claim 1, wherein modeling from the image data and the laser point cloud data using the image processing cloud platform to obtain a three-dimensional model of the automotive driver examination site comprises:
calculating according to the three-dimensional position information of the spatial point of the examination field of the driver of the motor vehicle to be detected, the camera internal parameter, the rotation matrix and the translation vector to obtain the position coordinates of the characteristic point corresponding to the spatial point on the image;
reconstructing a sparse point cloud according to the position coordinates of the feature points, and constructing a dense point cloud according to the sparse point cloud and a reference point, wherein the reference point is determined by laser point cloud data collected by a ground laser radar;
and constructing a point cloud grid according to the dense point cloud, constructing a three-dimensional semantic model according to the point cloud grid, constructing a three-dimensional vector model according to the three-dimensional semantic model, and taking the three-dimensional vector model as the three-dimensional model of the motor vehicle driver examination field.
7. The method of claim 6, wherein modeling from the image data and the laser point cloud data using the image processing cloud platform to obtain a three-dimensional model of the automotive driver examination site comprises:
the image processing cloud platform identifies a target examination item in a three-dimensional model of the motor vehicle driver examination field according to an identifier arranged around the target examination item, wherein the identifier comprises a color identifier and a shape identifier;
or the image processing cloud platform identifies a quarter turn test item and a curve driving test item in a three-dimensional model of the motor vehicle driver test site according to a curvature threshold value;
or the image processing cloud platform identifies an undulating road surface and a ramp in the three-dimensional model of the motor vehicle driver examination field according to a gradient threshold value.
8. The method of claim 1, wherein after obtaining the three-dimensional model of the automotive driver examination site using the image processing cloud platform for modeling from the image data and the laser point cloud data, the method further comprises:
detecting the reconstruction data error values of all reference poles in the three-dimensional model of the motor vehicle driver examination field;
averaging the error values of the reconstructed data to obtain a comprehensive error value of the image data;
when the comprehensive error value is not greater than a preset comprehensive error threshold value, a three-dimensional model of the motor vehicle driver examination field is reserved;
the number of the reference marker posts is determined according to the area and the terrain of the motor vehicle driver examination site, and the reference marker posts are randomly placed in the range of the motor vehicle driver examination site.
9. The method of any one of claims 1-8, further comprising:
if the examination item is determined to be partially shielded according to the image data acquired by the unmanned aerial vehicle, adjusting the shooting height and angle of the unmanned aerial vehicle;
and if the examination item is completely shielded according to the image data acquired by the unmanned aerial vehicle, acquiring laser point cloud data through a ground laser radar of an examination field of the motor vehicle driver to be detected, so that the image processing cloud platform splices the laser point cloud data and the image data acquired by the unmanned aerial vehicle.
10. A computer readable storage medium, characterized in that the computer readable storage medium stores one or more programs which, when executed by a processor, implement the method of any of claims 1-9.
CN202010245228.2A 2020-03-31 2020-03-31 Method for automatic detection and modeling of motor vehicle driver examination field Pending CN111580128A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010245228.2A CN111580128A (en) 2020-03-31 2020-03-31 Method for automatic detection and modeling of motor vehicle driver examination field


Publications (1)

Publication Number Publication Date
CN111580128A true CN111580128A (en) 2020-08-25

Family

ID=72120518


Country Status (1)

Country Link
CN (1) CN111580128A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819774A (en) * 2021-01-28 2021-05-18 上海工程技术大学 Large-scale component shape error detection method based on three-dimensional reconstruction technology and application thereof
CN114386223A (en) * 2021-11-29 2022-04-22 武汉未来幻影科技有限公司 Real scene-based driving test simulator examination room model creation method
WO2023000337A1 (en) * 2021-07-23 2023-01-26 华为技术有限公司 Road gradient determination method and apparatus, lane line projection method and apparatus, and lane line display method and apparatus



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination