CN112907973B - High-precision complete information acquisition and real 3D morphology restoration comparison system and method for motor vehicle engraving codes


Info

Publication number
CN112907973B
Authority
CN
China
Prior art keywords: code, engraving, image, etching, module
Prior art date
Legal status
Active
Application number
CN202110070353.9A
Other languages
Chinese (zh)
Other versions
CN112907973A
Inventor
张洪斌
刘伟
陈代斌
康博文
李�学
张亮
杨文星
缑柏虎
Current Assignee
Sichuan Stardon Technology Co ltd
Original Assignee
Sichuan Stardon Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Sichuan Stardon Technology Co ltd
Priority to CN202110070353.9A
Publication of CN112907973A
Application granted
Publication of CN112907973B
Anticipated expiration

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175 Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/324 Colour aspects
    • H04N2013/0074 Stereoscopic image analysis
    • H04N2013/0077 Colour aspects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a system and method for high-precision complete information acquisition and real 3D morphology restoration and comparison of motor vehicle engraving codes. The system comprises an engraving-code complete information acquisition front end, an engraving-code 3D vision service cluster, a vehicle inspection management system and an inspection audit terminal, all connected through a network. The acquisition front end consists of a mobile intelligent terminal and an acquisition device joined into one unit; the acquisition device captures the original high-resolution color appearance of the engraving code together with high-density, high-precision 3D surface-shape information. The engraving-code 3D vision service cluster provides back-end support for the acquisition front end. The method realizes the core functions of real-time intelligent image recognition of the complete engraving-code data, real 3D morphology reconstruction, comparison against historical 3D morphology and generation of a 1:1 full-size restored image, supporting high-fidelity morphology restoration and all-round intelligent verification of motor vehicle engraving codes.

Description

High-precision complete information acquisition and real 3D morphology restoration comparison system and method for motor vehicle engraving codes
Technical Field
The invention relates to the technical field of motor vehicle information acquisition, and in particular to a system and method for high-precision complete information acquisition and real 3D morphology restoration and comparison of motor vehicle engraving codes.
Background
Motor vehicle engraving codes mainly comprise the vehicle identification number and the engine number. The vehicle identification number (VIN) is a group of seventeen letters and digits that uniquely identifies each vehicle; the manufacturer usually stamps it on the vehicle frame in the engine compartment, under the driver's seat, or on the side of the chassis. The VIN accompanies the whole life cycle of the vehicle: manufacturing, registration, circulation (transit, transfer of ownership, etc.), safety technical inspection, and finally supervised scrapping are all keyed to the VIN. The engine number, similar to the VIN, is the unique identifier of the engine; it is stamped on the internal-combustion engine or electric motor of the vehicle so that the engine can be traced, and it serves as an important basis during engine maintenance and replacement, motor vehicle inspection and scrapping supervision.
Because there are many manufacturers of motor vehicles and engines, differences in design and stamping methods give the characters of engraving codes a wide variety of sizes, fonts and arrangements. In addition, random factors in the processing equipment and the stamping process introduce slight differences in notch depth and the like between individual codes stamped by the same manufacturer, so that a motor vehicle engraving code is practically impossible to reproduce exactly. For this reason, engraving codes must be strictly collected, recorded, compared and verified during production and use, and they have become a key element in motor vehicle control.
China has a huge and still rapidly growing stock of motor vehicles, which confronts local vehicle administration departments with great difficulties and challenges. In the existing vehicle inspection process, the engraving code is collected by the traditional manual paper rubbing and then compared with the rubbing kept in the vehicle's history file. The drawbacks are: in business operation, vehicle inspection is inefficient and costly; in terms of information integrity, the rubbing loses the color and surface-shape information of the code; in terms of file management, paper rubbings hinder durable, electronic archives, the rubbed marks blur with age, and paper files must be physically transferred for review when business is handled, which greatly obstructs handling such business online.
With the spread of camera-equipped inspection PDAs, photographing the engraving code has become routine; electronic pictures preserve visual information such as color and are convenient to archive, transmit and consult. However, because hand-held shots are taken at varying distances and angles, the dimensional information of the code is lost and geometric distortion is introduced, so PDA photography can only be used as a complement to the paper rubbing.
In recent years, VIN image restoration devices based on line structured light or simple depth acquisition have appeared. They project and capture a few structured-light lines and estimate the VIN surface-shape parameters from a standard geometric model. Although the geometric dimensions can be restored to some extent and a printed image similar to a paper rubbing can be output, the depth sampling does not cover the whole actual VIN surface, or the sampling grid is too sparse, so a high-precision real 3D complete morphology cannot be obtained.
In addition, current engraving-code comparison methods remain confined to the habitual two-dimensional comparison of traditional rubbings: a 1:1 full-size restored image is output as a substitute for the rubbing and compared either on paper or by superimposing two-dimensional electronic images. In reality this suffers from the geometric distortion caused by forcibly flattening a non-developable curved surface (such as a spherical surface); at the same time the complete morphology data acquired in 3D are not fully exploited for all-round verification, so the compared information is incomplete and problems such as image forgery cannot be effectively detected and prevented.
Disclosure of Invention
The object of the invention is to provide a system and method for high-precision complete information acquisition and real 3D morphology restoration and comparison of motor vehicle engraving codes, so as to overcome the problems of VIN acquisition with a camera-equipped PDA and of VIN image restoration based on line structured light or simple depth acquisition.
The invention provides a system for high-precision complete information acquisition and real 3D morphology restoration and comparison of motor vehicle engraving codes, comprising an engraving-code complete information acquisition front end, an engraving-code 3D vision service cluster, a vehicle inspection management system and an inspection audit terminal, which are connected through a network;
the engraving-code complete information acquisition front end comprises a mobile intelligent terminal and an acquisition device joined into one unit; the mobile intelligent terminal controls the acquisition device to acquire the high-precision complete information of the engraving code of the motor vehicle to be inspected and sends it to the engraving-code 3D vision service cluster;
the engraving-code 3D vision service cluster analyzes and processes the high-precision complete information to obtain the engraving-code visual analysis result and the engraving-code visual analysis data; the visual analysis result is returned to the acquisition front end for confirmation;
the vehicle inspection management system archives, as business data, the visual analysis result confirmed by the acquisition front end;
the inspection audit terminal retrieves the business data containing the visual analysis result from the vehicle inspection management system, retrieves the visual analysis data through the engraving-code query service module of the 3D vision service cluster and audits it, and, according to the audit result, prints the related business record form carrying the engraving-code 1:1 full-size restored image to complete the inspection business.
Further, the acquisition device comprises a device upper housing, a device lower housing and an internal control board; the internal control board is arranged in the cavity formed when the upper and lower housings are fitted together;
the device upper housing carries a horizontal grip area, a vertical grip area and a camera area; the camera area contains an adjustable uniform soft fill-light module, a spectrum-fusion 3D module and a high-resolution RGB color module;
the device lower housing carries a clamping structure, a terminal connection cable and a charging opening; the clamping structure clamps the mobile intelligent terminal; the terminal connection cable connects to the mobile intelligent terminal;
the internal control board carries a state control switch, an interface chip and a storage-and-control chip connected in series, a terminal-cable interface and an external charging interface connected to the state control switch, a 3D module interface and a color module interface connected to the interface chip, and a fill-light module interface connected to the storage-and-control chip. The terminal-cable interface is connected to the mobile intelligent terminal through the terminal connection cable, which passes from the inside to the outside of the acquisition device; the external charging interface connects an external charger through the charging opening; the fill-light module interface, the 3D module interface and the color module interface are connected through internal cables to the adjustable uniform soft fill-light module, the spectrum-fusion 3D module and the high-resolution RGB color module respectively. The state control switch is a double-throw toggle switch used to switch the acquisition device between the engraving-code visual information acquisition state and the mobile-terminal charging state.
Further, the spectrum-fusion 3D module comprises a first pan-spectrum camera, a second pan-spectrum camera and an infrared laser area-array projector, together with a 3D image control chip connected to all three. The first and second pan-spectrum cameras form a binocular stereo vision structure, have at least high-definition resolution, and are separated by a baseline longer than 40 mm. The 3D image control chip provides hardware-synchronized control and synchronous image acquisition for the first pan-spectrum camera, the second pan-spectrum camera and the infrared laser area-array projector. The two cameras, the projector and the 3D image control chip are integrated on a 3D acquisition PCB, which connects them to the 3D module interface of the internal control board.
Further, the high-resolution RGB color module comprises a dedicated close-up lens, a high-resolution sensor chip, and a color master control chip connected to both; the lens, sensor chip and master control chip are integrated on an RGB color PCB, which connects them to the color module interface of the internal control board.
Further, the 3D image control chip and the color master control chip each provide exposure-synchronization signal input/output pins; the corresponding pins of the two chips are connected by a signal line, realizing hardware-synchronized acquisition between the spectrum-fusion 3D module and the high-resolution RGB color module.
Further, the adjustable uniform soft fill-light module comprises several soft-light beads; the beads are integrated on a fill-light PCB, which connects them to the fill-light module interface, and all the beads are controlled by coded signals sent over the same serial signal line from the storage-and-control chip.
Further, the fill-light PCB of the adjustable uniform soft fill-light module, the 3D acquisition PCB of the spectrum-fusion 3D module and the RGB color PCB of the high-resolution RGB color module are all mounted on the same shaped metal mounting plate; this plate is fixed in the cavity formed when the device upper housing and the device lower housing are fitted together.
Further, the clamping structure is an adjustable clamping structure comprising a first protective wing, a second protective wing, an adjusting screw and an end knob; the first and second protective wings are threaded onto the adjusting screw, one end of which is fixed to the end knob; turning the end knob rotates the adjusting screw and adjusts the distance between the two protective wings.
Further, the engraving-code 3D vision service cluster consists of several GPU servers and comprises a load-balancing scheduling module, an engraving-code image intelligent recognition module, an engraving-code real 3D morphology restoration module, an engraving-code historical information data access module, an engraving-code real 3D morphology comparison module, an engraving-code 1:1 full-size restored image generation module and an engraving-code query service module.
The invention further provides a method for high-precision complete information acquisition and real 3D morphology restoration and comparison of motor vehicle engraving codes, comprising the following steps:
S1. An inspection operator uses the engraving-code complete information acquisition front end to capture high-precision complete information images of the engraving code of the motor vehicle to be inspected on site, and uploads the characteristic parameters of the acquisition device contained in the front end together with the captured images, as the engraving-code high-precision complete information data, to the engraving-code 3D vision service cluster. The high-precision complete information images comprise a high-resolution color image and a corresponding high-definition binocular pan-spectrum image pair;
S2. The engraving-code image intelligent recognition module of the 3D vision service cluster analyzes the high-resolution color image to obtain the actual engraving-code image region, the specific text content and the corresponding character-segmentation connected domains;
S3. The engraving-code real 3D morphology restoration module of the cluster analyzes the high-resolution color image and the high-definition binocular pan-spectrum image pair and reconstructs the restored engraving-code real 3D morphology model and the corresponding engraving-code ideal surface-shape parameter model;
S4. According to the specific text content output in step S2, the historical engraving-code real 3D morphology model is retrieved from the engraving-code historical information data access module of the cluster;
S5. The engraving-code real 3D morphology comparison module of the cluster compares the real 3D morphology model restored in step S3 with the historical real 3D morphology model retrieved in step S4, and returns the 3D morphology comparison result;
S6. The engraving-code 1:1 full-size restored image generation module of the cluster processes the real 3D morphology model restored in step S3 and its corresponding ideal surface-shape parameter model to generate the engraving-code 1:1 full-size restored image;
S7. The 3D vision service cluster returns the engraving-code visual analysis result to the acquisition front end and stores the engraving-code visual analysis data. The visual analysis result comprises the specific text content, the 3D morphology comparison result and the 1:1 full-size restored image; the visual analysis data comprise the analysis timestamp, the visual analysis result, the restored real 3D morphology model, the corresponding ideal surface-shape parameter model and the high-precision complete information of the engraving code;
S8. The acquisition front end receives and displays the visual analysis result returned by the cluster; after confirmation by the inspection operator, it is submitted to the vehicle inspection management system for business data archiving;
S9. Inspection audit staff use the inspection audit terminal to retrieve the business data containing the visual analysis result from the vehicle inspection management system, read and audit the visual analysis data through the engraving-code query service module of the cluster, and, according to the audit result, print the related business record form carrying the engraving-code 1:1 full-size restored image to complete the inspection business.
Further, the acquisition device contained in the acquisition front end used in step S1 is calibrated once after production and assembly, as follows:
S111. Calibrate the binocular structure formed by the first and second pan-spectrum cameras, obtaining the distortion coefficients and intrinsic parameters of the two cameras and the extrinsic pose parameters between them;
S112. Calibrate the binocular structure formed by the first pan-spectrum camera and the high-resolution RGB color module, obtaining the distortion coefficients and intrinsic parameters of the color module and the extrinsic pose parameters between the first pan-spectrum camera and the color module;
S113. Encode the obtained camera-system parameters and the serial number of the acquisition device in a unified format, generate parameter verification information, form the characteristic parameters of the acquisition device, and write them into the storage-and-control chip of the acquisition device. The camera-system parameters comprise the distortion coefficients and intrinsic parameters of the first and second pan-spectrum cameras and of the high-resolution RGB color module, the extrinsic pose parameters between the two pan-spectrum cameras, and the extrinsic pose parameters between the first pan-spectrum camera and the color module. (An illustrative calibration sketch follows.)
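As a concrete illustration, the binocular calibration of S111 and S112 can be carried out with standard chessboard-based routines; the following Python sketch uses OpenCV and is only a minimal example under assumed conditions (a chessboard target, grayscale image pairs). The parameter-packing format and checksum of S113 are likewise illustrative assumptions, since the patent does not specify an encoding.

    import json, hashlib
    import numpy as np
    import cv2

    def calibrate_stereo(img_pairs, board=(9, 6), square=5.0):
        """Calibrate one binocular structure (S111 or S112) from chessboard
        image pairs; `square` is the chessboard square size in mm."""
        objp = np.zeros((board[0] * board[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
        obj_pts, pts_a, pts_b = [], [], []
        for im_a, im_b in img_pairs:
            ok_a, c_a = cv2.findChessboardCorners(im_a, board)
            ok_b, c_b = cv2.findChessboardCorners(im_b, board)
            if ok_a and ok_b:
                obj_pts.append(objp); pts_a.append(c_a); pts_b.append(c_b)
        size = img_pairs[0][0].shape[::-1]
        # intrinsic parameters and distortion coefficients of each camera
        _, K1, D1, _, _ = cv2.calibrateCamera(obj_pts, pts_a, size, None, None)
        _, K2, D2, _, _ = cv2.calibrateCamera(obj_pts, pts_b, size, None, None)
        # extrinsic pose (R, T) of the second camera relative to the first
        _, K1, D1, K2, D2, R, T, _, _ = cv2.stereoCalibrate(
            obj_pts, pts_a, pts_b, K1, D1, K2, D2, size,
            flags=cv2.CALIB_FIX_INTRINSIC)
        return K1, D1, K2, D2, R, T

    def pack_device_parameters(device_sn, params):
        """S113: bundle the camera-system parameters with the device serial
        number and a verification digest (the format is an assumption)."""
        payload = json.dumps(
            {"sn": device_sn,
             "params": {k: np.asarray(v).tolist() for k, v in params.items()}},
            sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        return json.dumps({"payload": payload, "sha256": digest}).encode()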
Further, in step S1, the on-site acquisition of the high-precision complete information of the engraving code of the motor vehicle to be inspected with the acquisition front end comprises the following sub-steps:
S121. The inspection operator holds the acquisition front end in one hand and opens the engraving-code acquisition APP pre-installed on the mobile intelligent terminal;
S122. The APP automatically loads and verifies the characteristic parameters stored in the acquisition device, and proceeds to step S123 once verification succeeds;
S123. The APP automatically starts continuous synchronized exposure and acquisition by the high-resolution RGB color module and the spectrum-fusion 3D module, and processes the frames in real time to generate a live video preview. In the preview, the high-resolution color image from the RGB color module is displayed in real time on the touch screen of the mobile intelligent terminal; meanwhile, the high-definition binocular pan-spectrum image pair from the spectrum-fusion 3D module undergoes real-time structured-light spot feature extraction, binocular feature-point matching, triangulated point-cloud generation, coordinate transformation from the first pan-spectrum camera to the high-resolution RGB color module, and perspective projection into a depth map, which is superimposed on the color image;
S124. The operator adjusts the shooting angle and the fill-light illumination, and taps the shooting button in the APP to complete a synchronized snapshot of the engraving-code high-precision complete information images;
S125. The operator taps the visual analysis button in the APP, and the engraving-code high-precision complete information data are uploaded over the wireless network to the engraving-code 3D vision service cluster for analysis. (The preview path of S123 is sketched below.)
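The preview path described in S123 (triangulating the matched structured-light points, converting them into the RGB module's coordinate system and projecting them into a depth map) can be sketched as follows. The function and parameter names are assumptions; P1 and P2 stand for the projection matrices of the two pan-spectrum cameras, and R_c, t_c for the calibrated pose of the RGB module relative to the first pan-spectrum camera.

    import numpy as np
    import cv2

    def preview_depth_map(pts_cam1, pts_cam2, P1, P2, R_c, t_c, K_rgb, rgb_shape):
        """Triangulate matched spot points (2xN arrays), move them into the
        RGB module's frame and splat them into a sparse depth map that the
        APP can blend over the live color preview."""
        X_h = cv2.triangulatePoints(P1, P2, pts_cam1, pts_cam2)
        X = (X_h[:3] / X_h[3]).T                     # Nx3, first-camera frame
        X_rgb = X @ np.asarray(R_c).T + np.asarray(t_c).reshape(1, 3)
        uv = (np.asarray(K_rgb) @ X_rgb.T).T         # perspective projection
        uv = uv[:, :2] / uv[:, 2:3]
        depth = np.zeros(rgb_shape[:2], np.float32)
        for (u, v), z in zip(uv.astype(int), X_rgb[:, 2]):
            if z > 0 and 0 <= v < depth.shape[0] and 0 <= u < depth.shape[1]:
                depth[v, u] = z                      # metric depth value
        return depth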
Further, step S2 comprises the following sub-steps:
S21. Call the engraving-code intelligent detection algorithm to segment candidate engraving-code region images from the high-resolution color image. The algorithm first preprocesses the color image; it then feeds the image into the motor vehicle engraving-code detection model, which detects engraving-code image regions and the corresponding character-segmentation connected domains; it then screens the detected regions against size, aspect-ratio and position thresholds to obtain the candidate engraving-code region images. The detection model is trained with an artificial-intelligence algorithm and can detect motor vehicle engraving codes of various shapes;
S22. Call the engraving-code intelligent recognition algorithm to identify, from the candidate region images, the actual engraving-code image region, the specific text content and the corresponding character-segmentation connected domains. The algorithm first applies tilt correction and size normalization to the candidate region images; the motor vehicle engraving-code recognition model then recognizes the specific text content; finally, the recognition results are screened and merged using prior knowledge of engraving codes and the positional relations of the candidate regions, yielding the actual engraving-code image region, the specific text content and the corresponding character-segmentation connected domains. The recognition model is trained with an artificial-intelligence algorithm and can recognize various fonts against different backgrounds. (A skeleton of steps S21 and S22 is sketched below.)
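A skeleton of the S21/S22 pipeline is given below. The detection and recognition networks themselves are the patent's trained models and are represented here only by placeholder objects with an assumed predict() interface; the preprocessing, thresholds and VIN-length prior are illustrative.

    import cv2

    def detect_and_recognize(color_img, det_model, rec_model,
                             min_area=2000, aspect_range=(3.0, 30.0)):
        """Skeleton of S21 (detect candidate regions) and S22 (recognize text
        and merge results); det_model / rec_model are hypothetical stand-ins
        for the trained detection and recognition models."""
        # S21: preprocess, detect code regions and character blobs, screen them
        gray = cv2.cvtColor(color_img, cv2.COLOR_BGR2GRAY)
        norm = cv2.equalizeHist(gray)
        boxes, char_blobs = det_model.predict(norm)
        candidates = []
        for (x, y, w, h), blobs in zip(boxes, char_blobs):
            aspect = w / max(h, 1)
            if w * h >= min_area and aspect_range[0] <= aspect <= aspect_range[1]:
                candidates.append(((x, y, w, h), blobs))
        # S22: deskew/normalize each candidate, recognize, then screen and merge
        results = []
        for (x, y, w, h), blobs in candidates:
            roi = cv2.resize(color_img[y:y + h, x:x + w], (512, 64))
            text, conf = rec_model.predict(roi)
            results.append({"bbox": (x, y, w, h), "text": text,
                            "conf": conf, "char_blobs": blobs})
        # prior knowledge: a VIN has 17 characters
        results = [r for r in results if len(r["text"]) == 17 or r["conf"] > 0.9]
        return max(results, key=lambda r: r["conf"]) if results else None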
Further, step S3 comprises the following sub-steps:
S31. Call the 3D spatial reconstruction algorithm to process the high-definition binocular pan-spectrum image pair and reconstruct a high-density three-dimensional point cloud around the engraving code. The algorithm comprises the following sub-steps:
(1) According to the camera distortion model and the calibrated distortion parameters of the first and second pan-spectrum cameras, apply distortion correction to the binocular pan-spectrum image pair, obtaining an undistorted pan-spectrum image pair free of radial and tangential distortion;
(2) On the undistorted pan-spectrum image pair, use a spot-detection operator to detect the fixed-pattern artificial laser spot array projected onto the engraved surface by the infrared laser area-array projector, and locate the spots to sub-pixel coordinates; match and screen the binocular feature points using the epipolar geometric constraint between the first and second pan-spectrum cameras and the laser-spot feature descriptors; then apply the triangulation principle to obtain a first 3D point cloud that covers the whole space around the engraving code, with the first pan-spectrum camera coordinate system as the world coordinate system, i.e. the overall three-dimensional point cloud of the space around the engraving code;
(3) On the undistorted pan-spectrum image pair, use a feature-point detection operator to detect feature points comprising the strokes of the engraving-code characters and the inherent textures of the surrounding surface, and express them with feature descriptors; complete the matching of these visible-light feature points by combining the epipolar geometric constraint between the two pan-spectrum cameras, the feature descriptors and the positional-order constraint relative to the detected laser spots; then apply the triangulation principle to obtain a second 3D point cloud that embodies the visible features of the engraving code, with the first pan-spectrum camera coordinate system as the world coordinate system, i.e. the three-dimensional point cloud of spatial features around the engraving code;
(4) Merge the first and second 3D point clouds, storing the point data in ascending order of Y and then X coordinates, to obtain a third 3D point cloud densely covering the surface near the engraving code, i.e. the high-density three-dimensional point cloud around the engraving code. (Sub-steps (1) and (2) are sketched below.)
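The following sketch illustrates sub-steps (1) and (2) of S31 with OpenCV: undistortion, laser-spot detection, epipolar-constrained matching and triangulation. Using SimpleBlobDetector as the spot-detection operator and a 1.5-pixel epipolar tolerance are assumptions made for the example; the patent does not fix these choices.

    import numpy as np
    import cv2

    def reconstruct_spot_cloud(img1, img2, K1, D1, K2, D2, R, T):
        """S31 (1)-(2): undistort the pan-spectrum pair, detect laser spots,
        match them under the epipolar constraint and triangulate a first
        3D point cloud in the first pan-spectrum camera's frame."""
        T = np.asarray(T, np.float64).ravel()
        u1, u2 = cv2.undistort(img1, K1, D1), cv2.undistort(img2, K2, D2)
        blob = cv2.SimpleBlobDetector_create()        # spot-detection operator
        p1 = np.array([k.pt for k in blob.detect(u1)], np.float32)
        p2 = np.array([k.pt for k in blob.detect(u2)], np.float32)
        # fundamental matrix from calibration: F = K2^-T [T]x R K1^-1
        Tx = np.array([[0, -T[2], T[1]], [T[2], 0, -T[0]], [-T[1], T[0], 0]])
        F = np.linalg.inv(K2).T @ Tx @ R @ np.linalg.inv(K1)
        pairs = []
        for x1 in p1:
            line = F @ np.array([x1[0], x1[1], 1.0])  # epipolar line in image 2
            d = np.abs(p2 @ line[:2] + line[2]) / np.hypot(line[0], line[1])
            j = int(np.argmin(d))
            if d[j] < 1.5:                            # pixel tolerance (assumed)
                pairs.append((x1, p2[j]))
        if not pairs:
            return np.empty((0, 3))
        a = np.array([p for p, _ in pairs], np.float32).T
        b = np.array([q for _, q in pairs], np.float32).T
        P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K2 @ np.hstack([R, T.reshape(3, 1)])
        X = cv2.triangulatePoints(P1, P2, a, b)
        return (X[:3] / X[3]).T                       # Nx3 point cloud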
S32. Call the 3D morphology reconstruction algorithm, which, starting from the third 3D point cloud and combining the high-resolution color image information, restores the engraving-code real 3D morphology model and the corresponding engraving-code ideal surface-shape parameter model. The algorithm comprises the following sub-steps:
(1) According to the extrinsic pose parameters between the first pan-spectrum camera and the high-resolution RGB color module, transform the third 3D point cloud into a fourth 3D point cloud whose world coordinate system is the coordinate system of the high-resolution RGB color module;
(2) According to the camera distortion model and the calibrated distortion parameters of the high-resolution RGB color module, apply distortion correction to the high-resolution color image of the engraving code, obtaining an undistorted high-resolution color image free of radial and tangential distortion;
(3) Apply the same distortion correction to the character-segmentation connected domains output in step S2, obtaining undistorted character-segmentation connected domains;
(4) Using the camera model and the calibrated intrinsic parameters of the high-resolution RGB color module, project the fourth 3D point cloud onto the undistorted high-resolution color image; the 3D points that fall within the image form a fifth 3D point cloud, and the 3D points that fall within the undistorted character-segmentation connected domains form a sixth 3D point cloud, i.e. the 3D point cloud of the engraving-code characters;
(5) Using the sixth 3D point cloud as the initial point set, grow it within the fifth 3D point cloud under spatial-proximity and surface-smoothness constraints, generating a continuous, smooth and stable seventh 3D point cloud, i.e. the 3D point cloud of the surface on which the engraving code lies;
(6) Mesh the seventh 3D point cloud to generate a first 3D surface model, i.e. the initial mesh model of the surface on which the engraving code lies;
(7) Fit the seventh 3D point cloud with ideal surface models and, according to the degree of fit, determine the ideal surface type of the surface on which the engraving code lies, obtaining the corresponding first ideal surface-shape parameter model;
(8) Using the first ideal surface-shape parameter model, interpolate interior points in the coarser meshes of the first 3D surface model, adding mesh vertices and refining the mesh to obtain a second 3D surface model, i.e. the refined mesh model of the surface on which the engraving code lies;
(9) Project all mesh vertices of the second 3D surface model onto the undistorted high-resolution color image and divide the image by the fine planar mesh formed by the projected points, obtaining a set of high-resolution color image patches;
(10) Texture-map the second 3D surface model with the color image patch set, obtaining a first 3D morphology model that uses the high-resolution RGB color module coordinate system as the world coordinate system and carries both complete XYZ geometry and fine RGB color appearance;
(11) Generate the engraving-code real 3D morphology model, referenced to the engraving code itself, and the corresponding engraving-code ideal surface-shape parameter model: first compute the minimum bounding box of the sixth 3D point cloud; then take the center of the bounding box, its long-axis direction, the first short axis corresponding to character height, and the second short axis corresponding to character depth as, respectively, the origin, horizontal X axis, vertical Y axis and depth Z axis of a new 3D coordinate system, and transform the first 3D morphology model and the first ideal surface-shape parameter model into this system, obtaining a second 3D morphology model and a corresponding second ideal surface-shape parameter model in which differences of shooting angle and distance have been eliminated. The second 3D morphology model is the engraving-code real 3D morphology model, and the second ideal surface-shape parameter model is the engraving-code ideal surface-shape parameter model. (Sub-steps (4) and (11) are sketched below.)
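Two pieces of S32 are sketched below: sub-step (4), which projects the point cloud into the undistorted color image and keeps the points falling inside the image and inside the character-segmentation mask, and sub-step (11), for which a PCA-derived frame is used here only as a simple stand-in for the minimum-bounding-box axes described in the patent.

    import numpy as np
    import cv2

    def select_code_points(cloud_rgb, K_rgb, color_shape, text_mask):
        """S32 (4): project the fourth cloud (already in the RGB module frame)
        onto the undistorted color image; points inside the image form the
        fifth cloud, points inside the character mask the sixth cloud."""
        uv, _ = cv2.projectPoints(cloud_rgb.astype(np.float32),
                                  np.zeros(3), np.zeros(3), K_rgb, None)
        uv = uv.reshape(-1, 2)
        h, w = color_shape[:2]
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        fifth = cloud_rgb[inside]
        iu, iv = uv[inside].astype(int).T
        sixth = fifth[text_mask[iv, iu] > 0]          # inside the text blobs
        return fifth, sixth

    def code_centric_frame(sixth_cloud):
        """S32 (11), approximated: derive an object-centered coordinate frame
        from the character point cloud via PCA (long axis, height, depth)."""
        c = sixth_cloud.mean(axis=0)
        _, _, Vt = np.linalg.svd(sixth_cloud - c, full_matrices=False)
        return Vt, c            # transform a point x by Vt @ (x - c)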
Further, step S5 comprises the following sub-steps:
S51. For the restored and the historical engraving-code real 3D morphology models, generate the corresponding orthographic-projection original RGB images of the morphology samples;
S52. Convert the restored and historical orthographic-projection RGB images to grayscale, extract and match orthographic-projection feature points, and obtain the initial set of matched point pairs between the restored and historical projections;
S53. Back-project the orthographic matching points to recover the 3D surface coordinates corresponding to each matched image point;
S54. Register and align the two real 3D morphology models based on the 3D surface coordinates of the orthographic matching points;
S55. Compare the consistency of the 3D surfaces between the registered and aligned real 3D morphology models;
S56. Compare the consistency of appearance between the registered and aligned real 3D morphology models. (Registration and surface comparison are sketched below.)
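For the registration of S54 and the surface-consistency check of S55, one concrete realization is a Kabsch rigid alignment over the back-projected matching points followed by a nearest-neighbor deviation ratio. The patent does not prescribe a particular solver or metric, so the tolerance below is an illustrative value.

    import numpy as np

    def rigid_align(P, Q):
        """Kabsch alignment of two ordered 3D point sets (the back-projected
        matching points of S53): returns R, t with Q ~= P @ R.T + t."""
        cp, cq = P.mean(0), Q.mean(0)
        H = (P - cp).T @ (Q - cq)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        return R, cq - R @ cp

    def surface_consistency(restored_pts, hist_pts, R, t, tol=0.05):
        """S55-style check: after alignment, the fraction of restored vertices
        whose nearest historical vertex lies within `tol` (mm, illustrative)."""
        aligned = restored_pts @ R.T + t
        # brute-force nearest neighbor; a KD-tree would be used at scale
        d = np.sqrt(((aligned[:, None, :] - hist_pts[None, :, :]) ** 2).sum(-1))
        return float((d.min(axis=1) < tol).mean())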
Further, step S6 comprises the following sub-steps:
S61. Analyze the restored orthographic-projection original RGB image to determine the ideal surface region corresponding to the engraving code;
S62. Mesh the ideal surface region at uniform physical length intervals, generating the set of coordinate mappings between the two-dimensional unfolded plane and the three-dimensional ideal surface;
S63. Based on this coordinate mapping set, sample and project the engraving-code real 3D morphology model to generate the engraving-code two-dimensional unfolded image;
S64. Geometrically correct the two-dimensional unfolded image to generate the engraving-code two-dimensional unfolded corrected image;
S65. Apply print configuration to the corrected image to generate the final engraving-code 1:1 full-size restored image. (The unfolding grid of S62 is sketched below for one ideal surface type.)
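The uniform-physical-interval meshing of S62 is sketched below for one possible ideal surface type, a cylinder; the surface type, the 0.1 mm grid step and the axis convention are assumptions of the example. Sampling the textured model at each mapped 3D node (S63) and the print-geometry correction (S64) depend on the renderer and printer actually used and are only noted in comments.

    import numpy as np

    def cylinder_unwrap_grid(radius, arc_len, height, step=0.1):
        """S62 for a cylindrical ideal surface: build a 2D grid at uniform
        physical spacing `step` (mm) and map every grid node to a 3D point on
        the surface, so that printing at `step` mm per pixel gives a 1:1
        full-size image (S65)."""
        u = np.arange(0.0, arc_len, step)            # along the unrolled arc
        v = np.arange(0.0, height, step)             # along the cylinder axis
        uu, vv = np.meshgrid(u, v)
        theta = uu / radius
        xyz = np.stack([radius * np.sin(theta),              # X
                        vv,                                  # Y (axis)
                        radius * (1.0 - np.cos(theta))],     # Z (surface sag)
                       axis=-1)
        return uu, vv, xyz                           # plane coords and 3D mapping

    # S63 would sample the textured real 3D morphology model at each xyz node;
    # S64 would then apply the geometric correction before print configuration.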
Further, before step S2, the load-balancing scheduling module of the engraving-code 3D vision service cluster dynamically schedules the GPU server that will perform the subsequent analysis according to the computing-resource usage of the whole cluster.
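The patent only states that scheduling follows the resource usage of the whole cluster; as a minimal illustration, a new analysis job could simply be routed to the least-loaded GPU server, where the per-server records and the chosen metric are assumptions.

    def pick_gpu_server(servers):
        """Route a job to the node with the lowest GPU utilization and, as a
        tie-breaker, the most free GPU memory (the metric is an assumption)."""
        return min(servers, key=lambda s: (s["gpu_util"], -s["free_mem"]))

    # Example:
    # servers = [{"host": "gpu-01", "gpu_util": 0.72, "free_mem": 10.5},
    #            {"host": "gpu-02", "gpu_util": 0.31, "free_mem": 20.0}]
    # pick_gpu_server(servers)["host"]   # -> "gpu-02"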
In summary, owing to the adoption of the above technical solution, the beneficial effects of the invention are as follows:
1. Under adjustable, controllable and uniform soft fill light, the acquisition device synchronously captures a stereo image pair that fuses the natural visible-light textures of the engraving code with the artificial infrared spot pattern, together with a high-resolution RGB color image. These complete information data contain high-density 3D information covering the whole engraving-code surface as well as high-resolution color features, and provide high-precision data for real 3D morphology reconstruction and comparison of motor vehicle engraving codes and for accurate 1:1 full-size 2D image restoration.
2. By pairing the acquisition device with a mobile intelligent terminal, an ordinary inspection terminal is conveniently and economically upgraded into a portable, integrated engraving-code complete information acquisition front end, which allows the inspection operator to capture the complete engraving-code information on site with one hand, at close range and with high precision, achieving once-and-for-all high-precision digitization of the motor vehicle engraving code.
3. Through the scalable engraving-code 3D vision service cluster architecture, the invention provides back-end support for acquisition front ends across a wide administrative area; through a series of analyses it realizes core functions such as real-time intelligent image recognition of the complete engraving-code information, real 3D morphology reconstruction, comparison against historical 3D morphology and generation of the 1:1 full-size restored image, achieving high-fidelity morphology restoration and all-round intelligent verification of motor vehicle engraving codes.
4. The system and method provide strong technical support for efficient, complete electronic acquisition of engraving codes in the inspection business, for high-fidelity 3D morphology restoration and comparison, and for paperless one-network auditing, and can conveniently and effectively improve the quality and efficiency of motor vehicle inspection.
Drawings
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly described below. It should be understood that the following drawings show only some embodiments of the present invention and should not be regarded as limiting its scope; other related drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic diagram of the system for high-precision complete information acquisition and real 3D morphology restoration and comparison of motor vehicle engraving codes according to embodiment 1 of the present invention.
Fig. 2 is a schematic side view of the acquisition device of embodiment 1 of the present invention.
Fig. 3 is a schematic top view of the device upper housing of the acquisition device in embodiment 1 of the present invention.
Fig. 4 is a schematic block diagram of the internal control board circuit of the acquisition device of embodiment 1 of the present invention.
Fig. 5 is a schematic front view of the internal control board of the acquisition device of embodiment 1 of the present invention.
Fig. 6 is a schematic back view of the internal control board of the acquisition device of embodiment 1 of the present invention.
Fig. 7 is a schematic bottom view of the device lower housing of the acquisition device in embodiment 1 of the present invention.
Fig. 8 is a flowchart of the method for high-precision complete information acquisition and real 3D morphology restoration and comparison of motor vehicle engraving codes according to embodiment 2 of the present invention.
Fig. 9 is an example of an engraving-code 1:1 full-size restored image according to embodiment 2 of the present invention.
Reference numerals:
100: device upper housing; 110: camera area; 111: high-resolution RGB color module; 112: first pan-spectrum camera; 113: infrared laser area-array projector; 114: second pan-spectrum camera; 115: first soft-light bead; 116: second soft-light bead; 117: third soft-light bead; 118: fourth soft-light bead; 120: horizontal grip area; 130: vertical grip area;
200: device lower housing; 210: adjustable clamping structure; 211: first protective wing; 212: second protective wing; 213: end knob; 220: heat-dissipation fins; 230: terminal connection cable;
300: internal control board; 310: interface chip; 320: storage-and-control chip; 331: fill-light module interface; 332: 3D module interface; 333: color module interface; 340: terminal-cable interface; 350: external charging interface; 360: state control switch.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments of the present invention, as generally described and illustrated in the figures, may be arranged and designed in a wide variety of different configurations.
Therefore, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the claimed invention but merely represents selected embodiments. All other embodiments obtained by a person skilled in the art on the basis of these embodiments without inventive effort fall within the scope of protection of the invention.
Example 1
Referring to fig. 1, this embodiment provides a system for high-precision complete information acquisition and real 3D morphology restoration and comparison of motor vehicle engraving codes, comprising an engraving-code complete information acquisition front end, an engraving-code 3D vision service cluster, a vehicle inspection management system and an inspection audit terminal, which are connected through a network;
the engraving-code complete information acquisition front end comprises a mobile intelligent terminal and an acquisition device joined into one unit; the mobile intelligent terminal controls the acquisition device to acquire the high-precision complete information of the engraving code of the motor vehicle to be inspected and sends it to the engraving-code 3D vision service cluster;
the engraving-code 3D vision service cluster analyzes and processes the high-precision complete information to obtain the engraving-code visual analysis result and the engraving-code visual analysis data; the visual analysis result is returned to the acquisition front end for confirmation;
the vehicle inspection management system archives, as business data, the visual analysis result confirmed by the acquisition front end;
the inspection audit terminal retrieves the business data containing the visual analysis result from the vehicle inspection management system, retrieves the visual analysis data through the engraving-code query service module of the 3D vision service cluster and audits it, and, according to the audit result, prints the related business record form carrying the engraving-code 1:1 full-size restored image to complete the inspection business.
The high-precision complete information acquisition and real 3D morphology restoration comparison system of this embodiment is described in detail below.
1. Engraving-code complete information acquisition front end
Referring to fig. 2-7, in the engraving-code complete information acquisition front end adopted in this embodiment, the acquisition device comprises a device upper housing 100, a device lower housing 200 and an internal control board 300; the internal control board 300 is arranged in the cavity formed when the device upper housing 100 and the device lower housing 200 are fitted together. Specifically:
(1) Device upper housing 100
Referring to fig. 2 and 3, the device upper housing 100 carries a horizontal grip area 120, a vertical grip area 130 and a camera area 110; the camera area 110 contains the adjustable uniform soft fill-light module, the spectrum-fusion 3D module and the high-resolution RGB color module 111.
the horizontal holding area 120 is located at the right side of the box body 100 on the device in fig. 3, and is shaped as a block-shaped bump, and is similar to a holding handle of a single-lens reflex camera, and is used for realizing that the acquisition device is clamped to a mobile intelligent terminal to firmly hold a single-hand horizontal screen when shooting a marking code of a motor vehicle. Further, the inner edge surface of the transverse holding area 120 is provided with a certain gradient and is provided with a series of grooves, so that the effects of skid resistance and convenience in firm holding by fingers are achieved. When the mobile intelligent terminal is horizontally held, the upper end and the lower end of the horizontal holding area 120 can be held by the index finger and the little finger of a single hand, the middle finger and the ring finger can buckle the inner prismatic surface, and the thumb can conveniently click the touch screen of the mobile intelligent terminal to complete shooting.
The vertical grip area 130 is the lower middle region of the device upper housing 100 in fig. 3 and is used to grip the acquisition device with the fingers when the mobile intelligent terminal clamped to it is held in portrait orientation. The vertical grip area 130 carries a series of raised and recessed stripes and stepped blocks, which prevent slipping and keep the grip stable. When the assembly is held in portrait orientation, the index and middle fingers of one hand grip the vertical grip area 130 and support the center of gravity of the whole acquisition device, the ring finger, little finger and the web of the thumb clamp the device, and the thumb can operate the touch screen of the mobile intelligent terminal.
The camera area 110 is located in the upper left part of the device upper housing 100 in fig. 3. Its main optical components are laid out according to the requirements of photographing motor vehicle engraving codes, so as to achieve optimized fill lighting, projection and multi-camera, multispectral imaging. Specifically:
The spectrum-fusion 3D module comprises a first pan-spectrum camera 112, a second pan-spectrum camera 114 and an infrared laser area-array projector 113, together with a 3D image control chip connected to all three. The first pan-spectrum camera 112 and the second pan-spectrum camera 114 form a binocular stereo vision structure; they are high-definition cameras of the same model with a resolution of at least high definition (i.e. 720P or above, including 720P, 1080P, 2K, 4K, 8K, etc., with an aspect ratio of 3:2, 4:3, 16:9, 16:10, etc., chosen as required), and their photosensitive band covers visible light and the near-infrared laser spectrum, so that they capture feature-rich fused images of visible-light texture and structured-light spots to support the restoration of a dense 3D point cloud. At the same time, the baseline between them is not less than 50 mm, which provides significant parallax when photographing the engraving code and ensures high binocular measurement accuracy. The infrared laser area-array projector 113 projects a fixed-pattern laser dot array into the space in front of the device, adding richer artificial feature points to the original surface of the engraving code. The 3D image control chip provides hardware-synchronized control and synchronous image acquisition for the first pan-spectrum camera 112, the second pan-spectrum camera 114 and the infrared laser area-array projector 113. The two cameras, the projector and the 3D image control chip are integrated on a 3D acquisition PCB, which connects them to the 3D module interface 332 of the internal control board 300.
The high-resolution RGB color module 111 is designed specifically for hand-held, close-range, low-light photography of the engraving code. It comprises a dedicated close-up lens, a high-resolution sensor chip and a color master control chip connected to both; these are integrated on an RGB color PCB, which connects them to the color module interface 333. The dedicated close-up lens provides close-range, high-resolution color focusing: it is an ultra-short-focal-length, large-aperture, low-distortion lens offering high imaging resolution over a depth-of-field range of 9-18 cm, and its infrared cut-off coating gives it a band-pass characteristic for visible light, effectively preventing stray ambient light and the infrared laser from polluting or interfering with the color image. The high-resolution sensor chip has a wide dynamic range and a high signal-to-noise ratio under low illumination, effectively bringing out the visual information of the image, and delivers high-quality color imaging at no less than 5 megapixels. The color master control chip and the RGB color PCB together implement exposure control, ISP processing and image conversion and output for the high-resolution sensor chip.
In particular, between the spectrum-fusion 3D module and the high-resolution RGB color module, the 3D image control chip and the color master control chip each provide exposure-synchronization signal input/output pins; the corresponding pins are connected by a signal line, realizing hardware-synchronized acquisition between the two modules. This guarantees strictly synchronous capture of the color and 3D image frames and avoids the misalignment errors that asynchronous acquisition would cause during hand-held shooting. In this embodiment, the color master control chip receives the exposure-synchronization signal from the spectrum-fusion 3D module through its input pin and uses it to control the exposure start time of the high-resolution sensor chip.
The adjustable uniform soft fill-light module provides fill lighting in specific patterns when ambient illumination is poor or when the stereoscopic impression of the color image needs to be enhanced; it comprises several soft-light beads. The beads are integrated on the fill-light PCB, which connects them to the fill-light module interface 331, and all the beads are controlled by coded signals sent over the same serial signal line from the storage-and-control chip 320. In this embodiment the beads are a first soft-light bead 115, a second soft-light bead 116, a third soft-light bead 117 and a fourth soft-light bead 118. Their surfaces are optically diffused so that the projected light is evenly distributed, effectively avoiding the information loss that local overexposure causes when light is concentrated along a bead's central axis; each bead is a miniature intelligent externally controlled light source integrating a light-emitting circuit and a control circuit. Because the four beads are driven by coded signals on the same serial line from the storage-and-control chip 320, the luminous intensity of each bead can be adjusted independently, producing a specific overall fill-light pattern; this avoids the overexposure that would result from several beads superimposing the same light in the central area and yields an image that best brings out the details of the engraved strokes.
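The patent does not disclose the coding protocol used on the shared serial line, so the following Python sketch only illustrates the idea of independent per-bead dimming over one line with a hypothetical frame format (start byte, one brightness byte per bead, checksum).

    def encode_fill_light_frame(brightness):
        """Hypothetical serial frame for the four soft-light beads: one
        brightness byte per bead, preceded by a start byte and followed by a
        simple checksum (the real bead protocol is not specified)."""
        frame = bytearray([0xAA])                    # start-of-frame marker
        for level in brightness:                     # e.g. [255, 160, 160, 255]
            frame.append(max(0, min(255, int(level))))
        frame.append(sum(frame) & 0xFF)              # checksum byte
        return bytes(frame)

    # A center-weighted dimming pattern that avoids overexposure where the
    # beads' light overlaps: encode_fill_light_frame([255, 160, 160, 255])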
Referring to fig. 3, the soft-light beads are arranged in a straight line; the first pan-spectrum camera 112, the second pan-spectrum camera 114, the infrared laser area-array projector 113 and the high-resolution RGB color module 111 are also arranged in a straight line parallel to the line of beads, providing complete illumination and imaging of the inherently elongated character region of the engraving code. It should be noted that the bead positions are spread as far apart as possible along the arrangement direction to achieve full fill-light coverage of the elongated engraving code, and they are staggered relative to the camera positions as far as possible, which effectively weakens specular reflection from engraving-code surfaces with mirror-like characteristics.
The light supplementing PCB of the adjustable and controllable uniform soft light supplementing module, the 3D acquisition PCB of the spectrum fusion 3D module and the RGB color PCB of the high-resolution RGB color module 111 are all mounted on the same special-shaped metal assembly plate; the special-shaped metal assembly plate is fixed in the cavity formed when the upper box body 100 and the lower box body 200 of the equipment are mated together. Once assembly is complete, the relative physical positions and optical relations of the optical components are fixed, which simplifies assembly of the acquisition device and calibration of the system and ensures the optical stability and high precision of the acquisition device during use.
The top of the image capturing area 110 is covered by a single piece of anti-reflection, high-transmittance protective glass. The protective glass used in this embodiment has a hardness of not less than 9H, effectively protecting the internal optical components, and its double-sided anti-reflection coating gives a light transmittance of not less than 98%, providing excellent light projection and reception. Grilles on both sides of the image capturing area 110 provide good convective heat dissipation for the internal devices.
(2) Internal control board 300
Referring to fig. 4 to 6, the internal control board 300 is provided with a status control switch 360, an interface chip 310 and a storage and control chip 320 connected in series, a terminal connection interface 340 and an external charging interface 350 connected to the status control switch 360, a 3D module interface 332 and a color module interface 333 connected to the interface chip 310, and a light supplementing module interface 331 connected to the storage and control chip 320. The terminal connection interface 340 is connected with the mobile intelligent terminal through a terminal connection line that passes between the inside and the outside of the acquisition device; the external charging interface 350 is used to connect an external charger through a charging opening; the light supplementing module interface 331, the 3D module interface 332 and the color module interface 333 are correspondingly connected with the adjustable and controllable uniform soft light supplementing module, the spectrum fusion 3D module and the high-resolution RGB color module 111 through internal connecting cables; the state control switch 360 is a double-throw toggle switch used to switch the acquisition device between the engraving code visual information acquisition state and the mobile intelligent terminal charging state.
In other words, the internal control board 300 provides the electrical connection, parameter storage and image-capture control between the externally connected mobile intelligent terminal and the main optical components integrated in the image capturing area 110 of the acquisition device (namely the adjustable and controllable uniform soft light supplementing module, the spectrum fusion 3D module and the high-resolution RGB color module 111), so that the mobile intelligent terminal can operate the acquisition device to perform high-precision complete information acquisition of the motor vehicle engraving code. At the same time, the status control switch 360 on the internal control board 300 switches the acquisition device between the working mode and the charging mode. Specifically, when switched to the working mode, the mobile intelligent terminal is connected to the interface chip 310 through the terminal connection line 230 and the terminal connection interface 340 (the end of the terminal connection line 230 that plugs into the mobile intelligent terminal may be a Type-C, Micro USB or Lightning connector, so that various types of mobile intelligent terminals are supported), and the acquisition device can then be controlled by the mobile intelligent terminal to acquire engraving code visual information. When switched to the charging mode, the mobile intelligent terminal is connected through the terminal connection line 230 and the terminal connection interface 340 to the external charging interface 350 (for convenience, the external charging interface 350 may likewise be a Type-C, Micro USB or Lightning connector, so that the charger supplied with the mobile intelligent terminal, such as a smartphone charger, can be used directly), and the mobile intelligent terminal is charged. The state control switch 360 therefore allows the mobile intelligent terminal to be charged without unplugging the terminal connection line 230, avoiding the connector wear caused by repeated plugging and unplugging.
(3) Lower box 200 of equipment
Referring to fig. 7, the lower case 200 of the apparatus is provided with a clamping structure 210, a terminal connection line 230 and a charging opening; the clamping structure 210 is used for clamping the mobile intelligent terminal, and the terminal connection line 230 is used for connecting to the mobile intelligent terminal. Referring again to fig. 3 and 7, the lower case 200 of the apparatus is provided with mounting screw holes for mating with the upper case 100 of the apparatus, so that the two form a well-sealed enclosure protecting the internal optical components and circuit board wiring.
As a preferred arrangement, the clamping structure 210 is an adjustable clamping structure, so that the acquisition device can be firmly clamped onto any mobile intelligent terminal (such as various types of PDA or mobile phone) to form a portable integrated acquisition front end. The adjustable clamping structure 210 includes a first wing 211, a second wing 212, an adjusting screw and an end knob 213; the first wing 211 and the second wing 212 are in threaded connection with the adjusting screw, and one end of the adjusting screw is fixedly connected with the end knob 213. The end knob 213 is used to adjust the distance between the first wing 211 and the second wing 212 by rotating the adjusting screw, so that the acquisition device clamps firmly onto the two long sides of mobile intelligent terminals of different sizes. Preferably, the inner sides of the first wing 211 and the second wing 212 are slightly inclined inwards and fitted with rubber pads, so that the mobile intelligent terminal is held firmly and safely.
To achieve good heat dissipation, the lower box 200 of the device is provided with heat dissipation fins 220; the heat dissipation fins 220 are distributed in parallel on both sides of the adjustable clamping structure 210, which increases the surface heat dissipation area of the acquisition device for conducting away the heat of the internal devices and also keeps a gap between the acquisition device and the mobile intelligent terminal so that both can dissipate heat naturally.
2. Engraving code 3D vision service cluster
The engraving code 3D vision service cluster consists of a plurality of GPU servers and comprises a load balancing scheduling module, an engraving code image intelligent identification module, an engraving code real 3D morphology restoration module, an engraving code historical information data access module, an engraving code real 3D morphology comparison module, an engraving code 1:1 original size restored image generation module, an engraving code query service module and other core function modules. When the engraving code 3D vision service cluster receives the engraving code high-precision complete information data of the motor vehicle to be checked from the engraving code complete information acquisition front end, the load balancing scheduling module first caches the data and dynamically schedules a GPU server, according to the computing-resource usage across the whole cluster, to carry out the subsequent analysis and processing by the individual function modules.
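As a rough illustration of the scheduling behaviour described above, the following sketch caches incoming jobs and dispatches each one to the GPU server reporting the lowest utilisation. The server interface, utilisation metric and class names are hypothetical and are not part of the cluster's actual API.

# Minimal least-loaded dispatch sketch. The utilisation metric, server list and
# job interface are hypothetical; the text above only states that the scheduler
# caches incoming data and picks a GPU server according to cluster-wide load.
from dataclasses import dataclass, field
from collections import deque

@dataclass
class GpuServer:
    name: str
    gpu_utilisation: float  # 0.0 .. 1.0, assumed to be reported by the server

@dataclass
class LoadBalancer:
    servers: list[GpuServer]
    pending: deque = field(default_factory=deque)  # cache for incoming jobs

    def submit(self, job) -> None:
        self.pending.append(job)

    def dispatch(self):
        """Pop one cached job and assign it to the least-loaded GPU server."""
        if not self.pending:
            return None
        job = self.pending.popleft()
        target = min(self.servers, key=lambda s: s.gpu_utilisation)
        return target.name, job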
3. Checking vehicle management system
The checking vehicle management system is the core service system of the motor vehicle checking application scenario. It consists of an application server and a database server and is provided with a Web service module, a motor vehicle information and service data access module, a service handling module, a supervision and audit module and other service modules, so as to archive, as service data, the engraving code visual analysis results confirmed at the engraving code complete information acquisition front end.
4. Checking terminal
The checking terminal is a networked host equipped with a display, mouse, keyboard, printer and other peripherals, providing networked query, handling and verification functions for the checking service. It retrieves the archived service data containing the engraving code visual analysis result from the checking vehicle management system, reads and verifies the engraving code visual analysis data through the engraving code query service module of the engraving code 3D vision service cluster, and, according to the verification result, prints the related service record form carrying the engraving code 1:1 original size restored image to complete the verification service.
Example 2
Based on the high-precision complete information acquisition and real 3D morphology restoration comparison system for motor vehicle engraving codes of Embodiment 1, this embodiment provides a high-precision complete information acquisition and real 3D morphology restoration comparison method for motor vehicle engraving codes, see fig. 8, comprising the following steps:
S1, a checking operator uses the engraving code complete information acquisition front end to acquire, on site, engraving code high-precision complete information images of the motor vehicle to be checked, and uploads the characteristic parameters of the acquisition device contained in the engraving code complete information acquisition front end, together with the acquired engraving code high-precision complete information images, as engraving code high-precision complete information data to the engraving code 3D vision service cluster; the engraving code high-precision complete information image comprises a high-resolution color image and a corresponding high-definition binocular pan-spectrum image pair;
First, the acquisition device included in the engraving code complete information acquisition front end used in step S1 is calibrated once after production and assembly are completed. The method for calibrating the acquisition device comprises the following sub-steps:
S111, calibrating the binocular structure formed by the first and second pan-spectrum cameras to obtain the distortion coefficients and internal parameters of the first and second pan-spectrum cameras and the external pose parameters between them;
S112, calibrating the binocular structure formed by the first pan-spectrum camera and the high-resolution RGB color module to obtain the distortion coefficients and internal parameters of the high-resolution RGB color module and the external pose parameters between the first pan-spectrum camera and the high-resolution RGB color module;
S113, uniformly encoding the acquired camera system parameters together with the serial number of the acquisition device, generating parameter verification information, forming the characteristic parameters of the acquisition device, and writing these characteristic parameters into the storage and control chip of the acquisition device; the camera system parameters comprise the distortion coefficients and internal parameters of the first and second pan-spectrum cameras and of the high-resolution RGB color module, the external pose parameters between the first and second pan-spectrum cameras, and the external pose parameters between the first pan-spectrum camera and the high-resolution RGB color module.
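The following is a condensed calibration sketch for sub-steps S111 and S112 using OpenCV, assuming a chessboard calibration target; the board geometry, image file names and tooling are illustrative, since the description above specifies only which parameters are obtained, not how the target is imaged.

# Condensed calibration sketch for S111/S112. Board geometry, image lists and
# the number of captures are illustrative.
import cv2
import numpy as np

PATTERN = (9, 6)      # inner chessboard corners (assumed target)
SQUARE_MM = 5.0       # assumed square size in millimetres

def board_object_points():
    obj = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM
    return obj

def paired_corners(left_paths, right_paths):
    """Keep only frames in which the board is found in both views."""
    objp, left_pts, right_pts, size = [], [], [], None
    for lp, rp in zip(left_paths, right_paths):
        gl = cv2.imread(lp, cv2.IMREAD_GRAYSCALE)
        gr = cv2.imread(rp, cv2.IMREAD_GRAYSCALE)
        okl, cl = cv2.findChessboardCorners(gl, PATTERN)
        okr, cr = cv2.findChessboardCorners(gr, PATTERN)
        if okl and okr:
            objp.append(board_object_points())
            left_pts.append(cl)
            right_pts.append(cr)
            size = gl.shape[::-1]
    return objp, left_pts, right_pts, size

# S111: binocular structure formed by the first and second pan-spectrum cameras
objp, lpts, rpts, size = paired_corners(
    [f"cam1_{i:02d}.png" for i in range(20)],
    [f"cam2_{i:02d}.png" for i in range(20)])
_, K1, D1, _, _ = cv2.calibrateCamera(objp, lpts, size, None, None)
_, K2, D2, _, _ = cv2.calibrateCamera(objp, rpts, size, None, None)
_, K1, D1, K2, D2, R12, T12, E, F = cv2.stereoCalibrate(
    objp, lpts, rpts, K1, D1, K2, D2, size, flags=cv2.CALIB_FIX_INTRINSIC)
# S112 repeats the same procedure for the pair (first pan-spectrum camera,
# high-resolution RGB color module) to obtain its distortion, intrinsics and
# external pose; S113 then serialises all parameters together with the device
# serial number into the storage and control chip.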
In step S1, the method for acquiring, on site, the engraving code high-precision complete information of the motor vehicle to be checked using the engraving code complete information acquisition front end comprises the following sub-steps:
S121, a checking operator holds the engraving code complete information acquisition front end by hand and opens the engraving code acquisition APP pre-installed on the mobile intelligent terminal;
S122, the engraving code acquisition APP automatically loads and verifies the characteristic parameters of the acquisition device stored in the acquisition device, and step S123 is executed after verification succeeds;
S123, the engraving code acquisition APP automatically starts continuous synchronized-exposure acquisition by the high-resolution RGB color module and the spectrum fusion 3D module, and processes the data in real time to generate a corresponding real-time video stream preview; the real-time video stream preview means that, during acquisition, the high-resolution color image from the high-resolution RGB color module is displayed in real time on the touch screen of the mobile intelligent terminal, while the high-definition binocular pan-spectrum image pair from the spectrum fusion 3D module undergoes real-time structured-light spot feature point extraction, binocular image feature point matching, triangulation point cloud generation, coordinate system conversion between the first pan-spectrum camera and the high-resolution RGB color module, and perspective imaging projection into a depth map that is superimposed on the high-resolution color image (a sketch of this projection step is given after these sub-steps);
S124, adjusting the shooting angle and the fill lighting (that is, until the real-time video stream preview shows the engraving code region together with valid 3D spatial information), and clicking the shooting button in the engraving code acquisition APP to complete the synchronized snapshot of the engraving code high-precision complete information image;
S125, clicking the visual analysis button in the engraving code acquisition APP, whereupon the engraving code high-precision complete information data is uploaded to the engraving code 3D vision service cluster over a wireless network for analysis and processing.
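The projection step of the real-time preview in S123 can be sketched as follows: triangulated 3D points expressed in the first pan-spectrum camera frame are transformed into the high-resolution RGB color module frame and projected into the color image as a sparse, depth-coded overlay. Function names, the colour mapping and parameter handling are illustrative assumptions.

# Sketch of the preview-overlay step of S123. Variable names and the colour
# mapping are illustrative only.
import cv2
import numpy as np

def depth_overlay(color_img, pts_cam1, R, T, K_rgb, D_rgb):
    """pts_cam1: Nx3 points in the first pan-spectrum camera coordinate system.
    R, T: external pose from that camera to the RGB color module.
    Returns the color frame with depth-coded dots drawn on it."""
    pts = np.asarray(pts_cam1, dtype=np.float64)
    pts_rgb = (R @ pts.T + np.asarray(T, dtype=np.float64).reshape(3, 1)).T
    pix, _ = cv2.projectPoints(pts_rgb, np.zeros(3), np.zeros(3), K_rgb, D_rgb)
    pix = pix.reshape(-1, 2)
    z = pts_rgb[:, 2]
    z_norm = np.clip((z - z.min()) / max(float(np.ptp(z)), 1e-6), 0, 1)
    out = color_img.copy()
    h, w = out.shape[:2]
    for (u, v), d in zip(pix, z_norm):
        if 0 <= int(u) < w and 0 <= int(v) < h:
            colour = (0, int(255 * (1 - d)), int(255 * d))   # green near, red far
            cv2.circle(out, (int(u), int(v)), 3, colour, -1)
    return out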
Before step S2, the load balancing scheduling module of the engraving code 3D vision service cluster dynamically schedules a GPU server for the subsequent analysis according to the computing-resource usage across the whole cluster.
S2, the engraving code image intelligent identification module of the engraving code 3D vision service cluster analyzes and processes the high-resolution color image to obtain the actual engraving code image area, the specific text content and the corresponding text segmentation connected domain; step S2 comprises the following sub-steps:
S21, calling the engraving code intelligent detection algorithm to segment candidate engraving code region images from the high-resolution color image. The steps of the engraving code intelligent detection algorithm are: first, the high-resolution color image is preprocessed; next, it is fed into the motor vehicle engraving code detection model to detect engraving code image regions and the corresponding text segmentation connected domains; then, candidate engraving code region images are screened from the detected engraving code image regions according to size, aspect ratio and position thresholds. The motor vehicle engraving code detection model is a detection model, trained with an artificial intelligence algorithm, that can detect motor vehicle engraving codes of various shapes; in this embodiment its training process is as follows:
(1) Collecting color images of actual engraving codes of various motor vehicles under different conditions to obtain an original data set of the engraving codes of the motor vehicles; the different conditions comprise different engraving positions, shooting distances, shooting angles and illumination environments;
(2) Screening qualified images from the motor vehicle engraving code original data set and annotating the engraving code regions in them to obtain a motor vehicle engraving code annotated data set;
(3) Performing image preprocessing on the motor vehicle engraving code annotated data set, for example applying rotation, affine transformation, brightness transformation, blurring and other image processing operations, to generate a motor vehicle engraving code augmented data set (an augmentation sketch follows this list);
(4) Training a deep neural network model (preferably PSENet, which can detect curved text) with the motor vehicle engraving code augmented data set, and obtaining the motor vehicle engraving code detection model after several rounds of training, evaluation and tuning;
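A minimal sketch of the augmentation operations named in step (3) above, using OpenCV; rotation angles, affine perturbations, brightness ranges and blur kernels are illustrative choices, not values prescribed by this embodiment.

# Sketch of the rotation, affine, brightness and blur augmentations of step (3).
import cv2
import numpy as np
import random

def augment(img: np.ndarray) -> np.ndarray:
    h, w = img.shape[:2]
    # random rotation about the image centre
    M_rot = cv2.getRotationMatrix2D((w / 2, h / 2), random.uniform(-10, 10), 1.0)
    out = cv2.warpAffine(img, M_rot, (w, h), borderMode=cv2.BORDER_REPLICATE)
    # small random affine perturbation (shear / translation)
    src = np.float32([[0, 0], [w - 1, 0], [0, h - 1]])
    dst = src + np.float32(np.random.uniform(-0.02, 0.02, (3, 2)) * [[w, h]])
    out = cv2.warpAffine(out, cv2.getAffineTransform(src, dst), (w, h),
                         borderMode=cv2.BORDER_REPLICATE)
    # brightness / contrast transformation
    out = cv2.convertScaleAbs(out, alpha=random.uniform(0.7, 1.3),
                              beta=random.uniform(-20, 20))
    # optional blur
    if random.random() < 0.5:
        out = cv2.GaussianBlur(out, (3, 3), 0)
    return out

In a real pipeline the same geometric transforms would also have to be applied to the annotated engraving code polygons so that the labels stay aligned with the augmented images.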
S22, calling the engraving code intelligent identification algorithm to identify the actual engraving code region image, the specific text content and the corresponding text segmentation connected domain from the candidate engraving code region images. The steps of the engraving code intelligent identification algorithm are: first, the candidate engraving code region image is preprocessed, for example by tilt correction and size normalization; next, the specific text content is recognized by the motor vehicle engraving code recognition model; finally, the recognition results are screened and merged on the basis of prior knowledge about the engraving code (such as its number of characters) and the positional relation of the candidate regions (for example two closely adjacent lines of text), yielding the actual engraving code image region, the specific text content and the corresponding text segmentation connected domain. The motor vehicle engraving code recognition model is a recognition model, trained with an artificial intelligence algorithm, that can recognize various fonts against different backgrounds; in this embodiment its training process is as follows:
(1) Generating, by simulation, an engraving code simulated image data set covering various typical fonts and backgrounds;
(2) Pre-training the motor vehicle engraving code recognition model (preferably a CRNN with the lightweight and efficient SqueezeNet as its CNN backbone) using the engraving code simulated image data set;
(3) Repeatedly training, evaluating and tuning the pre-trained motor vehicle engraving code recognition model with a genuinely annotated motor vehicle engraving code image data set to obtain the trained motor vehicle engraving code recognition model (a decoding sketch follows).
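As an illustration of how a CRNN-style recognizer's raw output is turned into text, the following is a minimal CTC greedy-decoding sketch. It assumes the model emits a per-timestep probability matrix over a character set plus a blank symbol; the VIN-like character set shown is an assumption, not the character set of this embodiment.

# Minimal CTC greedy-decode sketch: collapse repeated symbols and drop blanks.
# The (T timesteps x C classes) probability matrix with blank at index 0 is an
# assumed model output format.
import numpy as np

CHARSET = "0123456789ABCDEFGHJKLMNPRSTUVWXYZ"  # illustrative VIN-like alphabet

def ctc_greedy_decode(probs: np.ndarray) -> str:
    """probs: (T, C) softmax outputs, column 0 is the CTC blank."""
    best = probs.argmax(axis=1)
    chars, prev = [], 0
    for idx in best:
        if idx != 0 and idx != prev:          # skip blanks and repeats
            chars.append(CHARSET[idx - 1])
        prev = idx
    return "".join(chars)

# Example with a toy 4-timestep output over {blank, '0', '1'}:
toy = np.array([[0.1, 0.8, 0.1],
                [0.1, 0.8, 0.1],
                [0.7, 0.2, 0.1],
                [0.1, 0.1, 0.8]])
print(ctc_greedy_decode(toy))   # -> "01"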
S3, the engraving code real 3D morphology restoration module of the engraving code 3D vision service cluster analyzes and processes the high-resolution color image and the high-definition binocular pan-spectrum image pair, and reconstructs the restored engraving code real 3D morphology model and the corresponding engraving code ideal shape surface parameter model; step S3 comprises the following sub-steps:
S31, a 3D space reconstruction algorithm is called to process the high-definition binocular pan-spectrum image pair and reconstruct a high-density three-dimensional point cloud around the engraving code; the 3D space reconstruction algorithm comprises the following sub-steps:
(1) According to the camera distortion model and the calibrated distortion parameters of the first and second pan-spectrum cameras, performing the corresponding distortion correction on the high-definition binocular pan-spectrum image pair to obtain an undistorted pan-spectrum image pair from which the effects of radial and tangential distortion are removed;
(2) On the undistorted pan-spectrum image pair, detecting the fixed-pattern artificial laser spot array projected onto the engraved surface by the infrared laser projector using a spot detection operator, and locating its sub-pixel coordinates; matching and screening binocular feature points on the basis of the epipolar geometric constraint between the first and second pan-spectrum cameras and the laser spot feature descriptors; finally, applying the triangulation principle to obtain a first 3D space point cloud that covers the whole space surrounding the engraving code and takes the first pan-spectrum camera coordinate system as the world coordinate system, namely the overall three-dimensional point cloud of the space surrounding the engraving code;
(3) On the undistorted pan-spectrum image pair, detecting feature points covering the strokes of the engraving code text characters and the inherent texture of the surrounding surface using a feature point detection operator, and expressing them with feature descriptors; completing the visible-light feature point matching by combining the epipolar geometric constraint between the first and second pan-spectrum cameras, the feature point descriptors and the positional order constraint relative to the detected laser spots; finally, applying the triangulation principle to obtain a second 3D space point cloud that embodies the visible features of the engraving code and takes the first pan-spectrum camera coordinate system as the world coordinate system, namely the three-dimensional point cloud of the spatial features around the engraving code;
(4) Combining the first 3D space point cloud and the second 3D space point cloud, and storing the point cloud data in ascending order of Y and then X coordinate values, to obtain a third 3D space point cloud that densely covers the spatial surface near the engraving code, namely the high-density three-dimensional point cloud around the engraving code (a matching and triangulation sketch follows);
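A condensed sketch of the matching and triangulation in sub-steps (2) and (3): candidate points detected in the two undistorted pan-spectrum images are matched under the epipolar constraint derived from the calibrated stereo geometry and then triangulated into the first camera's coordinate system. The matching criterion (epipolar distance plus descriptor distance) is deliberately simplified and the tolerances are illustrative.

# Epipolar-constrained matching and triangulation sketch for S31 (2)-(3).
import cv2
import numpy as np

def match_and_triangulate(pts1, pts2, desc1, desc2, K1, K2, R12, T12,
                          epi_tol=1.5, desc_tol=0.7):
    """pts*: Nx2 undistorted pixel arrays, desc*: NxD descriptor arrays.
    Returns an Mx3 point cloud in the first pan-spectrum camera frame."""
    t = np.asarray(T12, dtype=float).reshape(3)
    Tx = np.array([[0, -t[2], t[1]],
                   [t[2], 0, -t[0]],
                   [-t[1], t[0], 0]])
    # Fundamental matrix from the calibrated stereo geometry
    F = np.linalg.inv(K2).T @ Tx @ R12 @ np.linalg.inv(K1)
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K2 @ np.hstack([R12, t.reshape(3, 1)])

    matches1, matches2 = [], []
    for i, p in enumerate(pts1):
        l = F @ np.array([p[0], p[1], 1.0])            # epipolar line in image 2
        d_epi = np.abs(pts2 @ l[:2] + l[2]) / np.hypot(l[0], l[1])
        d_desc = np.linalg.norm(desc2 - desc1[i], axis=1)
        cand = np.where(d_epi < epi_tol)[0]
        if cand.size and d_desc[cand].min() < desc_tol:
            j = cand[np.argmin(d_desc[cand])]
            matches1.append(p)
            matches2.append(pts2[j])
    if not matches1:
        return np.empty((0, 3))
    X = cv2.triangulatePoints(P1, P2,
                              np.float32(matches1).T, np.float32(matches2).T)
    return (X[:3] / X[3]).T        # homogeneous -> Euclidean, camera-1 frame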
S32, calling a 3D appearance reconstruction algorithm which, starting from the third 3D space point cloud and combining the high-resolution color image information, restores the actual 3D morphology model of the engraving code and the corresponding engraving code ideal shape surface parameter model; the 3D appearance reconstruction algorithm comprises the following sub-steps:
(1) According to external pose parameters between the first pan-spectrum camera and the high-resolution RGB color module, converting the third 3D space point cloud into a fourth 3D space point cloud taking a high-resolution RGB color module coordinate system as a world coordinate system;
(2) Carrying out distortion correction on the high-resolution color image of the engraving code according to a camera distortion model and the distortion parameters of the calibrated high-resolution RGB color module to obtain a de-distorted high-resolution color image with the radial and tangential distortion effects removed;
(3) Carrying out distortion correction on the text segmentation connected domain output in the step S2 according to the distortion model of the camera and the distortion parameters of the calibrated high-resolution RGB color module to obtain a de-distorted text segmentation connected domain;
(4) Projecting the fourth 3D space point cloud onto the undistorted high-resolution color image by combining the camera model and the calibrated internal parameters of the high-resolution RGB color module; the 3D space points that fall within the undistorted high-resolution color image form a fifth 3D space point cloud, and the 3D space points that fall within the undistorted text segmentation connected domain form a sixth 3D space point cloud, namely the engraving code text 3D space point cloud;
(5) Using a sixth 3D space point cloud as an initial point set, and performing space growth in the fifth 3D space point cloud under the conditions of space proximity constraint and curved surface smoothness constraint to generate a continuous, smooth and stable seventh 3D space point cloud, namely, a 3D point cloud of a shape surface where the carving code is positioned;
(6) Performing gridding treatment on the seventh 3D space point cloud to generate a first 3D shape surface model, namely an initial mesh model of the shape surface where the engraving code is positioned;
(7) Fitting the seventh 3D space point cloud with ideal shape surface models, and determining the ideal shape surface model type of the shape surface where the engraving code is located according to the degree of match between each ideal shape surface model and the seventh 3D space point cloud, to obtain the corresponding first ideal shape surface parameter model; the candidate ideal shape surface models include a plane, a cylindrical surface, a conical surface and a spherical surface, and the seventh 3D space point cloud can be fitted against these candidates in sequence, in the manner of a decision tree (a fitting sketch is given after these sub-steps);
(8) Performing interior space point interpolation on the coarse mesh cells of the first 3D shape surface model, guided by the first ideal shape surface parameter model, adding mesh vertices and refining the mesh to obtain a second 3D shape surface model, namely the refined mesh model of the shape surface where the engraving code is located;
(9) Projecting all grid vertexes of the second 3D shape surface model onto the undistorted high-resolution color image, and dividing the undistorted high-resolution color image by using fine plane grids formed by the projection points to obtain a high-resolution color image patch set;
(10) Texture pasting is carried out on the second 3D shape surface model by using a high-resolution color image patch set, so that a first 3D shape model which takes a high-resolution RGB color module coordinate system as a world coordinate system and has complete XYZ space geometrical information and fine RGB color appearance is obtained;
(11) Generating the engraving code real 3D morphology model, which takes the engraving code itself as its coordinate system, and the corresponding engraving code ideal shape surface parameter model: first, the minimum bounding box of the sixth 3D space point cloud is calculated; then the centre of the minimum bounding box, its long axis direction, its first short axis (corresponding to the text height) and its second short axis (corresponding to the text depth) are taken respectively as the origin, the horizontal X axis, the vertical Y axis and the depth Z axis of a new 3D space coordinate system, and the first 3D morphology model and the corresponding first ideal shape surface parameter model are converted into this coordinate system, yielding a second 3D morphology model and a corresponding second ideal shape surface parameter model from which differences in shooting angle and distance have been eliminated; the second 3D morphology model is the engraving code real 3D morphology model, and the second ideal shape surface parameter model is the engraving code ideal shape surface parameter model.
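The decision-tree fitting of sub-step (7) can be sketched as follows: a plane is tried first and, if its residual is too high, a cylinder whose axis is taken along the dominant direction of the point cloud is tried next. The conical and spherical candidates, the residual thresholds and the simplified cylinder parameterisation are assumptions made for illustration.

# Decision-tree surface fitting sketch for sub-step (7): plane first, then a
# cylinder with its axis along the dominant PCA direction of the point cloud.
import numpy as np

def fit_plane(pts):
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    n = vt[-1]                                  # plane normal
    rms = np.sqrt(np.mean(((pts - c) @ n) ** 2))
    return {"type": "plane", "point": c, "normal": n, "rms": rms}

def fit_cylinder(pts):
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    axis = vt[0]                                # assumed cylinder axis direction
    u, v = vt[1], vt[2]                         # plane orthogonal to the axis
    x, y = (pts - c) @ u, (pts - c) @ v
    # algebraic circle fit: x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    sol, *_ = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)
    cx, cy = sol[0], sol[1]
    r = np.sqrt(sol[2] + cx ** 2 + cy ** 2)
    rms = np.sqrt(np.mean((np.hypot(x - cx, y - cy) - r) ** 2))
    centre = c + cx * u + cy * v
    return {"type": "cylinder", "axis_point": centre, "axis": axis,
            "radius": r, "rms": rms}

def select_surface(pts, plane_tol=0.2, cyl_tol=0.2):
    plane = fit_plane(pts)
    if plane["rms"] < plane_tol:                # good enough: keep the plane
        return plane
    cyl = fit_cylinder(pts)
    if cyl["rms"] < cyl_tol:
        return cyl
    return min((plane, cyl), key=lambda m: m["rms"])   # fall back to best fit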
S4, according to the specific engraving code text content output in step S2, a historical engraving code real 3D morphology model is retrieved from the engraving code historical information data access module of the engraving code 3D vision service cluster; in general, the historical engraving code real 3D morphology model is the engraving code real 3D morphology model stored when the previous service transaction was handled.
S5, the engraving code real 3D morphology comparison module of the engraving code 3D vision service cluster compares the engraving code real 3D morphology model restored in step S3 with the historical engraving code real 3D morphology model retrieved in step S4 and returns a 3D morphology comparison result; step S5 comprises the following sub-steps:
s51, respectively generating corresponding sample orthographic projection original RGB images for the restored and historical carving coding real 3D shape models: generating rectangular sampling grids for the etching coding region on the XY plane of the etching coding real 3D morphology model restored in the step S3 and the historical etching coding real 3D morphology model called in the step S4 according to uniform physical length intervals, performing orthographic projection on the surface of the etching coding real 3D morphology model along the Z-axis direction by using the intersecting points of the rectangular sampling grids, taking RGB values at the intersecting points as corresponding orthographic projection image pixel point values, and generating a restored and historical etching coding real 3D morphology model sample orthographic projection original RGB image;
S52, graying the restored and historical sample orthographic projection original RGB images and performing sample orthographic projection feature point extraction and matching to obtain the restored and historical sample orthographic projection initial matching point pair set: after graying, the restored and historical sample orthographic projection brightness images corresponding to the original RGB images are obtained; feature point detection, feature description and feature matching are then performed in turn on the two brightness images, yielding the restored and historical sample orthographic projection initial matching point pair set;
S53, reverse orthographic projection to trace the matched orthographic-projection points back to 3D shape surface space coordinates: the coordinates of all point pairs in the restored and historical sample orthographic projection initial matching point pair set are reverse orthographically projected to obtain the corresponding 3D shape surface space coordinates on the surfaces of the restored and historical engraving code real 3D morphology models;
s54, registering and aligning a real 3D morphology model based on 3D shape surface space coordinates of the morphology orthographic projection matching points: based on the 3D shape surface space coordinates of the restored and historical sample orthographic projection initial matching point pairs, optimally solving the rotation and translation transformation from the registration of the restored imprinting code real 3D morphology model to the historical imprinting code real 3D morphology model; and applying rotation and translation transformation to the restored imprinting encoding real 3D morphology model to obtain a restored imprinting encoding real 3D morphology alignment model.
S55, comparing the consistency of the 3D shape surfaces between the registered and aligned real 3D morphology models: rectangular sampling grids are generated for the engraving code region at a specific uniform physical length interval on the XY planes of the restored engraving code real 3D morphology alignment model and the historical engraving code real 3D morphology model; for each rectangular grid point, the spatial distance between the two intersection points obtained by orthographic projection along the Z-axis direction onto the two model surfaces is calculated; the maximum and the mean of all these spatial distances are then computed; if both the maximum and the mean are smaller than the corresponding preset spatial distance thresholds, the 3D shape surfaces of the restored and historical engraving code real 3D morphology models are judged to be consistent, otherwise they are inconsistent (a registration and comparison sketch is given after these sub-steps);
S56, comparing the consistency of appearance between the registered and aligned real 3D morphology models: the restored engraving code real 3D morphology alignment model is orthographically projected to generate a restored aligned orthographic RGB image; the restored aligned orthographic RGB image and the historical sample orthographic projection original RGB image are converted into the HSV color space; the maximum and the mean of the color distances of all overlapping points are calculated; if both the maximum and the mean are smaller than the corresponding preset color distance thresholds, the appearance of the restored engraving code real 3D morphology model is judged to be consistent with the historical one, otherwise it is inconsistent.
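A sketch of the registration and distance comparison in S54 and S55: the rigid transform is estimated from the matched 3D point pairs with the SVD-based Kabsch method, applied to the restored model, and a simple distance statistic is then checked against thresholds. The nearest-neighbour distance used here stands in for the grid-sampled Z-direction distances of S55, and the thresholds are illustrative.

# Kabsch/SVD rigid registration and a simplified consistency check.
import numpy as np

def kabsch(src, dst):
    """src, dst: Nx3 matched 3D coordinates. Returns R (3x3), t (3,)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def surface_consistent(restored_pts, historical_pts, R, t,
                       max_tol=0.3, mean_tol=0.1):
    """Apply the transform, then use nearest-neighbour distances as a stand-in
    for the grid-sampled Z-direction distances described in S55."""
    aligned = restored_pts @ R.T + t
    d = np.array([np.min(np.linalg.norm(historical_pts - p, axis=1))
                  for p in aligned])
    return d.max() < max_tol and d.mean() < mean_tol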
S6, the engraving code 1:1 original size restored image generation module of the engraving code 3D vision service cluster analyzes and processes the engraving code real 3D morphology model restored in step S3 and the corresponding engraving code ideal shape surface parameter model to generate the engraving code 1:1 original size restored image; step S6 comprises the following sub-steps:
s61, analyzing and processing the restored sample orthographic projection original RGB image to determine an ideal shape surface area corresponding to the engraving code: firstly, calling an engraving code intelligent detection algorithm to analyze and process the restored sample front projection original RGB image output by the step S51, and obtaining a front projection image area corresponding to the engraving code; then, reversely orthographically projecting the orthographic projection image area to the surface of the imprinting code ideal shape surface parameter model to obtain an ideal shape surface area corresponding to the imprinting code;
s62, generating a coordinate mapping relation set between a two-dimensional unfolding plane and a three-dimensional ideal surface by meshing and dividing ideal surface areas corresponding to the marking codes at uniform physical length intervals: in the range of an ideal shape surface area corresponding to the etching code, taking the center point of the area as the origin of a coordinate system of a two-dimensional expansion plane and the center point of a two-dimensional expansion image, and generating a mapping relation between two-dimensional expansion plane coordinates (u, v) and three-dimensional ideal shape surface coordinates (x, y, z) of uniform physical length interval grid sampling in the ideal shape surface area corresponding to the etching code according to the ideal shape surface model type of the shape surface where the etching code is located and the etching code ideal shape surface parameter model obtained by matching in the step (7) in the step S32; the coordinate mapping method of different ideal shape surface model types is as follows:
For the plane type, uniformly taking points according to the longitudinal and transverse directions of an ideal space plane and the physical interval length meeting the set precision, and generating a mapping relation between a two-dimensional unfolded plane coordinate (u, v) and a three-dimensional ideal plane coordinate (x, y, z);
for the cylindrical surface and conical surface types, points are first taken at a uniform linear length interval along the generatrix passing through the centre of the ideal shape surface area corresponding to the engraving code, this generatrix serving as the path of the first coordinate axis of the two-dimensional expansion image; then, for each of these points, points are taken at a uniform arc length interval along the circumference passing through it, which serves as the path of the second coordinate axis of the two-dimensional expansion image, generating the mapping between the two-dimensional expansion plane coordinates (u, v) and the three-dimensional ideal shape surface coordinates (x, y, z) (a worked sketch of the cylindrical case follows this list);
for the spherical type, firstly taking a straight line along the long axis direction of an orthographic projection image area of the marking code and passing through the center point of an ideal surface area corresponding to the marking code as a positioning straight line, taking points along a path with uniform arc length intervals by taking the circumference of a spherical great circle passing through the positioning straight line as a path corresponding to a transverse axis of a two-dimensional expansion image, finally taking points along the path with uniform arc length intervals by taking the circumference of the spherical great circle passing through the points as a path corresponding to a longitudinal axis of the two-dimensional expansion image, and generating a mapping relation between two-dimensional expansion plane coordinates (u, v) and three-dimensional ideal surface coordinates (x, y, z);
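For the cylindrical case, the coordinate mapping can be sketched as follows: grid points are taken at a uniform physical interval along the generatrix (the first expansion axis u) and at a uniform arc-length interval around the circumference (the second axis v). The cylinder parameters are assumed to come from the fitted engraving code ideal shape surface parameter model; the reference direction and ranges are illustrative.

# Cylindrical-surface (u, v) <-> (x, y, z) coordinate mapping sketch for S62.
import numpy as np

def cylinder_unwrap_map(axis_point, axis_dir, radius, ref_dir,
                        u_range_mm, v_range_mm, step_mm):
    """Return a list of ((u, v), (x, y, z)) pairs.
    ref_dir: unit vector from the axis towards the centre of the code region,
    so (u, v) = (0, 0) maps to that centre point."""
    a = axis_dir / np.linalg.norm(axis_dir)
    r0 = ref_dir - np.dot(ref_dir, a) * a          # make it orthogonal to the axis
    r0 /= np.linalg.norm(r0)
    r90 = np.cross(a, r0)
    mapping = []
    for u in np.arange(-u_range_mm, u_range_mm + step_mm, step_mm):
        for v in np.arange(-v_range_mm, v_range_mm + step_mm, step_mm):
            theta = v / radius                     # arc length -> angle
            radial = np.cos(theta) * r0 + np.sin(theta) * r90
            xyz = axis_point + u * a + radius * radial
            mapping.append(((u, v), tuple(xyz)))
    return mapping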
S63, based on a coordinate mapping relation set between the two-dimensional expansion plane and the three-dimensional ideal shape plane, projecting a sampled and carved coded real 3D shape model to generate a carved coded two-dimensional expansion image: traversing a coordinate mapping relation set between a two-dimensional expansion plane and a three-dimensional ideal shape surface, projecting each three-dimensional ideal shape surface point along the normal direction of an ideal shape surface at the point according to the ideal shape surface model type of the shape surface where the etching code is positioned and the etching code ideal shape surface parameter model, and taking the RGB value of the actual intersection point of the projection straight line and the surface of the etching code real 3D shape model as the pixel value of the corresponding etching code two-dimensional expansion image point so as to generate an etching code two-dimensional expansion image;
s64, generating an engraving code two-dimensional expansion correction image by geometrically correcting the engraving code two-dimensional expansion image: firstly, calling an intelligent detection algorithm of the etching code to analyze and process the two-dimensional unfolded image of the etching code to obtain an etching code image region and a corresponding text segmentation connected region; then, statistically solving the geometric center of the segmentation connected domain of the carved coded text and fitting the major axis of the ellipse; finally, performing rigid geometric transformation on the etching code by taking the center and the long axis as the center and the horizontal axis of the transformed image respectively to generate an etching code two-dimensional expansion correction image;
S65, performing print configuration on the engraving code two-dimensional expansion corrected image to generate the final engraving code 1:1 original size restored image: first, the pixel width and height of the preset print canvas image are calculated from the physical width and height of the preset print canvas and the preset print resolution (DPI), and a blank print canvas image is generated; next, lines are drawn at fixed physical intervals on the blank print canvas image to produce a print canvas image with standard reference graduation lines; then, the native print resolution of the engraving code two-dimensional expansion corrected image itself is calculated from the uniform physical length interval of step S62; finally, the engraving code two-dimensional expansion corrected image is rescaled purely proportionally to the preset print resolution and placed over the upper right corner of the print canvas image with standard reference graduation lines, yielding the engraving code 1:1 original size restored image, see fig. 9.
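The print-configuration arithmetic of S65 reduces to two relations: canvas pixels = physical length in millimetres / 25.4 × DPI, and the native resolution of the unwrapped image = 25.4 / sampling interval in millimetres. The following sketch applies them; canvas size, graduation spacing and the paste position are illustrative.

# Print-canvas arithmetic sketch for S65.
import cv2
import numpy as np

def make_canvas(width_mm, height_mm, dpi, grid_mm=10):
    px = lambda mm: int(round(mm / 25.4 * dpi))          # mm -> pixels at this DPI
    canvas = np.full((px(height_mm), px(width_mm), 3), 255, np.uint8)
    for x_mm in range(0, int(width_mm) + 1, grid_mm):    # vertical graduation lines
        cv2.line(canvas, (px(x_mm), 0), (px(x_mm), canvas.shape[0]), (200, 200, 200), 1)
    for y_mm in range(0, int(height_mm) + 1, grid_mm):   # horizontal graduation lines
        cv2.line(canvas, (0, px(y_mm)), (canvas.shape[1], px(y_mm)), (200, 200, 200), 1)
    return canvas

def paste_code_image(canvas, code_img, sample_interval_mm, dpi):
    """Rescale the corrected unwrap image from its native resolution
    (one pixel per sample_interval_mm) to the canvas DPI and paste it
    in the upper-right corner, as described in S65."""
    native_dpi = 25.4 / sample_interval_mm
    scale = dpi / native_dpi
    resized = cv2.resize(code_img, None, fx=scale, fy=scale,
                         interpolation=cv2.INTER_CUBIC)
    h, w = resized.shape[:2]
    canvas[0:h, canvas.shape[1] - w:canvas.shape[1]] = resized
    return canvas

# Example: a 100 mm x 60 mm canvas at 300 DPI with 10 mm reference graduations.
canvas = make_canvas(100, 60, 300)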
S7, the engraving code 3D vision service cluster returns the engraving code visual analysis result to the engraving code complete information acquisition front end and stores the engraving code visual analysis data; the engraving code visual analysis result comprises the specific engraving code text content, the 3D morphology comparison result and the engraving code 1:1 original size restored image; the engraving code visual analysis data comprise the engraving code visual analysis timestamp, the engraving code visual analysis result, the restored engraving code real 3D morphology model, the corresponding engraving code ideal shape surface parameter model and the engraving code high-precision complete information.
S8, the engraving code complete information acquisition front end receives and displays the engraving code visual analysis result returned by the engraving code 3D vision service cluster, and after confirmation by the checking operator it is submitted to the checking vehicle management system for service data archiving;
S9, the verification post personnel use the checking terminal to retrieve the service data containing the engraving code visual analysis result from the checking vehicle management system, read and verify the engraving code visual analysis data through the engraving code query service module of the engraving code 3D vision service cluster, and print the related service record form carrying the engraving code 1:1 original size restored image according to the verification result, thereby completing the whole checking and handling process for the engraving code of the motor vehicle to be checked.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (15)

1. A system for high-precision complete information acquisition and real 3D morphology restoration comparison of motor vehicle engraving codes, characterized by comprising an engraving code complete information acquisition front end, an engraving code 3D vision service cluster, a checking vehicle management system and a checking terminal which are connected through a network;
The engraving code complete information acquisition front end comprises a mobile intelligent terminal and an acquisition device which are connected into a whole; the mobile intelligent terminal is used for controlling the acquisition device, through the installed engraving code acquisition APP, to acquire the engraving code high-precision complete information of the motor vehicle to be checked and to transmit it to the engraving code 3D vision service cluster; the engraving code high-precision complete information comprises the characteristic parameters of the acquisition device and the acquired engraving code high-precision complete information image; the engraving code high-precision complete information image comprises a high-resolution color image and a corresponding high-definition binocular pan-spectrum image pair;
the engraving code 3D vision service cluster is used for analyzing and processing the engraving code high-precision complete information to obtain an engraving code visual analysis result and engraving code visual analysis data; the engraving code visual analysis result is returned to the engraving code complete information acquisition front end for confirmation;
the checking vehicle management system is used for archiving, as service data, the engraving code visual analysis result confirmed at the engraving code complete information acquisition front end;
the checking terminal is used for retrieving the service data containing the engraving code visual analysis result from the checking vehicle management system, reading and verifying the engraving code visual analysis data through the engraving code query service module of the engraving code 3D vision service cluster, and printing the related service record form carrying the engraving code 1:1 original size restored image according to the verification result;
The acquisition device comprises an upper equipment box body (100), a lower equipment box body (200) and an internal control board (300); the internal control board (300) is arranged in a cavity formed after the upper box body (100) of the equipment and the lower box body (200) of the equipment are connected in a matching way;
a horizontal holding area (120), a vertical holding area (130) and a shooting area (110) are arranged on the upper box body (100) of the equipment; the camera shooting area (110) comprises an adjustable uniform soft light supplementing module, a spectrum fusion 3D module and a high-resolution RGB color module (111);
the lower box body (200) of the equipment is provided with a clamping structure (210), a terminal connection line (230) and a charging opening; the clamping structure (210) is used for clamping the mobile intelligent terminal; the terminal connection line (230) is used for connecting the mobile intelligent terminal; the clamping structure (210) is an adjustable clamping structure; the adjustable clamping structure (210) comprises a first wing (211), a second wing (212), an adjusting screw and an end knob (213); the first wing (211) and the second wing (212) are in threaded connection with the adjusting screw, and one end of the adjusting screw is fixedly connected with the end knob (213); the end knob (213) is used for adjusting the distance between the first wing (211) and the second wing (212) by rotating the adjusting screw;
The internal control board (300) is provided with a state control switch (360), an interface chip (310) and a storage and control chip (320) which are connected in series, a terminal connection interface (340) and an external charging interface (350) which are connected with the state control switch (360), a 3D module interface (332) and a color module interface (333) which are connected with the interface chip (310), and a light supplementing module interface (331) which is connected with the storage and control chip (320); the terminal connection interface (340) is connected with the mobile intelligent terminal through a terminal connection line penetrating between the inside and the outside of the acquisition device; the external charging interface (350) is used for connecting an external charger through a charging opening; the light supplementing module interface (331), the 3D module interface (332) and the color module interface (333) are respectively and correspondingly connected with the adjustable and controllable uniform soft light supplementing module, the spectrum fusion 3D module and the high-resolution RGB color module (111) through internal connecting cables; the state control switch (360) is a double-throw toggle switch and is used for controlling the acquisition device to switch between the engraving code visual information acquisition state and the mobile intelligent terminal charging state.
2. The system for comparing high-precision complete information acquisition and real 3D morphology restoration of motor vehicle engraving codes according to claim 1, characterized in that the spectrum fusion 3D module comprises a first pan-spectrum camera (112), a second pan-spectrum camera (114) and an infrared laser area array projector (113), and a 3D image control chip connected with the first pan-spectrum camera (112), the second pan-spectrum camera (114) and the infrared laser area array projector (113); the first pan-spectrum camera (112) and the second pan-spectrum camera (114) form a binocular stereoscopic structure, each with a resolution not lower than high definition and with a baseline length between them of more than 40 mm; the 3D image control chip is used for realizing hardware synchronous control and synchronous image acquisition of the first pan-spectrum camera (112), the second pan-spectrum camera (114) and the infrared laser area array projector (113); the first pan-spectrum camera (112), the second pan-spectrum camera (114), the infrared laser area array projector (113) and the 3D image control chip are integrated on the 3D acquisition PCB and connected with the 3D module interface (332) of the internal control board through the 3D acquisition PCB.
3. The system for comparing high-precision complete information acquisition and real 3D morphology restoration of motor vehicle engraving codes according to claim 2, characterized in that said high-resolution RGB color module (111) comprises a special close-up lens and a high-resolution sensor chip, and a color master control chip connected with the special close-up lens and the high-resolution sensor chip; the special close-up lens, the high-resolution sensor chip and the color master control chip are integrated on an RGB color PCB and are connected with a color module interface (333) through the RGB color PCB.
4. The system for comparing high-precision complete information acquisition and real 3D morphology restoration of motor vehicle engraving codes according to claim 3, wherein the 3D image control chip and the color master control chip are respectively provided with exposure synchronous electric signal input and output pins, and the exposure synchronous electric signal input and output pins corresponding to the two pins are connected through a signal wire and are used for realizing hardware synchronous acquisition between the spectrum fusion 3D module and the high-resolution RGB color module.
5. The system for comparing high-precision complete information acquisition and real 3D morphology restoration of the motor vehicle engraving code according to claim 1, wherein the adjustable and controllable uniform soft light supplementing module comprises a plurality of soft light beads; the soft light beads are integrated on the light supplementing PCB and are connected with the light supplementing module interface (331) through the light supplementing PCB; all the soft light beads are controlled by coded signals on the same serial signal line from the storage and control chip (320), and the luminous intensity of each soft light bead can be independently adjusted.
6. The system for comparing high-precision complete information acquisition and real 3D morphology restoration of the motor vehicle engraving codes according to claim 1, wherein a light supplementing PCB of the adjustable and controllable uniform soft light supplementing module, a 3D acquisition PCB of the spectrum fusion 3D module and an RGB color PCB of a high-resolution RGB color module (111) are all positioned on the same special-shaped metal assembly plate; the special-shaped metal assembly plate is fixed in a cavity formed after the upper box body (100) and the lower box body (200) of the equipment are connected in a matched mode.
7. The system for high-precision complete information acquisition and real 3D morphology restoration comparison of motor vehicle etching codes according to claim 1, wherein the etching code 3D visual service cluster consists of a plurality of GPU servers and comprises a load balancing scheduling module, an etching code image intelligent identification module, an etching code real 3D morphology restoration module, an etching code historical information data access module, an etching code real 3D morphology comparison module, an etching code 1:1 original size reduction image generation module and an etching code query service module.
8. A method for comparing high-precision complete information acquisition and real 3D morphology restoration of a motor vehicle engraving code, which is characterized in that the method is realized by adopting the system for comparing high-precision complete information acquisition and real 3D morphology restoration of the motor vehicle engraving code according to any one of claims 1-7; the method comprises the following steps:
S1, a checking operator uses a marking code complete information acquisition front end to acquire marking code high-precision complete information images of a motor vehicle to be checked on site, and uploads characteristic parameters of an acquisition device contained in the marking code complete information acquisition front end and the acquired marking code high-precision complete information images as marking code high-precision complete information data to a marking code 3D visual service cluster; the high-precision complete information image of the engraving code comprises a high-resolution color image and a corresponding high-definition binocular floodlight spectrum image pair;
s2, analyzing and processing the high-resolution color image by adopting an intelligent identification module of the engraving code image of the 3D vision service cluster to obtain an actual engraving code image area, specific text content and a corresponding text segmentation connected domain;
s3, adopting an engraving code real 3D morphology restoration module of the engraving code 3D vision service cluster to analyze and process the high-resolution color image and the high-definition binocular spectral image pair, and reconstructing a restored engraving code real 3D morphology model and a corresponding engraving code ideal shape surface parameter model;
s4, according to the specific text content of the etching code output in the step S2, a historical etching code real 3D morphology model is called from an etching code historical information data access module of the etching code 3D vision service cluster;
S5, comparing the real 3D morphology model of the engraving code restored in the step S3 with the real 3D morphology model of the history engraving code acquired in the step S4 by adopting an actual 3D morphology comparison module of the engraving code 3D vision service cluster, and returning a 3D morphology comparison result;
s6, adopting an engraving code 1:1 original size reduction image generation module of the engraving code 3D visual service cluster to analyze and process the engraving code real 3D morphology model restored in the step S3 and the corresponding engraving code ideal shape surface parameter model to generate an engraving code 1:1 original size reduction image;
s7, the 3D visual service cluster of the etching code returns the visual analysis result of the etching code to the front end of the complete information acquisition of the etching code, and stores the visual analysis data of the etching code; the visual analysis result of the etching code comprises specific text content of the etching code, a 3D morphology comparison result and a 1:1 original size reduction image of the etching code; the etching coding visual analysis data comprise etching coding visual analysis time stamps, etching coding visual analysis results, restored etching coding real 3D morphology models, corresponding etching coding ideal shape surface parameter models and etching coding high-precision complete information;
S8, the front end of the complete information collection of the etching code receives and displays the etching code visual analysis result returned by the 3D visual service cluster of the etching code, and the etching code visual analysis result is submitted to a checking vehicle management system for business data archiving after being confirmed by checking operators;
s9, the check and verification post personnel uses the check and verification terminal to retrieve the business data comprising the visual analysis result of the marking code from the check and verification car management system, and reads the visual analysis data of the marking code and verifies through the marking code inquiry service module of the marking code 3D visual service cluster, and prints the related business record form verification business with the 1:1 original size restored image of the marking code according to the verification result.
9. The high-precision complete information acquisition and real 3D morphology restoration comparison method for a motor vehicle engraving code according to claim 8, wherein the acquisition device included in the engraving code complete information acquisition front end used in step S1 is calibrated once after production and assembly are completed, and calibrating the acquisition device comprises the following sub-steps:
S111, calibrating the binocular structure formed by the first and second hyperspectral cameras to obtain the distortion coefficients and internal parameters of the first and second hyperspectral cameras and the external pose parameters between the first and second hyperspectral cameras;
S112, calibrating the binocular structure formed by the first hyperspectral camera and the high-resolution RGB color module to obtain the distortion coefficients and internal parameters of the high-resolution RGB color module and the external pose parameters between the first hyperspectral camera and the high-resolution RGB color module;
S113, uniformly encoding the obtained camera system parameters and the serial number of the acquisition device, and generating parameter verification information to form the acquisition device characteristic parameters, which are written into the storage and control chip of the acquisition device; the camera system parameters comprise the distortion coefficients and internal parameters of the first and second hyperspectral cameras and of the high-resolution RGB color module, the external pose parameters between the first and second hyperspectral cameras, and the external pose parameters between the first hyperspectral camera and the high-resolution RGB color module.
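Claim 9 does not fix a calibration target or software; as one common realization, each binocular calibration in S111 and S112 can be carried out with a planar checkerboard and OpenCV. The sketch below is illustrative only, and the function name, board size and square size are assumptions.

```python
# Illustrative binocular calibration sketch for S111/S112, assuming a checkerboard target.
import cv2
import numpy as np

def stereo_calibrate(img_pairs, board=(9, 6), square=5.0):
    """Calibrate a camera pair from checkerboard image pairs.

    Returns per-camera intrinsics/distortion and the extrinsic pose (R, T)
    of camera 2 relative to camera 1.
    """
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

    obj_pts, pts1, pts2 = [], [], []
    for im1, im2 in img_pairs:
        ok1, c1 = cv2.findChessboardCorners(im1, board)
        ok2, c2 = cv2.findChessboardCorners(im2, board)
        if ok1 and ok2:
            obj_pts.append(objp)
            pts1.append(c1)
            pts2.append(c2)

    size = img_pairs[0][0].shape[:2][::-1]
    # Distortion coefficients and internal parameters of each camera.
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, pts1, size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, pts2, size, None, None)
    # External pose parameters between the two cameras.
    _, K1, d1, K2, d2, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, pts1, pts2, K1, d1, K2, d2, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K1, d1, K2, d2, R, T
```

In this realization the same routine would be run twice: once for the two hyperspectral cameras (S111) and once for the first hyperspectral camera paired with the high-resolution RGB color module (S112).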
10. The high-precision complete information acquisition and real 3D morphology restoration comparison method for a motor vehicle engraving code according to claim 9, wherein in step S1, acquiring the engraving code high-precision complete information of the motor vehicle to be inspected in situ by using the engraving code complete information acquisition front end comprises the following sub-steps:
S121, the inspection operator holds the engraving code complete information acquisition front end and opens the engraving code acquisition APP pre-installed on the mobile intelligent terminal;
S122, the engraving code acquisition APP automatically loads and verifies the acquisition device characteristic parameters stored in the acquisition device, and executes step S123 after verification succeeds;
S123, the engraving code acquisition APP automatically starts continuous synchronous-exposure acquisition by the high-resolution RGB color module and the spectrum fusion 3D module, and performs real-time processing to generate the corresponding video stream real-time preview; the video stream real-time preview means that, during acquisition, the high-resolution color image from the high-resolution RGB color module is displayed in real time on the touch screen of the mobile intelligent terminal, while the high-definition binocular spectral image pair from the spectrum fusion 3D module is converted into a depth map through real-time structured light spot feature point extraction, binocular image feature point matching, triangulation point cloud generation, coordinate system conversion between the first hyperspectral camera and the high-resolution RGB color module, and perspective imaging projection, and is superimposed on the high-resolution color image;
S124, adjusting the shooting angle and the supplementary lighting, and clicking the shooting button in the engraving code acquisition APP to complete the synchronous snapshot of the engraving code high-precision complete information image;
S125, clicking the visual analysis button in the engraving code acquisition APP to upload the engraving code high-precision complete information data to the engraving code 3D vision service cluster through a wireless network for analysis processing.
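The live preview of S123 amounts to transforming the triangulated point cloud from the first hyperspectral camera frame into the RGB module frame, projecting it through the RGB module's intrinsics, and painting a colour-coded depth layer over the colour image. The sketch below shows only that projection and overlay step; the function name, colour map and blending weights are assumptions.

```python
# Illustrative sketch of the S123 depth-map overlay; names and rendering choices are assumptions.
import cv2
import numpy as np

def depth_overlay(color_img, cloud_cam1, R, T, K_rgb, dist_rgb):
    """Overlay a colour-coded sparse depth map of the point cloud on the RGB preview.

    cloud_cam1: Nx3 points in the first hyperspectral camera frame.
    R, T: external pose of the RGB module relative to that camera.
    K_rgb, dist_rgb: calibrated intrinsics and distortion of the RGB module.
    """
    # Coordinate-system conversion: hyperspectral camera 1 -> RGB module.
    cloud_rgb = cloud_cam1 @ R.T + T.reshape(1, 3)
    # Perspective projection into the RGB image (distortion applied by OpenCV).
    rvec, _ = cv2.Rodrigues(np.asarray(R, dtype=np.float64))
    pix, _ = cv2.projectPoints(cloud_cam1.astype(np.float64), rvec,
                               np.asarray(T, dtype=np.float64), K_rgb, dist_rgb)
    pix = pix.reshape(-1, 2)

    # Colour-code depth (Z in the RGB module frame) with a JET colour map.
    z = cloud_rgb[:, 2]
    z8 = np.clip((z - z.min()) / max(np.ptp(z), 1e-6) * 255, 0, 255).astype(np.uint8)
    colors = cv2.applyColorMap(z8.reshape(-1, 1), cv2.COLORMAP_JET).reshape(-1, 3)

    overlay = color_img.copy()
    h, w = overlay.shape[:2]
    for (u, v), c in zip(pix, colors):
        if 0 <= u < w and 0 <= v < h:
            cv2.circle(overlay, (int(u), int(v)), 2, tuple(int(x) for x in c), -1)
    return cv2.addWeighted(color_img, 0.4, overlay, 0.6, 0)
```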
11. The high-precision complete information acquisition and real 3D morphology restoration comparison method for a motor vehicle engraving code according to claim 10, wherein step S2 comprises the following sub-steps:
S21, invoking the engraving code intelligent detection algorithm to segment candidate engraving code region images from the high-resolution color image; the engraving code intelligent detection algorithm comprises the following steps: first, performing image preprocessing on the high-resolution color image; then, feeding the image into the motor vehicle engraving code detection model to detect engraving code image regions and the corresponding text segmentation connected regions; finally, screening candidate engraving code region images from the detected engraving code image regions according to size, aspect ratio and position thresholds; the motor vehicle engraving code detection model is a detection model, trained with an artificial intelligence algorithm, that can detect motor vehicle engraving codes of various forms;
S22, invoking the engraving code intelligent recognition algorithm to recognize the actual engraving code region image, the specific text content and the corresponding text segmentation connected region from the candidate engraving code region images; the engraving code intelligent recognition algorithm comprises the following steps: first, performing tilt correction and size normalization on the candidate engraving code region images; then, recognizing the specific text content with the motor vehicle engraving code recognition model; finally, screening and merging the recognition results based on prior knowledge of the engraving code and the positional relationship of the candidate regions to obtain the actual engraving code image region, the specific text content and the corresponding text segmentation connected region; the motor vehicle engraving code recognition model is a recognition model, trained with an artificial intelligence algorithm, that can recognize various fonts against different backgrounds.
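The screening inside S21 ("according to size, aspect ratio and position thresholds") can be pictured with a simple filter over the detected boxes. The threshold values below are placeholders, not values fixed by the claim.

```python
# Illustrative candidate-region filter for S21; all thresholds are assumed placeholders.
def filter_candidate_regions(boxes, img_w, img_h,
                             min_area_frac=0.001, max_area_frac=0.5,
                             min_aspect=2.0, max_aspect=20.0,
                             border_margin=0.02):
    """Keep detected boxes (x, y, w, h) whose size, aspect ratio and position
    are plausible for an engraved code line."""
    keep = []
    for x, y, w, h in boxes:
        area_frac = (w * h) / float(img_w * img_h)
        aspect = w / float(h) if h > 0 else 0.0
        near_border = (x < border_margin * img_w or y < border_margin * img_h or
                       x + w > (1.0 - border_margin) * img_w or
                       y + h > (1.0 - border_margin) * img_h)
        if (min_area_frac <= area_frac <= max_area_frac
                and min_aspect <= aspect <= max_aspect
                and not near_border):
            keep.append((x, y, w, h))
    return keep
```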
12. The high-precision complete information acquisition and real 3D morphology restoration comparison method for a motor vehicle engraving code according to claim 11, wherein step S3 comprises the following sub-steps:
S31, invoking the 3D space reconstruction algorithm to process the high-definition binocular spectral image pair and reconstruct the high-density three-dimensional point cloud around the engraving code; the 3D space reconstruction algorithm comprises the following sub-steps:
(1) according to the camera distortion model and the calibrated distortion parameters of the first and second hyperspectral cameras, performing the corresponding distortion correction on the high-definition binocular spectral image pair to obtain an undistorted spectral image pair with radial and tangential distortion removed;
(2) on the undistorted spectral image pair, using a spot detection operator to detect the fixed-pattern artificial laser spot array projected onto the engraved surface by the infrared laser projector and to locate its sub-pixel coordinates; matching and screening binocular feature points based on the epipolar geometric constraint between the first and second hyperspectral cameras and the laser spot feature descriptors; finally, computing, by the triangulation principle, a first 3D space point cloud that covers the whole space around the engraving code and takes the first hyperspectral camera coordinate system as the world coordinate system, namely the overall three-dimensional point cloud of the space around the engraving code;
(3) on the undistorted spectral image pair, using a feature point detection operator to detect feature points covering the strokes of the engraving code text characters and the inherent textures of the surrounding surface, and describing them with feature descriptors; completing the matching of visible-light feature points by combining the epipolar geometric constraint between the first and second hyperspectral cameras, the feature point descriptors and the positional order constraint relative to the detected laser spots; finally, computing, by the triangulation principle, a second 3D space point cloud that embodies the visible features of the engraving code and takes the first hyperspectral camera coordinate system as the world coordinate system, namely the three-dimensional point cloud of the spatial features around the engraving code;
(4) merging the first 3D space point cloud and the second 3D space point cloud, and storing the point cloud data in ascending order of Y coordinate values and then X coordinate values to obtain a third 3D space point cloud that densely covers the spatial surface near the engraving code, namely the high-density three-dimensional point cloud around the engraving code;
S32, invoking the 3D morphology reconstruction algorithm, and combining the high-resolution color image information on the basis of the third 3D space point cloud to restore the engraving code real 3D morphology model and the corresponding engraving code ideal shape surface parameter model; the 3D morphology reconstruction algorithm comprises the following sub-steps:
(1) according to the external pose parameters between the first hyperspectral camera and the high-resolution RGB color module, converting the third 3D space point cloud into a fourth 3D space point cloud that takes the high-resolution RGB color module coordinate system as the world coordinate system;
(2) performing distortion correction on the engraving code high-resolution color image according to the camera distortion model and the calibrated distortion parameters of the high-resolution RGB color module to obtain an undistorted high-resolution color image with radial and tangential distortion removed;
(3) performing distortion correction on the text segmentation connected region output in step S2 according to the camera distortion model and the calibrated distortion parameters of the high-resolution RGB color module to obtain an undistorted text segmentation connected region;
(4) projecting the fourth 3D space point cloud onto the undistorted high-resolution color image by combining the camera model and the calibrated internal parameters of the high-resolution RGB color module, wherein the 3D space points falling within the undistorted high-resolution color image form a fifth 3D space point cloud, and the 3D space points falling within the undistorted text segmentation connected region form a sixth 3D space point cloud, namely the 3D space point cloud of the engraving code text;
(5) using the sixth 3D space point cloud as the initial point set, performing spatial growth within the fifth 3D space point cloud under spatial proximity and surface smoothness constraints to generate a continuous, smooth and stable seventh 3D space point cloud, namely the 3D point cloud of the shape surface on which the engraving code is located;
(6) meshing the seventh 3D space point cloud to generate a first 3D shape surface model, namely the initial mesh model of the shape surface on which the engraving code is located;
(7) fitting ideal shape surface models to the seventh 3D space point cloud, and determining the type of ideal shape surface for the shape surface on which the engraving code is located according to the goodness of fit, to obtain the corresponding first ideal shape surface parameter model;
(8) combining the first ideal shape surface parameter model, performing interior point interpolation on the coarse cells of the first 3D shape surface model, adding mesh vertices and refining the mesh to obtain a second 3D shape surface model, namely the refined mesh model of the shape surface on which the engraving code is located;
(9) projecting all mesh vertices of the second 3D shape surface model onto the undistorted high-resolution color image, and dividing the undistorted high-resolution color image by the fine planar mesh formed by the projected points to obtain a high-resolution color image patch set;
(10) texture-mapping the second 3D shape surface model with the high-resolution color image patch set to obtain a first 3D morphology model that takes the high-resolution RGB color module coordinate system as the world coordinate system and has complete XYZ spatial geometry and a fine RGB color appearance;
(11) generating the engraving code real 3D morphology model referenced to the engraving code's own coordinate system and the corresponding engraving code ideal shape surface parameter model: first, computing the minimum bounding box of the sixth 3D space point cloud; then, taking the center of the minimum bounding box, its long axis direction, its first short axis corresponding to the text height and its second short axis corresponding to the text depth respectively as the origin, horizontal X axis, vertical Y axis and depth Z axis of a new 3D space coordinate system, and transforming the first 3D morphology model and the corresponding first ideal shape surface parameter model into this coordinate system to obtain a second 3D morphology model and a corresponding second ideal shape surface parameter model in which differences of shooting angle and distance are eliminated; the second 3D morphology model is the engraving code real 3D morphology model, and the second ideal shape surface parameter model is the engraving code ideal shape surface parameter model.
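Two numerical steps in claim 12 can be made concrete with a short sketch: triangulating matched binocular feature points into the first hyperspectral camera frame (S31, sub-steps (2) and (3)) and deriving the engraving-code-centred coordinate system of S32 sub-step (11). The sketch approximates the minimum-bounding-box axes with a PCA of the text point cloud; that approximation and all helper names are assumptions, not the claimed procedure.

```python
# Illustrative sketches for claim 12; helper names and the PCA approximation are assumptions.
import cv2
import numpy as np

def triangulate(pts1, pts2, K1, d1, K2, d2, R, T):
    """Matched pixel pairs from the two hyperspectral cameras -> Nx3 points in the
    camera-1 (world) coordinate system."""
    n1 = cv2.undistortPoints(pts1.reshape(-1, 1, 2).astype(np.float64), K1, d1)
    n2 = cv2.undistortPoints(pts2.reshape(-1, 1, 2).astype(np.float64), K2, d2)
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])             # camera 1 at the origin
    P2 = np.hstack([np.asarray(R, float), np.asarray(T, float).reshape(3, 1)])
    X = cv2.triangulatePoints(P1, P2, n1.reshape(-1, 2).T, n2.reshape(-1, 2).T)
    return (X[:3] / X[3]).T                                   # homogeneous -> Euclidean

def code_centred_frame(text_cloud):
    """Approximate the engraving-code coordinate system of S32 step (11): origin at the
    centre of the text point cloud, axes from its principal directions
    (long axis -> X, text height -> Y, text depth -> Z)."""
    centre = text_cloud.mean(axis=0)
    _, _, vt = np.linalg.svd(text_cloud - centre, full_matrices=False)
    R_code = vt                                               # rows are the new X, Y, Z axes

    def to_code_frame(points):
        return (points - centre) @ R_code.T                   # express points in the new frame

    return centre, R_code, to_code_frame
```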
13. The high-precision complete information acquisition and real 3D morphology restoration comparison method for a motor vehicle engraving code according to claim 12, wherein step S5 comprises the following sub-steps:
S51, generating the corresponding morphology orthographic-projection original RGB images for the restored and the historical engraving code real 3D morphology models respectively;
S52, performing graying and feature point extraction and matching on the restored and the historical morphology orthographic-projection original RGB images to obtain an initial set of morphology orthographic-projection matching point pairs between the restored and the historical models;
S53, back-projecting the morphology orthographic-projection matching points to recover their 3D shape surface space coordinates;
S54, registering and aligning the real 3D morphology models based on the 3D shape surface space coordinates of the morphology orthographic-projection matching points;
S55, comparing the consistency of the 3D shape surfaces between the registered and aligned real 3D morphology models;
S56, comparing the consistency of appearance between the registered and aligned real 3D morphology models.
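The registration and comparison of S53 to S55 can be realized, for illustration, with a closed-form rigid fit (the Kabsch/SVD solution) over the matched 3D surface points, followed by a simple nearest-point residual as a surface-consistency measure. The claims do not prescribe these particular estimators.

```python
# Illustrative registration and surface-consistency sketch for S53-S55.
import numpy as np

def kabsch_align(src, dst):
    """Closed-form rigid transform (R, t) minimising ||R @ src_i + t - dst_i|| over
    paired 3D points (e.g. the matched morphology orthographic-projection points)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

def surface_residual(restored, historical, R, t):
    """Mean distance from each aligned restored point to its nearest historical point;
    brute force, so suitable only for sketch-sized clouds."""
    aligned = restored @ R.T + t
    d = np.linalg.norm(aligned[:, None, :] - historical[None, :, :], axis=2)
    return float(d.min(axis=1).mean())
```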
14. The high-precision complete information acquisition and real 3D morphology restoration comparison method for a motor vehicle engraving code according to claim 13, wherein step S6 comprises the following sub-steps:
S61, analyzing the morphology orthographic-projection original RGB image of the restored model to determine the ideal shape surface region corresponding to the engraving code;
S62, meshing the ideal shape surface region corresponding to the engraving code at uniform physical length intervals to generate a coordinate mapping set between the two-dimensional unrolled plane and the three-dimensional ideal shape surface;
S63, based on the coordinate mapping set between the two-dimensional unrolled plane and the three-dimensional ideal shape surface, sampling and projecting the restored engraving code real 3D morphology model to generate the engraving code two-dimensional unrolled image;
S64, geometrically correcting the engraving code two-dimensional unrolled image to generate the engraving code two-dimensional unrolled corrected image;
S65, applying the print configuration to the engraving code two-dimensional unrolled corrected image to generate the final engraving code 1:1 original-size restored image.
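For a concrete picture of S62 and S63, consider the special case where the ideal shape surface is a cylinder (for example a code engraved on a round tube): the unrolled plane is meshed at a uniform physical pitch, each unrolled pixel is mapped to its 3D point on the cylinder and then to the restored model's orthographic image, and the colours are sampled with a remap. The cylinder assumption, the pixel pitch and all parameter names are illustrative; the claim covers arbitrary ideal shape surfaces.

```python
# Illustrative S62-S63 sketch for a cylindrical ideal shape surface; the cylinder
# assumption, pixel pitch and parameter names are not fixed by the claim.
import cv2
import numpy as np

def unroll_cylinder(ortho_img, radius_mm, arc_mm, height_mm, px_per_mm=20.0):
    """Unroll a code engraved on a cylinder into a 1:1-scale flat image.

    ortho_img is assumed to be the orthographic-projection image of the restored
    model, rendered at px_per_mm pixels per millimetre and centred on the code.
    """
    w = int(round(arc_mm * px_per_mm))            # unrolled width = arc length
    h = int(round(height_mm * px_per_mm))
    u_mm = np.arange(w) / px_per_mm               # arc-length coordinate (mm)
    v_mm = np.arange(h) / px_per_mm               # axial coordinate (mm)
    uu, vv = np.meshgrid(u_mm, v_mm)

    # 2D unrolled plane -> 3D ideal surface (the coordinate mapping set of S62).
    theta = (uu - arc_mm / 2.0) / radius_mm       # angle around the cylinder axis
    x_mm = radius_mm * np.sin(theta)              # lateral position on the surface
    y_mm = vv - height_mm / 2.0                   # axial position on the surface

    # 3D surface point -> pixel in the orthographic image of the restored model (S63).
    map_x = (x_mm * px_per_mm + ortho_img.shape[1] / 2.0).astype(np.float32)
    map_y = (y_mm * px_per_mm + ortho_img.shape[0] / 2.0).astype(np.float32)
    return cv2.remap(ortho_img, map_x, map_y, cv2.INTER_LINEAR)   # 1 px = 1/px_per_mm mm
```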
15. The high-precision complete information acquisition and real 3D morphology restoration comparison method for a motor vehicle engraving code according to claim 8, wherein before step S2, the load balancing and scheduling module of the engraving code 3D vision service cluster dynamically schedules a GPU server for the subsequent analysis processing according to the computing resource usage of the whole engraving code 3D vision service cluster.
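Claim 15 leaves the scheduling policy open; a minimal realization is to route each analysis job to the GPU server with the most spare compute, as sketched below with an assumed server/metric representation.

```python
# Illustrative GPU-server selection for the load balancing of claim 15.
def pick_gpu_server(servers):
    """servers: iterable of dicts such as {'host': 'gpu-01', 'gpu_util': 0.8, 'queue': 3}.
    Returns the server with the lowest GPU utilisation, breaking ties by queue length."""
    return min(servers, key=lambda s: (s["gpu_util"], s["queue"]))

# Example:
#   pick_gpu_server([{"host": "gpu-01", "gpu_util": 0.8, "queue": 3},
#                    {"host": "gpu-02", "gpu_util": 0.3, "queue": 1}])
#   -> {"host": "gpu-02", ...}
```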
CN202110070353.9A 2021-01-19 2021-01-19 High-precision complete information acquisition and real 3D morphology restoration comparison system and method for motor vehicle engraving codes Active CN112907973B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110070353.9A CN112907973B (en) 2021-01-19 2021-01-19 High-precision complete information acquisition and real 3D morphology restoration comparison system and method for motor vehicle engraving codes

Publications (2)

Publication Number Publication Date
CN112907973A CN112907973A (en) 2021-06-04
CN112907973B true CN112907973B (en) 2023-04-25

Family

ID=76115854

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110070353.9A Active CN112907973B (en) 2021-01-19 2021-01-19 High-precision complete information acquisition and real 3D morphology restoration comparison system and method for motor vehicle engraving codes

Country Status (1)

Country Link
CN (1) CN112907973B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115131350B (en) * 2022-08-30 2022-12-16 南京木木西里科技有限公司 Large-depth-of-field observation and surface topography analysis system
CN116645676B (en) * 2023-07-20 2023-11-03 深圳市驿格科技有限公司 Frame number information acquisition method and system thereof

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015024407A1 * 2013-08-19 2015-02-26 国家电网公司 Power robot based binocular vision navigation system and method
CN104930985A (en) * 2015-06-16 2015-09-23 大连理工大学 Binocular vision three-dimensional morphology measurement method based on time and space constraints

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112101338B (en) * 2018-08-14 2021-04-30 成都佳诚弘毅科技股份有限公司 Image restoration method based on VIN image acquisition device
CN111757086A (en) * 2019-03-28 2020-10-09 杭州海康威视数字技术股份有限公司 Active binocular camera, RGB-D image determination method and device
CN110288050B (en) * 2019-07-02 2021-09-17 广东工业大学 Hyperspectral and LiDar image automatic registration method based on clustering and optical flow method
CN110378289B (en) * 2019-07-19 2023-05-16 王立之 Reading and identifying system and method for vehicle identification code

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant