CN113033435B - Whole vehicle chassis detection method based on multi-vision fusion - Google Patents

Whole vehicle chassis detection method based on multi-vision fusion

Info

Publication number
CN113033435B
CN113033435B (application CN202110343920.3A)
Authority
CN
China
Prior art keywords
chassis
images
vehicle
pixels
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110343920.3A
Other languages
Chinese (zh)
Other versions
CN113033435A (en)
Inventor
Tian Yongtao (田涌涛)
Li Chenglong (李成龙)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Parking Intelligent Technology Co ltd
Original Assignee
Suzhou Parking Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Parking Intelligent Technology Co ltd filed Critical Suzhou Parking Intelligent Technology Co ltd
Priority to CN202110343920.3A
Publication of CN113033435A
Application granted
Publication of CN113033435B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/34Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a whole vehicle chassis detection method based on multi-vision fusion, in which a plurality of vision cameras are mounted on a mechanical device that automatically holds/places a vehicle to capture images of multiple positions of the vehicle exterior, particularly the chassis; these images are then combined with historical detection data, and an overall picture of the vehicle chassis is generated in real time using image vision-fusion technology. For the handover process in whole-vehicle logistics, the invention quickly preserves evidence of whether the chassis exterior is damaged, provides technical support for automated whole-vehicle logistics, improves the handover efficiency of whole-vehicle logistics, and clarifies liability for scratches in vehicle logistics.

Description

Whole vehicle chassis detection method based on multi-vision fusion
Technical Field
The invention relates to a whole vehicle chassis detection method based on multi-vision fusion, and belongs to the technical field of vehicle detection.
Background
In the whole-vehicle logistics chain of a vehicle manufacturer, a new vehicle passes through several logistics links on its way from the factory assembly line to the 4S dealership. Each handling link carries risks such as scratch damage to the vehicle exterior. To avoid liability for scratch damage during handling, the vehicle must be inspected and confirmed repeatedly by eye at each handover; even so, cases in which liability for damage cannot be established despite such inspection occur frequently.
Disclosure of Invention
The invention aims to provide a whole vehicle chassis detection method based on multi-vision fusion, in which a plurality of vision cameras are added to the mechanical structure of an automatic vehicle holding/placing device to capture images of multiple positions of the vehicle exterior, particularly the chassis. An overall picture of the vehicle chassis is generated in real time through image vision-fusion technology, so that evidence of whether the chassis exterior is damaged is preserved for the whole-vehicle logistics handover process.
In order to achieve the above purpose, the present invention provides the following technical solutions:
a whole vehicle chassis detection method based on multi-vision fusion comprises the following steps:
s1, a plurality of vision cameras are distributed and installed on a mechanical device for holding/placing a vehicle, each vision camera corresponds to a part of chassis areas of the vehicle, and the sum of the chassis areas corresponding to all the vision cameras completely covers all the chassis areas of the vehicle;
s2, when the mechanical device holds/places the vehicle, all the vision cameras are adopted to shoot to obtain the corresponding local chassis images of the vehicle;
s3, extracting key points on each local chassis image, and matching feature vectors of the key points of two different local chassis images; the key points refer to small protruding areas in the image, and the feature vectors represent intensity patterns around the key points;
s4, estimating a homography matrix by adopting a Ranac algorithm and successfully matched characteristics, calculating an affine transformation matrix of two images successfully matched by using the homography matrix, and fusing all local chassis images by adopting a linear gradual change method to obtain a complete panoramic image of the vehicle chassis;
s5, detecting the panoramic image of the vehicle chassis by adopting a neural network object detection method, identifying the positions of four hubs or tires of the vehicle, calculating a projection transformation matrix of the panoramic image, and carrying out projection correction on the panoramic image by using the projection transformation matrix to obtain a final composite chassis panoramic image.
Further, in step S1, the vision camera is mounted on a mechanical device through a position adjustment mechanism.
Further, in step S3, the process of extracting the key points on each local chassis image and matching the feature vectors of the key points of two different local chassis images includes the following steps:
S31, performing image enhancement on each captured local chassis image using an automatic white balance algorithm;
S32, extracting key points from each local chassis image according to the brightness relationship of connected pixels;
S33, for each key point, creating a feature vector according to the brightness relationship of the pixels within the key point's neighborhood;
and S34, matching the feature vectors of the key points of the two different local chassis images, and judging the matching relation between the two different local chassis images according to the similarity of the feature vectors of the key points.
Further, in step S32, the process of extracting key points from each local chassis image according to the brightness relationship of connected pixels comprises the following steps:
S321, for a given pixel, comparing the brightness of N pixels in the surrounding area and classifying the pixels in the area into three classes:
pixels brighter than the given pixel by more than a preset brightness threshold are class I, pixels darker than the given pixel by more than the preset brightness threshold are class II, and the remaining pixels are class III;
S322, counting the connected pixels in the area; if the number of connected class I or class II pixels exceeds a preset count threshold, taking the given pixel as a key point;
s323, repeating the steps S321 to S322 until all pixels of the whole image are processed.
Further, in step S33, the process of creating, for each key point, a feature vector according to the brightness relationship of the pixels within the key point's neighborhood comprises the following steps:
S331, smoothing the local chassis image with a Gaussian kernel;
S332, selecting a key point;
S333, randomly selecting a pair of pixels from within a defined square neighborhood centered on the given key point;
S334, comparing the brightness of the two randomly selected pixels: if the first pixel is brighter than the second, setting the corresponding bit of the given key point's descriptor to 1, otherwise setting it to 0;
S335, repeating steps S333 to S334 M times for the given key point, and writing the M brightness-comparison results into the binary feature vector of the key point;
S336, repeating steps S332 to S335 until all key points have been processed.
Further, in step S34, the process of matching feature vectors of key points of two different local chassis images and determining the matching relationship between the two different local chassis images according to the similarity of the feature vectors of the key points includes the following steps:
s341, taking one of the local chassis images as a training image;
s342, calculating an ORB descriptor of the training image and storing the ORB descriptor into a memory, wherein the ORB descriptor of the training image contains binary feature vectors of key points;
S343, computing the ORB descriptors of the other local chassis images (the query images), and matching their key points against the ORB descriptor of the training image using a matching function.
Further, in step S343, the process of performing the keypoint matching with the ORB descriptor of the training image by using the matching function includes:
and calculating the standard Euclidean distance similarity between any two key points in the two images, and using it as the key-point matching quality.
Further, in step S343, the process of performing the keypoint matching with the ORB descriptor of the training image by using the matching function includes:
and calculating whether the feature vectors of any two key points in the two images contain descriptor sequences whose similarity exceeds a preset similarity threshold.
Further, in step S4, the process of fusing all the local chassis images by using the linear gradient method to obtain the panoramic image of the complete vehicle chassis includes:
s41, matching key points on all extracted local chassis images by utilizing the feature descriptors to obtain the same key points in different local chassis images, and generating a plurality of groups of matching pairs;
s42, referring to shooting parameters of the current logistics link, shooting parameters of the historical logistics environment and historical chassis images of the vehicle, and analyzing and obtaining relative positions among all local chassis images according to the obtained matching pairs;
s43, adjusting the picture direction of each local chassis image to make the directions consistent;
s44, projecting all local chassis images on a spherical surface or a cylindrical surface through projective transformation;
S45, calculating the seam area between adjacent local chassis images, processing the pixels of the seam area with a preset fusion algorithm, and removing misaligned pixels between adjacent local chassis images to obtain the final chassis panoramic image; the preset fusion algorithm weights each pixel in the seam region according to its distance from the seam.
Further, the detection method further comprises the following steps:
S4, packaging the acquired chassis panoramic image and the corresponding local chassis image processing data into a detection data file for the current logistics link;
S5, hashing the detection data file, uploading the resulting hash value to a cloud server, and recording it in the corresponding block of the blockchain.
The invention has the beneficial effects that:
(1) Technical support is provided for automated whole-vehicle logistics, the handover efficiency of whole-vehicle logistics is improved, and liability for scratches in vehicle logistics is clarified.
(2) The workload of manual inspection is reduced, human error is reduced, and costs are effectively saved.
(3) A mechanical device collects the images automatically, ensuring the safety of the detection operation.
(4) Blockchain technology is used to store the detection data, ensuring the security of the detection data.
(5) Historical detection data is introduced into the fusion process, which effectively enhances the fusion quality and increases the fusion speed.
The foregoing is only an overview of the technical solution of the present invention. So that the technical means of the invention may be understood more clearly and implemented according to the contents of the specification, preferred embodiments of the invention are described in detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a flowchart of a whole vehicle chassis detection method based on multi-vision fusion according to an embodiment of the present invention.
Fig. 2 is a diagram illustrating an example of a fusion result of one of chassis panoramic images according to an embodiment of the present invention.
Fig. 3 is a schematic representation of one of the feature vectors.
FIG. 4 is a schematic view of a neighborhood corresponding to one of the keypoints.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless explicitly specified and limited otherwise, the terms "mounted," "connected," and "coupled" are to be construed broadly: a connection may be fixed, detachable, or integral; mechanical or electrical; direct, indirect through an intermediate medium, or an internal communication between two elements. The specific meaning of the above terms in the present invention will be understood by those of ordinary skill in the art according to the specific circumstances.
In addition, the technical features of the different embodiments of the present invention described below may be combined with each other as long as they do not collide with each other.
Fig. 1 is a flowchart of a whole vehicle chassis detection method based on multi-vision fusion according to an embodiment of the present invention. The embodiment can be applied to the situation of detecting the whole vehicle chassis through a server, multi-vision equipment and the like.
Referring to fig. 1, the method for detecting the chassis of the whole vehicle specifically includes:
s1, a plurality of vision cameras are distributed and installed on a mechanical device for holding/placing the vehicle, each vision camera corresponds to a part of chassis areas of the vehicle, and the sum of the chassis areas corresponding to all the vision cameras completely covers all the chassis areas of the vehicle.
A typical passenger-vehicle chassis is too large (3000 mm or more) relative to the available camera object distance (within 200 mm) to be imaged by a single camera. The invention therefore mounts a plurality of visible-light cameras on a mechanical device for holding/placing the vehicle, captures partial images of the chassis at different positions, and then stitches a panoramic picture of the whole chassis using image-fusion technology. Because vehicles differ in their exterior parameters, the invention further proposes mounting each vision camera on the mechanical device through a position-adjustment mechanism; for vehicles with different exterior parameters, the shooting position of the camera can then be adjusted quickly to suit different shooting requirements. For example, the camera may be mounted with a quick-release clamping device, moved quickly along a short guide rail on the mechanical device, or the mechanical device may reserve several mounting positions and fixing components so that the camera can be re-mounted quickly.
S2, when the mechanical device holds/places the vehicle, all the vision cameras are adopted to shoot and obtain the corresponding local chassis images of the vehicle.
S3, extracting key points from each local chassis image, and matching the feature vectors of the key points of two different local chassis images; the key points are small salient regions in the image, and the feature vectors represent the intensity pattern around each key point. Specifically, the method comprises the following steps:
S31, performing image enhancement on each captured local chassis image using an automatic white balance algorithm.
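As an illustration of this enhancement step, the sketch below applies a gray-world automatic white balance in Python with OpenCV/NumPy; the patent does not name a specific AWB algorithm, so the gray-world rule and the file name are assumptions:

```python
import numpy as np

def gray_world_awb(img_bgr: np.ndarray) -> np.ndarray:
    """Gray-world automatic white balance: scale each colour channel so
    its mean matches the overall mean brightness (an assumed choice)."""
    img = img_bgr.astype(np.float32)
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel means
    gains = means.mean() / (means + 1e-6)     # per-channel gains
    return np.clip(img * gains, 0.0, 255.0).astype(np.uint8)

# Usage (file name is a placeholder):
# import cv2
# enhanced = gray_world_awb(cv2.imread("local_chassis_01.png"))
```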
S32, extracting key points from each local chassis image according to the brightness relationship of connected pixels. Comprising the following steps: S321, for a given pixel, comparing the brightness of N pixels in the surrounding area and classifying the pixels in the area into three classes: pixels brighter than the given pixel by more than a preset brightness threshold are class I, pixels darker than the given pixel by more than the preset brightness threshold are class II, and the remaining pixels are class III. S322, counting the connected pixels in the area; if the number of connected class I or class II pixels exceeds a preset count threshold, taking the given pixel as a key point. S323, repeating steps S321 to S322 until all pixels of the whole image have been processed.
The present embodiment employs ORB to quickly create feature vectors for the key points in an image, which can then be used to identify salient local features of the image. ORB is extremely fast and, to a degree, unaffected by noise and by image transformations such as rotation and scaling.
Specifically, ORB first looks for special regions in the image, called key points. Key points are small salient regions in the image, such as corner points or pixels whose values change sharply from light to dark. ORB then calculates a corresponding feature vector for each key point. In this embodiment, the feature vectors created by the ORB algorithm contain only 1s and 0s and are called binary feature vectors. The order of the 1s and 0s varies with the particular key point and the pixel region around it. The vector represents the intensity pattern around the key point, so multiple feature vectors can be used to identify a larger area, or even a specific object in the image.
For example, key points are selected quickly as follows: given a pixel p, FAST compares the brightness of 16 pixels lying on a circle around p, classifying each as brighter than p, darker than p, or similar to p. Specifically, for a given threshold h, brighter pixels are those with brightness above Ip+h, darker pixels are those with brightness below Ip-h, and similar pixels are those with brightness between these two values. After the pixels are classified, pixel p is selected as a key point if more than 8 consecutive pixels on the circle are brighter or darker than p.
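A minimal sketch of this selection step using OpenCV's FAST detector, which implements the circle test just described; the threshold h = 20 and the file name are illustrative assumptions:

```python
import cv2

# Pixels on the 16-pixel circle around p are compared against Ip+h and
# Ip-h; p is kept as a key point when a long enough run of consecutive
# brighter or darker pixels exists on the circle.
img = cv2.imread("local_chassis_01.png", cv2.IMREAD_GRAYSCALE)
fast = cv2.FastFeatureDetector_create(threshold=20, nonmaxSuppression=True)
keypoints = fast.detect(img, None)
print(f"{len(keypoints)} FAST keypoints detected")
```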
S33, for each key point, creating a feature vector according to the brightness relationship of the pixels within the key point's neighborhood. Comprising the following steps:
S331, smoothing the local chassis image with a Gaussian kernel; S332, selecting a key point; S333, randomly selecting a pair of pixels from within a defined square neighborhood centered on the given key point;
S334, comparing the brightness of the two randomly selected pixels: if the first pixel is brighter than the second, setting the corresponding bit of the given key point's descriptor to 1, otherwise setting it to 0; S335, repeating steps S333 to S334 M times for the given key point, and writing the M brightness-comparison results into the binary feature vector of the key point; S336, repeating steps S332 to S335 until all key points have been processed.
In the present embodiment, binary feature vectors (also called binary descriptors) are used; these are feature vectors containing only 1s and 0s. Each key point is described by a binary feature vector, typically a 128- to 512-bit string containing only 1s and 0s. Together, these feature vectors can represent an object. Fig. 3 is a schematic representation of one of the feature vectors.
The given image is first smoothed with a Gaussian kernel to prevent the descriptor from being overly sensitive to high-frequency noise. Next, for a given key point, a pair of pixels is randomly selected within a well-defined neighborhood around it; this neighborhood, called a patch, is a square with a particular width and height in pixels. FIG. 4 is a schematic view of the neighborhood corresponding to one key point. The first pixel of a random pair (the light-gray square in Fig. 4) is drawn from a Gaussian distribution centered on the key point with standard deviation (dispersion) σ. The second pixel (the dark-gray square in Fig. 4) is drawn from a Gaussian distribution centered on the first pixel with standard deviation σ/2; experience shows that this Gaussian selection improves the feature-matching rate. Finally, the binary descriptor is built by comparing the brightness of the two pixels: if the first pixel is brighter than the second, the corresponding bit of the descriptor is assigned 1, otherwise 0. For a 256-bit vector, this embodiment repeats the process 256 times for the same key point before moving to the next one; the 256 brightness-comparison results are then written into the key point's binary feature vector. The process is repeated until a vector has been created for every key point in the image.
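The descriptor computation can be sketched with OpenCV's ORB implementation, which combines the FAST test and the binary brightness comparisons described above; the feature count and file name are assumptions:

```python
import cv2

# Compute ORB binary descriptors (FAST keypoints + rotated BRIEF-style
# binary tests). Each 256-bit descriptor is returned as a 32-byte row.
img = cv2.imread("local_chassis_01.png", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=2000)
keypoints, descriptors = orb.detectAndCompute(img, None)
# descriptors.shape == (len(keypoints), 32)   # 32 bytes = 256 bits
```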
And S34, matching the feature vectors of the key points of the two different local chassis images, and judging the matching relation between the two different local chassis images according to the similarity of the feature vectors of the key points.
Specifically, one of the partial images is designated the training image, and similar features are searched for in the other partial images; an image being searched is called a query image.
The first step is to compute the ORB descriptor of the training image and store it in memory; this descriptor contains the binary feature vectors describing the key points of the training image. The second step is to compute and save the ORB descriptor of the query image. With the descriptors of both the training image and the query image available, the final step is to match the key points of the two images using the corresponding descriptors, typically with a matching function.
The purpose of the matching function is to match the key points of two different images by comparing their descriptors and checking whether they are close enough to count as a match. When the matching function compares two key points, it derives a matching quality from a metric that represents the similarity of the key points' feature vectors; this metric can be regarded as the standard Euclidean distance between the two key points. Some metrics instead directly check whether the feature vectors contain 1s and 0s in a similar order. Note that different matching functions use different metrics to determine match quality; for binary descriptors such as those used by ORB, the Hamming metric is typically used because it is very fast to evaluate.
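A sketch of this matching step under the same assumptions, using OpenCV's brute-force matcher with the Hamming metric (file paths are placeholders):

```python
import cv2

def match_chassis_images(train_path: str, query_path: str):
    """Match ORB key points of two local chassis images with the
    Hamming metric, as described above (a sketch, not the patent's
    exact matching function)."""
    orb = cv2.ORB_create(nfeatures=2000)
    train = cv2.imread(train_path, cv2.IMREAD_GRAYSCALE)
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    kp_t, des_t = orb.detectAndCompute(train, None)
    kp_q, des_q = orb.detectAndCompute(query, None)
    # Brute-force Hamming matcher; crossCheck keeps only mutually best
    # matches, a simple proxy for match quality.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_t, des_q), key=lambda m: m.distance)
    return kp_t, kp_q, matches
```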
S4, estimating a homography matrix with the RANSAC algorithm from the successfully matched features, calculating the affine transformation matrix between the successfully matched images using the homography matrix, and fusing all local chassis images with a linear-gradient blending method to obtain a complete panoramic image of the vehicle chassis.
Consider that the vehicle must pass through several logistics links, and each link requires chassis-photo fusion. Because of the specialized transport of the vehicle, the shooting parameters of each link (including external factors such as ambient light) make the captured partial images slightly different; but since the photographed object is the same vehicle chassis, this embodiment introduces historical detection data into the fusion process. For example, feature points in non-edge areas are additionally selected according to the historical chassis images, so that the relative positions, picture orientations, and so on between the local chassis images are obtained quickly, further enhancing the fusion quality or increasing the fusion speed.
Specifically, the method comprises the following steps:
s41, matching key points on all extracted local chassis images by using the feature descriptors to obtain the same key points in different local chassis images, and generating a plurality of groups of matching pairs. S42, referring to shooting parameters of the current logistics link, shooting parameters of the historical logistics environment and historical chassis images of the vehicle, and analyzing and obtaining relative positions among all the local chassis images according to the obtained matching pairs. S43, adjusting the picture direction of each local chassis image to be consistent. S44, because the different stitching of the perspective angles of the cameras can destroy the consistency of the field of view, all images are projected on one spherical surface or cylindrical surface through projection transformation. S45, calculating to obtain a seam area of the adjacent local chassis images, processing pixels of the seam area according to a preset fusion algorithm, and removing dislocation pixels between the adjacent local chassis images to obtain a final chassis panoramic image; the preset fusion algorithm is to weight the pixels in the joint region according to the distance between the pixels and the joint.
A seam is the line of greatest similarity in the overlapping area between images. After the seam position between two images is obtained, the pixels near the seam are fused and misalignment between the images is removed to obtain the stitched result. The fusion may be, for example, a weighted fusion in which each pixel near the seam is weighted by its distance from the seam. The weights can be computed quickly from the shooting parameters of the current logistics link, the shooting parameters of the historical logistics environment, and the corresponding historical weights. Fig. 2 shows an example fusion result of a chassis panoramic image in an embodiment of the present invention.
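The following sketch walks one image pair through this fusion step: RANSAC homography estimation from the matched key points (as returned by match_chassis_images above), warping, and a linear-gradient blend over the overlap. The canvas layout and left-to-right blend direction are simplifying assumptions, not the patent's exact procedure:

```python
import cv2
import numpy as np

def fuse_pair(img_a, img_b, kp_a, kp_b, matches):
    """Estimate a homography with RANSAC, warp img_b into img_a's
    frame, and blend the overlap with a linear gradient (a sketch)."""
    dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)
    h, w = img_a.shape[:2]
    warped = cv2.warpPerspective(img_b, H, (2 * w, h))
    canvas = np.zeros_like(warped)
    canvas[:, :w] = img_a
    # Linear gradient over the overlapping columns: each pixel is
    # weighted by its distance into the seam region, so misaligned
    # pixels fade out instead of producing a hard edge.
    both = (canvas.sum(2) > 0) & (warped.sum(2) > 0)
    out = np.where((canvas.sum(2) > 0)[..., None], canvas, warped).astype(np.float32)
    cols = np.where(both.any(0))[0]
    if cols.size:
        x0, x1 = cols.min(), cols.max() + 1
        alpha = np.linspace(1.0, 0.0, x1 - x0)[None, :, None]
        blend = alpha * canvas[:, x0:x1] + (1 - alpha) * warped[:, x0:x1]
        out[:, x0:x1] = np.where(both[:, x0:x1, None], blend, out[:, x0:x1])
    return out.astype(np.uint8)
```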
S5, detecting the panoramic image of the vehicle chassis by adopting a neural network object detection method, identifying the positions of four hubs or tires of the vehicle, calculating a projection transformation matrix of the panoramic image, and carrying out projection correction on the panoramic image by using the projection transformation matrix to obtain a final composite chassis panoramic image.
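A sketch of this projection correction, assuming the four hub/tire centres have already been returned by some object detector (the patent does not name a specific network); all coordinates and the vehicle geometry below are hypothetical:

```python
import cv2
import numpy as np

# Map the four detected hub/tire centres onto an axis-aligned rectangle
# whose proportions follow the (assumed) wheelbase and track width,
# which rectifies the stitched panorama.
panorama = cv2.imread("chassis_panorama.png")              # placeholder file
hub_px = np.float32([[412, 310], [2830, 298],
                     [2855, 1240], [430, 1255]])           # hypothetical pixels
wheelbase_mm, track_mm, px_per_mm = 2700.0, 1600.0, 0.5    # assumed geometry
rect = np.float32([[0, 0], [wheelbase_mm, 0],
                   [wheelbase_mm, track_mm], [0, track_mm]]) * px_per_mm
P = cv2.getPerspectiveTransform(hub_px, rect)
corrected = cv2.warpPerspective(
    panorama, P, (int(wheelbase_mm * px_per_mm), int(track_mm * px_per_mm)))
```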
In some examples, the detection method further comprises the steps of:
s4, packaging the acquired chassis panoramic image and the corresponding local chassis image processing data into a detection data file of the current logistics link. S5, carrying out hash processing on the detection data file, uploading the corresponding hash value to a cloud server, and loading the corresponding hash value into a corresponding block of the block chain. The characteristic of non-falsification of the block records ensures the data security and traceability of the logistics handover process. Preferably, the shooting parameters of the historical logistics environment and the historical chassis image of the vehicle are downloaded from corresponding blocks of the blockchain.
The above examples illustrate only a few embodiments of the invention, which are described in detail and are not to be construed as limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.

Claims (10)

1. The whole vehicle chassis detection method based on the multi-vision fusion is characterized by comprising the following steps of:
s1, a plurality of vision cameras are distributed and installed on a mechanical device for holding/placing a vehicle, each vision camera corresponds to a part of chassis areas of the vehicle, and the sum of the chassis areas corresponding to all the vision cameras completely covers all the chassis areas of the vehicle;
s2, when the mechanical device holds/places the vehicle, all the vision cameras are adopted to shoot to obtain the corresponding local chassis images of the vehicle;
S3, extracting key points from each local chassis image, and matching the feature vectors of the key points of two different local chassis images; the key points are small salient regions in the image, and the feature vectors represent the intensity pattern around each key point;
S4, estimating a homography matrix with the RANSAC algorithm from the successfully matched features, calculating the affine transformation matrix between the two matched images using the homography matrix, and fusing all local chassis images with a linear-gradient blending method to obtain a complete panoramic image of the vehicle chassis;
s5, detecting the panoramic image of the vehicle chassis by adopting a neural network object detection method, identifying the positions of four hubs or tires of the vehicle, calculating a projection transformation matrix of the panoramic image, and carrying out projection correction on the panoramic image by using the projection transformation matrix to obtain a final composite chassis panoramic image.
2. The method for detecting a complete vehicle chassis based on multi-vision fusion according to claim 1, wherein in step S1, the vision camera is mounted on a mechanical device through a position adjustment mechanism.
3. The method for detecting the whole chassis based on the multi-vision fusion according to claim 1, wherein in step S3, the process of extracting the key points on each local chassis image and matching the feature vectors of the key points of two different local chassis images includes the following steps:
S31, performing image enhancement on each captured local chassis image using an automatic white balance algorithm;
S32, extracting key points from each local chassis image according to the brightness relationship of connected pixels;
S33, for each key point, creating a feature vector according to the brightness relationship of the pixels within the key point's neighborhood;
and S34, matching the feature vectors of the key points of the two different local chassis images, and judging the matching relation between the two different local chassis images according to the similarity of the feature vectors of the key points.
4. The method for detecting a complete vehicle chassis based on multi-vision fusion according to claim 3, wherein in step S32, the process of extracting key points from each partial chassis image according to the brightness relationship of connected pixels comprises the following steps:
S321, for a given pixel, comparing the brightness of N pixels in the surrounding area and classifying the pixels in the area into three classes:
pixels brighter than the given pixel by more than a preset brightness threshold are class I, pixels darker than the given pixel by more than the preset brightness threshold are class II, and the remaining pixels are class III;
S322, counting the connected pixels in the area; if the number of connected class I or class II pixels exceeds a preset count threshold, taking the given pixel as a key point;
s323, repeating the steps S321 to S322 until all pixels of the whole image are processed.
5. A method for detecting a complete vehicle chassis based on multi-vision fusion according to claim 3, wherein in step S33, the process of creating, for each key point, a feature vector according to the brightness relationship of the pixels within the key point's neighborhood comprises the following steps:
S331, smoothing the local chassis image with a Gaussian kernel;
S332, selecting a key point;
S333, randomly selecting a pair of pixels from within a defined square neighborhood centered on the given key point;
S334, comparing the brightness of the two randomly selected pixels: if the first pixel is brighter than the second, setting the corresponding bit of the given key point's descriptor to 1, otherwise setting it to 0;
S335, repeating steps S333 to S334 M times for the given key point, and writing the M brightness-comparison results into the binary feature vector of the key point;
S336, repeating steps S332 to S335 until all key points have been processed.
6. The method for detecting a complete vehicle chassis based on multi-vision fusion according to claim 5, wherein in step S34, the process of matching feature vectors of key points of two different local chassis images and judging a matching relationship between the two different local chassis images according to similarity of the feature vectors of the key points includes the following steps:
s341, taking one of the local chassis images as a training image;
s342, calculating an ORB descriptor of the training image and storing the ORB descriptor into a memory, wherein the ORB descriptor of the training image contains binary feature vectors of key points;
S343, computing the ORB descriptors of the other local chassis images (the query images), and matching their key points against the ORB descriptor of the training image using a matching function.
7. The method for detecting a complete vehicle chassis based on multi-vision fusion according to claim 6, wherein the step S343 of performing the key point matching with the ORB descriptor of the training image using the matching function comprises:
and calculating the standard Euclidean distance similarity between any two key points in the two images, and using it as the key-point matching quality.
8. The method for detecting a complete vehicle chassis based on multi-vision fusion according to claim 6, wherein the step S343 of performing the key point matching with the ORB descriptor of the training image using the matching function comprises:
and calculating whether the feature vectors of any two key points in the two images contain descriptor sequences whose similarity exceeds a preset similarity threshold.
9. The method for detecting the whole chassis based on the multi-vision fusion according to claim 6, wherein in step S4, the process of fusing all the partial chassis images by using the linear gradient method to obtain the panoramic image of the whole chassis of the vehicle includes:
s41, matching key points on all extracted local chassis images by utilizing the feature descriptors to obtain the same key points in different local chassis images, and generating a plurality of groups of matching pairs;
s42, referring to shooting parameters of the current logistics link, shooting parameters of the historical logistics environment and historical chassis images of the vehicle, and analyzing and obtaining relative positions among all local chassis images according to the obtained matching pairs;
s43, adjusting the picture direction of each local chassis image to make the directions consistent;
s44, projecting all local chassis images on a spherical surface or a cylindrical surface through projective transformation;
S45, calculating the seam area between adjacent local chassis images, processing the pixels of the seam area with a preset fusion algorithm, and removing misaligned pixels between adjacent local chassis images to obtain the final chassis panoramic image; the preset fusion algorithm weights each pixel in the seam region according to its distance from the seam.
10. The method for detecting the chassis of the whole vehicle based on the multi-vision fusion according to claim 1, wherein the method further comprises the following steps:
s4, packaging the acquired chassis panoramic image and the corresponding local chassis image processing data into a detection data file of the current logistics link;
S5, hashing the detection data file, uploading the resulting hash value to a cloud server, and recording it in the corresponding block of the blockchain.
CN202110343920.3A 2021-03-31 2021-03-31 Whole vehicle chassis detection method based on multi-vision fusion Active CN113033435B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110343920.3A CN113033435B (en) 2021-03-31 2021-03-31 Whole vehicle chassis detection method based on multi-vision fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110343920.3A CN113033435B (en) 2021-03-31 2021-03-31 Whole vehicle chassis detection method based on multi-vision fusion

Publications (2)

Publication Number Publication Date
CN113033435A (en) 2021-06-25
CN113033435B (en) 2023-11-14

Family

ID=76453565

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110343920.3A Active CN113033435B (en) 2021-03-31 2021-03-31 Whole vehicle chassis detection method based on multi-vision fusion

Country Status (1)

Country Link
CN (1) CN113033435B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109409208A * 2018-09-10 2019-03-01 Southeast University Video-based vehicle feature extraction and matching method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110026770A1 (en) * 2009-07-31 2011-02-03 Jonathan David Brookshire Person Following Using Histograms of Oriented Gradients

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109409208A * 2018-09-10 2019-03-01 Southeast University Video-based vehicle feature extraction and matching method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on a vision-based axle type detection method for vehicle scales; Hou Yueqing; Xu Guili; Zhu Shipeng; Computer Measurement & Control (09); full text *

Also Published As

Publication number Publication date
CN113033435A (en) 2021-06-25

Similar Documents

Publication Publication Date Title
CN108683907A (en) Optics module picture element flaw detection method, device and equipment
CN113038018B (en) Method and device for assisting user in shooting vehicle video
WO2020110576A1 (en) Information processing device
CN109413411B (en) Black screen identification method and device of monitoring line and server
US20200074215A1 (en) Method and system for facilitating detection and identification of vehicle parts
KR102336030B1 (en) Electric vehicle charger fire detection and charger condition prediction system
CN110087049A (en) Automatic focusing system, method and projector
US11010884B2 (en) System and method for evaluating displays of electronic devices
Stone et al. Forward looking anomaly detection via fusion of infrared and color imagery
CN109859104A (en) A kind of video generates method, computer-readable medium and the converting system of picture
CN112985778A (en) Positioning method of test chart, terminal and storage medium
CN113033435B (en) Whole vehicle chassis detection method based on multi-vision fusion
CN106997366B (en) Database construction method, augmented reality fusion tracking method and terminal equipment
CN115018565A (en) Advertisement media image identification method, system, equipment and readable storage medium
CN111275756B (en) Spool positioning method and device
CN116758425A (en) Automatic acceptance checking method and device for large-base photovoltaic power station
CN113727022B (en) Method and device for collecting inspection image, electronic equipment and storage medium
CN109146865B (en) Visual alignment detection graph source generation system
JPH1151611A (en) Device and method for recognizing position and posture of object to be recognized
CN115456969A (en) Method for detecting appearance defect, electronic device and storage medium
CN116051876A (en) Camera array target recognition method and system of three-dimensional digital model
KR102196744B1 (en) Method and server for providing cultural property inspection service using drone
CN115601699A (en) Safety helmet wearing detection method, electronic equipment and computer readable medium
CN113066003B (en) Method and device for generating panoramic image, electronic equipment and storage medium
JP2023046979A (en) Collation device and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant