CN112560606B - Trailer angle identification method and device - Google Patents

Trailer angle identification method and device

Info

Publication number
CN112560606B
CN112560606B (application CN202011405469.5A)
Authority
CN
China
Prior art keywords
information
corner
position information
frame
real
Prior art date
Legal status
Active
Application number
CN202011405469.5A
Other languages
Chinese (zh)
Other versions
CN112560606A (en)
Inventor
李世明 (Li Shiming)
张海强 (Zhang Haiqiang)
Current Assignee
Beijing Jingwei Hirain Tech Co Ltd
Original Assignee
Beijing Jingwei Hirain Tech Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingwei Hirain Tech Co Ltd
Priority to CN202011405469.5A
Publication of CN112560606A
Application granted
Publication of CN112560606B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404 Methods for optical code recognition
    • G06K7/1408 Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K7/1417 2D bar codes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404 Methods for optical code recognition
    • G06K7/1439 Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K7/1443 Methods for optical code recognition including a method step for retrieval of the optical code locating of the code in an image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles


Abstract

The embodiment of the invention provides a trailer angle identification method and device for reducing the complexity of trailer angle detection. The method is based on a neural network and comprises the following steps: acquiring a real-time image, the real-time image being obtained by a camera shooting a two-dimensional code on the trailer in real time; performing corner recognition on the real-time image by using the neural network to obtain a corner recognition result, the corner recognition result comprising corner position information of the two-dimensional code; performing frame recognition on the real-time image by using the neural network to obtain a frame recognition result, the frame recognition result comprising frame information of the two-dimensional code; performing identification (ID) information recognition on the real-time image by using the neural network to obtain an ID information recognition result, the ID information recognition result comprising ID information of the two-dimensional code; determining, according to the frame recognition result, the corner position information corresponding to each piece of ID information; and determining the trailer angle according to the corner position information corresponding to each piece of ID information.

Description

Trailer angle identification method and device
Technical Field
The invention relates to the technical field of data processing, in particular to a trailer angle identification method and device.
Background
As shown in fig. 1, when a large truck turns, the trailer forms an included angle with the truck head; this is the trailer angle (the angle between the heading direction of the truck head and the central axis of the trailer).
A truck with a long trailer needs to identify the trailer angle when turning under autonomous driving: at an intersection, the steering control and speed control of the truck need to acquire the change of the included angle between the trailer and the truck head to obtain the swing posture of the trailer, and then, according to that swing posture, prevent the trailer from being damaged by collision with the road edge or other off-road obstacles.
Referring to fig. 2, to obtain the trailer angle conveniently, two-dimensional codes are arranged on the trailer, and the trailer angle can be calculated from the position information of the two-dimensional codes.
Current two-dimensional code corner detection is realized by a relatively complicated traditional image processing pipeline comprising algorithms such as gradient calculation, edge detection, segmentation and straight-line fitting; this complex algorithm flow means that multiple parameters must be tuned to meet the needs of a usage scenario. Moreover, complex and changeable scenes require different parameters for different scenes, further increasing the complexity of use.
Disclosure of Invention
In view of this, the embodiments of the present invention provide a trailer angle recognition method and apparatus, so as to reduce the complexity of trailer angle detection.
In order to achieve the above object, the embodiment of the present invention provides the following technical solutions:
a trailer angle identification method based on a neural network, the trailer angle identification method comprising:
acquiring a real-time image; the real-time image is obtained by a camera shooting a two-dimensional code on the trailer in real time;
performing corner recognition on the real-time image by using the neural network to obtain a corner recognition result; the corner recognition result comprises corner position information of the two-dimensional code;
performing frame recognition on the real-time image by using the neural network to obtain a frame recognition result; the frame recognition result comprises: frame information of the two-dimensional code;
performing identification ID information identification on the real-time image by using the neural network to obtain an ID information identification result; the ID information identification result includes: ID information of the two-dimensional code;
determining the position information of each corner point corresponding to the ID information according to the frame identification result;
and determining the trailer angle according to the angular point position information corresponding to each ID information.
Optionally, the neural network includes an encoder, a corner prediction branch, a frame prediction branch, and an ID prediction branch; wherein the encoder is configured to: extract a feature map of the real-time image; the corner prediction branch is configured to: calculate the probability of each feature point by using the feature map, and take the feature points whose probability values meet a preset condition as corner points to obtain the corner recognition result; the frame prediction branch is configured to: perform frame recognition by using the feature map to obtain the frame recognition result; and the ID prediction branch is configured to: perform ID information recognition by using the feature map to obtain the ID information recognition result.
Optionally, determining, according to the frame recognition result, the position information of each corner point corresponding to the ID information includes: taking the corner points belonging to the same frame as the corner points corresponding to the same ID, so as to determine the position information of each corner point corresponding to the ID information.
Optionally, the number of the two-dimensional codes is two; the determining the trailer angle according to the angular point position information corresponding to each ID information comprises: calculating the center point position information of each two-dimensional code according to the corner point position information corresponding to each ID information; calculating an included angle between a connecting line of the centers of the two-dimensional codes and a target straight line; the target straight line is a straight line which is perpendicular to the direction of the vehicle head on the horizontal plane; and subtracting the calculated included angle from a pre-calibrated angle difference value to obtain the trailer angle.
Optionally, the angular point position information is specifically the position information of the angular point under an image coordinate system; the calculating of the center point position information of each two-dimensional code comprises the following steps: calculating the center point position information of each two-dimensional code under the image coordinate system according to the corner point position information corresponding to each ID information; according to the PNP algorithm and an internal reference matrix of the camera, a rotation translation matrix from an image coordinate system to a camera coordinate system is obtained through calculation; and calculating to obtain the position information of the center point under the camera coordinate system according to the rotation translation matrix.
Optionally, the center point positions of the two-dimensional codes are respectively expressed as (x1, y1) and (x2, y2); the formula for calculating the included angle between the connecting line of the centers of the two-dimensional codes and the target straight line comprises: φ = arctan((y2 - y1)/(x2 - x1)); where φ denotes the included angle between the connecting line of the centers of the two-dimensional codes and the target straight line.
Optionally, the method further comprises: obtaining a trailer posture by using the trailer angle; and at least one of rotation control and speed control is performed according to the posture of the trailer.
A trailer angle identification apparatus comprising:
a camera for: shooting a two-dimensional code on a trailer in real time to obtain a real-time image;
an identification system for:
performing corner recognition on the real-time image by using a neural network to obtain a corner recognition result; the corner recognition result comprises corner position information of the two-dimensional code;
performing frame recognition on the real-time image by using the neural network to obtain a frame recognition result; the frame recognition result comprises: frame information of the two-dimensional code;
performing identification ID information identification on the real-time image by using the neural network to obtain an ID information identification result; the ID information identification result includes: ID information of the two-dimensional code;
determining the position information of each corner point corresponding to the ID information according to the frame identification result;
and determining the trailer angle according to the angular point position information corresponding to each ID information.
Optionally, the neural network includes an encoder, a corner prediction branch, a frame prediction branch, and an ID prediction branch; wherein the encoder is configured to: extract a feature map of the real-time image; the corner prediction branch is configured to: calculate the probability of each feature point by using the feature map, and take the feature points whose probability values meet a preset condition as corner points to obtain the corner recognition result; the frame prediction branch is configured to: perform frame recognition by using the feature map to obtain the frame recognition result; and the ID prediction branch is configured to: perform ID information recognition by using the feature map to obtain the ID information recognition result.
Optionally, the number of the two-dimensional codes is two; the determining the trailer angle according to the angular point position information corresponding to each ID information comprises: calculating the center point position information of each two-dimensional code according to the corner point position information corresponding to each ID information; calculating an included angle between a connecting line of the centers of the two-dimensional codes and a target straight line; the target straight line is a straight line which is perpendicular to the direction of the vehicle head on the horizontal plane; and subtracting the calculated included angle from a pre-calibrated angle difference value to obtain the trailer angle.
Therefore, in the embodiment of the invention, a neural network model is adopted to detect the corner points of the two-dimensional codes and to identify the frame and ID information of the two-dimensional codes, which simplifies parameter adjustment during deployment. Then, according to the frame recognition result, the position information of the corner points corresponding to each piece of ID information is determined, after which the trailer angle can be determined and steering control and speed control performed.
Drawings
FIG. 1 is a schematic view of a trailer angle provided by an embodiment of the present invention;
fig. 2 is a schematic diagram of a two-dimensional code arranged on a trailer according to an embodiment of the present invention;
FIG. 3 is an exemplary block diagram of a trailer angle recognition device according to an embodiment of the present invention;
FIG. 4 is an exemplary flow chart of a trailer angle identification method provided by an embodiment of the present invention;
fig. 5 is a schematic diagram of carrying ID information by a two-dimensional code frame according to an embodiment of the present invention;
fig. 6 is a schematic diagram of an included angle (trailer angle) between a vehicle head direction and a central axis of a trailer according to an embodiment of the present invention;
FIG. 7 is an exemplary flow chart of a trailer angle identification method provided by an embodiment of the present invention;
FIG. 8 is an exemplary block diagram of a neural network provided by an embodiment of the present invention;
FIG. 9 is an exemplary flow of trailer angle identification based on a neural network architecture provided by an embodiment of the present invention;
fig. 10 is another exemplary flow chart of a trailer angle identification method according to an embodiment of the present invention.
Detailed Description
The invention provides a trailer angle identification method and a trailer angle identification device based on a neural network, so as to reduce the complexity of trailer angle detection.
The trailer angle recognition device may in particular comprise a camera mounted on the vehicle head, whose lens can capture the two-dimensional codes (as shown in fig. 2) arranged on the trailer.
When the camera is installed, its field of view should cover the area in which the two-dimensional codes move, so as to ensure that the captured image contains the two-dimensional codes as far as possible.
Referring to fig. 3, the trailer angle recognition device illustratively includes: a camera 1 and a recognition system 2.
Referring to fig. 4, the trailer angle identification method executed by the trailer angle identification device includes the following steps:
s1: a real-time image is acquired.
The real-time image can be captured in real time by the camera 1.
S2: and carrying out corner recognition on the real-time image by using a neural network to obtain a corner recognition result.
Specifically, the corner recognition result may include corner position information of the two-dimensional code.
S3: and performing frame recognition on the real-time image by using a neural network to obtain a frame recognition result.
Specifically, the frame recognition result may include: and frame information of the two-dimensional code.
In general, two two-dimensional codes are included in the real-time image, and the frame recognition result accordingly includes the frame information of both two-dimensional codes.
The frame information includes the upper-left corner coordinates of the identified rectangle and the width and height of the rectangle.
S4: and carrying out Identification (ID) information identification on the real-time image by using the neural network to obtain an ID information identification result.
Referring to fig. 5, the white frame in the black area carries ID information, and the ID information of the two-dimensional code can be obtained as an ID information identification result by using the neural network to identify the ID information.
Through the ID information, it can be known whether a given two-dimensional code is the one on the left side of the trailer or the one on the right side.
S5: determining the position information of each corner point corresponding to the ID information according to the frame identification result;
in one example, corner points belonging to the same frame may be used as corner points corresponding to the same ID to determine location information of each corner point corresponding to the ID information.
S6: and determining the trailer angle according to the angular point position information corresponding to each ID information.
Referring to fig. 6, the trailer angle is the angle between the heading of the vehicle head and the central axis of the trailer. It can equivalently be understood as the angle between the rear edge of the vehicle head (the target straight line) and the front edge of the trailer.
In one example, center point position information of each two-dimensional code may be calculated according to corner point position information corresponding to each ID information;
calculating the included angle φ between the line connecting the two-dimensional code centers and the target straight line (the target straight line is a straight line perpendicular to the vehicle-head direction in the horizontal plane);
assuming that the center point positions of the two-dimensional codes are respectively expressed as (x 1, y 1), (x 2, y 2);
the formula for the included angle between the line connecting the two-dimensional code centers and the target straight line comprises: φ = arctan((y2 - y1)/(x2 - x1));
finally, the pre-calibrated angle difference Δθ is subtracted from the calculated included angle φ to obtain the trailer angle θ, that is, θ = φ - Δθ.
The angle difference may be determined manually.
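As an illustrative sketch only (the function name and the use of `atan2`, which avoids division by zero when x2 = x1, are assumptions of this example, not the patent text), the calculation above can be written as:

```python
import math

def trailer_angle(center1, center2, delta_theta):
    """Compute the trailer angle theta from the two code center points.

    center1, center2 -- (x, y) positions of the two-dimensional code centers
    delta_theta      -- pre-calibrated angle difference (radians)
    """
    x1, y1 = center1
    x2, y2 = center2
    # Included angle between the line joining the centers and the target
    # straight line: phi = arctan((y2 - y1) / (x2 - x1)).
    phi = math.atan2(y2 - y1, x2 - x1)
    # Subtract the pre-calibrated angle difference: theta = phi - delta_theta.
    return phi - delta_theta
```

With both centers on a line at 45° to the target straight line and Δθ = 0, this returns π/4.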
Steps S2-S6 may be performed by the recognition system.
Therefore, in the embodiment of the invention, a neural network model is adopted to detect the corner points of the two-dimensional codes and to identify the frame and ID information of the two-dimensional codes, which simplifies parameter adjustment during deployment. Then, according to the frame recognition result, the position information of the corner points corresponding to each piece of ID information is determined, after which the trailer angle can be determined and steering control and speed control performed.
In other embodiments of the present invention, referring to fig. 7, after step S6, the method may further include the following steps:
s7: obtaining a trailer posture by using the trailer angle;
referring to fig. 6, according to the angle θ and the known length and width of the trailer, the relative positional relationship (trailer posture) of the whole trailer with respect to the vehicle head position can be obtained, including the positions of the four corners of the trailer, the boundary information, and the like.
S8: at least one of rotation control and speed control is performed according to the posture of the trailer.
How to perform rotation control and speed control according to the posture of the trailer can be referred to in the prior art, and will not be described herein.
In other embodiments of the present invention, referring to fig. 8, the neural network may include at least an encoder (encoder), a corner prediction branch (decoder), a frame prediction branch, and an ID prediction branch.
In one example, referring to FIG. 9, the frame prediction branch and the ID prediction branch may share an RPN (Region Proposal Network) module, as used in the conventional Faster R-CNN object detection network.
The RPN module is mainly used for proposing regions that may contain a two-dimensional code, and can also identify the ID within a region.
The trailer angle identification method is described in more detail below using the neural network architecture shown in fig. 9 as an example.
Referring to fig. 10, the method illustratively includes the steps of:
s101: a real-time image is acquired.
This step is the same as S1 and will not be described here.
S102: and extracting a characteristic diagram of the real-time image.
Step S102 may be performed by an encoder.
A feature map can be understood as an encoding that characterizes each pixel of the image.
Abstract semantic features of the image (feature maps) can be extracted by the encoder, a neural network model formed by a plurality of convolution layers. In particular, an existing model such as ResNet18 or DenseNet may be employed as the encoder.
S103: and calculating the probability of each feature point by using the feature map, and taking the probability value meeting the condition as the corner point to obtain a corner point identification result.
Step S103 may be performed by the corner prediction branch.
Specifically, the feature map output by the encoder can be restored to the input image size by a neural network module with a decoding function, formed by deconvolution layers. This ensures that the subsequently predicted key points correspond to the corner positions of the original two-dimensional codes.
And calculating the probability of each feature point in the restored feature map, wherein the feature points with probability values meeting preset conditions (such as being larger than a threshold value) are key points, namely corner points in the embodiment.
Because a two-dimensional code has four corner points, several feature points within a certain area may all meet the preset condition; in that case, either the outermost point of the area or its most central point may be taken as the corner point.
The angular point position information obtained in this step is specifically the position information of the angular point under the image coordinate system.
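The selection rule above (several above-threshold feature points may cluster around one true corner; keep a single representative point per cluster, here the center of each region) can be sketched as follows. This is an illustrative NumPy implementation, not the patented network code:

```python
import numpy as np

def extract_corners(prob_map, threshold=0.5):
    """Threshold a per-pixel corner-probability map and keep one point per
    4-connected region of above-threshold pixels (the region center)."""
    mask = prob_map > threshold
    visited = np.zeros_like(mask, dtype=bool)
    corners = []
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not visited[sy, sx]:
                # Flood-fill the connected region of above-threshold points.
                stack, region = [(sy, sx)], []
                visited[sy, sx] = True
                while stack:
                    y, x = stack.pop()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] \
                                and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                ys, xs = zip(*region)
                # Take the most central point of the region as the corner.
                corners.append((round(sum(xs) / len(xs)),
                                round(sum(ys) / len(ys))))
    return corners
```

The returned (x, y) pairs are the corner positions in the image coordinate system, as described in this step.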
S104: and carrying out frame recognition by using the feature map to obtain a frame recognition result.
Step S104 may be performed by the frame prediction branch.
S105: and carrying out ID information identification by using the feature map to obtain an ID information identification result.
Step S105 may be performed by the ID predictive branch.
The model parameters can be trained according to the training flow of the convolutional neural network, so that the convolutional neural network model has the capability of predicting two-dimension code corner points, two-dimension code frames and ID information.
S106: and grouping the detected corner points according to whether the detected corner points belong to the same two-dimensional code.
The corner points belonging to the same frame are divided into one group according to the predicted two-dimensional code frame and the corresponding ID, finally obtaining the position information of the four corner points corresponding to each ID; for example, the correspondence between ID and position is expressed as {ID1: [P1, P2, P3, P4], ID2: [P5, P6, P7, P8]}.
Wherein P1-P8 denote pixel point positions (x, y) of the corner points.
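The grouping in S106 can be sketched as a point-in-frame test, assuming frame information of the form (x, y, w, h) with (x, y) the upper-left corner, as in the frame recognition result; the identifiers below are illustrative:

```python
def group_corners_by_frame(corners, frames):
    """Corners falling inside the same predicted frame are taken to
    correspond to the same ID.

    corners -- list of (x, y) corner pixel positions
    frames  -- dict mapping each ID to its frame (x, y, w, h),
               with (x, y) the upper-left corner of the rectangle
    """
    groups = {code_id: [] for code_id in frames}
    for cx, cy in corners:
        for code_id, (fx, fy, fw, fh) in frames.items():
            if fx <= cx <= fx + fw and fy <= cy <= fy + fh:
                groups[code_id].append((cx, cy))
                break
    return groups
```

The result has the {ID1: [P1, P2, P3, P4], ID2: [P5, P6, P7, P8]} shape described above.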
The encoder and the three branches (corner prediction branch, frame prediction branch, ID prediction branch) are components of the deep learning neural network.
It should be noted that with the conventional corner detection method, the detection precision fluctuates under different illumination conditions; in particular, low illumination, strong illumination and similar conditions may increase missed detections and false detections. Key point detection based on deep learning is more robust: as long as the collected data covers the scenes comprehensively, a detection effect with higher performance and high robustness can be achieved.
S107: and calculating the center point position information of each two-dimensional code under the image coordinate system according to the corner point position information corresponding to each ID information.
In one example, a rotational translation matrix from an image coordinate system to a camera coordinate system can be calculated according to a PNP algorithm and an internal reference matrix of the camera; and calculating to obtain the position information of the center point under the camera coordinate system according to the rotation translation matrix.
The PNP algorithm is now briefly described:
since the z-axis has no effect on the angle calculation, only the information of the x-axis and the y-axis is taken. It is assumed that the positional information of the center points of the two-dimensional codes in the camera coordinate system is (x 1, y 1), (x 2, y 2), respectively.
The relationship between the position information of the two-dimensional code corner under the camera coordinate system and the position information under the image coordinate system is shown in the following formula 1.
Because each two-dimensional code has four corner points, for each two-dimensional code the position information of 3 corner points is first substituted into Equation 1: s·[x_i, y_i, 1]^T = K·T·[x_i, y_i, z_i, 1]^T. In the formula, s is a temporary scale quantity that can be eliminated; K is the internal reference matrix, calibrated in advance and therefore a known quantity; T is the rotation-translation matrix to be solved; on the left-hand side, (x_i, y_i, 1) is the known pixel position of the i-th corner point in the image coordinate system; on the right-hand side, (x_i, y_i, z_i, 1) is the 3D position of the corner point in the camera coordinate system. By combining the 3 points, 4 sets of solutions (x, y) and four sets of rotation-translation matrix solutions can be obtained.
The pixel position of the 4th corner point and each of the 4 sets of solutions are then substituted into Equation 1 respectively, and the solution with the smallest error is taken as the final solution; this solution is the rotation-translation matrix from the two-dimensional code to the camera coordinate system.
Substituting the solved unique rotation-translation matrix and the pixel position of the center point of the two-dimensional code into Equation 1 yields the position information of the center point of the two-dimensional code in the camera coordinate system.
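For concreteness, a NumPy sketch of the projection relationship of Equation 1 follows, with an assumed internal reference matrix K and an identity rotation-translation for illustration (the patent solves Equation 1 for T from known corner correspondences; the forward direction shown here merely makes the roles of s, K and T explicit):

```python
import numpy as np

def project(K, R, t, point_3d):
    """Equation 1: s * [u, v, 1]^T = K @ [R | t] @ [X, Y, Z, 1]^T.

    Given the internal reference matrix K and a rotation-translation (R, t),
    map a 3D point to its pixel position; the temporary scale s is
    eliminated by dividing through by the third component.
    """
    T = np.hstack([R, t.reshape(3, 1)])   # 3x4 rotation-translation matrix
    p = K @ T @ np.append(point_3d, 1.0)  # s * [u, v, 1]
    return p[:2] / p[2]                   # eliminate s

# Assumed, illustrative intrinsics and an identity pose.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.zeros(3)
```

A point on the optical axis projects to the principal point (320, 240) under these assumed intrinsics, which is a quick sanity check on the conventions.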
Fig. 3 illustrates an exemplary structure of a trailer angle recognition apparatus, which may include:
camera 1 for: shooting a two-dimensional code on a trailer in real time to obtain a real-time image;
an identification system 2 for:
performing corner recognition on the real-time image by using a neural network to obtain a corner recognition result; the corner recognition result comprises corner position information of the two-dimension code;
performing frame recognition on the real-time image by using a neural network to obtain a frame recognition result; the frame recognition result includes: frame information of the two-dimensional code;
performing identification ID information identification on the real-time image by using a neural network to obtain an ID information identification result; the ID information identification result includes: ID information of the two-dimensional code;
determining the position information of each corner point corresponding to the ID information according to the frame identification result;
and determining the trailer angle according to the angular point position information corresponding to each ID information.
In one example, the recognition system may further include a neural network and a pose estimation module, the pose estimation module operable to:
determining the position information of each corner point corresponding to the ID information according to the frame identification result;
and determining the trailer angle according to the angular point position information corresponding to each ID information.
In other embodiments, the above-described trailer angle recognition device or other devices may be used to:
obtaining a trailer posture by using the trailer angle;
at least one of rotation control and speed control is performed according to the posture of the trailer.
In other embodiments of the present invention, referring to fig. 8 and 9, the neural network may further include an encoder, a corner prediction branch, a frame prediction branch, and an ID prediction branch;
wherein the encoder is for: extracting a feature map of the real-time image;
corner prediction branching is used to: calculating the probability of each feature point by using the feature map, and taking the feature points with probability values meeting preset conditions as corner points to obtain a corner point identification result;
the frame prediction branch is used for: performing frame recognition by using the feature map to obtain a frame recognition result;
the ID prediction branch is used to: and carrying out ID information identification by using the feature map to obtain an ID information identification result.
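At inference time, the corner prediction branch described above reduces to thresholding a per-feature-point probability map. The following is a minimal, dependency-free sketch of only that selection step; the encoder and trained branch weights are omitted, and the probability map and threshold values are illustrative assumptions, not taken from the patent:

```python
# Corner selection sketch: keep feature points whose probability
# value meets a preset condition (here: >= a fixed threshold).
# The probability map would normally be produced by the trained
# corner prediction branch; the values below are made up.

THRESHOLD = 0.9  # illustrative preset condition

def pick_corners(prob_map, threshold=THRESHOLD):
    """Return (row, col) coordinates of feature points whose
    probability satisfies the preset condition."""
    corners = []
    for r, row in enumerate(prob_map):
        for c, p in enumerate(row):
            if p >= threshold:
                corners.append((r, c))
    return corners

prob_map = [
    [0.01, 0.02, 0.95, 0.03],
    [0.02, 0.10, 0.04, 0.92],
    [0.97, 0.05, 0.03, 0.01],
]
print(pick_corners(prob_map))  # [(0, 2), (1, 3), (2, 0)]
```

In a real pipeline the threshold (or a top-k rule) would be tuned on validation data rather than fixed.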
The number of the two-dimensional codes is two; in other embodiments of the present invention, in determining the trailer angle according to the corner position information corresponding to each ID information, the above-mentioned identification system or pose estimation module is specifically configured to:
calculating the center point position information of each two-dimensional code according to the corner point position information corresponding to each ID information;
calculating an included angle between a connecting line of the centers of the two two-dimensional codes and a target straight line; the target straight line is a straight line which is perpendicular to the vehicle head direction on the horizontal plane;
and subtracting the calculated included angle from a pre-calibrated angle difference value to obtain the trailer angle.
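The three steps above can be sketched as follows. The patent's own formula for the included angle is given as an image and not reproduced in this text, so the arctangent of the slope of the center-to-center line is used here as a plausible stand-in, and the subtraction order follows the wording "subtracting the calculated included angle from a pre-calibrated angle difference value"; treat both as assumptions:

```python
import math

def trailer_angle(center1, center2, calibrated_diff_deg):
    """Trailer angle from the two code centers and a pre-calibrated
    angle difference. The included angle is taken as the arctangent
    of the slope of the line joining the centers (an assumption)."""
    (x1, y1), (x2, y2) = center1, center2
    included_deg = math.degrees(math.atan2(y2 - y1, x2 - x1))
    # "subtracting the calculated included angle from a
    # pre-calibrated angle difference value"
    return calibrated_diff_deg - included_deg

# Illustrative centers: the line joining them is at 45 degrees, and
# the calibrated difference is 45 degrees, so the trailer angle
# comes out to (approximately) 0, i.e. trailer aligned.
print(trailer_angle((0.0, 0.0), (1.0, 1.0), 45.0))
```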
Please refer to the foregoing descriptions, and the detailed description is omitted herein.
In other embodiments of the present invention, in determining the position information of each corner point corresponding to the ID information according to the frame recognition result, the recognition system or the neural network or the pose estimation module is specifically configured to:
and taking the corner points belonging to the same frame as the corner points corresponding to the same ID to determine the position information of each corner point corresponding to the ID information.
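The frame-based grouping described above can be sketched as a point-in-box test, assuming each recognized frame is represented as an axis-aligned bounding box keyed by its decoded ID (the representation and names are hypothetical, chosen only for illustration):

```python
def group_corners_by_frame(corners, frames):
    """Assign each corner point to the ID whose frame contains it.

    corners: list of (x, y) corner positions.
    frames:  dict mapping code ID -> (xmin, ymin, xmax, ymax),
             an assumed axis-aligned representation of the
             recognized two-dimensional code frames.
    """
    corners_by_id = {code_id: [] for code_id in frames}
    for (x, y) in corners:
        for code_id, (xmin, ymin, xmax, ymax) in frames.items():
            if xmin <= x <= xmax and ymin <= y <= ymax:
                corners_by_id[code_id].append((x, y))
                break  # a corner belongs to at most one frame
    return corners_by_id

frames = {"code_A": (0, 0, 10, 10), "code_B": (20, 0, 30, 10)}
corners = [(1, 1), (9, 1), (1, 9), (9, 9), (21, 2), (29, 8)]
grouped = group_corners_by_frame(corners, frames)
print(grouped["code_A"])  # the four corners inside code_A's frame
```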
Please refer to the foregoing descriptions, and the detailed description is omitted herein.
The corner point position information is specifically the position information of the corner point in an image coordinate system;
in other embodiments of the present invention, in calculating the position information of the center point of each two-dimensional code, the above-mentioned recognition system or pose estimation module is specifically configured to:
calculating the center point position information of each two-dimensional code under an image coordinate system according to the corner point position information corresponding to each ID information;
according to a PnP (Perspective-n-Point) algorithm and the intrinsic parameter matrix of the camera, a rotation-translation matrix from the image coordinate system to the camera coordinate system is obtained through calculation;
and calculating to obtain the position information of the center point under the camera coordinate system according to the rotation translation matrix.
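Once a PnP solver (for example, OpenCV's solvePnP, given the camera's intrinsic parameter matrix) has produced a rotation matrix R and translation vector t, mapping the center point into the camera coordinate system is a single rigid transform, Xc = R·Xw + t. A dependency-free sketch with an illustrative identity pose (the pose values are assumptions, not real solver output):

```python
def to_camera_frame(R, t, point):
    """Map a 3-D point into the camera coordinate system using the
    rotation matrix R (3x3) and translation vector t (length 3)
    produced by a PnP solve: Xc = R @ Xw + t."""
    return tuple(
        sum(R[i][j] * point[j] for j in range(3)) + t[i]
        for i in range(3)
    )

# Illustrative pose: identity rotation, camera 2 m from the code plane.
R = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 2.0]
center = (0.1, 0.05, 0.0)  # code center on its own plane, metres
print(to_camera_frame(R, t, center))  # (0.1, 0.05, 2.0)
```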
The center point positions of the two two-dimensional codes are respectively expressed as (x1, y1) and (x2, y2); in other embodiments of the present invention, the formula for calculating the included angle between the connecting line of the centers of the two-dimensional codes and the target straight line includes:

α = arctan((y2 - y1) / (x2 - x1))

where α denotes the included angle between the connecting line of the centers of the two-dimensional codes and the target straight line.
Please refer to the foregoing descriptions, and the detailed description is omitted herein.
The description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. The trailer angle identification method is characterized by comprising the following steps of:
acquiring a real-time image; the real-time image is obtained by a camera shooting a two-dimensional code on the trailer in real time;
performing corner recognition on the real-time image by using a neural network to obtain a corner recognition result; the corner recognition result comprises corner position information of the two-dimensional code;
performing frame recognition on the real-time image by using the neural network to obtain a frame recognition result; the frame recognition result comprises: frame information of the two-dimensional code;
performing identification (ID) information identification on the real-time image by using the neural network to obtain an ID information identification result; the ID information identification result includes: ID information of the two-dimensional code;
determining the position information of each corner point corresponding to the ID information according to the frame identification result;
determining the trailer angle according to the corner point position information corresponding to each ID information;
wherein, according to the frame recognition result, determining the position information of each corner point corresponding to the ID information includes: taking the corner points belonging to the same frame as the corner points corresponding to the same ID to determine the position information of each corner point corresponding to the ID information;
the number of the two-dimensional codes is two;
the determining the trailer angle according to the corner point position information corresponding to each ID information comprises:
calculating the center point position information of each two-dimensional code according to the corner point position information corresponding to each ID information;
calculating an included angle between a connecting line of the centers of the two-dimensional codes and a target straight line; the target straight line is a straight line which is perpendicular to the direction of the vehicle head on the horizontal plane;
and subtracting the calculated included angle from a pre-calibrated angle difference value to obtain the trailer angle.
2. The method of claim 1, wherein,
the neural network comprises an encoder, a corner prediction branch, a frame prediction branch and an ID prediction branch;
wherein the encoder is configured to: extracting a feature map of the real-time image;
the corner prediction branch is used for: calculating the probability of each feature point by using the feature map, and taking the feature points with probability values meeting preset conditions as corner points to obtain the corner point identification result;
the frame prediction branch is used for: performing frame recognition by using the feature map to obtain a frame recognition result;
the ID prediction branch is used to: carrying out ID information identification by using the feature map to obtain an ID information identification result.
3. The method of claim 1, wherein,
the corner point position information is specifically the position information of the corner point in an image coordinate system;
the calculating of the center point position information of each two-dimensional code comprises the following steps:
calculating the center point position information of each two-dimensional code under the image coordinate system according to the corner point position information corresponding to each ID information;
according to a PnP (Perspective-n-Point) algorithm and the intrinsic parameter matrix of the camera, a rotation-translation matrix from the image coordinate system to the camera coordinate system is obtained through calculation;
and calculating to obtain the position information of the center point under the camera coordinate system according to the rotation translation matrix.
4. The method of claim 3, wherein,
the center point positions of the two-dimensional codes are respectively expressed as (x 1, y 1), (x 2, y 2);
the formula for calculating the included angle between the connecting line of the centers of the two-dimensional codes and the target straight line comprises:

α = arctan((y2 - y1) / (x2 - x1))

wherein α denotes the included angle between the connecting line of the centers of the two-dimensional codes and the target straight line.
5. The method as recited in claim 1, further comprising:
obtaining a trailer posture by using the trailer angle;
and at least one of rotation control and speed control is performed according to the posture of the trailer.
6. A trailer angle identification device, comprising:
a camera for: shooting a two-dimensional code on a trailer in real time to obtain a real-time image;
an identification system for:
performing corner recognition on the real-time image by using a neural network to obtain a corner recognition result; the corner recognition result comprises corner position information of the two-dimensional code;
performing frame recognition on the real-time image by using the neural network to obtain a frame recognition result; the frame recognition result comprises: frame information of the two-dimensional code;
performing identification (ID) information identification on the real-time image by using the neural network to obtain an ID information identification result; the ID information identification result includes: ID information of the two-dimensional code;
determining the position information of each corner point corresponding to the ID information according to the frame identification result;
determining the trailer angle according to the corner point position information corresponding to each ID information;
wherein, according to the frame recognition result, determining the position information of each corner point corresponding to the ID information includes: taking the corner points belonging to the same frame as the corner points corresponding to the same ID to determine the position information of each corner point corresponding to the ID information;
the number of the two-dimensional codes is two;
the determining the trailer angle according to the corner point position information corresponding to each ID information comprises:
calculating the center point position information of each two-dimensional code according to the corner point position information corresponding to each ID information;
calculating an included angle between a connecting line of the centers of the two-dimensional codes and a target straight line; the target straight line is a straight line which is perpendicular to the direction of the vehicle head on the horizontal plane;
and subtracting the calculated included angle from a pre-calibrated angle difference value to obtain the trailer angle.
7. The apparatus of claim 6, wherein the neural network comprises an encoder, a corner prediction branch, a frame prediction branch, and an ID prediction branch;
wherein the encoder is configured to: extracting a feature map of the real-time image;
the corner prediction branch is used for: calculating the probability of each feature point by using the feature map, and taking the feature points with probability values meeting preset conditions as corner points to obtain the corner point identification result;
the frame prediction branch is used for: performing frame recognition by using the feature map to obtain a frame recognition result;
the ID prediction branch is used to: carrying out ID information identification by using the feature map to obtain an ID information identification result.
CN202011405469.5A 2020-12-02 2020-12-02 Trailer angle identification method and device Active CN112560606B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011405469.5A CN112560606B (en) 2020-12-02 2020-12-02 Trailer angle identification method and device


Publications (2)

Publication Number Publication Date
CN112560606A CN112560606A (en) 2021-03-26
CN112560606B true CN112560606B (en) 2024-04-16

Family

ID=75048006

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011405469.5A Active CN112560606B (en) 2020-12-02 2020-12-02 Trailer angle identification method and device

Country Status (1)

Country Link
CN (1) CN112560606B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113733827A (en) * 2021-10-19 2021-12-03 长沙立中汽车设计开发股份有限公司 Device and method for detecting relative rotation angle between semitrailer trailer and trailer

Citations (7)

Publication number Priority date Publication date Assignee Title
CN103049728A (en) * 2012-12-30 2013-04-17 成都理想境界科技有限公司 Method, system and terminal for augmenting reality based on two-dimension code
DE102016209418A1 (en) * 2016-05-31 2017-11-30 Bayerische Motoren Werke Aktiengesellschaft Operating a team by measuring the relative position of an information carrier via a read-out device
DE102017106152A1 (en) * 2017-03-22 2018-09-27 Connaught Electronics Ltd. Determine an angle of a trailer with optimized template
CN110765795A (en) * 2019-09-24 2020-02-07 北京迈格威科技有限公司 Two-dimensional code identification method and device and electronic equipment
WO2020042345A1 (en) * 2018-08-28 2020-03-05 初速度(苏州)科技有限公司 Method and system for acquiring line-of-sight direction of human eyes by means of single camera
CN110930454A (en) * 2019-11-01 2020-03-27 北京航空航天大学 Six-degree-of-freedom pose estimation algorithm based on boundary box outer key point positioning
CN111222639A (en) * 2018-11-26 2020-06-02 福特全球技术公司 Trailer angle detection using end-to-end learning

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN106897648B (en) * 2016-07-22 2020-01-31 阿里巴巴集团控股有限公司 Method and system for identifying position of two-dimensional code


Non-Patent Citations (2)

Title
Research on object recognition and localization based on CNN binocular feature point matching; Jiang Qiangwei; Gan Xingli; Li Yaning; Radio Engineering (08); full text *
A real-time surround-view parking slot line recognition algorithm based on YOLOv2-Tiny; He Qiaojun; Guo Jishun; Guan Qianyi; Zhong Bin; Fu Ying; Gu Jun; Auto Electric Parts (09); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant