CN113469045A - Visual positioning method and system for unmanned container truck, electronic device and storage medium
- Publication number: CN113469045A (application CN202110741222.9A)
- Authority: CN (China)
- Legal status: Granted
- Classifications: Image Analysis; Image Processing; Navigation
Abstract
The invention relates to the technical field of port operations, and provides a visual positioning method and system for an unmanned container truck, an electronic device, and a storage medium. The visual positioning method comprises: acquiring the actual positions of visual features, including lane lines and bay number markings, in a map coordinate system; during travel of the unmanned container truck, obtaining the pose deviation of the truck from the visual positions of the visual features in the image coordinate system of a first camera of the truck and their actual positions, and performing positioning correction on the truck, comprising: obtaining the lateral pose deviation of the truck from the visual position and the actual position of a lane line, and performing lateral positioning correction on the truck; and obtaining the longitudinal pose deviation of the truck from the visual position and the actual position of a bay number marking, and performing longitudinal positioning correction on the truck. By combining a visual map with visual measurements, the invention corrects the positioning of the unmanned container truck in both the lateral and longitudinal directions, achieving fast and accurate positioning of the truck.
Description
Technical Field
The invention relates to the technical field of port operations, and in particular to a visual positioning method and system for an unmanned container truck, an electronic device, and a storage medium.
Background
In a port environment, an unmanned container truck moves slowly but requires high positioning accuracy. Typically, the truck is about 2.5 meters wide while a yard lane is about 2.8 meters wide, so lateral positioning must be accurate to about 10 cm. Meanwhile, in the longitudinal direction the truck must stop in alignment at a designated position; containers can be picked up and set down normally only when the alignment is within 10 cm, so longitudinal positioning must also be accurate to about 10 cm.
Positioning of the unmanned container truck can be achieved by GPS (Global Positioning System) or by visual positioning. However, because the yard is filled with stacked containers, GPS signals are weak, and GPS alone cannot meet the accuracy requirement; visual positioning relies on visual sensors such as cameras, and its results cannot be guaranteed to be fully reliable.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the invention, and therefore may include information that does not constitute prior art already known to a person of ordinary skill in the art.
Disclosure of Invention
In view of the above, the present invention provides a visual positioning method and system for an unmanned container truck, an electronic device, and a storage medium, which first establish a multi-feature visual map based on a map coordinate system and then correct the truck's positioning in the lateral and longitudinal directions by visual means, thereby achieving fast and accurate positioning of the unmanned container truck.
One aspect of the present invention provides a visual positioning method for an unmanned container truck, comprising: acquiring the actual positions of visual features in a map coordinate system, the visual features comprising lane lines and bay number markings; during travel of the unmanned container truck, obtaining the pose deviation of the truck from the visual positions of the visual features in the image coordinate system of a first camera of the truck and their actual positions, and performing positioning correction on the truck, comprising: obtaining the lateral pose deviation of the truck from the visual position and the actual position of a lane line, and performing lateral positioning correction on the truck; and obtaining the longitudinal pose deviation of the truck from the visual position and the actual position of a bay number marking, and performing longitudinal positioning correction on the truck.
In some embodiments, obtaining the lateral pose deviation of the unmanned container truck comprises: extracting a first image from the video stream of the first camera, obtaining the visual position of the lane line, and obtaining the initial position of the first camera projected into the map coordinate system; in the map coordinate system, correcting the initial position of the first camera in the lateral direction and the heading direction, and obtaining the lateral position correction and the heading angle correction of the initial position of the first camera at which the visual position of the lane line matches the actual position of the lane line; and taking the lateral position correction and the heading angle correction as the lateral pose deviation.
In some embodiments, correcting the initial position of the first camera in the lateral direction and the heading direction comprises: obtaining the initial position of the visual position of the lane line projected into the map coordinate system; searching over lateral positions and heading angles around the initial position of the first camera, and after each search step, adjusting the initial position of the lane line by the current lateral search step and heading search angle to obtain the current position of the lane line; and when the current position of the lane line laterally coincides with the actual position of the lane line, determining that the visual position of the lane line matches the actual position of the lane line.
In some embodiments, performing lateral positioning correction on the unmanned container truck comprises: acquiring the lateral position change and the heading angle change of the truck from the timestamp of the first image to a first current time; obtaining a residual lateral pose correction from the lateral pose deviation, the lateral position change, and the heading angle change; and performing, at the first current time, lateral position correction and heading angle correction on the truck according to the residual lateral pose correction.
In some embodiments, a trained segmentation network is used to segment the lane-line pixels from the first image, thereby obtaining the visual position of the lane line.
In some embodiments, obtaining the longitudinal pose deviation of the unmanned container truck comprises: extracting a second image from the video stream of the first camera, and obtaining the initial position of the visual position of the bay number marking projected into the map coordinate system; in the map coordinate system, obtaining the longitudinal offset from the initial position of the bay number marking to its actual position; and taking at least the longitudinal offset as the longitudinal pose deviation.
In some embodiments, the visual features further comprise ground arrows, the actual position of a ground arrow comprising the actual positions of its feature points and their feature descriptors; obtaining the longitudinal pose deviation of the unmanned container truck further comprises: obtaining the visual positions of the feature points and their feature descriptors, together with the initial positions of the feature points projected into the map coordinate system; matching feature points by their descriptors to obtain the yaw angle correction of the ground arrow at which the initial positions of the feature points coincide with their actual positions; and taking the longitudinal offset and the yaw angle correction as the longitudinal pose deviation.
In some embodiments, performing longitudinal positioning correction on the unmanned container truck comprises: acquiring the longitudinal position change and the yaw angle change of the truck from the timestamp of the second image to a second current time; obtaining a residual longitudinal pose correction from the longitudinal pose deviation, the longitudinal position change, and the yaw angle change; and performing, at the second current time, longitudinal position correction and yaw angle correction on the truck according to the residual longitudinal pose correction.
In some embodiments, the bay number marking and the ground arrow are detected from the second image by a trained detection network, yielding the visual position of the bay number marking; feature point detection and descriptor computation are then performed on the ground arrow to obtain the visual positions of its feature points and their feature descriptors.
In some embodiments, the visual positioning method further comprises: verifying the fused positioning result of the unmanned container truck from the positions of the visual features in the image coordinate system of a second camera of the truck and their actual positions, comprising: extracting a frame image from the video stream of the second camera, obtaining the position of each visual feature, and obtaining the projected position of each visual feature in the map coordinate system; obtaining a verification result for the fused positioning result from the differences between the projected positions and the actual positions of the visual features in the map coordinate system; when the verification result shows that the accuracy of the fused positioning result is greater than or equal to a pass threshold but below an optimal threshold, returning to the step of positioning correction on the truck; and when the verification result shows that the accuracy of the fused positioning result is below the pass threshold, issuing an alarm. The fused positioning result includes the visual positioning result obtained by performing lateral and longitudinal positioning correction on the truck based on the first camera.
In some embodiments, the second camera and the first camera are different onboard cameras of the unmanned container truck.
Another aspect of the invention provides a visual positioning system for an unmanned container truck, comprising: a visual map building module for acquiring the actual positions of visual features in a map coordinate system, the visual features comprising lane lines and bay number markings; and a positioning correction module for, during travel of the truck, obtaining the pose deviation of the truck from the visual positions of the visual features in the image coordinate system of a first camera of the truck and their actual positions, and performing positioning correction on the truck, comprising: a lateral pose correction unit for obtaining the lateral pose deviation of the truck from the visual position and the actual position of a lane line and performing lateral positioning correction on the truck; and a longitudinal pose correction unit for obtaining the longitudinal pose deviation of the truck from the visual position and the actual position of a bay number marking and performing longitudinal positioning correction on the truck.
Yet another aspect of the present invention provides an electronic device, comprising: a processor; and a memory storing executable instructions; wherein the executable instructions, when executed by the processor, implement the visual positioning method for an unmanned container truck of any of the above embodiments.
Yet another aspect of the present invention provides a computer-readable storage medium storing a program that, when executed by a processor, implements the visual positioning method for an unmanned container truck of any of the above embodiments.
Compared with the prior art, the invention has the following beneficial effects:
by acquiring the actual positions of visual features in a map coordinate system, a multi-feature visual map based on map coordinates is constructed; during travel of the unmanned container truck, the visual map is combined with visual measurements: the truck's camera captures the visual features, which are matched against the visual map to obtain the truck's pose deviation, and lateral and longitudinal positioning corrections are applied, achieving fast and accurate positioning of the truck;
furthermore, after the lateral and longitudinal positioning corrections, the fused positioning result is verified against the reliable visual relative relationships and the absolute map coordinates, and a positioning deviation alarm is raised based on the deviation obtained after projecting the visual features into the map coordinate system, thereby ensuring that the visual positioning is fully reliable.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention. It is obvious that the drawings described below are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 is a schematic diagram of the steps of the visual positioning method for an unmanned container truck in an embodiment of the present invention;
FIG. 2 is a schematic diagram of the steps for obtaining the lateral pose deviation of the unmanned container truck in an embodiment of the present invention;
FIG. 3 is a schematic diagram of a scenario for obtaining the lateral pose deviation of the unmanned container truck in an embodiment of the present invention;
FIG. 4 is a schematic diagram of the steps for obtaining the longitudinal pose deviation of the unmanned container truck in an embodiment of the present invention;
FIG. 5 is a schematic diagram of a scenario for obtaining the longitudinal pose deviation of the unmanned container truck in an embodiment of the present invention;
FIG. 6 is a schematic diagram of the steps of the visual positioning method for an unmanned container truck in yet another embodiment of the present invention;
FIG. 7 is a block diagram of the visual positioning system for an unmanned container truck in an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of an electronic device in an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a computer-readable storage medium in an embodiment of the present invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
The drawings are merely schematic illustrations of the invention and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
In addition, the flows shown in the drawings are only exemplary illustrations and do not necessarily include all the steps. For example, some steps may be split, others may be combined or partially combined, and the actual execution order may change according to the actual situation. The terms "first," "second," and the like in the detailed description do not imply any order, quantity, or importance; they merely distinguish one element from another. It should be noted that features of the embodiments of the invention, and of different embodiments, may be combined with each other provided there is no conflict.
Fig. 1 shows the main steps of the visual positioning method for the unmanned container truck in an embodiment. Referring to fig. 1, the method comprises: step S110, acquiring the actual positions of visual features in a map coordinate system, the visual features comprising lane lines and bay number markings; and step S120, during travel of the unmanned container truck, obtaining the pose deviation of the truck from the visual positions and actual positions of the visual features in the image coordinate system of a first camera of the truck, and performing positioning correction on the truck, comprising: step S120-2, obtaining the lateral pose deviation of the truck from the visual position and the actual position of a lane line, and performing lateral positioning correction on the truck; and step S120-4, obtaining the longitudinal pose deviation of the truck from the visual position and the actual position of a bay number marking, and performing longitudinal positioning correction on the truck.
By acquiring the actual positions of the visual features in a map coordinate system, the visual positioning method constructs a multi-feature visual map based on map coordinates; during travel of the truck, the visual map is combined with visual measurements: the truck's camera captures the visual features, which are matched against the visual map to obtain the truck's pose deviation, and lateral and longitudinal positioning corrections are applied, achieving fast and accurate positioning of the unmanned container truck.
The visual positioning method for the unmanned container truck is described in detail below with reference to specific examples.
In step S110, the actual positions of a plurality of visual features in the map coordinate system of an electronic map may be collected manually to build the visual map in advance. The visual features are targets in the port environment that can be used for lateral and longitudinal positioning of the unmanned container truck. They include lane lines for lateral positioning: for each lane line, a coordinate sequence (a set of position points distributed continuously and evenly along the line) is recorded as the actual position of the lane line. They also include bay number markings for longitudinal positioning: for each bay marking painted on the ground, its bay number and its coordinates (the position of the marking's center point) are stored as the actual position of the bay number marking. The visual features for longitudinal positioning further include ground arrows: the coordinates of all their feature points (mainly corner points) and the corresponding feature descriptors are stored as the actual positions of the ground arrows, as described in detail below in connection with the longitudinal positioning correction of the truck.
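Although the patent specifies no storage format, the visual map above can be pictured as one record per feature type. The following Python sketch is purely illustrative; every name in it is hypothetical:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

import numpy as np

@dataclass
class VisualMap:
    """Hypothetical layout of the pre-built visual map (map coordinates)."""
    # each lane line: an ordered sequence of evenly spaced (x, y) points
    lane_lines: List[np.ndarray] = field(default_factory=list)
    # bay number -> (x, y) of the marking's center point
    bay_markings: Dict[str, Tuple[float, float]] = field(default_factory=dict)
    # each ground arrow: (Nx2 corner-point coordinates, NxD feature descriptors)
    ground_arrows: List[Tuple[np.ndarray, np.ndarray]] = field(default_factory=list)
```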
The actual positions of the visual features may be three-dimensional coordinates, or two-dimensional coordinates in the lateral and longitudinal directions to simplify computation. The visual map containing these features provides an absolute map-coordinate reference for the lateral and longitudinal positioning corrections of the unmanned container truck.
In step S120, during travel of the unmanned container truck, the lateral and longitudinal pose deviations of the truck are obtained by matching visual observations against the map coordinates, and positioning corrections are applied.
Fig. 2 shows the main steps of obtaining the lateral pose deviation of the unmanned container truck in an embodiment. Referring to fig. 2, obtaining the lateral pose deviation comprises: step S210, extracting a first image from the video stream of the first camera, obtaining the visual position of the lane line, and obtaining the initial position of the first camera projected into the map coordinate system; step S220, in the map coordinate system, correcting the initial position of the first camera in the lateral direction and the heading direction, and obtaining the lateral position correction and the heading angle correction of the initial position of the first camera at which the visual position of the lane line matches the actual position of the lane line; and step S230, taking the lateral position correction and the heading angle correction as the lateral pose deviation.
To obtain the visual position of the lane line, a trained segmentation network segments the lane-line pixels from the first image, yielding the visual position of the lane line, i.e., the coordinate sequences of the left and right lane lines in the image coordinate system of the first camera. The segmentation network may use existing models, such as the ReSeg model, which performs segmentation based on a bidirectional recurrent neural network (BRNN), or the LSTM-CF model, which segments images using ReNet and dilated convolution combined with depth information; these are not described further.
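As a minimal sketch of this step, assuming the segmentation network has already produced a binary mask for a single lane line (the network itself and the sampling density are outside the patent's text):

```python
import numpy as np

def lane_line_visual_positions(mask: np.ndarray, num_samples: int = 50):
    """Turn a binary mask (H x W) for one lane line into a coordinate
    sequence in the image coordinate system, sampled evenly along rows."""
    rows = np.unique(np.nonzero(mask)[0])                # image rows containing lane pixels
    sampled = rows[:: max(1, len(rows) // num_samples)]  # even subsample along the row axis
    coords = []
    for r in sampled:
        cols = np.nonzero(mask[r])[0]
        coords.append((float(cols.mean()), float(r)))    # (u, v): mean column per row
    return coords
```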
The visual position of the first camera can also be obtained from the first image; for example, the coordinates of the center point of the first image in the image coordinate system of the first camera may be taken as the camera's visual position. After the visual position of the first camera is obtained, it is projected into the map coordinate system to obtain the initial position of the first camera in the map coordinate system.
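One common way to realize such a projection is a fixed homography from the camera's image plane to the ground plane, calibrated offline. This is an assumption here, since the patent does not specify the projection model:

```python
import numpy as np
import cv2

def project_to_map(points_uv, H):
    """Project Nx2 image points into the map coordinate system via a 3x3
    ground-plane homography H (assumed calibrated offline)."""
    pts = np.asarray(points_uv, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```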
Continuing with fig. 2, correcting the initial position of the first camera in the lateral direction and the heading direction specifically comprises: step S220-2, obtaining the initial position of the visual position of the lane line projected into the map coordinate system; step S220-4, searching over lateral positions and heading angles around the initial position of the first camera, and after each search step, adjusting the initial position of the lane line by the current lateral search step and heading search angle to obtain the current position of the lane line; and step S220-6, when the current position of the lane line laterally coincides with the actual position of the lane line, determining that the visual position of the lane line matches the actual position of the lane line.
FIG. 3 shows a scenario for obtaining the lateral pose deviation of the unmanned container truck in an embodiment. As shown in fig. 3, the map coordinate system 300 marks the actual position 310 of the lane line, the actual positions 320 of the bay number markings (for bays 01, 03, and 05), the initial position 310' of the lane line, and the initial position 330 of the first camera. For ease of presentation, these positions are drawn as real-world scenes rather than as coordinate values. The lateral direction X1 is parallel to the X axis of the map coordinate system 300, and the heading C is the velocity direction of the truck at the timestamp of the first image.
When the initial position 330 of the first camera is corrected along the lateral direction X1 and the heading C in the map coordinate system 300, a lateral position search (forward or backward along X1) and a heading angle search (rotating forward or backward about the heading C, i.e., changing the heading angle θc, the angle between the heading C and the lateral direction X1) are performed around the initial position 330 of the first camera. Suppose one step of this two-dimensional search over X1 and C yields the current position 330' of the first camera, with the current lateral search step x' (the distance along X1 between the current position 330' and the initial position 330 of the first camera) and the current heading search angle θc' (the angle between the heading angle of the current position 330' and the heading angle θc of the initial position 330, where the heading angle of the current position 330' is the angle between the line connecting the current position 330' to the initial position 330 and the lateral direction X1). The initial position 310' of the lane line is then translated laterally by the search step x' and rotated about the heading by the search angle θc', giving the current position of the lane line. If the current position of the lane line now coincides laterally with the actual position 310 of the lane line, the two-dimensional search over the lateral direction and the heading is complete, and the lateral position correction of the initial position 330 of the first camera (all search steps accumulated, i.e., x' in this example) and the heading angle correction (likewise accumulated, i.e., θc' in this example) are obtained. The resulting lateral position correction and heading angle correction are taken as the lateral pose deviation.
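The two-dimensional search can be sketched as follows. This is an illustration under two assumptions the patent does not spell out: all points are already in map coordinates, and the projected and mapped lane-line points correspond point-for-point:

```python
import numpy as np

def lateral_search(lane_pts, lane_actual, cam_xy, x_steps, theta_steps, tol=0.05):
    """Search lateral offsets x' and heading corrections theta' that bring the
    projected lane line into lateral coincidence with its mapped position.
    lane_pts / lane_actual: Nx2 arrays in map coordinates; cam_xy: camera's
    initial position, used as the rotation center."""
    best = (0.0, 0.0, np.inf)
    for x in x_steps:                        # candidate lateral translations along X1
        for th in theta_steps:               # candidate heading rotations (rad)
            c, s = np.cos(th), np.sin(th)
            R = np.array([[c, -s], [s, c]])
            cand = (lane_pts - cam_xy) @ R.T + cam_xy + np.array([x, 0.0])
            err = np.mean(np.abs(cand[:, 0] - lane_actual[:, 0]))  # lateral residual only
            if err < best[2]:
                best = (x, th, err)
    x_corr, th_corr, err = best
    return (x_corr, th_corr) if err < tol else None  # None: no lateral match found
```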
Further, performing lateral positioning correction on the unmanned container truck comprises: acquiring the lateral position change and the heading angle change of the truck from the timestamp of the first image to a first current time; obtaining the residual lateral pose correction, comprising a residual lateral position correction and a residual heading angle correction, from the lateral pose deviation, the lateral position change, and the heading angle change; and applying, at the first current time, the residual lateral position correction and residual heading angle correction to the truck, completing the lateral positioning correction. The residual lateral position correction can be regarded as the lateral correction target, and the residual heading angle correction as the means of reaching it; once the lateral correction is complete, the truck can keep driving straight.
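One plausible reading of the residual correction, not stated explicitly in the patent, is "deviation measured at the image timestamp minus the motion accumulated since then":

```python
def residual_correction(deviation, change_since_image):
    """Residual pose correction to apply now: the deviation measured at the
    image timestamp, minus the pose change accumulated since that timestamp.
    Works for (lateral position, heading angle) and (longitudinal position, yaw)."""
    return tuple(d - c for d, c in zip(deviation, change_since_image))

# e.g. residual_correction((0.12, 0.02), (0.05, 0.01)) -> (0.07, 0.01)
```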
For longitudinal positioning correction, obtaining the longitudinal pose deviation of the unmanned container truck comprises: extracting a second image from the video stream of the first camera, and obtaining the initial position of the visual position of the bay number marking projected into the map coordinate system; in the map coordinate system, obtaining the longitudinal offset from the initial position of the bay number marking to its actual position; and taking at least the longitudinal offset as the longitudinal pose deviation.
The visual features further comprise ground arrows, the actual position of a ground arrow comprising the actual positions of its feature points and their feature descriptors. Obtaining the longitudinal pose deviation of the truck further comprises: obtaining the visual positions of the feature points and their feature descriptors, together with the initial positions of the feature points projected into the map coordinate system; matching feature points by their descriptors to obtain the yaw angle correction of the ground arrow at which the initial positions of the feature points coincide with their actual positions; and taking the longitudinal offset and the yaw angle correction as the longitudinal pose deviation.
Fig. 4 shows the main steps of obtaining the longitudinal pose deviation of the unmanned container truck in an embodiment, and fig. 5 shows a corresponding scenario. In the map coordinate system 300 of fig. 5, the actual position 310 of the lane line, the actual position 320 of the bay number marking for bay 03, and the actual position of the ground arrow 440 are shown; the feature points of the ground arrow are drawn as black dots on the arrow. These positions are likewise shown as real-world scenes for ease of presentation. The longitudinal direction Y1 is parallel to the Y axis of the map coordinate system 300.
With reference to figs. 4 and 5, obtaining the longitudinal pose deviation of the truck comprises: step S410, extracting a second image from the video stream of the first camera, obtaining the visual position of the bay number marking and projecting it to its initial position 320' in the map coordinate system 300, and obtaining the visual positions of the feature points with their feature descriptors and projecting them to their initial positions 440' in the map coordinate system 300; step S420, in the map coordinate system 300, obtaining the longitudinal offset DY from the initial position 320' of the bay number marking to its actual position 320 (i.e., the distance along the longitudinal direction Y1 between the initial position 320' and the actual position 320), and matching feature points by their descriptors to obtain the yaw angle correction θY of the ground arrow at which the initial positions 440' of the feature points coincide with their actual positions 440 (i.e., the rotation angle of the initial positions 440' relative to the actual positions 440 with respect to the longitudinal direction Y1); and step S430, taking the longitudinal offset DY and the yaw angle correction θY as the longitudinal pose deviation.
To obtain the visual position of the bay number marking and the visual positions and descriptors of the ground-arrow feature points, a trained detection network detects the bay number marking and the ground arrow in the second image, yielding the visual position of the bay number marking, i.e., its coordinates in the image coordinate system of the first camera. Feature point detection and descriptor computation are then performed on the ground arrow, yielding the visual position of each feature point and its feature descriptor, i.e., the coordinates of each feature point of the ground arrow in the image coordinate system of the first camera and a descriptor characterizing it. The detection network may use existing models, such as the deep-learning object detectors R-CNN and Fast R-CNN; feature point detection and descriptor computation may likewise use existing algorithms. These are not described further.
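For illustration only, since the patent names no specific feature algorithm, ORB features from OpenCV are one common choice for corner-like points such as arrow tips:

```python
import cv2

def arrow_features(image_bgr):
    """Detect feature points on a ground-arrow crop and compute descriptors.
    ORB is used purely as an example; the patent only requires some feature
    detector plus descriptor."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=200)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return keypoints, descriptors

def match_descriptors(desc_visual, desc_map):
    """Match observed descriptors against those stored in the visual map."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return sorted(matcher.match(desc_visual, desc_map), key=lambda m: m.distance)
```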
When the yaw angle correction θY of the ground arrow is obtained, a longitudinal position correction of the ground arrow is obtained at the same time: to bring the initial positions 440' of the feature points into coincidence with their actual positions 440, a position correction along the longitudinal direction Y1 and a yaw angle correction about Y1 must be applied simultaneously. The longitudinal position correction of the ground arrow and the longitudinal offset of the bay number marking may be averaged to give the final longitudinal offset.
Further, performing longitudinal positioning correction on the unmanned container truck comprises: acquiring the longitudinal position change and the yaw angle change of the truck from the timestamp of the second image to a second current time; obtaining the residual longitudinal pose correction, comprising a residual longitudinal position correction and a residual yaw angle correction, from the longitudinal pose deviation, the longitudinal position change, and the yaw angle change; and applying, at the second current time, the residual longitudinal position correction and residual yaw angle correction to the truck. The residual longitudinal position correction can be regarded as the longitudinal correction target, and the residual yaw angle correction as the means of reaching it; once the longitudinal correction is complete, the truck can arrive accurately at the corresponding bay position.
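Putting the pieces together, a sketch of the final longitudinal deviation follows; the averaging mirrors the text above, while the function names are assumptions:

```python
def longitudinal_pose_deviation(bay_offset_y, arrow_offset_y, arrow_yaw_corr):
    """Fuse the bay-marking offset and the ground-arrow offset into the final
    longitudinal pose deviation (D_Y, theta_Y)."""
    d_y = 0.5 * (bay_offset_y + arrow_offset_y)  # average the two longitudinal offsets
    return d_y, arrow_yaw_corr

# With residual_correction from the lateral example:
# d_y, yaw = longitudinal_pose_deviation(0.30, 0.26, 0.015)
# residual = residual_correction((d_y, yaw), (0.10, 0.005))
```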
The timestamp of the second image is later than that of the first image, and the second current time is later than the first current time; that is, when the truck is corrected laterally and longitudinally, the lateral correction is performed first and the longitudinal correction second. During travel, the truck is thus first kept centered between the lane lines by the lateral correction, and then brought accurately to the corresponding bay position by the longitudinal correction.
The positioning correction described above may be performed in stages during travel. For example, whenever the truck reaches one bay and needs to move to the next, a positioning correction is performed so that port operations continue accurately.
Further, in one embodiment, the positioning sources of the unmanned container truck include, besides the visual sensor, i.e., the first camera, other sensors such as a lidar and an odometer. During automated driving, in addition to the lateral and longitudinal positioning corrections based on the first camera performed according to the method above, positioning corrections are also supplied by the other sensors. Fused positioning of the truck is thus achieved by combining the visual positioning means, i.e., lateral and longitudinal correction based on visual features, with the positioning means of the other sensors.
In addition, while fusing multiple positioning sources improves positioning accuracy, it also raises the corresponding failure rate, because an error or failure of any single source is reflected in the final output. A positioning deviation alarm is therefore applied to the fused result as well. For the alarm, the visual features are projected from the image coordinate system into the map coordinate system, the projected feature coordinates are compared with the true coordinates stored in the map in advance, the accuracy of the fused positioning result is determined, and the positioning deviation alarm is raised accordingly.
Fig. 6 shows the steps of the visual positioning method in yet another embodiment. Referring to fig. 6, the method further includes: step S610, verifying the fused positioning result of the truck from the positions of the visual features (including the lane lines, bay number markings, ground arrows, and the like) in the image coordinate system of a second camera of the truck and their actual positions, comprising: step S610-2, extracting a frame image from the video stream of the second camera, obtaining the position of each visual feature, and obtaining the projected position of each feature in the map coordinate system; and step S610-4, obtaining a verification result for the fused positioning result from the differences between the projected positions and the actual positions of the features in the map coordinate system. When the verification result shows that the accuracy of the fused positioning result is greater than or equal to the pass threshold but below the optimal threshold, the method returns to step S120 to correct the truck's positioning again; at this point the accuracy of the fused result can be improved by slowing down, suitably postponing maneuvers such as lane changes and overtaking, and adjusting lighting to compensate for ambient light. When the verification result shows that the accuracy of the fused result is below the pass threshold, step S610-6 is executed: an alarm is issued, the positioning anomaly is reported, and the user is reminded to exit the automated driving mode. The fused positioning result includes the visual positioning result obtained from the lateral and longitudinal corrections based on the first camera, and the positioning results obtained from corrections based on the other sensors.
The second camera and the first camera are preferably different onboard cameras of the truck, improving the reliability of the verification result. When projecting a visual feature such as a lane line into the map coordinate system, the projection may use the relative position of the truck with respect to the left and right lane lines obtained from the frame image of the second camera, together with the position of the second camera. The left and right lane lines are back-projected into the map coordinate system and compared with the actual lane-line positions, and the accuracy of the fused positioning result is obtained from the coordinate differences. The pass threshold and the optimal threshold can be set as needed, for example a pass threshold of 90% and an optimal threshold of 98%.
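A minimal sketch of the threshold logic follows. Treating the accuracy as the fraction of back-projected lane-line points that land within a distance tolerance of their mapped counterparts is an assumption, since the patent leaves the metric loosely specified:

```python
import numpy as np

PASS_THRESHOLD = 0.90
OPTIMAL_THRESHOLD = 0.98

def verify_fused_positioning(projected_pts, actual_pts, tol_m=0.10):
    """Fraction of back-projected feature points within tol_m of their mapped
    positions, followed by the three-way decision described in the text.
    Assumes point-for-point correspondence between the two Nx2 arrays."""
    dists = np.linalg.norm(np.asarray(projected_pts) - np.asarray(actual_pts), axis=1)
    accuracy = float(np.mean(dists < tol_m))
    if accuracy < PASS_THRESHOLD:
        return accuracy, "alarm"       # report positioning anomaly, exit autonomy
    if accuracy < OPTIMAL_THRESHOLD:
        return accuracy, "re-correct"  # return to step S120
    return accuracy, "ok"
```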
In one embodiment, only the pass threshold, e.g., 90%, may be set. That is, after the verification result of the fused positioning is obtained, it is compared with the pass threshold: if it is below the pass threshold, an alarm is issued and the positioning anomaly is reported; if it is greater than or equal to the pass threshold, the fused positioning is accurate and the truck's automated driving mode can continue.
The verification of the fused positioning result may be executed periodically while the truck is driving. By verifying the fused result of the visual positioning against the reliable visual relative relationships and absolute map coordinates, the error of the visual positioning is estimated in real time, ensuring that the visual positioning is fully reliable and achieving high-accuracy positioning of the unmanned container truck.
An embodiment of the invention also provides a visual positioning system for the unmanned container truck, which can implement the visual positioning method described in any of the above embodiments. The features and principles of the visual positioning method described in any embodiment above apply to the following system embodiments, and will not be repeated there.
Fig. 7 shows the main modules of the visual positioning system for the unmanned container truck in an embodiment. Referring to fig. 7, the visual positioning system 700 comprises: a visual map building module 710 for acquiring the actual positions of visual features in a map coordinate system, the visual features comprising lane lines and bay number markings; and a positioning correction module 720 for, during travel of the truck, obtaining the pose deviation of the truck from the visual positions and actual positions of the visual features in the image coordinate system of a first camera of the truck, and performing positioning correction on the truck, comprising: a lateral pose correction unit 720-2 for obtaining the lateral pose deviation of the truck from the visual position and the actual position of a lane line and performing lateral positioning correction on the truck; and a longitudinal pose correction unit 720-4 for obtaining the longitudinal pose deviation of the truck from the visual position and the actual position of a bay number marking and performing longitudinal positioning correction on the truck.
Further, the visual positioning system 700 may also include modules implementing the other process steps of the method embodiments above, for example a positioning deviation alarm module that projects the visual features into the map coordinate system, obtains the deviation of the fused positioning result, and raises a positioning deviation alarm for the truck. For the specific principles of each module, refer to the method embodiments above; they are not repeated here.
As described above, the visual positioning system for the unmanned container truck of the present invention constructs a multi-feature visual map based on map coordinates by acquiring the actual positions of the visual features in a map coordinate system; during travel, the visual map is combined with visual measurements: the truck's camera captures the visual features, which are matched against the visual map to obtain the truck's pose deviation, and lateral and longitudinal positioning corrections are applied, achieving fast and accurate positioning. Furthermore, after the lateral and longitudinal corrections, the fused positioning result is verified against the reliable visual relative relationships and absolute map coordinates, and a positioning deviation alarm is raised based on the deviation obtained after projecting the visual features into the map coordinate system, ensuring that the visual positioning is fully reliable.
An embodiment of the invention also provides an electronic device comprising a processor and a memory storing executable instructions; when the executable instructions are executed by the processor, the visual positioning method for the unmanned container truck of any of the above embodiments is implemented.
As described above, the electronic device of the present invention constructs a multi-feature visual map based on map coordinates by acquiring the actual positions of the visual features in a map coordinate system; during travel of the unmanned container truck, the visual map is combined with visual measurements to obtain the truck's pose deviation and apply lateral and longitudinal positioning corrections, achieving fast and accurate positioning; furthermore, after the corrections, the fused positioning result is verified against the reliable visual relative relationships and absolute map coordinates, and a positioning deviation alarm is raised based on the deviation obtained after projecting the visual features into the map coordinate system, ensuring that the visual positioning is fully reliable.
Fig. 8 is a schematic structural diagram of the electronic device in an embodiment of the present invention. It should be understood that fig. 8 only schematically illustrates the modules, which may be virtual software modules or actual hardware modules; combinations, splits, and additions of these modules remain within the scope of the present invention.
As shown in fig. 8, electronic device 800 is in the form of a general purpose computing device. The components of the electronic device 800 include, but are not limited to: at least one processing unit 810, at least one memory unit 820, a bus 830 connecting different platform components (including memory unit 820 and processing unit 810), a display unit 840, etc.
The storage unit stores program code executable by the processing unit 810, so that the processing unit 810 performs the steps of the visual positioning method for the unmanned container truck described in any of the above embodiments. For example, the processing unit 810 may perform the steps shown in fig. 1.
The storage unit 820 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 8201 and/or a cache memory unit 8202, and may further include a read-only memory unit (ROM) 8203.
The electronic device 800 may also communicate with one or more external devices 8000, such as a keyboard, a pointing device, or a Bluetooth device, which enable a user to interact with the electronic device 800. The electronic device 800 may also communicate with one or more other computing devices, such as routers and modems. Such communication may occur via input/output (I/O) interfaces 850. Moreover, the electronic device 800 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via the network adapter 860, which may communicate with other modules of the electronic device 800 via the bus 830. It should be appreciated that, although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 800, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage platforms.
An embodiment of the present invention further provides a computer-readable storage medium storing a program that, when executed, implements the visual positioning method for the unmanned container truck of any of the above embodiments. In some possible embodiments, aspects of the invention may also be implemented as a program product comprising program code that, when run on a terminal device, causes the terminal device to perform the visual positioning method of any of the above embodiments.
As described above, the computer-readable storage medium of the present invention constructs a multi-feature visual map based on map coordinates by acquiring the actual positions of the visual features in a map coordinate system; during travel of the unmanned container truck, the visual map is combined with visual measurements to obtain the truck's pose deviation and apply lateral and longitudinal positioning corrections, achieving fast and accurate positioning; furthermore, after the corrections, the fused positioning result is verified against the reliable visual relative relationships and absolute map coordinates, and a positioning deviation alarm is raised based on the deviation obtained after projecting the visual features into the map coordinate system, ensuring that the visual positioning is fully reliable.
Fig. 9 is a schematic structural diagram of a computer-readable storage medium of the present invention. Referring to fig. 9, a program product 900 for implementing the above method according to an embodiment of the present invention is shown; it may employ a portable compact disc read-only memory (CD-ROM) including program code and may run on a terminal device such as a personal computer. However, the program product of the present invention is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of readable storage media include, but are not limited to: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including but not limited to electromagnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium, other than a readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, and the like, or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the latter case, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device, for example, through the Internet using an Internet service provider.
The foregoing is a detailed description of the invention in connection with specific preferred embodiments, and the specific implementation of the invention is not limited to these descriptions. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the concept of the invention, and all of these shall be regarded as falling within the protection scope of the invention.
Claims (14)
1. A visual positioning method for an unmanned container truck, characterized by comprising the following steps:
acquiring the actual position of a visual feature in a map coordinate system, wherein the visual feature comprises a lane line and a bay mark;
during travel of the unmanned container truck, obtaining the pose deviation of the truck according to the visual position of the visual feature in the image coordinate system of a first camera of the truck and the actual position, and performing positioning correction on the truck, comprising:
obtaining the lateral pose deviation of the unmanned container truck according to the visual position of the lane line and the actual position of the lane line, and performing lateral positioning correction on the truck;
and obtaining the longitudinal pose deviation of the unmanned container truck according to the visual position of the bay mark and the actual position of the bay mark, and performing longitudinal positioning correction on the truck.
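Both corrections in claim 1 rest on the same primitive: a feature detected in the first camera's image is projected into the map coordinate system so that it can be compared with its surveyed position. The following is a minimal sketch of that projection, assuming a pre-calibrated ground-plane homography; the matrix `H` and the pixel coordinates are illustrative values, not taken from the patent.

```python
# A minimal sketch, assuming a pre-calibrated ground-plane homography H
# (image pixels -> metric map coordinates). H and the feature pixel below
# are illustrative, not values from the patent.
import numpy as np

def pixel_to_map(pixel_uv: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Project an image point onto the map ground plane via homography H."""
    uv1 = np.array([pixel_uv[0], pixel_uv[1], 1.0])
    xyw = H @ uv1
    return xyw[:2] / xyw[2]          # homogeneous -> metric map coordinates

H = np.array([[0.01, 0.0,  -3.2],    # toy calibration for illustration
              [0.0,  0.02,  5.0],
              [0.0,  0.001, 1.0]])
print(pixel_to_map(np.array([640.0, 360.0]), H))
```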
2. The visual positioning method of claim 1, wherein obtaining the lateral pose deviation of the unmanned container truck comprises:
extracting a first image from the video stream of the first camera, obtaining the visual position of the lane line, and obtaining an initial position of the first camera projected into the map coordinate system;
in the map coordinate system, correcting the initial position of the first camera in the lateral and heading directions, and obtaining the lateral position correction amount and heading angle correction amount of the initial position of the first camera at which the visual position of the lane line matches the actual position of the lane line;
and taking the lateral position correction amount and the heading angle correction amount as the lateral pose deviation.
3. The visual positioning method of claim 2, wherein correcting the initial position of the first camera in the lateral and heading directions comprises:
obtaining an initial position of the visual position of the lane line projected into the map coordinate system;
performing a lateral position search and a heading angle search around the initial position of the first camera, and after each search, adjusting the initial position of the lane line according to the currently searched lateral position step and heading angle, to obtain a current position of the lane line;
and when the current position of the lane line laterally coincides with the actual position of the lane line, determining that the visual position of the lane line matches the actual position of the lane line.
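A minimal sketch of the lateral/heading search described in claims 2 and 3, under two simplifying assumptions not stated in the claims: the mapped lane line is locally straight and runs along the map y-axis at `lane_x`, and the search is an exhaustive grid over fixed step sizes. All ranges and steps are illustrative.

```python
# A sketch of the lateral/heading grid search; step sizes and search ranges
# are illustrative assumptions, not values from the patent.
import numpy as np

def search_lateral_heading(lane_pts, lane_x, dx_steps, dyaw_steps):
    """Return (dx, dyaw) best aligning projected lane points with the map lane."""
    best = (0.0, 0.0, np.inf)
    for dx in dx_steps:
        for dyaw in dyaw_steps:
            c, s = np.cos(dyaw), np.sin(dyaw)
            rot = np.array([[c, -s], [s, c]])
            moved = lane_pts @ rot.T + np.array([dx, 0.0])
            cost = np.mean(np.abs(moved[:, 0] - lane_x))  # lateral residual only
            if cost < best[2]:
                best = (dx, dyaw, cost)
    return best[0], best[1]

lane_pts = np.array([[0.3, y] for y in np.linspace(0.0, 20.0, 40)])  # detections
dx, dyaw = search_lateral_heading(
    lane_pts, lane_x=0.0,
    dx_steps=np.arange(-0.5, 0.5, 0.02),
    dyaw_steps=np.deg2rad(np.arange(-2.0, 2.0, 0.1)))
print(dx, dyaw)   # ~ -0.3 m lateral offset, ~0 rad heading offset
```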
4. The visual positioning method of claim 2, wherein performing lateral positioning correction on the unmanned container truck comprises:
acquiring the lateral position variation and heading angle variation of the unmanned container truck from the timestamp of the first image to a first current moment;
obtaining a lateral pose residual correction according to the lateral pose deviation, the lateral position variation and the heading angle variation;
and according to the lateral pose residual correction, performing lateral position correction and heading angle correction on the unmanned container truck at the first current moment.
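Claim 4 compensates for the truck's motion between the first image's timestamp and the moment the correction is applied. One plausible reading, sketched below for the lateral case (claim 8's longitudinal correction has the same structure), is that the residual correction is the measured deviation minus the pose change accumulated since the timestamp; all numbers are illustrative.

```python
# A sketch of one plausible residual-correction rule: the deviation was
# measured at the image timestamp, so motion accumulated since then is
# subtracted before the correction is applied "now". Values are illustrative.
def residual_correction(measured_dev, motion_since_stamp):
    """Deviation still valid at the current moment."""
    return measured_dev - motion_since_stamp

lateral_dev, heading_dev = 0.18, 0.010   # from lane-line matching (m, rad)
d_lat, d_yaw = 0.05, 0.002               # odometry change since the image stamp
lat_fix = residual_correction(lateral_dev, d_lat)    # 0.13 m still to correct
yaw_fix = residual_correction(heading_dev, d_yaw)    # 0.008 rad still to correct
```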
5. The visual positioning method of claim 2, wherein a trained segmentation network is used to segment the pixel points of the lane line from the first image to obtain the visual position of the lane line.
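A sketch of how the segmented lane-line pixels of claim 5 might be turned into a usable visual position, with the trained segmentation network treated as a black box; the synthetic mask stands in for its output, and the closing line fit is one common convention rather than anything the claim specifies.

```python
# A sketch assuming a trained segmentation network (not shown) produced a
# per-pixel lane-line mask; the mask here is synthetic.
import numpy as np

def lane_pixels_from_mask(mask):
    vs, us = np.nonzero(mask)          # row (v) and column (u) indices
    return np.stack([us, vs], axis=1)  # (u, v) coordinates of lane pixels

mask = np.zeros((720, 1280), dtype=np.uint8)   # stand-in for network output
mask[300:700, 620:640] = 1                     # synthetic lane-line blob
pts = lane_pixels_from_mask(mask)
slope, intercept = np.polyfit(pts[:, 1], pts[:, 0], 1)  # u = slope*v + intercept
```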
6. The visual positioning method of claim 1, wherein obtaining the longitudinal pose deviation of the unmanned container truck comprises:
extracting a second image from the video stream of the first camera, and obtaining an initial position of the visual position of the bay mark projected into the map coordinate system;
in the map coordinate system, acquiring the longitudinal position deviation from the initial position of the bay mark to the actual position of the bay mark;
and taking at least the longitudinal position deviation as the longitudinal pose deviation.
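A minimal sketch of the longitudinal deviation of claim 6, assuming the bay mark's detected centre has already been projected into the map frame (for instance with the homography sketch after claim 1) and that the lane's longitudinal direction is known as a unit vector; all coordinates are illustrative.

```python
# A sketch: signed offset from the projected bay mark to its mapped
# position, measured along the lane direction. Coordinates are illustrative.
import numpy as np

def longitudinal_deviation(mark_initial, mark_actual, lane_dir):
    """Component of (actual - initial) along the lane's unit direction."""
    return float(np.dot(mark_actual - mark_initial, lane_dir))

lane_dir = np.array([0.0, 1.0])                  # lane runs along map y
dev = longitudinal_deviation(np.array([12.1, 48.6]),
                             np.array([12.1, 48.9]), lane_dir)   # +0.3 m
```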
7. The visual positioning method of claim 6, wherein the visual features further comprise a ground arrow, and the actual position of the ground arrow comprises the actual positions of the feature points of the ground arrow and their feature descriptors;
and obtaining the longitudinal pose deviation of the unmanned container truck further comprises:
obtaining the visual positions of the feature points and their feature descriptors, and initial positions of the visual positions projected into the map coordinate system;
matching the feature points according to the feature descriptors, and obtaining the yaw angle correction amount of the ground arrow at which the initial positions of the feature points coincide with the actual positions of the feature points;
and taking the longitudinal position deviation and the yaw angle correction amount as the longitudinal pose deviation.
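Claim 7 recovers a yaw correction from ground-arrow feature points matched by descriptor. Below is a sketch under the assumption that matching has already produced paired 2-D point sets; the best-fit rotation between them is then a closed-form 2-D Procrustes/Kabsch problem. The point sets are synthetic.

```python
# A sketch of recovering the yaw correction from matched 2-D point sets
# (least-squares rotation between centred sets). Points are synthetic.
import numpy as np

def yaw_correction(p_init, p_actual):
    """Rotation angle (rad) best aligning matched 2-D point sets."""
    a = p_init - p_init.mean(axis=0)
    b = p_actual - p_actual.mean(axis=0)
    num = np.sum(a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0])   # cross terms
    den = np.sum(a[:, 0] * b[:, 0] + a[:, 1] * b[:, 1])   # dot terms
    return float(np.arctan2(num, den))

theta = 0.03
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
pts = np.random.default_rng(0).uniform(0.0, 2.0, size=(6, 2))
print(yaw_correction(pts, pts @ rot.T))   # ~0.03 rad
```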
8. The visual positioning method of claim 7, wherein performing longitudinal positioning correction on the unmanned container truck comprises:
acquiring the longitudinal position variation and yaw angle variation of the unmanned container truck from the timestamp of the second image to a second current moment;
obtaining a longitudinal pose residual correction according to the longitudinal pose deviation, the longitudinal position variation and the yaw angle variation;
and according to the longitudinal pose residual correction, performing longitudinal position correction and yaw angle correction on the unmanned container truck at the second current moment (the residual-correction sketch after claim 4 applies here with longitudinal quantities).
9. The visual positioning method of claim 7, wherein the visual position of the bay mark is obtained by detecting the bay mark and the ground arrow in the second image with a trained detection network; and
performing feature point detection and descriptor computation on the ground arrow to obtain the visual positions of the feature points and their feature descriptors.
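A sketch of the second half of claim 9, assuming the detection network has already returned a bounding box for the ground arrow; ORB is used as a stand-in, since the patent does not name a specific feature detector or descriptor.

```python
# A sketch: feature points and descriptors on a "detected" ground-arrow
# crop. The image, arrow, and bounding box are synthetic; ORB is an
# assumed stand-in for the unspecified detector/descriptor.
import cv2
import numpy as np

image = np.zeros((720, 1280), dtype=np.uint8)
cv2.arrowedLine(image, (600, 600), (600, 400), 255, 15)   # synthetic arrow
x, y, w, h = 550, 350, 120, 300                            # "detected" box
crop = image[y:y + h, x:x + w]

orb = cv2.ORB_create(nfeatures=200)
keypoints, descriptors = orb.detectAndCompute(crop, None)
# Keypoint pixel positions, shifted back to full-image coordinates:
positions = [(kp.pt[0] + x, kp.pt[1] + y) for kp in keypoints]
```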
10. The visual positioning method of any of claims 1-9, further comprising:
verifying a fusion positioning result of the unmanned container truck according to the position of the visual feature in the image coordinate system of a second camera of the truck and the actual position of the visual feature, comprising:
extracting an image from the video stream of the second camera, obtaining the position of each visual feature, and obtaining the projected position of each visual feature in the map coordinate system;
obtaining a verification result of the fusion positioning result according to the difference between the projected position of each visual feature and its actual position in the map coordinate system;
when the verification result indicates that the positioning accuracy of the fusion positioning result is greater than or equal to a passing threshold and smaller than an optimal threshold, returning to the step of performing positioning correction on the unmanned container truck;
and when the verification result indicates that the positioning accuracy of the fusion positioning result is smaller than the passing threshold, sending alarm information;
wherein the fusion positioning result comprises a visual positioning result obtained by performing lateral positioning correction and longitudinal positioning correction on the unmanned container truck based on the first camera.
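A sketch of the three-way verification in claim 10, interpreting "positioning accuracy" as the inverse of the mean distance between projected and surveyed feature positions; the two threshold values are illustrative assumptions, not figures from the patent.

```python
# A sketch of the verification decision; thresholds are illustrative and
# "accuracy" is read here as the inverse of mean reprojection error.
import numpy as np

def verify(projected, actual, pass_thr=0.30, optimal_thr=0.10):
    err = float(np.mean(np.linalg.norm(projected - actual, axis=1)))
    if err < optimal_thr:
        return "optimal"          # fusion result accepted as-is
    if err < pass_thr:
        return "re-correct"       # return to the positioning-correction step
    return "alarm"                # below the passing threshold: send alarm

proj = np.array([[12.10, 48.62], [3.40, 50.11]])
act  = np.array([[12.05, 48.60], [3.38, 50.02]])
print(verify(proj, act))
```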
11. The visual positioning method of claim 10, wherein the second camera and the first camera are different on-board cameras of the unmanned container truck.
12. A visual positioning system for an unmanned container truck, comprising:
a visual map establishing module, configured to acquire the actual position of a visual feature in a map coordinate system, wherein the visual feature comprises a lane line and a bay mark;
a positioning correction module, configured to, during travel of the unmanned container truck, obtain the pose deviation of the truck according to the visual position of the visual feature in the image coordinate system of a first camera of the truck and the actual position, and perform positioning correction on the truck, comprising:
a lateral pose correction unit, configured to obtain the lateral pose deviation of the unmanned container truck according to the visual position of the lane line and the actual position of the lane line, and perform lateral positioning correction on the truck;
and a longitudinal pose correction unit, configured to obtain the longitudinal pose deviation of the unmanned container truck according to the visual position of the bay mark and the actual position of the bay mark, and perform longitudinal positioning correction on the truck.
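A structural sketch of the system of claim 12, showing one way the two correction units might be composed inside the positioning correction module; the method bodies are placeholders for the computations sketched under the corresponding method claims.

```python
# A structural sketch only; class and method names are illustrative.
class VisualMapModule:
    def actual_position(self, feature_id):
        ...                      # look up lane line / bay mark in the map

class LateralPoseUnit:
    def correct(self, truck, lane_visual, lane_actual):
        ...                      # lateral deviation -> lateral correction

class LongitudinalPoseUnit:
    def correct(self, truck, mark_visual, mark_actual):
        ...                      # longitudinal deviation -> correction

class PositioningCorrectionModule:
    def __init__(self):
        self.lateral = LateralPoseUnit()
        self.longitudinal = LongitudinalPoseUnit()
```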
13. An electronic device, comprising:
a processor;
a memory having executable instructions stored therein;
wherein the executable instructions, when executed by the processor, implement the visual positioning method for an unmanned container truck of any of claims 1-11.
14. A computer-readable storage medium storing a program, wherein the program, when executed by a processor, implements the visual positioning method for an unmanned container truck of any of claims 1-11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110741222.9A CN113469045B (en) | 2021-06-30 | 2021-06-30 | Visual positioning method and system for unmanned integrated card, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110741222.9A CN113469045B (en) | 2021-06-30 | 2021-06-30 | Visual positioning method and system for unmanned integrated card, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113469045A (en) | 2021-10-01
CN113469045B (en) | 2023-05-02
Family
ID=77876958
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110741222.9A Active CN113469045B (en) | 2021-06-30 | 2021-06-30 | Visual positioning method and system for unmanned integrated card, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113469045B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114114369A (en) * | 2022-01-27 | 2022-03-01 | 智道网联科技(北京)有限公司 | Autonomous vehicle positioning method and apparatus, electronic device, and storage medium |
CN114689044A (en) * | 2022-03-28 | 2022-07-01 | 重庆长安汽车股份有限公司 | Fusion positioning system and method for dealing with failure scene of global navigation satellite system |
WO2024169256A1 (en) * | 2023-02-17 | 2024-08-22 | 上海西井科技股份有限公司 | Automatic container pickup method and apparatus for automated straddle carrier, electronic device and storage medium |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105303555A (en) * | 2015-09-18 | 2016-02-03 | 浙江工业大学 | Binocular vision-based method and system for locating and guiding trucks |
CN107728175A (en) * | 2017-09-26 | 2018-02-23 | 南京航空航天大学 | The automatic driving vehicle navigation and positioning accuracy antidote merged based on GNSS and VO |
CN108182552A (en) * | 2018-02-02 | 2018-06-19 | 上海西井信息科技有限公司 | Loading-unloading method, system, equipment and the storage medium of container ship |
US20180292543A1 (en) * | 2017-04-11 | 2018-10-11 | Autoliv Asp, Inc. | Global navigation satellite system vehicle position augmentation utilizing map enhanced dead reckoning |
CN108875689A (en) * | 2018-07-02 | 2018-11-23 | 上海西井信息科技有限公司 | Automatic driving vehicle alignment method, system, equipment and storage medium |
CN109613584A (en) * | 2018-12-27 | 2019-04-12 | 北京主线科技有限公司 | The positioning and orienting method of unmanned truck based on UWB |
CN110849367A (en) * | 2019-10-08 | 2020-02-28 | 杭州电子科技大学 | Indoor positioning and navigation method based on visual SLAM fused with UWB |
CN111137279A (en) * | 2020-01-02 | 2020-05-12 | 广州赛特智能科技有限公司 | Port unmanned truck collection station parking method and system |
CN111242031A (en) * | 2020-01-13 | 2020-06-05 | 禾多科技(北京)有限公司 | Lane line detection method based on high-precision map |
US20200198149A1 (en) * | 2018-12-24 | 2020-06-25 | Ubtech Robotics Corp Ltd | Robot vision image feature extraction method and apparatus and robot using the same |
US20200218905A1 (en) * | 2019-01-08 | 2020-07-09 | Qualcomm Incorporated | Lateral and longitudinal offset tracking in vehicle position estimation |
CN111986261A (en) * | 2020-08-13 | 2020-11-24 | 清华大学苏州汽车研究院(吴江) | Vehicle positioning method and device, electronic equipment and storage medium |
CN112285734A (en) * | 2020-10-30 | 2021-01-29 | 北京斯年智驾科技有限公司 | Spike-based high-precision alignment method and system for unmanned port container truck |
CN112415548A (en) * | 2020-11-09 | 2021-02-26 | 北京斯年智驾科技有限公司 | Unmanned card-collecting positioning method, device and system, electronic device and storage medium |
CN112964260A (en) * | 2021-02-01 | 2021-06-15 | 东风商用车有限公司 | Automatic driving positioning method, device, equipment and storage medium |
Non-Patent Citations (3)
Title |
---|
ZHIQI FENG ET AL.: "Decision Making and Local Trajectory Planning for Autonomous Driving in Off-road Environment", 2020 3rd International Conference on Unmanned Systems (ICUS) * |
WANG Pei et al.: "Application of driverless technology in ports", Port Science & Technology * |
ZHAO Xiang et al.: "Lane-level positioning method based on vision and millimeter-wave radar", Journal of Shanghai Jiao Tong University * |
Also Published As
Publication number | Publication date |
---|---|
CN113469045B (en) | 2023-05-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11042762B2 (en) | Sensor calibration method and device, computer device, medium, and vehicle | |
CN110163930B (en) | Lane line generation method, device, equipment, system and readable storage medium | |
US10943355B2 (en) | Systems and methods for detecting an object velocity | |
EP3361278B1 (en) | Autonomous vehicle localization based on walsh kernel projection technique | |
CN110462343A (en) | The automated graphics for vehicle based on map mark | |
CN108764187A (en) | Extract method, apparatus, equipment, storage medium and the acquisition entity of lane line | |
CN113469045A (en) | Unmanned card-collecting visual positioning method and system, electronic equipment and storage medium | |
CN110415550B (en) | Automatic parking method based on vision | |
CN109086277A (en) | A kind of overlay region building ground drawing method, system, mobile terminal and storage medium | |
Shunsuke et al. | GNSS/INS/on-board camera integration for vehicle self-localization in urban canyon | |
US11403947B2 (en) | Systems and methods for identifying available parking spaces using connected vehicles | |
CN112232275B (en) | Obstacle detection method, system, equipment and storage medium based on binocular recognition | |
US12012102B2 (en) | Method for determining a lane change indication of a vehicle | |
US20210180958A1 (en) | Graphic information positioning system for recognizing roadside features and method using the same | |
CN110119138A (en) | For the method for self-locating of automatic driving vehicle, system and machine readable media | |
CN111353453B (en) | Obstacle detection method and device for vehicle | |
US11842440B2 (en) | Landmark location reconstruction in autonomous machine applications | |
CN115705693A (en) | Method, system and storage medium for annotation of sensor data | |
US20240221390A1 (en) | Lane line labeling method, electronic device and storage medium | |
CN116295508A (en) | Road side sensor calibration method, device and system based on high-precision map | |
CN116762094A (en) | Data processing method and device | |
CN116642511A (en) | AR navigation image rendering method and device, electronic equipment and storage medium | |
CN113885496A (en) | Intelligent driving simulation sensor model and intelligent driving simulation method | |
CN114972494B (en) | Map construction method and device for memorizing parking scene | |
CN115376365B (en) | Vehicle control method, device, electronic equipment and computer readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP01 | Change in the name or title of a patent holder |
Address after: Room 503-3, 398 Jiangsu Road, Changning District, Shanghai 200050
Patentee after: Shanghai Xijing Technology Co.,Ltd.
Address before: Room 503-3, 398 Jiangsu Road, Changning District, Shanghai 200050
Patentee before: SHANGHAI WESTWELL INFORMATION AND TECHNOLOGY Co.,Ltd.