CN117522952A - Auxiliary positioning method, system and computer medium for vehicle


Info

Publication number
CN117522952A
Authority
CN
China
Prior art keywords: image, vehicle, eye, coordinates, target vehicle
Legal status
Pending
Application number
CN202210909225.3A
Other languages
Chinese (zh)
Inventor
刘现款
陈国芳
Current Assignee
BYD Co Ltd
Original Assignee
BYD Co Ltd
Application filed by BYD Co Ltd
Priority to CN202210909225.3A
Publication of CN117522952A


Classifications

    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/13: Edge detection
    • G06V 10/26: Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/761: Image or video pattern matching; proximity, similarity or dissimilarity measures
    • G06V 10/766: Pattern recognition or machine learning using regression, e.g. by projecting features on hyperplanes
    • G06V 10/82: Pattern recognition or machine learning using neural networks
    • G06V 20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 30/10: Character recognition
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20164: Salient point detection; corner detection
    • G06V 2201/08: Detecting or categorising vehicles
    • Y02T 10/40: Engine management systems

Abstract

The present invention relates to the field of vehicle positioning technologies, and in particular to a method, a system and a computer medium for assisting in positioning a vehicle. In the method, pixel point coordinates representing a target vehicle are determined from an acquired image to be processed and transformed with a homography matrix to obtain a reference coordinate set; corner coordinates are determined from the reference coordinate set and a reference size is calculated from them. When the reference size is detected to be consistent with the actual size, a reference center point is calculated from the reference coordinate set to obtain an auxiliary positioning coordinate. Vehicle positioning is thus performed through images to be processed collected by a camera, which acquires images at a higher frequency than the sensors, so vehicle positioning can be assisted when an abnormality of a vehicle-mounted sensor causes positioning errors. Verifying the positioning through the vehicle size avoids visual positioning errors, thereby improving the accuracy of vehicle positioning.

Description

Auxiliary positioning method, system and computer medium for vehicle
Technical Field
The present invention relates to the field of vehicle positioning technologies, and in particular, to a method, a system, and a computer medium for assisting in positioning a vehicle.
Background
At present, vehicle positioning is generally performed through multi-sensor fusion, combining a vehicle-mounted global navigation positioning system, a laser radar, inertial navigation, a wheel speed device and other sensors. How to maintain the accuracy of vehicle positioning when a vehicle-mounted sensor is in an abnormal state has therefore become an urgent problem to be solved.
Disclosure of Invention
In view of the above, the embodiments of the present invention provide a method, a system and a computer medium for assisting in positioning a vehicle, so as to solve the problem of low vehicle positioning accuracy when a vehicle-mounted sensor is in an abnormal state.
In a first aspect, an embodiment of the present invention provides an auxiliary positioning method for a vehicle, including:
determining pixel point coordinates representing a target vehicle from an acquired image to be processed, transforming the pixel point coordinates by using a homography matrix, and determining, from the transformation result, a reference coordinate set representing the target vehicle, wherein the homography matrix is the transformation relation between a preset map coordinate system and the image coordinate system of the image to be processed;
determining the reference coordinates meeting preset conditions in the reference coordinate set as the corner coordinates of the target vehicle, calculating the reference size of the target vehicle according to the corner coordinates, and acquiring the actual size of the target vehicle;
and when the reference size is detected to be consistent with the actual size, calculating a reference center point of the target vehicle according to the reference coordinate set, and determining the reference coordinate corresponding to the reference center point as an auxiliary positioning coordinate.
In a second aspect, an embodiment of the present invention provides an auxiliary positioning system for a vehicle, the auxiliary positioning system including:
the system comprises an image collector, a memory, a controller and a positioning terminal;
the image collector is connected with the controller, is deployed at the top of the station platform in a fixed pose, continuously collects continuous images from a top-down viewing angle, and sends the continuous images to the controller;
the controller is connected with the memory, and when the controller receives the continuous images, the controller identifies whether the continuous images contain a target vehicle or not;
when the continuous images are identified as containing a target vehicle, the controller acquires pixel point coordinates representing the target vehicle in the continuous images, and transforms the pixel point coordinates by using a preset homography matrix to obtain a reference coordinate set of the target vehicle;
the controller calculates the reference size of the target vehicle according to the reference coordinate set, and acquires the stored actual size corresponding to the target vehicle from the memory;
When the reference size is detected to be consistent with the actual size, the controller calculates a reference center point of the target vehicle according to the reference coordinate set, and determines a reference coordinate corresponding to the reference center point as an auxiliary positioning coordinate;
the controller is connected with the positioning terminal and sends the auxiliary positioning coordinates to the positioning terminal for auxiliary positioning.
In a third aspect, embodiments of the present invention provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the assisted positioning method according to the first aspect.
Compared with the prior art, the embodiment of the invention has the beneficial effects that:
The method determines pixel point coordinates representing a target vehicle from an acquired image to be processed, transforms the pixel point coordinates with a homography matrix, and determines, from the transformation result, a reference coordinate set representing the target vehicle, the homography matrix being the transformation relation between a preset map coordinate system and the image coordinate system of the image to be processed. The reference coordinates meeting preset conditions in the reference coordinate set are determined as the corner coordinates of the target vehicle, the reference size of the target vehicle is calculated from the corner coordinates, and the actual size of the target vehicle is acquired. When the reference size is detected to be consistent with the actual size, the reference center point of the target vehicle is calculated from the reference coordinate set and the reference coordinate corresponding to the reference center point is determined as the auxiliary positioning coordinate. Because vehicle positioning is performed on images to be processed collected by a camera, which acquires image data at a higher frequency than the vehicle-mounted sensors, vehicle positioning can be assisted when an abnormality of a vehicle-mounted sensor causes positioning errors. Verifying the positioning through the vehicle size avoids positioning errors caused by camera offset, thereby improving the accuracy of vehicle positioning.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of an application environment of a vehicle positioning assisting method according to an embodiment of the invention;
fig. 2 is a flowchart of a method for assisting in positioning a vehicle according to a first embodiment of the present invention;
fig. 3 is a flow chart of an auxiliary positioning method for a vehicle according to a second embodiment of the present invention;
fig. 4 is a system architecture diagram of an auxiliary positioning system for a vehicle according to a third embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in the present description and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims, are used for distinguishing between descriptions and not necessarily for indicating or implying a relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the invention. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
It should be understood that the sequence numbers of the steps in the following embodiments do not mean the order of execution, and the execution order of the processes should be determined by the functions and the internal logic, and should not be construed as limiting the implementation process of the embodiments of the present invention.
In order to illustrate the technical scheme of the invention, the following description is made by specific examples.
The auxiliary positioning method for a vehicle provided by the embodiment of the invention can be applied to the application environment shown in fig. 1, in which the image acquisition device and the vehicle management terminal are in communication. The image acquisition device includes, but is not limited to, a camera, a video camera and other devices with a photographing function (mobile phone, tablet computer, etc.), and the device end corresponding to the vehicle management terminal includes, but is not limited to, a palm computer, a desktop computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cloud terminal device, a personal digital assistant (PDA) and other computer devices. The image acquisition device can be deployed at the top of the platform in a fixed pose to acquire images of vehicles in the platform area.
Referring to fig. 2, a flowchart of an auxiliary positioning method for a vehicle according to the first embodiment of the present invention is shown. The auxiliary positioning method may be applied to the vehicle management terminal in fig. 1, whose device end is connected to the image acquisition device to obtain the image to be processed. The vehicle management terminal has a storage function that provides the actual size of the target vehicle, which can be compared with the calculated reference size to determine the accuracy of the auxiliary positioning point. As shown in fig. 2, the auxiliary positioning method may include the following steps:
step S201, determining pixel point coordinates representing the target vehicle from the acquired image to be processed, transforming the pixel point coordinates by using a homography matrix, and determining a reference coordinate set of the change result representing the target vehicle.
The image to be processed may be an image acquired by an image acquisition device corresponding to the device end, the target vehicle may be a vehicle needing auxiliary positioning, the image acquisition device may be a camera, etc., and the camera may be a binocular camera, a depth camera, etc.
The pixel point coordinates may refer to coordinates of pixel points belonging to the target vehicle in the image to be processed, the homography matrix is a transformation relationship between a preset map coordinate system and an image coordinate system of the image to be processed, and the reference coordinates may refer to coordinates of the target vehicle in the map coordinate system.
Specifically, in this embodiment the camera is fixed at the top of the platform to capture images of the internal area of the platform from a top-down or oblique top-down view. Because the captured area is fixed, the pose of the camera is fixed, and the internal and external parameters of the camera can therefore be treated as known by default. Through camera calibration, a first transformation matrix between the image coordinate system and the camera coordinate system is determined from the internal parameters, and a second transformation matrix between the camera coordinate system and the preset map coordinate system is determined from the external parameters; the homography matrix is obtained by multiplying the first transformation matrix by the second transformation matrix.
In the camera calibration process, coordinate pairs can also be constructed from the correspondence between pixel points in the image coordinate system and points in the map coordinate system, and the homography matrix can be solved directly from these coordinate pairs. The solving may be implemented with a neural network, so that the homography matrix can be obtained even when the camera internal and external parameters are unknown.
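For illustration, the following is a minimal Python sketch of solving the homography from such coordinate pairs with OpenCV; the coordinate values are hypothetical calibration correspondences, and cv2.findHomography stands in for whichever solver (direct or neural-network-based) the implementer chooses.

    import cv2
    import numpy as np

    # Hypothetical coordinate pairs: pixel points in the image coordinate
    # system and the corresponding points in the map coordinate system.
    pixel_pts = np.array([[100, 200], [500, 210], [520, 400], [90, 390]],
                         dtype=np.float32)
    map_pts = np.array([[0.0, 0.0], [6.0, 0.0], [6.0, 3.0], [0.0, 3.0]],
                       dtype=np.float32)

    # Solve the homography mapping image coordinates to map coordinates;
    # RANSAC keeps the estimate robust to mismatched pairs.
    H, _ = cv2.findHomography(pixel_pts, map_pts, cv2.RANSAC)

    # Transform an arbitrary pixel coordinate into the map coordinate system.
    px = np.array([[[300.0, 300.0]]], dtype=np.float32)
    ref = cv2.perspectiveTransform(px, H)  # reference coordinate in the map frame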
In this embodiment, after the image to be processed is obtained, it may be identified to determine whether it contains a vehicle. Template matching is adopted for this: a plurality of platform internal area images are pre-stored as template images, with several templates set to adapt to different external environments (weather, illumination, and the like). Since the platform internal area image should be unchanged when no vehicle passes through, whether the image to be processed contains a vehicle can be rapidly detected by template matching, that is, by computing the similarity between the image to be processed and each template image. The similarity may be computed with cosine similarity, Euclidean distance, and the like; in this embodiment the calculation result lies in the range [0, 1], and a corresponding similarity threshold is set, for example 0.8. When the calculation result is greater than the similarity threshold, the image to be processed may be considered to contain no vehicle, differing from the template only by noise introduced during image acquisition.
It should be noted that, because the template images differ only in external environment, the similarity between them is also high. To improve calculation efficiency, a single template image may therefore be selected for similarity calculation with the image to be processed. When the calculation result approaches the similarity threshold, the image to be processed is considered to contain no vehicle, where "approaches" allows a floating value, for example 0.1, i.e. the similarity threshold floats down by 0.1 to 0.7. To ensure the accuracy of the result, the implementer may, once the calculation result approaches the threshold, compute the similarity between the image to be processed and every template image and compare the results against the original similarity threshold of 0.8. This reduces invalid calculation as much as possible, reduces the calculation load, and improves processing efficiency.
When the calculation result is smaller than or equal to the similarity threshold, the image to be processed differs significantly from the template image, which indicates that it contains a vehicle; at this moment the detection result is determined to be that the image contains the target vehicle.
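The template-matching decision described above can be sketched as follows; this is a simplified illustration with cosine similarity and the example threshold of 0.8, not the exact implementation.

    import numpy as np

    def cosine_similarity(img_a: np.ndarray, img_b: np.ndarray) -> float:
        """Cosine similarity of two equally sized grayscale images; for
        non-negative pixel intensities the result lies in [0, 1]."""
        a = img_a.astype(np.float64).ravel()
        b = img_b.astype(np.float64).ravel()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def contains_vehicle(image: np.ndarray, templates: list, threshold: float = 0.8) -> bool:
        """A vehicle is assumed present only when the image differs from every
        stored empty-platform template, i.e. no similarity exceeds the threshold."""
        return all(cosine_similarity(image, t) <= threshold for t in templates)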
Optionally, acquiring an image to be processed by adopting a binocular camera, wherein the image to be processed comprises a left-eye image to be processed and a right-eye image to be processed;
Before determining the coordinates of the pixel points representing the target vehicle from the acquired image to be processed, the method further comprises:
respectively inputting the left-eye to-be-processed image and the right-eye to-be-processed image into a pre-trained example segmentation model to obtain a left-eye segmentation image and a right-eye segmentation image;
multiplying the left eye segmentation image with the left eye to-be-processed image point by point to obtain a left eye vehicle contour image;
and multiplying the right-eye segmentation image with the right-eye image to be processed point by point to obtain a right-eye vehicle contour image.
Here the binocular camera comprises a left-eye camera and a right-eye camera; the left-eye image to be processed is the image acquired by the left-eye camera, and the right-eye image to be processed is the image acquired by the right-eye camera. The instance segmentation model may be any target detection model capable of distinguishing instances, for example Mask R-CNN (Mask Region-based Convolutional Neural Network). The left-eye segmentation image is the target segmentation result of the left-eye image to be processed, the right-eye segmentation image is the target segmentation result of the right-eye image to be processed, the left-eye vehicle contour image is a left-eye image containing only target vehicle pixels, and the right-eye vehicle contour image is a right-eye image containing only target vehicle pixels.
Specifically, the left-eye and right-eye images to be processed correspond one to one: they are two images acquired by the left-eye camera and the right-eye camera at the same moment. The output of the instance segmentation model is a segmentation image of M channels, where M is the number of vehicles contained in the image to be processed and each vehicle corresponds to the segmentation image of one channel. In each segmentation image, the pixels belonging to the vehicle take a first preset pixel value and all other pixels take a second preset pixel value; for example, vehicle pixels have value 1 and all other pixels have value 0.
If M is greater than 1, the implementer may treat the vehicle corresponding to the segmentation image of one of the channels as the target vehicle in turn. Since the pixels belonging to the vehicle have value 1 in the segmentation image and all other pixels have value 0, multiplying the segmentation image point by point with the image to be processed preserves the pixel values of the vehicle pixels and sets all other pixels to 0, thereby obtaining the vehicle contour image.
In this embodiment, the pixels of the target vehicle in the image to be processed are determined by instance segmentation, which isolates irrelevant image content, improves the accuracy of subsequent recognition, and further improves the accuracy of auxiliary positioning.
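A minimal sketch of the point-by-point multiplication, assuming a binary single-channel mask per vehicle as described above:

    import numpy as np

    def vehicle_contour_image(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
        """Keep only the pixels that the instance segmentation marked as
        belonging to the vehicle (mask value 1); all other pixels become 0."""
        if image.ndim == 3:               # H x W x C colour image
            mask = mask[..., np.newaxis]  # broadcast the mask over the channels
        return image * mask

    # Hypothetical usage with the k-th of the M channels output by the model:
    # left_contour = vehicle_contour_image(left_image, left_masks[..., k])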
Optionally, after obtaining the right-eye vehicle contour image, the method further includes:
respectively inputting the left-eye vehicle contour image and the right-eye vehicle contour image into a pre-trained character recognition model to obtain a left-eye recognition result and a right-eye recognition result;
and when the left-eye recognition result is consistent with the right-eye recognition result, determining the left-eye recognition result or the right-eye recognition result as the image recognition result; after the image recognition result is obtained, the step of determining the pixel point coordinates representing the target vehicle from the acquired image to be processed is executed.
The character recognition model may be an optical character recognition (OCR) model; the left-eye recognition result is the character information in the left-eye vehicle contour image, and the right-eye recognition result is the character information in the right-eye vehicle contour image.
Specifically, the character recognition model may be used to extract a vehicle identifier from a vehicle contour image, where the vehicle identifier is information that can identify the vehicle, such as a vehicle serial number or a license plate. To guard against recognition errors of the character recognition model, the left-eye recognition result is compared with the right-eye recognition result; when they are consistent, either one is determined to be the image recognition result. If they are inconsistent, the instance segmentation and character recognition are performed again after waiting a preset number of frames (for example 15), since an image acquired while the vehicle is entering the station may suffer motion blur and cause a recognition error. If after Q waits (Q may be set to 10) the left-eye and right-eye recognition results are still inconsistent, abnormal information is generated and sent to the operation and maintenance personnel.
In this embodiment, character recognition is performed separately on the left-eye and right-eye vehicle contour images and the two recognition results verify each other, which ensures the accuracy of the recognition result, avoids misjudgment in the subsequent comparison with the vehicle information, and improves the accuracy of auxiliary positioning.
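The cross-check can be sketched as below, where recognize_characters is a hypothetical stand-in for the pre-trained character recognition model; on disagreement the caller would wait the preset number of frames and retry as described above.

    def cross_checked_identifier(left_contour, right_contour, recognize_characters):
        """Run character recognition on both contour images and accept the
        result only when the two views agree; None signals a mismatch so the
        caller can wait the preset number of frames and retry."""
        left_result = recognize_characters(left_contour)
        right_result = recognize_characters(right_contour)
        if left_result == right_result:
            return left_result  # either result serves as the image recognition result
        return None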
Optionally, after obtaining the image recognition result, the step of determining the coordinates of the pixel points characterizing the target vehicle from the acquired image to be processed includes:
obtaining N preset vehicle identifications, and carrying out similarity calculation on each vehicle identification and an image recognition result to obtain a similarity calculation result;
if the maximum similarity in the similarity calculation result is greater than a preset similarity threshold, determining that the image recognition result identifies the target vehicle, and executing the step of determining the pixel point coordinates representing the target vehicle from the acquired image to be processed.
Wherein N is an integer greater than zero, the vehicle identifier may refer to pre-stored vehicle identifier information, for example, information that may be used to identify a vehicle, such as a vehicle serial number, a vehicle license plate, etc., the similarity calculation may use a euclidean distance, and the similarity calculation result may be used to determine a vehicle identifier corresponding to the target vehicle.
Specifically, corresponding vehicle identifiers are stored by default for all vehicles passing through the platform; after the target vehicle is determined, the image recognition result of the target vehicle is compared with each stored vehicle identifier to obtain the actual vehicle identifier of the target vehicle.
Since there is a vehicle identification consistent with the image recognition result, the similarity threshold may be set to 0.95 to ensure that the vehicle identification corresponding to the target vehicle is obtained.
If the maximum similarity in the similarity calculation result is smaller than or equal to the similarity threshold, the image recognition is abnormal at this moment; recognition can be performed again after waiting a preset number of frames (for example 15). If after Q waits (Q may be set to 10) the maximum similarity still does not satisfy the threshold, abnormal information is generated and sent to the operation and maintenance personnel.
In this embodiment, the image recognition result is compared with the stored vehicle identifiers to obtain the actual vehicle identifier of the target vehicle, which supports the accuracy of subsequently obtaining the actual size of the target vehicle from that identifier and thus improves the accuracy of auxiliary positioning.
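For illustration, a sketch of matching the recognition result against the N stored identifiers with the 0.95 threshold; SequenceMatcher is used here as a simple string-similarity stand-in for the Euclidean-distance calculation mentioned above.

    from difflib import SequenceMatcher

    def match_vehicle_id(recognized: str, stored_ids, threshold: float = 0.95):
        """Compare the image recognition result with each stored vehicle
        identifier; return the best match if its similarity exceeds the
        threshold, otherwise None (recognition is treated as abnormal)."""
        best_id, best_sim = None, 0.0
        for vid in stored_ids:
            sim = SequenceMatcher(None, recognized, vid).ratio()
            if sim > best_sim:
                best_id, best_sim = vid, sim
        return best_id if best_sim > threshold else None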
Optionally, transforming the pixel point coordinates using the homography matrix, and determining, from the transformation result, the reference coordinate set characterizing the target vehicle includes:
extracting characteristic points of the left-eye vehicle contour image and the right-eye vehicle contour image respectively to obtain left-eye characteristic points and right-eye characteristic points, and determining the left-eye characteristic points and the right-eye characteristic points as characteristic point pairs when the left-eye characteristic points and the right-eye characteristic points meet preset conditions;
for the pixel point coordinates corresponding to either feature point of a feature point pair, calculating the depth of the pixel point coordinates from the feature point pair on the basis of the triangulation principle;
on the basis of the pinhole imaging model, calculating the camera coordinates of the pixel point coordinates in the camera coordinate system from the feature point pair and the depth;
and transforming the camera coordinates into reference coordinates in a map coordinate system through a homography matrix to obtain a reference coordinate set.
Here, feature point extraction extracts pixels with preset features from an image and may adopt Harris corner detection, SIFT feature point extraction and the like. The preset condition is a feature point matching condition; the matching may be performed through a normalized cross-correlation function and a random sample consensus algorithm. The depth is the distance from the actual position corresponding to the feature point to the camera imaging plane, and the pinhole imaging model is the perspective projection model of physics.
Specifically, the focal length of the camera can be obtained by calculation based on the depth, and the pixel point coordinates can then be converted into camera coordinates in the camera coordinate system from the focal length and the coordinate information of the feature point pair. In this embodiment the homography matrix may refer to a pose transformation matrix containing rotation and translation parameters, i.e. the external parameter data of the camera; the pose transformation matrix converts the camera coordinates into reference coordinates in the map coordinate system. Performing the above processing on each feature point yields the reference coordinate set.
In this embodiment, the pose transformation matrix of the camera serves as the homography matrix, so the homography matrix can be calculated from the camera pose alone. This reduces the difficulty of camera calibration, improves calibration efficiency, reduces the complexity of the preparatory work before auxiliary positioning, and allows the method to be applied to vehicle auxiliary positioning more quickly.
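Under the usual rectified-stereo assumptions, the triangulation and pinhole back-projection of the steps above can be sketched as follows; f, (cx, cy), the baseline, and the pose parameters R and t are assumed known from calibration, and the disparity is assumed nonzero.

    import numpy as np

    def camera_coords_from_stereo(u_left, v_left, u_right, f, cx, cy, baseline):
        """Triangulate one matched feature pair from a rectified stereo rig:
        depth Z = f * B / disparity, then the pinhole model back-projects the
        left-eye pixel into the camera coordinate system."""
        disparity = u_left - u_right
        Z = f * baseline / disparity      # depth of the feature point
        X = (u_left - cx) * Z / f         # pinhole back-projection
        Y = (v_left - cy) * Z / f
        return np.array([X, Y, Z])

    def to_map_frame(p_cam, R, t):
        """Apply the pose transformation (rotation R, translation t) that this
        embodiment treats as the homography, giving the reference coordinate."""
        return R @ p_cam + t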
In this embodiment, when the image recognition result is detected to contain the target vehicle, the pixel point coordinates representing the target vehicle in the image to be processed are obtained and transformed with the homography matrix, and the reference coordinate set of the target vehicle is determined from the transformation result. Converting the pixel point coordinates of the target vehicle in the image coordinate system into reference coordinates in the map coordinate system yields the actual position of the target vehicle, which facilitates subsequent auxiliary positioning and improves the accuracy of the auxiliary positioning of the vehicle.
Step S202, determining the reference coordinates meeting the preset conditions in the reference coordinate set as corner coordinates of the target vehicle, calculating the reference size of the target vehicle according to the corner coordinates, and obtaining the actual size of the target vehicle.
The corner coordinates may refer to reference coordinates of four corners at the top of the target vehicle, the reference dimensions may refer to a length and a width of the target vehicle calculated from the reference coordinates, and the actual dimensions may refer to a real length and a real width of the stored target vehicle.
Specifically, the corner points may be obtained with a neural network model, a corner detection method and the like. For example, in this embodiment Harris corner detection is adopted, and the extracted points are taken as initial corner points; an initial corner point represents a point with gradient change in its neighborhood, where the neighborhood may be a four-neighborhood, an eight-neighborhood and the like.
The reference coordinates corresponding to the initial corner points are then screened: according to the z coordinates of the reference coordinates, those with the largest z value may be retained, and a further screening is performed according to the corner features of the vehicle, namely that the four corner points should form a rectangle.
And determining the reference coordinates meeting the preset conditions in the reference coordinate set as the angular point coordinates of the target vehicle, calculating the reference size of the target vehicle according to the angular point coordinates, and acquiring the actual size of the target vehicle.
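Given the four roof corner coordinates, the reference size can be read off the rectangle they form; a minimal sketch, assuming the corners are ordered around the rectangle:

    import numpy as np

    def reference_size(corners: np.ndarray):
        """corners: 4 x 2 (or 4 x 3) array of roof corner coordinates in the
        map frame, ordered around the rectangle. The longer sides give the
        reference length and the shorter sides the reference width."""
        sides = [np.linalg.norm(corners[i] - corners[(i + 1) % 4]) for i in range(4)]
        return max(sides), min(sides)  # reference length, reference width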
In step S203, when the reference size is detected to be consistent with the actual size, a reference center point of the target vehicle is calculated according to the reference coordinate set, and the reference coordinate corresponding to the reference center point is determined as the auxiliary positioning coordinate.
The reference center point may refer to a centroid point of all reference coordinates, the centroid point may be used to represent a positioning point obtained by the target vehicle under the visual processing, and the auxiliary positioning coordinate may refer to an actual coordinate corresponding to the positioning point obtained by the visual processing.
Specifically, when the reference size is detected to be consistent with the actual size, the parameters of the camera are not abnormal at this moment and the result is not affected by parallax.
Optionally, after determining the reference coordinate corresponding to the reference center point as the auxiliary positioning coordinate, the method further includes:
acquiring the vehicle positioning coordinates sent by a sensor in the target vehicle;
and when the vehicle positioning coordinates are consistent with the auxiliary positioning coordinates, determining the position of the target vehicle in the preset topological electronic map according to the vehicle positioning coordinates.
The vehicle positioning coordinates may refer to coordinates under a map coordinate system obtained by positioning the target vehicle by the vehicle-mounted sensor, and the topological electronic map may refer to a preset vehicle position representation map for visualizing the real-time position of the target vehicle.
Specifically, when the vehicle positioning coordinates are consistent with the auxiliary positioning coordinates, there is no deviation between the visual positioning and the sensor positioning, and the vehicle positioning coordinates are used directly as the real positioning coordinates.
When the vehicle positioning coordinates are inconsistent with the auxiliary positioning coordinates, there is a deviation between the visual positioning and the sensor positioning. The system may then wait for the next target vehicle and position it. If for that vehicle the vehicle positioning coordinates are again inconsistent with the auxiliary positioning coordinates, the camera is abnormal at this moment, possibly because of lens offset, shake and the like. If they are consistent, the sensor in the original target vehicle is abnormal; the auxiliary positioning coordinates are then taken as the real positioning coordinates, and abnormal information is sent to prompt the operation and maintenance personnel to inspect and maintain the target vehicle.
In this embodiment, the vehicle positioning coordinates collected by the sensor are checked against the auxiliary positioning coordinates, so that abnormal vehicle positioning caused by abnormal conditions can be avoided and the accuracy of vehicle positioning is improved.
Optionally, the acquiring process of the homography matrix includes:
selecting calibration feature points from preset high-precision map data;
according to the calibration feature points, obtaining calibration pixel points from the acquired image to be calibrated in a matching way;
obtaining a camera calibration point corresponding to the calibration pixel point through an imaging model according to the calibration pixel point;
and calculating to obtain a homography matrix according to the calibration feature points and the camera calibration points.
Here the high-precision map data refers to coordinate information in the map coordinate system. The calibration feature points are calibration points with distinct features; the implementer may set them manually, for example by placing a calibration board in the station area. The calibration pixel points are the pixels corresponding to the calibration feature points in the image to be calibrated. The imaging model refers to the pinhole imaging model: with the camera internal parameters known, the image coordinates of the calibration pixel points can be converted into camera calibration point coordinates in the camera coordinate system.
In this embodiment, the homography matrix is solved by pre-calibration, which reduces as far as possible the parallax in the acquired image to be processed and improves the accuracy of auxiliary positioning.
When the reference size is detected to be consistent with the actual size, the reference center point of the target vehicle is calculated from the reference coordinate set and the reference coordinate corresponding to the reference center point is determined as the auxiliary positioning coordinate. Verifying the accuracy of the auxiliary positioning through the reference size and the actual size avoids positioning errors caused by camera offset, thereby improving the accuracy of vehicle positioning.
In this embodiment, vehicle positioning is performed through the image to be processed collected by the camera, which acquires image data at a higher frequency than the vehicle-mounted sensors. Vehicle positioning can therefore be assisted when an abnormality of a vehicle-mounted sensor causes positioning errors, and positioning verification through the vehicle size avoids positioning errors caused by camera offset, thereby improving the accuracy of vehicle positioning.
Referring to fig. 3, a flowchart of an auxiliary positioning method for a vehicle according to the second embodiment of the present invention is shown. The reference size may be determined directly from the rectangle formed by the corner points, or from line segments obtained by fitting the corner points.
When the reference size is determined from the rectangle formed by the corner points, the determining process is as described in the first embodiment and is not repeated here.
When the reference size is determined from line segments obtained by corner fitting, the determining process includes the following steps:
step S301, counting neighborhood coordinates of each reference coordinate by adopting a preset template, and determining the reference coordinate with the statistical result equal to a preset statistical threshold value as a corner coordinate;
step S302, obtaining a set of fitted line segments through straight-line fitting according to the corner coordinates, and determining the fitted line segments that have a parallel counterpart in the set as boundary line segments;
step S303, calculating the distance between the boundary line segment and the parallel fitting line segment corresponding to the boundary line segment, and determining the calculation result as the reference size of the target vehicle.
Here the preset template is a corner extraction template, and the statistical threshold is used to judge whether a reference coordinate is a corner coordinate. The fitted line segments are determined from straight lines obtained by line fitting, where the fitting may use a linear equation in two variables as the expression of the line. At this point the z values of the reference coordinates are assumed consistent by default, i.e. the distances from each corner coordinate of the vehicle to the plane formed by the horizontal and vertical axes of the map coordinate system are the same. A boundary line segment is a line segment belonging to the boundary of the target vehicle.
In particular, since the reference coordinates are three-dimensional, the template may be of size 3 x 3 x 3; that is, a corner is determined from the 26 neighborhood coordinates of a reference coordinate. For example, all elements in the template may be set to 1; after convolving a reference coordinate and its neighborhood coordinates with the template, the convolution result is compared with the statistical threshold, which may be 4, or 3 in the case where only the neighborhood coordinates are counted without counting the reference coordinate itself. Four corner coordinates with the same z value are thereby obtained.
Any two of the four corner points are connected and the corresponding straight lines fitted, giving six fitted line segments. Among them there are two groups in which the two fitted segments of each group are parallel to each other; the fitted segments contained in these two groups are determined as boundary line segments.
The distances between the parallel boundary line segments are calculated to obtain a first distance and a second distance; the larger of the first distance and the second distance is determined as the reference length and the smaller as the reference width.
In this embodiment, line segments are fitted from the corner points and the boundary line segments are determined from the parallel relations between them, which avoids the difficulty of obtaining the rectangle formed by the corner points when a small parallax exists and improves the robustness of auxiliary positioning.
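A sketch of this fitting-based variant: every disjoint pair of the six segments is tested for parallelism, and the distance between the two parallel lines of each group gives the reference length and width. The angle tolerance is an assumption for the small-parallax case.

    import numpy as np
    from itertools import combinations

    def reference_size_from_fit(corners: np.ndarray, angle_tol: float = 1e-2):
        """corners: 4 x 2 array of corner coordinates (equal z values assumed,
        as in the description). Fit a segment through every pair of corners,
        keep the two groups of mutually parallel segments (the boundary
        segments), and measure the distance between the parallel lines."""
        distances = []
        segments = list(combinations(range(4), 2))       # the six fitted segments
        for (a, b), (c, d) in combinations(segments, 2):
            if {a, b} & {c, d}:
                continue                                 # segments share a corner
            dir1, dir2 = corners[b] - corners[a], corners[d] - corners[c]
            cross = dir1[0] * dir2[1] - dir1[1] * dir2[0]
            if abs(cross) / (np.linalg.norm(dir1) * np.linalg.norm(dir2)) < angle_tol:
                # perpendicular distance from corner c to the line through a, b
                n = np.array([-dir1[1], dir1[0]]) / np.linalg.norm(dir1)
                distances.append(abs(np.dot(corners[c] - corners[a], n)))
        return max(distances), min(distances)            # reference length, width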
Fig. 4 shows a system architecture diagram of an auxiliary positioning system for a vehicle according to a third embodiment of the present invention, where the auxiliary positioning system includes:
the system comprises an image collector, a memory, a controller and a positioning terminal;
the image collector is connected with the controller, is deployed at the top of the station platform in a fixed pose, continuously collects continuous images from a top-down viewing angle, and sends the continuous images to the controller;
the controller is connected with the memory, and when the controller receives the continuous images, the controller identifies whether the continuous images contain a target vehicle or not;
when the continuous images are identified as containing the target vehicle, the controller acquires pixel point coordinates representing the target vehicle in the continuous images, and transforms the pixel point coordinates by using a preset homography matrix to obtain a reference coordinate set of the target vehicle;
the controller calculates the reference size of the target vehicle according to the reference coordinate set, and acquires the stored actual size corresponding to the target vehicle from the memory;
when the reference size is detected to be consistent with the actual size, the controller calculates a reference center point of the target vehicle according to the reference coordinate set, and determines the reference coordinate corresponding to the reference center point as the auxiliary positioning coordinate;
The controller is connected with the positioning terminal and sends the auxiliary positioning coordinates to the positioning terminal for auxiliary positioning.
The image collector may be a camera. Because the pose of the camera is fixed, the internal and external parameters of the camera are easily obtained through calibration and do not change, so they are treated as known by default; and since the variable parameters in the homography matrix are exactly the camera internal and external parameters, the homography matrix is known by default as well.
Specifically, when identifying whether the continuous images contain the target vehicle, the images may be processed with the instance segmentation model to obtain the image position of the target vehicle, and thereby a pixel point coordinate set of the target vehicle in the image, which contains a plurality of pixel point coordinates of the target vehicle.
For any pixel point coordinate, the coordinate is expressed as a homogeneous column vector, i.e. a vector of three rows and one column whose elements are the abscissa of the pixel point coordinate, the ordinate of the pixel point coordinate, and 1. The homography matrix is multiplied on the right by this column vector, and the result obtained is the reference coordinate corresponding to the pixel point coordinate.
The reference size may be determined by extracting corner points from the reference coordinate set and then computing the distances between the corner points; in this embodiment the corner extraction may adopt Harris corner detection, SIFT feature point extraction and the like.
In an embodiment, given the bounding-box corner points of the target vehicle obtained from the instance segmentation, the reference coordinate closest to each bounding-box corner point in the reference coordinate set may be determined as a corner point.
The reference center point can be obtained by adopting a centroid calculation mode, namely, the abscissa average value of all reference coordinates in the reference coordinate set is taken as the abscissa of the reference center point, and the ordinate average value of all reference coordinates is taken as the ordinate of the reference center point.
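The controller-side computation just described amounts to the following sketch: the pixel coordinate is written as a homogeneous column vector, right-multiplied by the homography matrix, and the centroid of the resulting reference set serves as the reference center point.

    import numpy as np

    def pixel_to_reference(H: np.ndarray, u: float, v: float) -> np.ndarray:
        """Express the pixel coordinate as the homogeneous column vector
        [u, v, 1]^T, multiply the homography matrix by it on the right, and
        dehomogenise the result."""
        p = H @ np.array([u, v, 1.0])
        return p[:2] / p[2]

    def reference_center(ref_coords: np.ndarray) -> np.ndarray:
        """Centroid of the reference coordinate set: the mean of the abscissae
        and of the ordinates of all reference coordinates."""
        return ref_coords.mean(axis=0)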
It should be noted that, because the content of the information interaction and the execution process of the system component is based on the same concept as the method embodiment of the present invention, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the division of the system components described above is illustrated. In practical applications, the functions corresponding to the above system components may be allocated to different computing modules or units; that is, the internal structure of the system may be divided into different computing units or modules to perform all or part of the functions described above. The computing units and modules corresponding to the system components in this embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the system components are only for distinguishing them from each other and are not used to limit the protection scope of the present invention. For the specific working process of each system component, reference may be made to the corresponding process in the foregoing method embodiment, which is not repeated here.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the methods of the above embodiments through a computer program instructing related hardware. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, and so on. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals and telecommunications signals.
The present invention may also be implemented as a computer program product for implementing all or part of the steps of the method embodiments described above, when the computer program product is run on a computer device, causing the computer device to execute the steps of the method embodiments described above.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts that are not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed system and method may be implemented in other manners. For example, the system embodiments described above are merely illustrative: the division of system components described above is merely a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the coupling, direct coupling, or communication connection shown or discussed may be realized through some interfaces, and the indirect coupling or communication connection between system components may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and are intended to be included within the scope of the present invention.

Claims (10)

1. A method of assisting in locating a vehicle, the method comprising:
determining pixel point coordinates representing a target vehicle from an acquired image to be processed, transforming the pixel point coordinates by using a homography matrix, and determining, from the transformation result, a reference coordinate set representing the target vehicle, wherein the homography matrix is a transformation relation between a preset map coordinate system and an image coordinate system of the image to be processed;
determining reference coordinates meeting a preset condition in the reference coordinate set as corner point coordinates of the target vehicle, calculating a reference size of the target vehicle according to the corner point coordinates, and acquiring an actual size of the target vehicle;
and when the reference size is detected to be consistent with the actual size, calculating a reference center point of the target vehicle according to the reference coordinate set, and determining a reference coordinate corresponding to the reference center point as an auxiliary positioning coordinate.
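By way of a non-limiting illustration of the transformation step in claim 1, the following Python sketch applies a precomputed 3×3 homography H to a set of pixel point coordinates; the function name is the editor's, and H is assumed to have been calibrated already (see claim 8):

```python
import numpy as np

def pixels_to_map(pixel_coords, H):
    """Map pixel point coordinates into the preset map coordinate
    system via a 3x3 homography H (illustrative helper)."""
    pts = np.asarray(pixel_coords, dtype=float)      # shape (N, 2)
    ones = np.ones((pts.shape[0], 1))
    homogeneous = np.hstack([pts, ones])             # (N, 3) homogeneous form
    mapped = (H @ homogeneous.T).T                   # apply the homography
    return mapped[:, :2] / mapped[:, 2:3]            # divide out the w term
```

Under this reading, the rows of the returned array are the reference coordinate set of claim 1.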
2. The auxiliary positioning method of claim 1, wherein the image to be processed is acquired by a binocular camera, the image to be processed comprising a left-eye to-be-processed image and a right-eye to-be-processed image;
before determining the pixel point coordinates representing the target vehicle from the acquired image to be processed, the method further comprises:
respectively inputting the left-eye to-be-processed image and the right-eye to-be-processed image into a pre-trained instance segmentation model to obtain a left-eye segmentation image and a right-eye segmentation image;
multiplying the left-eye segmentation image with the left-eye to-be-processed image point by point to obtain a left-eye vehicle contour image;
and multiplying the right-eye segmentation image with the right-eye to-be-processed image point by point to obtain a right-eye vehicle contour image, wherein the left-eye vehicle contour image and the right-eye vehicle contour image are used for screening recognition objects from the image to be processed.
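A minimal sketch of the point-by-point multiplication in claim 2, assuming the instance segmentation model outputs a binary mask with the same height and width as the input image; the helper name and the binarization rule are illustrative:

```python
import numpy as np

def vehicle_contour_image(segmentation_image, image_to_process):
    """Keep only vehicle pixels by multiplying the (binary)
    segmentation image with the source image point by point."""
    mask = (np.asarray(segmentation_image) > 0).astype(image_to_process.dtype)
    if image_to_process.ndim == 3:        # broadcast the mask over channels
        mask = mask[..., np.newaxis]
    return image_to_process * mask

# Applied once per eye of the binocular pair:
# left_contour  = vehicle_contour_image(left_seg,  left_image)
# right_contour = vehicle_contour_image(right_seg, right_image)
```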
3. The auxiliary positioning method of claim 2, further comprising, after the obtaining of the right-eye vehicle contour image:
respectively inputting the left-eye vehicle contour image and the right-eye vehicle contour image into a pre-trained character recognition model to obtain a left-eye recognition result and a right-eye recognition result;
and when the left-eye recognition result is consistent with the right-eye recognition result, determining the left-eye recognition result or the right-eye recognition result as the image recognition result, and executing the step of determining the pixel point coordinates representing the target vehicle from the acquired image to be processed after the image recognition result is obtained.
4. The auxiliary positioning method of claim 3, wherein the step of determining the pixel point coordinates representing the target vehicle from the acquired image to be processed, executed after the image recognition result is obtained, comprises:
obtaining N preset vehicle identifications, and carrying out a similarity calculation between each vehicle identification and the image recognition result to obtain a similarity calculation result, wherein N is an integer greater than zero;
and if the maximum similarity in the similarity calculation result is greater than a preset similarity threshold value, determining that the image recognition result contains the target vehicle, and executing the step of determining the pixel point coordinates representing the target vehicle from the acquired image to be processed.
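Claims 3 and 4 can be read together as a recognition gate followed by an identifier match. The sketch below is one plausible reading: the patent specifies neither the similarity metric nor the threshold value, so the string-ratio measure and the 0.8 figure are the editor's assumptions.

```python
from difflib import SequenceMatcher

def fused_recognition(left_result: str, right_result: str):
    """Claim 3: accept the OCR output only when both eyes agree."""
    return left_result if left_result == right_result else None

def match_vehicle_id(image_result: str, preset_ids, threshold: float = 0.8):
    """Claim 4: compare the image recognition result against N preset
    vehicle identifications; keep the best match above the threshold."""
    ratio = lambda vid: SequenceMatcher(None, image_result, vid).ratio()
    best = max(preset_ids, key=ratio)
    return best if ratio(best) > threshold else None
```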
5. The auxiliary positioning method of claim 2, wherein said transforming the pixel point coordinates using a homography matrix and determining, from the transformation result, a reference coordinate set representing the target vehicle comprises:
extracting characteristic points from the left-eye vehicle contour image and the right-eye vehicle contour image respectively to obtain left-eye characteristic points and right-eye characteristic points, and determining a left-eye characteristic point and a right-eye characteristic point as a characteristic point pair when they meet a preset condition;
for the pixel point coordinates corresponding to either characteristic point of the characteristic point pair, calculating the depth of the pixel point coordinates from the characteristic point pair based on the triangulation principle;
calculating, based on the pinhole imaging model, the camera coordinates of the pixel point coordinates in the camera coordinate system according to the characteristic point pair and the depth;
and transforming the camera coordinates into reference coordinates in the map coordinate system through the homography matrix to obtain the reference coordinate set.
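For the depth and back-projection steps in claim 5, a minimal sketch assuming a rectified binocular pair, so that a matched characteristic point pair differs only in its horizontal coordinate; the intrinsics fx, fy, cx, cy and the baseline are calibration values not given in the patent:

```python
import numpy as np

def depth_from_pair(u_left, u_right, fx, baseline):
    """Triangulation for a rectified stereo rig: Z = fx * B / d,
    where d = u_left - u_right is the disparity of the point pair."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("matched pair must have positive disparity")
    return fx * baseline / disparity

def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole imaging model: recover camera coordinates (X, Y, Z)
    of a pixel (u, v) once its depth Z is known."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])
```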
6. The auxiliary positioning method of claim 1, wherein said determining reference coordinates meeting a preset condition in the reference coordinate set as corner point coordinates of the target vehicle and calculating the reference size of the target vehicle according to the corner point coordinates comprises:
counting the neighborhood coordinates of each reference coordinate by using a preset template, and determining the reference coordinates whose statistical result equals a preset statistical threshold as the corner point coordinates;
performing straight-line fitting on the corner point coordinates to obtain a set of fitted line segments, and determining a fitted line segment that has a parallel fitted line segment within the set as a boundary line segment;
and calculating the distance between the boundary line segment and its corresponding parallel fitted line segment, and determining the calculation result as the reference size of the target vehicle.
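The corner-point selection in claim 6 is described only as a neighborhood statistic compared against a preset threshold; in the sketch below, the disc-shaped template, its radius, and the neighbor count are illustrative stand-ins, as is the closed-form distance between parallel fitted boundary lines:

```python
import numpy as np

def find_corner_points(ref_coords, radius=0.3, count_threshold=2):
    """Count, for each reference coordinate, the neighbors falling
    inside a fixed template; keep points whose count equals the
    preset statistical threshold as corner point candidates."""
    pts = np.asarray(ref_coords, dtype=float)
    corners = []
    for p in pts:
        dist = np.linalg.norm(pts - p, axis=1)
        neighbors = int(np.count_nonzero((dist > 0) & (dist < radius)))
        if neighbors == count_threshold:
            corners.append(p)
    return np.array(corners)

def parallel_line_distance(a, b, c1, c2):
    """Distance between parallel lines a*x + b*y + c1 = 0 and
    a*x + b*y + c2 = 0 -- the reference size once the vehicle's
    boundary segments have been fitted."""
    return abs(c1 - c2) / np.hypot(a, b)
```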
7. The auxiliary positioning method of claim 1, further comprising, after determining the reference coordinate corresponding to the reference center point as an auxiliary positioning coordinate:
acquiring vehicle positioning coordinates sent by a sensor in the target vehicle;
and when the vehicle positioning coordinates are consistent with the auxiliary positioning coordinates, determining the position of the target vehicle in a preset topological electronic map according to the vehicle positioning coordinates.
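Claim 7 does not define when the two fixes are "consistent"; one simple reading is a distance tolerance in map units, sketched here with an assumed 0.5-unit default:

```python
import math

def coords_consistent(vehicle_xy, auxiliary_xy, tolerance=0.5):
    """Treat the sensor fix and the auxiliary fix as consistent when
    they agree within an assumed tolerance (map units)."""
    return math.dist(vehicle_xy, auxiliary_xy) <= tolerance
```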
8. The auxiliary positioning method of any one of claims 1 to 7, wherein the acquisition process of the homography matrix comprises:
selecting calibration feature points from preset high-precision map data;
matching the calibration feature points against the acquired image to be calibrated to obtain calibration pixel points;
obtaining, through an imaging model, camera calibration points corresponding to the calibration pixel points;
and calculating the homography matrix from the calibration feature points and the camera calibration points.
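A sketch of the calibration in claim 8 using OpenCV's standard least-squares homography estimation; the four point correspondences below are fabricated for illustration, and a real calibration would use the map-matched calibration points described in the claim:

```python
import numpy as np
import cv2

# Hypothetical correspondences: calibration feature points in the
# high-precision map frame and the matching camera calibration points.
map_points = np.array([[0.0, 0.0], [5.2, 0.0], [5.2, 3.1], [0.0, 3.1]])
cam_points = np.array([[102.0, 388.0], [517.0, 395.0],
                       [498.0, 121.0], [110.0, 130.0]])

# At least four correspondences determine the 3x3 homography;
# cv2.RANSAC can be passed as a third argument when mismatched
# calibration points are expected.
H, _ = cv2.findHomography(cam_points, map_points)
```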
9. An auxiliary positioning system for a vehicle, the auxiliary positioning system comprising:
an image collector, a memory, a controller, and a positioning terminal;
the image collector is connected with the controller, is deployed at the top of the station platform in a fixed pose, continuously collects continuous images from an overhead viewing angle, and sends the continuous images to the controller;
the controller is connected with the memory, and when the controller receives the continuous images, it identifies whether the continuous images contain a target vehicle;
when the continuous images are identified as containing a target vehicle, the controller acquires pixel point coordinates representing the target vehicle in the continuous images, and transforms the pixel point coordinates by using a preset homography matrix to obtain a reference coordinate set of the target vehicle;
the controller calculates the reference size of the target vehicle according to the reference coordinate set, and acquires the stored actual size corresponding to the target vehicle from the memory;
when the reference size is detected to be consistent with the actual size, the controller calculates a reference center point of the target vehicle according to the reference coordinate set, and determines a reference coordinate corresponding to the reference center point as an auxiliary positioning coordinate;
the controller is connected with the positioning terminal and sends the auxiliary positioning coordinates to the positioning terminal for auxiliary positioning.
10. A computer readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the auxiliary positioning method according to any one of claims 1 to 8.
CN202210909225.3A 2022-07-29 2022-07-29 Auxiliary positioning method, system and computer medium for vehicle Pending CN117522952A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210909225.3A CN117522952A (en) 2022-07-29 2022-07-29 Auxiliary positioning method, system and computer medium for vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210909225.3A CN117522952A (en) 2022-07-29 2022-07-29 Auxiliary positioning method, system and computer medium for vehicle

Publications (1)

Publication Number Publication Date
CN117522952A true CN117522952A (en) 2024-02-06

Family

ID=89761281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210909225.3A Pending CN117522952A (en) 2022-07-29 2022-07-29 Auxiliary positioning method, system and computer medium for vehicle

Country Status (1)

Country Link
CN (1) CN117522952A (en)

Similar Documents

Publication Publication Date Title
CN111145238B (en) Three-dimensional reconstruction method and device for monocular endoscopic image and terminal equipment
CN108229475B (en) Vehicle tracking method, system, computer device and readable storage medium
CN109784250B (en) Positioning method and device of automatic guide trolley
CN111222395A (en) Target detection method and device and electronic equipment
CN110956661B (en) Method for calculating dynamic pose of visible light and infrared camera based on bidirectional homography matrix
CN109447117B (en) Double-layer license plate recognition method and device, computer equipment and storage medium
CN111307039A (en) Object length identification method and device, terminal equipment and storage medium
CN111213153A (en) Target object motion state detection method, device and storage medium
WO2014002692A1 (en) Stereo camera
CN110619660A (en) Object positioning method and device, computer readable storage medium and robot
CN112683228A (en) Monocular camera ranging method and device
CN110926330A (en) Image processing apparatus, image processing method, and program
CN116433737A (en) Method and device for registering laser radar point cloud and image and intelligent terminal
Jung et al. Object detection and tracking-based camera calibration for normalized human height estimation
CN110673607B (en) Feature point extraction method and device under dynamic scene and terminal equipment
CN111383264B (en) Positioning method, positioning device, terminal and computer storage medium
CN114919584A (en) Motor vehicle fixed point target distance measuring method and device and computer readable storage medium
CN111161348A (en) Monocular camera-based object pose estimation method, device and equipment
CN112733678A (en) Ranging method, ranging device, computer equipment and storage medium
CN112802112B (en) Visual positioning method, device, server and storage medium
CN117522952A (en) Auxiliary positioning method, system and computer medium for vehicle
CN113643355B (en) Target vehicle position and orientation detection method, system and storage medium
CN112800806B (en) Object pose detection tracking method and device, electronic equipment and storage medium
JP6492603B2 (en) Image processing apparatus, system, image processing method, and program
CN114762019A (en) Camera system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination