CN109101957B - Binocular stereo data processing method, device, intelligent driving equipment and storage medium - Google Patents

Binocular stereo data processing method, device, intelligent driving equipment and storage medium

Info

Publication number
CN109101957B
CN109101957B (granted publication) · CN109101957A (application publication) · CN201811265850.9A (application number)
Authority
CN
China
Prior art keywords
determining
region
lane
visual image
lane line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811265850.9A
Other languages
Chinese (zh)
Other versions
CN109101957A (en
Inventor
胡荣东
马源
唐铭希
彭美华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidi Intelligent Driving Hunan Co ltd
Original Assignee
Changsha Intelligent Driving Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha Intelligent Driving Research Institute Co Ltd filed Critical Changsha Intelligent Driving Research Institute Co Ltd
Priority to CN201811265850.9A priority Critical patent/CN109101957B/en
Publication of CN109101957A publication Critical patent/CN109101957A/en
Application granted granted Critical
Publication of CN109101957B publication Critical patent/CN109101957B/en
Priority to PCT/CN2019/113102 priority patent/WO2020083349A1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the invention disclose a binocular stereo data processing method, a device, intelligent driving equipment and a computer storage medium. The method comprises: determining whether lane line information in the current field of view meets a set condition, and selecting a matched target data processing strategy according to the determination result; determining, according to the target data processing strategy, a region of interest corresponding to one visual image; and performing stereo matching of that visual image with another visual image according to the region of interest, to obtain filtered binocular stereo data.

Description

Binocular stereo data processing method and device, intelligent driving equipment and storage medium
Technical Field
The invention relates to the technical field of intelligent driving, in particular to a binocular stereo data processing method and device, intelligent driving equipment and a computer storage medium.
Background
In the field of intelligent driving, camera-based front obstacle detection can be divided into monocular and multi-view camera schemes. In a monocular scheme, information is acquired by a single camera; in a multi-view scheme, information is acquired by two or more cameras simultaneously. Compared with a multi-view scheme, an ordinary monocular camera cannot acquire depth information, so it is unsuitable for high-precision obstacle detection.
Binocular stereo vision uses two cameras at different positions, or a single camera that is moved or optically virtualized into two viewpoints, to photograph the same target, simulating the working mode of human eyes and obtaining two corresponding visual images. Because binocular stereo vision imitates human eyes, processing the two visual images makes it possible to recover the position of a target in the world coordinate system from the positional difference of that same target between the two cameras; in theory, high-precision three-dimensional information of an intelligent driving scene can thus be obtained. However, the field of view may contain many objects outside the drivable region, such as trees, sky, buildings and traffic signs; these can degrade the accuracy of the matching algorithm, produce false targets, and interfere with the positioning and measurement of obstacles.
Disclosure of Invention
To solve the existing technical problems, embodiments of the present invention provide a binocular stereo data processing method and apparatus, an intelligent driving device, and a computer storage medium, which can remove false target points and non-obstacle data to the greatest extent and improve obstacle detection precision.
In order to achieve the above purpose, the technical solution of the embodiment of the present invention is realized as follows:
A binocular stereo data processing method includes: determining whether lane line information in the current field of view meets a set condition, and selecting a matched target data processing strategy according to the determination result; determining a region of interest corresponding to one visual image according to the target data processing strategy; and performing stereo matching of that visual image with another visual image based on the region of interest, to obtain filtered binocular stereo data.
Determining whether the lane line information in the current field of view meets the set condition includes: acquiring a visual image corresponding to the current field of view, recognizing the visual image to extract the lane line information in it, and determining whether the lane line information in the current field of view meets the set condition according to the confidence of the lane line information.
Alternatively, determining whether the lane line information in the current field of view meets the set condition includes: detecting lane line information within the current field of view; when lane line information is detected, determining whether the lane line information in the current field of view meets the set condition according to its confidence; and when no lane line information is detected, determining that the lane line information in the current field of view does not meet the set condition.
When the lane line information meets the set condition, determining the region of interest corresponding to a visual image according to the target data processing strategy includes: acquiring the endpoints of the lane lines on the two corresponding sides of the target lane in the visual image, the endpoints including a first endpoint far from the road vanishing point and a second endpoint close to the road vanishing point; and determining the region of interest based on the positions of the first and second endpoints, the slope at the first endpoint and/or the second endpoint, and the lane lines.
Determining the region of interest based on the positions of the first and second endpoints, the slopes at those endpoints and the lane lines includes: determining an initial region of interest from the region bounded by the line connecting the first endpoints, the line connecting the second endpoints and the lane lines; determining a first lane extension line from the position of each first endpoint and the slope at that endpoint, and determining a first adjacent region from the region bounded by the line connecting the first endpoints and the first lane extension lines; determining a second lane extension line from the position of each second endpoint and the slope at that endpoint, and determining a height adjacent region from the region bounded by the line connecting the second endpoints and the second lane extension lines, or determining a height extension line at a set angle relative to the target lane from the position of each second endpoint and determining the height adjacent region from the region bounded by the line connecting the second endpoints and the height extension lines; and merging the first adjacent region, the height adjacent region and the initial region of interest to determine the region of interest.
When the lane line information does not meet the set condition, determining the region of interest corresponding to a visual image according to the target data processing strategy includes: acquiring the position of the target vehicle, and determining a region of interest in the world coordinate system ahead of that position according to a set lane width, vehicle height and effective detection distance; acquiring attitude data of the image acquisition device; and performing a Euclidean transformation on the region of interest in the world coordinate system according to the attitude data, to determine the region of interest corresponding to the transformed visual image.
Acquiring the attitude data of the image acquisition device includes: acquiring attitude data of the image acquisition device relative to the world coordinate system, the attitude data including a pitch angle, a roll angle and a yaw angle. Performing the Euclidean transformation on the region of interest in the world coordinate system according to the attitude data and determining the region of interest corresponding to the transformed visual image includes: determining a rotation matrix from the pitch, roll and yaw angles, and determining the region of interest corresponding to the transformed visual image from the product of the vertex coordinates of the region of interest in the world coordinate system and the rotation matrix.
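The paragraph above can be sketched in code. The snippet below is an illustrative sketch, not the patented implementation: the axis convention (x forward, y lateral, z up), the ZYX rotation order, and the default lane width, vehicle height and detection distance are all assumptions. It builds a box-shaped region of interest in the world frame and rotates its eight vertices by a rotation matrix composed from pitch, roll and yaw.

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    # ZYX convention (an assumption): yaw about z (up), pitch about y (lateral),
    # roll about x (longitudinal); R = Rz(yaw) @ Ry(pitch) @ Rx(roll).
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return rz @ ry @ rx

def world_roi_vertices(lane_width=3.75, vehicle_height=2.5, detect_range=80.0):
    # Eight corners of a box ahead of the ego vehicle; the default dimensions
    # are illustrative assumptions, not values taken from the patent.
    xs = (0.0, detect_range)                       # forward
    ys = (-lane_width / 2, lane_width / 2)          # lateral
    zs = (0.0, vehicle_height)                      # up
    return np.array([[x, y, z] for x in xs for y in ys for z in zs])

def transform_roi(vertices, roll, pitch, yaw):
    # Euclidean (rotation-only) transform of the ROI vertices: the product of
    # each world-frame vertex with the rotation matrix built from the attitude.
    return vertices @ rotation_matrix(roll, pitch, yaw).T
```

With zero attitude angles the rotation matrix is the identity and the box is unchanged; a nonzero camera pitch tilts the box so the projected ROI stays aligned with the road ahead.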
A binocular stereo data processing apparatus includes: a strategy selection module for determining whether the lane line information in the current field of view meets the set condition and selecting a matched target data processing strategy according to the determination result; an ROI determining module for determining a region of interest corresponding to one visual image according to the target data processing strategy; and a stereo matching module for performing stereo matching of that visual image with another visual image based on the region of interest, to obtain filtered binocular stereo data.
An intelligent driving apparatus includes a processor and a memory for storing a computer program capable of running on the processor; when running the computer program, the processor is configured to execute the binocular stereo data processing method of any embodiment of the present application.
A computer storage medium having a computer program stored therein, the computer program, when executed by a processor, implementing a binocular stereo data processing method according to any one of the embodiments of the present application.
In the binocular stereo data processing method provided in the above embodiments, whether the lane line information in the current field of view meets the set condition is determined, a matched target data processing strategy is selected according to the determination result, a region of interest corresponding to one visual image is determined according to that strategy, and stereo matching of that visual image with another visual image is performed based on the region of interest to obtain filtered binocular stereo data. Because the target data processing strategy depends on whether the lane line information in the current field of view meets the set condition, the data processing strategy can be chosen in a targeted manner for the actual conditions of different fields of view, making it easier to determine the region of interest corresponding to the visual image accurately. For the binocular visual images obtained by binocular stereo vision, the region of interest determined for one visual image according to the target data processing strategy drives the stereo matching with the other visual image; the region of interest of the other visual image can be determined implicitly from it, or in the same way as for the first image. Reasonable determination of the region of interest greatly reduces the amount of computation, and the stereo matching effectively removes false target points and non-obstacle data, improving obstacle detection precision.
Drawings
Fig. 1 is an application environment diagram of a binocular stereo data processing method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a binocular stereo data processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of determining a region of interest based on lane line information in an embodiment of the present invention;
FIG. 4 is a schematic diagram of determining a region of interest based on lane line information in another embodiment of the present invention;
fig. 5 is a schematic diagram of determining a region of interest in a world coordinate system based on lane prior knowledge according to another embodiment of the present invention;
Fig. 6 is a schematic flow chart of a binocular stereo data processing method according to an alternative embodiment of the present invention;
fig. 7 is a schematic structural diagram of a binocular stereo data processing apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an intelligent driving apparatus according to an embodiment of the present invention.
Detailed Description
The technical solution of the invention is further elaborated below with reference to the drawings and specific embodiments. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
As shown in fig. 1, in one embodiment there is provided an application environment for the binocular stereo data processing method of the embodiment of the invention, comprising a vehicle 100, cameras 200 arranged on the vehicle 100 to form binocular stereo vision, and an intelligent driving device 300 arranged on the vehicle 100. The intelligent driving device 300 may be an on-vehicle computer, an on-vehicle controller, a vehicle driving control system, a mobile terminal or another smart device able to run a computer program implementing the binocular stereo data processing method. In an automatic driving scene, the intelligent driving device 300 determines in real time whether the lane line information in the current field of view meets the set condition. When it does, a data processing strategy that uses the lane line information as prior information is selected as the target data processing strategy; when it does not, a data processing strategy that uses lane prior knowledge combined with camera attitude data as prior information is selected instead. According to the target data processing strategy, a region of interest corresponding to one visual image, for example the left visual image, is determined. Stereo matching is then performed between the two visual images: either the left and right visual images are matched and the resulting point cloud data are filtered according to the region of interest, or stereo matching is performed directly according to the region of interest of the left visual image and the right visual image, where the region of interest in the right visual image can be determined implicitly from that in the left visual image. In this way filtered binocular stereo data are obtained, and accurate, efficient detection of obstacles in the current field of view is achieved from the filtered binocular stereo data. It is to be understood that "a visual image" herein may refer to either the left or the right visual image; accordingly, when it refers to the right visual image, "another visual image" refers to the left visual image, and vice versa.
As shown in fig. 2, in one embodiment, there is provided a binocular stereo data processing method which may be applied to the intelligent driving apparatus shown in fig. 1, the method including:
Step 101: determine whether the lane line information in the current field of view meets the set condition, and select a matched target data processing strategy according to the determination result.
the lane line information is information indicating the position, number, shape, and the like of a lane included in a road surface on which the vehicle is currently traveling. The maximum range that a field-of-view vehicle can observe through a camera in a driving scene is usually expressed in terms of angles, and generally the larger the field of view, the larger the observation range. Here, determining whether the lane line information in the current field of view meets the setting condition means determining whether a confidence of the lane line information that can be acquired in the current field of view is higher than a threshold. When the lane line information meets the set conditions, the position, the shape and the like of the lane where the vehicle currently runs in the current view field can be determined according to the lane lines. When the lane line information does not meet the set condition, the road where the vehicle currently runs in the current view field is represented as an unstructured road or effective information capable of identifying the lane where the vehicle currently runs cannot be obtained.
Here, selecting the matched target data processing strategy according to the determination result includes: when the lane line information meets the set condition, selecting as the target strategy a data processing strategy that masks the corresponding visual image using the lane line information as prior information; and when the lane line information does not meet the set condition, selecting as the target strategy a data processing strategy that masks the point cloud data obtained after stereo matching of the binocular stereo images, using lane prior knowledge and camera attitude data as prior information. Prior information refers to experience or historical data used to determine the data processing strategy. Masking means covering the image to be processed, wholly or partially, with a selected image, graphic or object, so as to control the region or process of image processing.
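As a sketch of the masking step, the snippet below zeroes out every pixel of a single-channel image that falls outside a convex ROI polygon. It is an illustrative plain-NumPy implementation (a production system might instead use an optimized routine such as OpenCV's fillPoly); the winding-agnostic half-plane test is an implementation choice, not something specified by the patent.

```python
import numpy as np

def points_in_convex_polygon(pts, poly):
    # pts: (M, 2) array of (x, y); poly: (N, 2) convex polygon vertices in order.
    # A point is inside if it lies on the same side of every edge, so the test
    # works for both clockwise and counter-clockwise vertex windings.
    crosses = []
    n = len(poly)
    for i in range(n):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % n]
        crosses.append((bx - ax) * (pts[:, 1] - ay) - (by - ay) * (pts[:, 0] - ax))
    c = np.stack(crosses)
    return np.all(c >= 0, axis=0) | np.all(c <= 0, axis=0)

def mask_outside_roi(image, roi_polygon):
    # Zero every pixel outside the convex ROI polygon; pixels inside are kept.
    # A single-channel (grayscale) image is assumed for simplicity.
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
    inside = points_in_convex_polygon(pts, np.asarray(roi_polygon)).reshape(h, w)
    return np.where(inside, image, 0)
```

After masking, only the ROI contributes pixels to the subsequent stereo matching, which is exactly the effect the strategy aims for: out-of-lane texture such as sky or buildings no longer produces candidate matches.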
Step 103: determine a region of interest corresponding to one visual image according to the target data processing strategy.
here, the Region Of Interest (ROI) refers to a two-dimensional or three-dimensional image Region selected from a current visual Region, such as a two-dimensional or three-dimensional image, which is regarded as an important point Of Interest for image analysis. The visual image may be any one of binocular visual images, such as a left-eye visual image. By determining the region of interest corresponding to the visual image and then performing the next processing, the range of the image to be processed can be adjusted by reasonably determining the region of interest, the processing time is reduced, and the processing precision is increased. The target data processing strategy is matched according to the determination result of whether the lane line information in the current view field meets the set condition, and when the lane line information meets the set condition, the target data processing strategy based on the lane line information as prior information can be selected; when the lane line information does not accord with the setting condition, a target data processing strategy which takes the combination of lane priori knowledge and camera attitude data as the priori information can be selected, so that the accuracy of the region of interest can be improved by selecting the target data processing strategy which is matched with the determination result of whether the lane line information accords with the setting condition, the processing range can be accurately reduced, the false target point and non-obstacle data in the binocular stereoscopic vision image can be effectively removed, and the obstacle detection precision is improved.
Step 105: perform stereo matching of the visual image with another visual image according to the region of interest, to obtain filtered binocular stereo data.
Here, stereo matching refers to finding matching corresponding points across different visual images. By performing stereo matching of one visual image with another based on the region of interest, the intelligent driving device can identify false target points or non-obstacle data from the matching result, in the other visual image, of the target points determined by the region of interest of the first image. Stereo matching of one visual image with another according to the region of interest may be done in two ways: first perform stereo matching between the two visual images and then filter the resulting point cloud data according to the region of interest; or perform stereo matching directly between the region of interest of the one visual image and the other visual image, where the region of interest of the other image can be determined implicitly from that of the first. This saves the computation needed to derive a region of interest for the other visual image, reduces the amount of computation, and, through the determination of the region of interest for the first image, preserves obstacle detection accuracy. For example, based on the position of a data point A in the region of interest of the left visual image, it is determined whether a matching data point A' exists in the right visual image; if not, data point A is identified as a false target point; if so, the position and shape of the object jointly defined by A and A' can further be used to decide whether it is an obstacle or a non-obstacle.
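The first variant above, filtering the stereo-matched result by the ROI, can be illustrated as follows. This is a minimal sketch under standard pinhole-stereo assumptions (z = f·b/d); the parameter names and the dense-disparity input layout are assumptions, not the patent's interfaces.

```python
import numpy as np

def roi_filtered_points(disparity, roi_mask, focal_px, baseline_m, cx, cy):
    """Triangulate only pixels inside the ROI that have a valid disparity.

    Standard pinhole/stereo relations: z = f * b / d, x = (u - cx) * z / f,
    y = (v - cy) * z / f. Pixels outside the ROI (sky, buildings, trees) are
    dropped, which removes false target points from the output cloud.
    """
    valid = roi_mask & (disparity > 0)          # ROI acts as the filter mask
    vs, us = np.nonzero(valid)
    d = disparity[vs, us].astype(np.float64)
    z = focal_px * baseline_m / d
    x = (us - cx) * z / focal_px
    y = (vs - cy) * z / focal_px
    return np.stack([x, y, z], axis=1)          # (N, 3) filtered point cloud
```

For instance, with a 700 px focal length and a 0.12 m baseline, a pixel with disparity 7 triangulates to a depth of 12 m; the same pixel outside the ROI would simply be discarded.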
In the binocular stereo data processing method of this embodiment, whether the lane line information in the current field of view meets the set condition is determined, a matched target data processing strategy is selected according to the determination result, a region of interest corresponding to one visual image is determined according to that strategy, and stereo matching of that visual image with another is performed based on the region of interest to obtain filtered binocular stereo data. Because the strategy follows from whether the lane line information meets the set condition, the data processing strategy can be set in a targeted manner for the actual conditions of different fields of view, and the region of interest in the visual image can be determined more accurately. For the binocular visual images obtained by binocular stereo vision, determining the region of interest for one visual image and then matching it against the other saves the computation of a second region of interest and greatly reduces the amount of computation, while the stereo matching effectively removes false target points and non-obstacle data, improving obstacle detection precision.
In one embodiment, determining whether the lane line information in the current field of view meets the set condition in step 101 includes:
acquiring a visual image corresponding to the current field of view, recognizing the visual image to extract the lane line information in it, and determining whether the lane line information in the current field of view meets the set condition according to its confidence.
Here, the intelligent driving device may determine the lane line information through image recognition on the visual image of the current field of view captured by the camera. Confidence expresses the probability that the true value of a measured parameter falls near the measurement result, i.e. the degree of trust placed in the measured value. Determining whether the lane line information meets the set condition according to its confidence means: recognize the lane line information from the visual image of the current field of view, and when lane line information is recognized and its confidence is higher than a threshold, determine that it meets the set condition; otherwise, when no valid lane line information can be recognized or its confidence is below the threshold, determine that it does not meet the set condition. Requiring the confidence to exceed the threshold ensures that, when the strategy using lane line information as prior information is selected, the lane line information is accurate and complete, guaranteeing the accuracy of subsequent data processing.
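The decision rule above reduces to a small predicate. The sketch below is illustrative only: the (label, confidence) pair structure and the 0.8 threshold are assumptions, not values from the patent.

```python
def lane_info_meets_condition(lane_lines, threshold=0.8):
    # lane_lines: recognized detections as (label, confidence) pairs; the
    # structure and the default threshold are illustrative assumptions.
    if not lane_lines:
        return False  # nothing recognized: the set condition is not met
    # Require every recognized lane line to clear the confidence threshold,
    # so the lane-line-based strategy only runs on accurate, complete input.
    return min(conf for _, conf in lane_lines) >= threshold
```

When this predicate is true the lane-line-based strategy is selected; when false, the method falls back to the strategy based on lane prior knowledge and camera attitude data.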
In another embodiment, determining whether the lane line information in the current field of view meets the set condition in step 101 includes:
detecting lane line information in the current field of view;
when lane line information is detected, determining whether it meets the set condition according to its confidence;
and when no lane line information is detected, determining that the lane line information in the current field of view does not meet the set condition.
Here, the intelligent driving device may detect lane line information in the current field of view before acquiring the corresponding visual image. The confidence characterizes the credibility of the detected lane line information, i.e. the probability that it matches the true value. When lane line information is detected and its confidence is higher than a threshold, the lane line information in the current field of view is determined to meet the set condition; when the confidence is below the threshold, or no lane line information is detected, it is determined not to meet the set condition. As above, requiring the confidence to exceed the threshold ensures the lane line information is accurate and complete when the corresponding strategy is selected, guaranteeing the accuracy of subsequent data processing. Detecting lane line information before acquiring the visual image also makes it convenient to use detection algorithms other than image recognition, facilitates independent maintenance and upgrading of the functional module, and allows a more flexible implementation.
In one embodiment, when the lane line information meets the set condition, determining the region of interest corresponding to a visual image according to the target data processing strategy in step 103 includes:
acquiring the endpoints of the lane lines on the two corresponding sides of the target lane in a visual image, the endpoints including a first endpoint far from the road vanishing point and a second endpoint close to the road vanishing point;
determining a region of interest based on the positions of the first and second endpoints, the slope at the first endpoint and/or the second endpoint, and the lane lines.
And when the lane line information meets the set conditions, correspondingly selecting a data processing strategy based on the lane line information as prior information as a target data processing strategy. And determining an interested area corresponding to a visual image according to a data processing strategy taking the lane line information as the prior information. The target lane refers to a current driving lane of the vehicle. The end points of the lane lines on the two corresponding sides of the target lane can be determined according to the road vanishing point of the target lane in the visual image, wherein the end point of the target lane at one end far away from the road vanishing point is a first end point, and the end point at one end close to the road vanishing point is a second end point. The road vanishing point is an intersection point where lane lines on two sides of the target lane in the visual image extend along the vehicle driving direction. It will be appreciated that, for the example of a forward vehicle, the first and second end points are both located forward of the vehicle's own position. As shown in fig. 3, the lane lines on the opposite sides of the target lane are denoted by L1 and L2, respectively, and the end points of the lane lines are denoted by first end points a1 and a2, and second end points B1 and B2, respectively. 
According to the positions of the first and second end points, the slope at the first end point and/or the second end point, and the lane lines, the lane and its adjacent area can be determined as the area in which obstacles need to be detected while the vehicle is driving, and this area is taken as the region of interest in the visual image. This ensures the accuracy of obstacle detection and thus driving safety, while reducing interference information as much as possible, lowering the amount of computation, and improving detection efficiency.
Further, the determining the region of interest based on the positions of the first end point and the second end point, the slope at the first end point and/or the second end point, and the lane line includes:
determining an initial region of interest based on a region formed by the connecting line of the first end point, the connecting line of the second end point and the lane line;
determining a first lane extension line according to the position of the first end point and the slope of the first end point, and determining a first adjacent area based on an area formed by a connecting line of the first end point and the first lane extension line;
determining a second lane extension line according to the position of the second endpoint and the slope at the second endpoint, and determining a height adjacent region based on a region formed by a connecting line of the second endpoint and the second lane extension line, or determining a height extension line at a set angle relative to the target lane according to the position of the second endpoint, and determining a height adjacent region based on a region formed by the connecting line of the second endpoint and the height extension line;
and combining the first adjacent area, the height adjacent area and the initial region of interest to determine a region of interest.
Here, the first adjacent region, determined from the position of the first end point (the end far from the road vanishing point of the target lane) and the slope at that end point, is made part of the region of interest, which improves the reliability of obstacle detection and further secures safety ahead in the driving direction. The height extension line is determined at a set angle, typically 90 degrees, relative to the target lane from the position of the second end point (the end close to the road vanishing point), and the height adjacent region is made part of the region of interest; this ensures that obstacles ahead whose height exceeds the road vanishing point are detected completely, improving detection accuracy and driving safety. Referring again to fig. 3, the first lane extension lines determined from the positions of the first end points A1, A2 and the slopes at those points are denoted by L3 and L4, the initial region of interest by region 1, the first adjacent region by region 2, the height extension lines by L5 and L6, and the height adjacent region by region 3. Optionally, referring to fig. 4, second lane extension lines, denoted by L7 and L8, may instead be determined from the positions of the second end points B1 and B2 and the slopes at those points, and the height adjacent region, denoted by region 4 in fig. 4, determined from them. Determining the height adjacent region from the second lane extension lines narrows the range of the region of interest and eliminates interference information to the greatest extent, improving detection efficiency and accuracy while still ensuring that front obstacles taller than the road vanishing point can be detected.
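One way to realize the merged region of fig. 3 is to assemble a single polygon from the end points, the lane-line extensions past A1/A2, and the vertical height extensions above B1/B2. The following sketch is illustrative only: the extension lengths, the 90-degree height angle, and the vertex ordering are assumptions, not values from the patent:

```python
def extend_point(pt, slope, dx):
    """Move from pt along a line of the given slope by dx in image x."""
    x, y = pt
    return (x + dx, y + slope * dx)

def build_roi(a1, a2, b1, b2, slope_l1, slope_l2, ext_dx=50.0, height_dy=80.0):
    """Merge the initial region (A1-B1-B2-A2), the first adjacent region
    (lane-line extensions past A1/A2), and the height adjacent region
    (vertical rise above B1/B2, i.e. a set angle of 90 degrees) into one
    polygon of image-coordinate vertices."""
    a1_ext = extend_point(a1, slope_l1, -ext_dx)   # extend L1 past A1
    a2_ext = extend_point(a2, slope_l2, ext_dx)    # extend L2 past A2
    b1_up = (b1[0], b1[1] - height_dy)             # image y grows downward
    b2_up = (b2[0], b2[1] - height_dy)
    return [a1_ext, a1, b1, b1_up, b2_up, b2, a2, a2_ext]

# Illustrative end points: A1/A2 near the image bottom, B1/B2 near the
# vanishing point; the slopes are those of L1 and L2 at the first end points.
roi = build_roi(a1=(100, 700), a2=(1100, 700), b1=(560, 420), b2=(640, 420),
                slope_l1=-0.6, slope_l2=0.6)
```

A real implementation would clip this polygon to the image bounds and choose the extension lengths from the lane geometry rather than fixed pixel offsets.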
In an embodiment, when the lane line information does not meet the set condition, the step 103 of determining the region of interest corresponding to a visual image according to the target data processing strategy includes:
acquiring the position of a target vehicle, and determining a region of interest in a world coordinate system in front of the position of the target vehicle according to the set lane width, vehicle height, and effective detection distance;
acquiring attitude data of an image acquisition device;
and performing Euclidean transformation on the region of interest under the world coordinate system according to the attitude data, and determining the region of interest corresponding to the transformed visual image.
When the lane line information does not meet the set condition, a data processing strategy that uses lane prior knowledge and the attitude data of the camera as prior information is selected as the target data processing strategy, and the region of interest corresponding to a visual image is determined according to this strategy. The target vehicle refers to the vehicle equipped with the intelligent driving device. Here, the lane prior knowledge includes the set lane width W, vehicle height H, and effective detection distance L. The set lane width and vehicle height can be determined from conventional lane widths and vehicle heights, respectively. The effective detection distance is determined by the maximum distance the current image acquisition device can detect, and usually does not exceed that maximum. The area the vehicle is about to drive into is determined in front of the target vehicle's position as the area in which obstacles need to be detected during driving, and this area is taken as the region of interest in the visual image.
The image acquisition device refers to a device, such as a camera, that acquires binocular visual images. The origin of the world coordinate system is determined from the position of the target vehicle, the vertex coordinates of the region of interest on the corresponding coordinate axes are determined from the lane width, vehicle height, and effective detection distance, and the region of interest determined in the world coordinate system is Euclidean-transformed using the attitude data of the image acquisition device to obtain the region of interest in the coordinate system of the image acquisition device, from which the region of interest corresponding to a visual image can be determined. It should be noted that, in the embodiment of the present invention, the two images for binocular stereoscopic vision may be acquired by two corresponding cameras, or one of them may be acquired by a monocular camera and the other obtained by converting that image; this is not limited herein. When the image acquisition device is a binocular camera, each camera corresponds to one visual image. When the image acquisition device is a monocular camera, the region of interest obtained in the camera coordinate system corresponds to the directly acquired visual image: if the monocular camera directly acquires the left visual image, the region of interest in the camera coordinate system corresponds to the left visual image, and if it directly acquires the right visual image, the region of interest correspondingly corresponds to the right visual image.
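Under this lane-prior strategy, the world-frame region of interest is simply a box of width W, height H, and depth L in front of the vehicle. A minimal sketch, assuming a coordinate convention (x right, y up, z forward, origin at the vehicle) that the patent does not specify:

```python
def world_roi_vertices(width, height, distance):
    """Eight vertices of the box-shaped ROI anchored at the vehicle
    position (assumed frame: x right, y up, z forward, origin at the
    vehicle, road surface at y = 0)."""
    half_w = width / 2.0
    return [(x, y, z)
            for z in (0.0, distance)      # near face / far face
            for y in (0.0, height)        # road surface / vehicle-height plane
            for x in (-half_w, half_w)]

# Illustrative priors: 3.75 m lane width, 3.0 m vehicle height, 50 m range.
roi_box = world_roi_vertices(3.75, 3.0, 50.0)
```

These eight vertices are what the subsequent Euclidean transformation maps into the camera coordinate system.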
Further, the acquiring of the attitude data of the image acquisition device includes:
acquiring attitude data of the image acquisition device relative to the world coordinate system, wherein the attitude data comprises a pitch angle, a roll angle and a yaw angle;
the Euclidean transformation is carried out on the region of interest under the world coordinate system according to the attitude data, and the region of interest corresponding to the transformed visual image is determined, wherein the Euclidean transformation comprises the following steps:
and determining a rotation matrix according to the pitch angle, the roll angle and the yaw angle, and determining the region of interest corresponding to the transformed visual image according to the product of the vertex coordinates of the region of interest in the world coordinate system and the rotation matrix.
Referring to fig. 5, the attitude data of the image acquisition device may include a pitch angle θ, a roll angle, and a yaw angle φ. In an alternative embodiment, the rotation matrix determined from the pitch angle θ, the roll angle, and the yaw angle φ is denoted by R.
The region of interest in the coordinate system of the image acquisition device is determined from the product of the vertex coordinates of the region of interest in the world coordinate system and the rotation matrix, thereby determining the region of interest corresponding to the visual image. Taking a vertex P = (px, py, pz) as an example, the vertex P is Euclidean-transformed into P' = (p'x, p'y, p'z), where P' = RP. Any point of the region of interest in the world coordinate system can be converted by this Euclidean transformation into the corresponding coordinate point in the coordinate system of the image acquisition device, so the region of interest in that coordinate system, that is, the region of interest in the corresponding visual image, can be determined from the region of interest in the world coordinate system.
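The rotation R and the per-vertex transform P' = RP can be sketched as follows. The Euler composition order used here, R = Rz(yaw)·Ry(pitch)·Rx(roll), is an assumption; the patent states only that R is determined by the three angles:

```python
import math

def rotation_matrix(pitch, roll, yaw):
    """R = Rz(yaw) @ Ry(pitch) @ Rx(roll); one common Euler convention,
    assumed here since the patent does not fix the composition order."""
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    cy, sy = math.cos(yaw), math.sin(yaw)
    rx = [[1, 0, 0], [0, cr, -sr], [0, sr, cr]]
    ry = [[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]]
    rz = [[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]]

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    return matmul(rz, matmul(ry, rx))

def transform(r, p):
    """P' = RP for one ROI vertex."""
    return tuple(sum(r[i][k] * p[k] for k in range(3)) for i in range(3))

# Small pitch-only pose: a point 20 m straight ahead stays almost ahead.
r = rotation_matrix(pitch=0.05, roll=0.0, yaw=0.0)
p_cam = transform(r, (0.0, 0.0, 20.0))
```

Each of the eight world-frame ROI vertices is mapped this way to obtain the ROI in the camera coordinate system.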
In the embodiment of the invention, when the lane line information does not meet the set condition, the vehicle may be driving on an unstructured road in the current field of view, or effective lane line information cannot be obtained while driving on a structured road. In that case the region of interest is determined from the area the vehicle is about to drive into, based on the current position of the target vehicle, so that obstacles ahead of the vehicle can be accurately detected in different road environments. A three-dimensional region of interest is determined by creating a world coordinate system, and the region of interest in the visual image is obtained by Euclidean transformation based on the attitude data of the image acquisition device. This improves the accuracy of determining the region of interest, narrows its range as much as possible, eliminates interference information to the greatest extent, and improves detection efficiency and accuracy.
Referring to fig. 6, an implementation process of the binocular stereo data processing method according to an embodiment of the present invention is described below with reference to an alternative embodiment as an example, in which an image capturing device is specifically a camera, and the method includes:
step S11, obtaining lane line information;
step S13, determining whether the lane line information is successfully acquired; if yes, go to step S14; if not, executing S25-S28;
step S14, determining whether the confidence of the lane line information is higher than a threshold value; if yes, go to steps S15-S18; if not, executing steps S25-S28;
step S15, generating an initial ROI according to the lane line information;
step S16, obtaining a first adjacent region based on the extension of the lane lines at the end far from the road vanishing point, and merging it into the ROI;
step S17, obtaining a height adjacent region based on the extension of the lane lines at the end close to the road vanishing point, and merging it into the ROI;
step S18, obtaining filtered binocular stereo data by stereo matching of the ROI corresponding to one visual image and the other visual image;
step S25, forming an ROI under an initialized world coordinate system according to the lane priori knowledge;
step S26, acquiring camera attitude data, wherein the camera attitude data comprises a pitch angle, a roll angle and a yaw angle;
step S27, determining a conversion matrix according to the camera attitude data, and converting the ROI under the world coordinate system to obtain a corresponding ROI under the camera coordinate system;
and step S28, performing stereo matching on the visual image and the other visual image, and filtering binocular stereo point cloud data obtained by matching according to the ROI under the camera coordinate system to obtain filtered binocular stereo data.
In the embodiment of the invention, the way the ROI is generated can be selected according to the lane line information and its confidence, so that the ROI in the visual image is determined accurately and efficiently under the different actual road conditions of the vehicle's current driving scene, or under different conditions of lane line information acquisition. On the premise of determining the ROI accurately and efficiently, the range of the ROI is narrowed as much as possible and interference information is eliminated to the greatest extent, improving detection efficiency and accuracy.
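The branch structure of steps S13-S14 amounts to a simple dispatch on the availability and confidence of the lane line information. A hedged sketch (the function name, return values, and the confidence threshold are all illustrative, not from the patent):

```python
def select_roi_strategy(lane_lines, confidence, threshold=0.8,
                        lane_prior=None, camera_pose=None):
    """Dispatch between the two ROI-generation strategies of fig. 6:
    lane-line-based ROI (steps S15-S17) when detection succeeded and
    confidence is above threshold, otherwise prior-knowledge ROI
    transformed by camera pose (steps S25-S27)."""
    if lane_lines is not None and confidence >= threshold:
        return ("lane_line_roi", lane_lines)
    return ("world_roi", (lane_prior, camera_pose))

# No usable lane lines: fall back to the lane-prior strategy.
strategy, _ = select_roi_strategy(lane_lines=None, confidence=0.0,
                                  lane_prior=(3.75, 3.0, 50.0),
                                  camera_pose=(0.05, 0.0, 0.0))
```

Either branch then feeds its ROI into the stereo matching step (S18 or S28) to obtain the filtered binocular stereo data.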
As shown in fig. 7, in one embodiment, there is provided a binocular stereo data processing apparatus including a strategy selection module 11, an ROI determination module 13, and a stereo matching module 15. The strategy selection module 11 is used for determining whether the lane line information in the current field of view meets the set condition, and selecting a matched target data processing strategy according to the determination result. The ROI determination module 13 is used for determining the region of interest corresponding to a visual image according to the target data processing strategy. The stereo matching module 15 is used for performing stereo matching on the visual image and the other visual image according to the region of interest to obtain filtered binocular stereo data.
In an embodiment, the strategy selection module 11 is specifically configured to acquire a visual image corresponding to the current field of view, identify the visual image to extract the lane line information in it, and determine whether the lane line information in the current field of view meets the set condition according to the confidence of the lane line information.
In another embodiment, the strategy selection module 11 is specifically configured to detect lane line information in the current field of view; when lane line information is detected, determine whether the lane line information in the current field of view meets the set condition according to the confidence of the lane line information; and when no lane line information is detected, determine that the lane line information in the current field of view does not meet the set condition.
In one embodiment, the ROI determination module 13 includes an end point unit and an ROI unit. When the lane line information meets the set condition, the end point unit is configured to obtain the end points of the lane lines on the two sides of a target lane in a visual image, where the end points include a first end point far from the road vanishing point and a second end point close to the road vanishing point; the ROI unit is used for determining a region of interest based on the positions of the first end point and the second end point, the slope at the first end point and/or the second end point, and the lane line.
The ROI unit comprises an initialization unit, a first adjacent region determination unit, a second adjacent region determination unit, and a merging unit. The initialization unit is used for determining an initial region of interest based on the region formed by the connecting line of the first end points, the connecting line of the second end points, and the lane lines. The first adjacent region determination unit is configured to determine a first lane extension line according to the position of the first end point and the slope at the first end point, and determine a first adjacent region based on the region formed by the connecting line of the first end points and the first lane extension line. The second adjacent region determination unit is configured to determine a second lane extension line from the position of the second end point and the slope at the second end point and determine a height adjacent region based on the region formed by the connecting line of the second end points and the second lane extension line, or to determine a height extension line at a set angle relative to the target lane according to the position of the second end point and determine a height adjacent region based on the region formed by the connecting line of the second end points and the height extension line. The merging unit is used for merging the first adjacent region, the height adjacent region, and the initial region of interest to determine the region of interest.
In another embodiment, the ROI determination module 13 includes a first ROI unit, a conversion unit, and a second ROI unit. The first ROI unit is used for acquiring the position of a target vehicle and determining a region of interest in a world coordinate system in front of the position of the target vehicle according to the set lane width, vehicle height, and effective detection distance. The conversion unit is used for acquiring the attitude data of the image acquisition device. The second ROI unit is used for performing Euclidean transformation on the region of interest in the world coordinate system according to the attitude data, and determining the region of interest corresponding to the transformed visual image.
The conversion unit is specifically configured to acquire attitude data of the image acquisition device relative to the world coordinate system, where the attitude data includes a pitch angle, a roll angle, and a yaw angle. And the second ROI unit is specifically used for determining a rotation matrix according to the pitch angle, the roll angle and the yaw angle, and determining the region of interest corresponding to the transformed visual image according to the product of the vertex coordinate of the region of interest in the world coordinate system and the rotation matrix.
It should be noted that: the binocular stereo data processing device provided in the above embodiment is exemplified by only the division of the above program modules when filtering the binocular stereo data, and in practical applications, the above steps may be distributed by different program modules as needed, that is, the internal structure of the device may be divided into different program modules to complete all or part of the above-described processing. In addition, the binocular stereo data processing apparatus and the binocular stereo data processing method provided in the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments in detail, and are not described herein again.
The embodiment of the present invention further provides an intelligent driving device, which may be a vehicle-mounted device installed on a vehicle as shown in fig. 1; it can be understood that the intelligent driving device may also refer to a vehicle or the like that includes the corresponding vehicle-mounted device. Referring to fig. 8, the intelligent driving device includes a processor 201 and a storage 202 for storing a computer program capable of running on the processor 201, wherein the processor 201 is configured to execute the steps of the binocular stereo data processing method provided in any embodiment of the present application when running the computer program. The processor 201 and the storage 202 are not limited to one each; there may be one or more of each. The intelligent driving device further comprises a memory 203, a network interface 204, and a system bus 205 connecting the memory 203, the network interface 204, the processor 201, and the storage 202. The storage stores an operating system and the computer program corresponding to the binocular stereo data processing apparatus that implements the binocular stereo data processing method provided by the embodiment of the invention. The processor 201 is used to support the operation of the entire intelligent driving device. The memory 203 may be used to provide an environment for running the computer program in the storage 202. The network interface 204 may be used for network communication with external server devices, terminal devices, and the like, to receive or transmit data, for example to obtain driving control instructions input by a user.
Embodiments of the present invention further provide a computer storage medium, for example, a memory storing a computer program, where the computer program is executable by a processor to perform the steps of the binocular stereo data processing method provided in any embodiment of the present invention. The computer storage medium may be an FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM, or may be any of various devices including one or any combination of the above memories.
The above description covers only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or substitution that a person skilled in the art can readily conceive within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. The protection scope of the present invention shall be determined by the appended claims.

Claims (8)

1. A binocular stereo data processing method is characterized by comprising the following steps:
determining whether the lane line information in the current view field meets the set conditions, and selecting a matched target data processing strategy according to the determination result;
determining a region of interest corresponding to a visual image according to the corresponding target data processing strategy;
performing stereo matching on the visual image and the other visual image according to the region of interest to obtain filtered binocular stereo data;
when the lane line information meets the set condition, determining a region of interest corresponding to a visual image according to the corresponding target data processing strategy, including: acquiring end points of the lane lines on the two sides of a target lane in a visual image, wherein the end points comprise a first end point far from a road vanishing point and a second end point close to the road vanishing point; and determining a region of interest based on the positions of the first end point and the second end point, the slope at the first end point and/or the second end point, and the lane line;
when the lane line information does not meet the set condition, determining a region of interest corresponding to a visual image according to the corresponding target data processing strategy, including: acquiring the position of a target vehicle, and determining a region of interest in a world coordinate system in front of the position of the target vehicle according to the set lane width, vehicle height, and effective detection distance; acquiring attitude data of an image acquisition device; and performing Euclidean transformation on the region of interest in the world coordinate system according to the attitude data, and determining the region of interest corresponding to the transformed visual image.
2. The method of claim 1, wherein the determining whether lane line information in the current field of view meets a set condition comprises:
and acquiring a visual image corresponding to the current view field, identifying the visual image, extracting lane line information in the visual image, and determining whether the lane line information in the current view field meets a set condition according to the confidence of the lane line information.
3. The method of claim 1, wherein the determining whether lane line information in the current field of view meets a set condition comprises:
detecting lane line information within the current view field;
when lane line information is detected, determining whether the lane line information in the current view field meets a set condition according to the confidence of the lane line information;
and when the lane line information is not detected, determining that the lane line information in the current view field does not accord with the set condition.
4. The method of claim 1, wherein determining the region of interest based on the locations of the first end point and the second end point, the slope at the locations of the first end point and/or the second end point, and the lane line comprises:
determining an initial region of interest based on a region formed by the connecting line of the first end point, the connecting line of the second end point and the lane line;
determining a first lane extension line according to the position of the first end point and the slope of the first end point, and determining a first adjacent area based on an area formed by a connecting line of the first end point and the first lane extension line;
determining a second lane extension line according to the position of the second endpoint and the slope at the second endpoint, and determining a height adjacent region based on a region formed by a connecting line of the second endpoint and the second lane extension line, or determining a height extension line at a set angle relative to the target lane according to the position of the second endpoint, and determining a height adjacent region based on a region formed by the connecting line of the second endpoint and the height extension line;
and combining the first adjacent area, the height adjacent area and the initial region of interest to determine a region of interest.
5. The method of claim 1, wherein the acquiring pose data of an image acquisition device comprises:
acquiring attitude data of the image acquisition device relative to the world coordinate system, wherein the attitude data comprises a pitch angle, a roll angle and a yaw angle;
the Euclidean transformation is carried out on the region of interest under the world coordinate system according to the attitude data, and the region of interest corresponding to the transformed visual image is determined, wherein the Euclidean transformation comprises the following steps:
and determining a rotation matrix according to the pitch angle, the roll angle and the yaw angle, and determining the region of interest corresponding to the transformed visual image according to the product of the vertex coordinates of the region of interest in the world coordinate system and the rotation matrix.
6. A binocular stereo data processing apparatus, comprising:
the strategy selection module is used for determining whether the lane line information in the current field of view meets the set condition, and selecting a matched target data processing strategy according to the determination result;
the ROI determination module is used for determining a region of interest corresponding to a visual image according to the corresponding target data processing strategy;
the stereo matching module is used for performing stereo matching on the visual image and the other visual image according to the region of interest to obtain filtered binocular stereo data;
the ROI determination module is specifically used for, when the lane line information meets the set condition, acquiring end points of the lane lines on the two sides of a target lane in a visual image, wherein the end points comprise a first end point far from a road vanishing point and a second end point close to the road vanishing point; and determining a region of interest based on the positions of the first end point and the second end point, the slope at the first end point and/or the second end point, and the lane line;
and, when the lane line information does not meet the set condition, acquiring the position of a target vehicle, and determining a region of interest in a world coordinate system in front of the position of the target vehicle according to the set lane width, vehicle height, and effective detection distance; acquiring attitude data of an image acquisition device; and performing Euclidean transformation on the region of interest in the world coordinate system according to the attitude data, and determining the region of interest corresponding to the transformed visual image.
7. An intelligent driving apparatus, comprising a processor and a memory for storing a computer program operable on the processor; wherein,
the processor is configured to execute the binocular stereo data processing method according to any one of claims 1 to 5 when the computer program is executed.
8. A computer storage medium, characterized in that a computer program is stored in the computer storage medium, which when executed by a processor implements the binocular stereo data processing method of any one of claims 1 to 5.
CN201811265850.9A 2018-10-24 2018-10-29 Binocular solid data processing method, device, intelligent driving equipment and storage medium Active CN109101957B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811265850.9A CN109101957B (en) 2018-10-29 2018-10-29 Binocular solid data processing method, device, intelligent driving equipment and storage medium
PCT/CN2019/113102 WO2020083349A1 (en) 2018-10-24 2019-10-24 Method and device for data processing for use in intelligent driving equipment, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811265850.9A CN109101957B (en) 2018-10-29 2018-10-29 Binocular solid data processing method, device, intelligent driving equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109101957A CN109101957A (en) 2018-12-28
CN109101957B true CN109101957B (en) 2019-07-12

Family

ID=64869544

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811265850.9A Active CN109101957B (en) 2018-10-24 2018-10-29 Binocular solid data processing method, device, intelligent driving equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109101957B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020083349A1 (en) * 2018-10-24 2020-04-30 长沙智能驾驶研究院有限公司 Method and device for data processing for use in intelligent driving equipment, and storage medium
CN109902637B (en) * 2019-03-05 2021-03-19 长沙智能驾驶研究院有限公司 Lane line detection method, lane line detection device, computer device, and storage medium
CN111783502A (en) * 2019-04-03 2020-10-16 长沙智能驾驶研究院有限公司 Visual information fusion processing method and device based on vehicle-road cooperation and storage medium
TWI711007B (en) * 2019-05-02 2020-11-21 緯創資通股份有限公司 Method and computing device for adjusting region of interest
US11227167B2 (en) * 2019-06-28 2022-01-18 Baidu Usa Llc Determining vanishing points based on lane lines
CN110675635B (en) * 2019-10-09 2021-08-03 北京百度网讯科技有限公司 Method and device for acquiring external parameters of camera, electronic equipment and storage medium
CN110988801A (en) * 2019-10-25 2020-04-10 东软睿驰汽车技术(沈阳)有限公司 Radar installation angle adjusting method and device
CN111160086B (en) * 2019-11-21 2023-10-13 芜湖迈驰智行科技有限公司 Lane line identification method, device, equipment and storage medium
CN112597846B (en) * 2020-12-14 2022-11-11 合肥英睿系统技术有限公司 Lane line detection method, lane line detection device, computer device, and storage medium
CN116958265A (en) * 2023-09-19 2023-10-27 交通运输部天津水运工程科学研究所 Ship pose measurement method and system based on binocular vision

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101477629A (en) * 2008-12-29 2009-07-08 东软集团股份有限公司 Interested region extraction process and apparatus for traffic lane
CN102184562A (en) * 2011-05-10 2011-09-14 深圳大学 Method and system for automatically constructing three-dimensional face animation model
WO2011141016A1 (en) * 2010-05-14 2011-11-17 Conti Temic Microelectronic Gmbh Method for detecting traffic signs
CN102521589A (en) * 2011-11-18 2012-06-27 深圳市宝捷信科技有限公司 Method and system for detecting lane marked lines
CN105225482A (en) * 2015-09-02 2016-01-06 上海大学 Based on vehicle detecting system and the method for binocular stereo vision
CN105718865A (en) * 2016-01-15 2016-06-29 武汉光庭科技有限公司 System and method for road safety detection based on binocular cameras for automatic driving
CN106203383A (en) * 2016-07-21 2016-12-07 成都之达科技有限公司 Vehicle safety method for early warning based on image
CN107025432A (en) * 2017-02-28 2017-08-08 合肥工业大学 A kind of efficient lane detection tracking and system
CN108413971A (en) * 2017-12-29 2018-08-17 驭势科技(北京)有限公司 Vehicle positioning technology based on lane line and application

Also Published As

Publication number Publication date
CN109101957A (en) 2018-12-28

Similar Documents

Publication Publication Date Title
CN109101957B (en) Binocular stereo data processing method, device, intelligent driving equipment and storage medium
CN110426051B (en) Lane line drawing method and device and storage medium
US11530924B2 (en) Apparatus and method for updating high definition map for autonomous driving
US20200293058A1 (en) Data processing method, apparatus and terminal
JP7240367B2 (en) Methods, apparatus, electronic devices and storage media used for vehicle localization
CN108845574B (en) Target identification and tracking method, device, equipment and medium
CN108406731B (en) Positioning device, method and robot based on depth vision
CN111326023B (en) Unmanned aerial vehicle route early warning method, device, equipment and storage medium
CN110068335B (en) Unmanned aerial vehicle cluster real-time positioning method and system under GPS rejection environment
EP2958054B1 (en) Hazard detection in a scene with moving shadows
CN111263960B (en) Apparatus and method for updating high definition map
Li et al. Easy calibration of a blind-spot-free fisheye camera system using a scene of a parking space
CN108508916B (en) Control method, device and equipment for unmanned aerial vehicle formation and storage medium
WO2015024407A1 (en) Binocular vision navigation system and method for power robot
WO2019197140A1 (en) Apparatus for determining an angle of a trailer attached to a vehicle
CN112365549B (en) Attitude correction method and device for vehicle-mounted camera, storage medium and electronic device
CN103366155B (en) Temporal coherence in clear path detection
WO2021017211A1 (en) Vehicle positioning method and device employing visual sensing, and vehicle-mounted terminal
JP2009288885A (en) Lane detection device, lane detection method and lane detection program
CN112541416A (en) Cross-radar obstacle tracking method and device, electronic equipment and storage medium
CN113537047A (en) Obstacle detection method, obstacle detection device, vehicle and storage medium
CN108107897A (en) Real time sensor control method and device
Manivannan et al. Vision based intelligent vehicle steering control using single camera for automated highway system
CN111860084B (en) Image feature matching and positioning method and device and positioning system
CN116403191A (en) Three-dimensional vehicle tracking method and device based on monocular vision and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Binocular stereo data processing method, device, intelligent driving equipment and storage medium

Effective date of registration: 20220303

Granted publication date: 20190712

Pledgee: China Minsheng Bank Co.,Ltd. Xiangtan sub branch

Pledgor: CHANGSHA INTELLIGENT DRIVING RESEARCH INSTITUTE Co.,Ltd.

Registration number: Y2022430000015

PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20230601

Granted publication date: 20190712

Pledgee: China Minsheng Bank Co.,Ltd. Xiangtan sub branch

Pledgor: CHANGSHA INTELLIGENT DRIVING RESEARCH INSTITUTE Co.,Ltd.

Registration number: Y2022430000015

CP03 Change of name, title or address

Address after: Building A3 and A4 in Hunan Inspection and testing characteristic industrial park, no.336, bachelor's road, Yuelu District, Changsha City, Hunan Province, 410000

Patentee after: Xidi Intelligent Driving (Hunan) Co.,Ltd.

Country or region after: China

Address before: Building A3 and A4 in Hunan Inspection and testing characteristic industrial park, no.336, bachelor's road, Yuelu District, Changsha City, Hunan Province, 410000

Patentee before: CHANGSHA INTELLIGENT DRIVING RESEARCH INSTITUTE Co.,Ltd.

Country or region before: China