CN111311902B - Data processing method, device, equipment and machine readable medium - Google Patents

Data processing method, device, equipment and machine readable medium

Info

Publication number
CN111311902B
Authority
CN
China
Prior art keywords
lane
determining
vehicle
feature
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811519987.2A
Other languages
Chinese (zh)
Other versions
CN111311902A (en
Inventor
刘进锋
詹中伟
刘欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Banma Zhixing Network Hongkong Co Ltd
Original Assignee
Banma Zhixing Network Hongkong Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Banma Zhixing Network Hongkong Co Ltd filed Critical Banma Zhixing Network Hongkong Co Ltd
Priority to CN201811519987.2A priority Critical patent/CN111311902B/en
Priority to TW108130585A priority patent/TW202033932A/en
Priority to PCT/CN2019/123214 priority patent/WO2020119567A1/en
Publication of CN111311902A publication Critical patent/CN111311902A/en
Application granted granted Critical
Publication of CN111311902B publication Critical patent/CN111311902B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 Traffic data processing
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled

Abstract

The embodiment of the application provides a data processing method, a device, equipment and a machine-readable medium. The method comprises the following steps: determining lane image features corresponding to a vehicle according to a road image corresponding to the vehicle; determining lane position features corresponding to the vehicle according to positioning data; determining a lane number feature corresponding to the vehicle according to map data; determining lane boundaries corresponding to the vehicle according to the lane image features, the lane position features and the lane number feature; and determining a target lane corresponding to the vehicle according to the lane boundaries. The embodiment of the application can realize lane-level positioning at a relatively low cost.

Description

Data processing method, device, equipment and machine readable medium
Technical Field
The present application relates to the field of intelligent transportation technologies, and in particular, to a data processing method, a data processing apparatus, a device, and a machine-readable medium.
Background
An intelligent transportation system applies advanced electronic information technology to transportation to provide efficient, value-added services. Many of these services are based on the position information of vehicles, so positioning is a foundation of the intelligent transportation system. The lane is a basic unit in the driving process of a vehicle; positioning the lane is a key technology in the field of intelligent transportation and can provide technical support for automatic/semi-automatic driving control, navigation, lane departure warning and the like.
A Positioning method can realize the Positioning of a lane through a high-precision GPS (Global Positioning System) and a high-precision electronic map.
However, both the high-precision GPS and the high-precision electronic map are costly: for example, a high-precision GPS whose positioning error is smaller than half the lateral dimension of a lane is generally required, which limits the application range of the above positioning method.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present application is to provide a data processing method, which can realize the positioning of a lane at a low cost.
Correspondingly, the embodiment of the application also provides a data processing device, equipment, a machine readable medium, a navigation method and an auxiliary driving method, which are used for ensuring the realization and application of the method.
In order to solve the above problem, an embodiment of the present application discloses a data processing method, including:
determining lane image characteristics corresponding to a vehicle according to a road image corresponding to the vehicle;
determining lane position characteristics corresponding to the vehicle according to the positioning data;
determining lane number characteristics corresponding to the vehicle according to the map data;
determining a lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature and the lane number feature;
and determining a target lane corresponding to the vehicle according to the lane boundary.
On the other hand, the embodiment of the present application further discloses a data processing apparatus, including:
the image processing module is used for determining the lane image characteristics corresponding to the vehicle according to the road image corresponding to the vehicle;
the positioning module is used for determining the lane position characteristics corresponding to the vehicle according to the positioning data;
the map processing module is used for determining the lane number characteristics corresponding to the vehicle according to the map data;
the lane boundary determining module is used for determining a lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature and the lane number feature; and
and the target lane determining module is used for determining a target lane corresponding to the vehicle according to the lane boundary.
In another aspect, an embodiment of the present application further discloses an apparatus, including:
one or more processors; and
one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the apparatus to perform one or more of the methods described above.
In yet another aspect, embodiments of the present application disclose one or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause an apparatus to perform one or more of the methods recited above.
In another aspect, an embodiment of the present application further discloses a navigation method, including:
determining lane image characteristics corresponding to a vehicle according to a road image corresponding to the vehicle;
determining lane position characteristics corresponding to the vehicle according to the positioning data;
determining lane number characteristics corresponding to the vehicle according to the map data;
determining a lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature and the lane number feature;
determining a target lane corresponding to the vehicle according to the lane boundary;
and determining navigation information corresponding to the vehicle according to the target lane.
In another aspect, an embodiment of the present application discloses a driving assistance method, including:
determining lane image characteristics corresponding to a vehicle according to a road image corresponding to the vehicle;
determining lane position characteristics corresponding to the vehicle according to the positioning data;
determining lane number characteristics corresponding to the vehicle according to the map data;
determining a lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature and the lane number feature;
determining a target lane corresponding to the vehicle according to the lane boundary;
and determining auxiliary driving information corresponding to the vehicle according to the target lane.
Compared with the prior art, the embodiment of the application has the following advantages:
the embodiment of the application comprehensively utilizes the image data, the positioning data and the map data to realize the positioning of the lane where the vehicle is located. The image data can be used as a basis for determining the lane image characteristics; the positioning data can be used as a basis for determining lane position characteristics; the map data can be used as the basis for determining the number characteristics of the lanes; the lane image characteristic and the lane position characteristic can be fused so as to convert the lane characteristic from an image coordinate system to a map coordinate system; therefore, the lane boundary corresponding to the vehicle can be determined according to the lane characteristics and the lane number characteristics of the map coordinate system, and further the target lane corresponding to the vehicle can be determined according to the lane boundary, namely the lane in which the vehicle is located.
Because the image data of the embodiment of the application can be obtained by an image acquisition device such as a camera, and the accuracy requirements on the positioning data and the map data are low, the embodiment of the application can realize lane positioning at a relatively low cost.
Drawings
FIG. 1 is an illustration of an application environment for a data processing method of the present application;
FIG. 2 is a flow chart of steps of a first embodiment of a data processing method of the present application;
FIG. 3 is an illustration of a road condition of an embodiment of the present application;
FIG. 4 is an illustration of a road condition of an embodiment of the present application;
FIG. 5 is an illustration of a road condition of an embodiment of the present application;
FIG. 6 is a schematic representation of a relationship between a vehicle and a lane boundary in an embodiment of the present application;
FIG. 7 is a flowchart illustrating the steps of a second embodiment of a data processing method according to the present application;
FIG. 8 is a flowchart of the steps of a third embodiment of a data processing method of the present application;
FIG. 9 is a block diagram of an embodiment of a data processing apparatus of the present application;
FIG. 10 is a schematic diagram of data interaction of a data processing apparatus according to an embodiment of the present application; and
fig. 11 is a schematic structural diagram of an apparatus provided in an embodiment of the present application.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail with reference to the accompanying drawings and the detailed description.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments that can be derived from the embodiments given herein by a person of ordinary skill in the art are intended to be within the scope of the present disclosure.
While the concepts of the present application are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the description above is not intended to limit the application to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the application.
Reference in the specification to "one embodiment," "an embodiment," "a particular embodiment," or the like, means that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, where a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. In addition, it should be understood that items included in a list of the form "at least one of A, B, and C" may mean: (A); (B); (C); (A and B); (A and C); (B and C); or (A, B and C). Likewise, a list of items of the form "at least one of A, B, or C" may mean: (A); (B); (C); (A and B); (A and C); (B and C); or (A, B and C).
In some cases, the disclosed embodiments may be implemented as hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be executed by one or more processors. A machine-readable storage medium may be implemented as a storage device, mechanism, or other physical structure (e.g., a volatile or non-volatile memory, a media disc, or another physical structure device) for storing or transmitting information in a form readable by a machine.
In the drawings, some structural or methodical features may be shown in a particular arrangement and/or ordering. However, such specific arrangement and/or ordering may not be required. Rather, in some embodiments, such features may be arranged in ways and/or orders different from those shown in the figures. Moreover, the inclusion of structural or methodical features in a particular figure does not imply that such features are required in all embodiments; in some embodiments, such features may not be included or may be combined with other features.
Aiming at the technical problem that both the high-precision GPS and the high-precision electronic map have higher cost, the embodiment of the application provides a data processing scheme, which can comprise the following steps: determining lane image characteristics corresponding to a vehicle according to a road image corresponding to the vehicle; determining lane position characteristics corresponding to the vehicle according to the positioning data; determining lane number characteristics corresponding to the vehicle according to the map data; determining a lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature and the lane number feature; and determining a target lane corresponding to the vehicle according to the lane boundary.
The embodiment of the application comprehensively utilizes the image data, the positioning data and the map data to determine the target lane corresponding to the vehicle, so that the positioning of the lane where the vehicle is located can be realized.
The image data can be obtained by an image acquisition device such as a camera, and can be used as a basis for determining lane image characteristics; the lane image feature may be a lane feature in image dimension.
The positioning data may be used as a basis for determining a lane position characteristic, which may be a lane characteristic of a position dimension. The embodiment of the application has low requirement on the accuracy of the positioning data, the positioning data can be derived from sensors with common positioning accuracy, such as a GPS sensor and a GNSS (Global Navigation Satellite System) sensor, and the common positioning accuracy is usually about 10 meters.
The map data can be used as a basis for determining the number characteristics of the lanes, so that the accuracy requirement on the map data is low in the embodiment of the application, and the accuracy of the map data can be non-high accuracy, namely ordinary accuracy.
The lane image characteristic and the lane position characteristic can be fused so as to convert the lane characteristic from an image coordinate system to a map coordinate system; furthermore, the lane boundary corresponding to the vehicle can be determined according to the lane characteristics and the lane number characteristics of the map coordinate system, and then the target lane corresponding to the vehicle can be determined according to the lane boundary, namely the lane in which the vehicle is located.
Because the image data of the embodiment of the application can be obtained by an image acquisition device such as a camera, and the accuracy requirements on the positioning data and the map data are low, the embodiment of the application can realize lane positioning at a relatively low cost.
The embodiment of the application can be applied to intelligent traffic scenarios; providing lane-level positioning in such scenarios can improve the driving experience of users. A lane, also called a traffic lane or carriageway lane, is the portion of a road on which a single line of vehicles travels. Lanes are laid out on ordinary roads as well as expressways, and their use is subject to legal rules, for example travel lanes and passing lanes.
According to an embodiment, the intelligent traffic scene may be a navigation scene. For example, in a navigation scene, lane-level guidance may be provided, which may improve accuracy and precision of navigation in the case of entering a complex intersection, a multi-level overpass road, or a multi-entrance road. In addition, lane-level positioning may provide fundamental data for AR (Augmented Reality) navigation.
According to another embodiment, the intelligent traffic scene may be an assisted driving scene or an unmanned driving scene. The auxiliary driving scene needs manual monitoring, and the unmanned driving scene does not need manual monitoring. Since the driving assistance information can be provided according to the positioning of the lane level, the safety of the vehicle can be improved. The driving assistance information may include: the target lane broadcast information, lane keeping information, lane change information, and the like. For example, in the case where the road condition of the target lane is clear, lane keeping information may be output; as another example, lane change information may be output in the case where the target lane is not suitable for turning.
It is understood that the above-mentioned intelligent traffic scenario is only an optional embodiment, and in fact, any application scenario that requires lane-level positioning is within the protection scope of the application scenario of the embodiment of the present application.
The data processing scheme provided by the embodiment of the present application can be applied to the application environment shown in fig. 1, as shown in fig. 1, the client 100 and the server 200 are located in a wired or wireless network, and the client 100 and the server 200 perform data interaction through the wired or wireless network.
Optionally, the client may run on the device, for example, the client may be an APP running on the terminal, such as a navigation APP, an e-commerce APP, an instant messaging APP, an input method APP, or an APP carried by the operating system itself, and the specific APP corresponding to the client is not limited in the embodiment of the present application.
Optionally, the device may be provided with a built-in or external screen for displaying information, and may also have a built-in or external speaker for playing information. The information may include information of the target lane. For example, if the road on which the vehicle is located includes 4 lanes, the 4 lanes may be numbered, for example from left to right along the lane direction, as 1, 2, 3 and 4, and the information of the target lane may be the number of the target lane. Optionally, the following information may be played: "You are currently in the X-th lane", where X ranges from 1 to 4.
The device may be a vehicle-mounted device or a device owned by a user. The above devices may specifically include but are not limited to: smart phones, tablet computers, electronic book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop portable computers, car-mounted devices, PCs (Personal computers), set-top boxes, smart televisions, wearable devices, and the like. It is to be understood that the embodiments of the present application are not limited to the specific devices.
Examples of the in-vehicle device may include a HUD (Head-Up Display) and the like, which is usually installed in front of the driver and provides the driver with necessary driving information during driving, such as navigation information, which may include information of the target lane. In other words, the HUD integrates multiple functions in one unit, making it convenient for the driver to keep attention on the driving road conditions.
Method embodiment one
Referring to fig. 2, a flowchart illustrating steps of a first embodiment of a data processing method according to the present application is shown, which may specifically include the following steps:
step 201, determining lane image characteristics corresponding to a vehicle according to a road image corresponding to the vehicle;
step 202, determining lane position characteristics corresponding to the vehicle according to the positioning data;
step 203, determining lane number characteristics corresponding to the vehicle according to the map data;
step 204, determining a lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature and the lane number feature;
step 205, determining a target lane corresponding to the vehicle according to the lane boundary.
At least one step included in the method of the embodiment of the present application may be executed by a client and/or a server, and of course, the embodiment of the present application does not limit a specific execution subject of the step of the method.
In step 201, a road image may be obtained by an image capture device such as a camera or a video camera. Optionally, the number of image capture devices may be 1 or greater than 1. Optionally, the image capture device may be arranged around the vehicle, for example at the front, which may include directly ahead or obliquely ahead. The camera may be a monocular camera or a binocular camera.
In an application example of the present application, the camera may be installed at a position on the vehicle longitudinal central axis in front of the roof, the camera is aligned with the front of the vehicle, and the height of the camera from the ground, the pitch angle of the camera, the yaw angle of the camera, and the roll angle may be determined according to actual application requirements. During the driving process of the vehicle, the camera can continuously acquire the road image right in front of the vehicle. Of course, the camera may also be located at a position on the longitudinal central axis of the vehicle in front of the vehicle projection, and it is understood that the embodiment of the present application does not limit the specific image capturing device and its orientation.
According to an embodiment, step 201 may determine the lane image feature corresponding to the vehicle from the road image corresponding to the vehicle by using an image processing technique.
The image processing technique may include: and (4) filtering technology. The filtering technology can be used for filtering noise in the road image so as to reduce the interference of the noise on the lane image characteristics.
The image processing technique may include: image recognition techniques. Image recognition refers to a technique for processing, analyzing, and understanding an image with a machine to recognize various different patterns of image objects. In particular to the embodiments of the present application, a machine may be utilized to process, analyze, and understand road images to identify various different patterns of image objects. The image objects may include: the lane image characteristic of the embodiment of the application.
Usually, the lane image feature in the road image may correspond to a certain image area in the road image. According to the embodiment of the application, the image area corresponding to the image feature of the single lane can be determined through an edge detection technology, and the image feature of the lane corresponding to the image area can be determined.
In an optional embodiment of the present application, the process of determining the lane image feature corresponding to the vehicle in step 201 may include: detecting an image target in the road image, and analyzing the obtained image target by using a deep learning method to obtain corresponding image target information, namely the lane image feature. The image target information may include: the image, name, category, and the like of the image target.
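As an illustration of the kind of filtering and edge-detection processing described above, the following Python sketch (using OpenCV) extracts candidate line-type lane features from a road image. The specific operators, thresholds and the Hough-transform step are assumptions made for the example; they are not the patented implementation, which may instead use a deep-learning detector as noted above.

import cv2
import numpy as np

def detect_lane_segments(road_image_bgr):
    # Filtering step: suppress image noise before edge detection
    # (see the filtering technique mentioned above).
    gray = cv2.cvtColor(road_image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Edge detection delimits the image regions of candidate lane features.
    edges = cv2.Canny(blurred, 50, 150)
    # Line-type lane image features (lane lines, roadside lines) found as
    # line segments; the parameters are illustrative assumptions.
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                               minLineLength=40, maxLineGap=20)
    if segments is None:
        return []
    return [tuple(int(v) for v in s[0]) for s in segments]  # (x1, y1, x2, y2)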
According to another embodiment, step 201 may determine the lane image feature corresponding to the vehicle from the road image corresponding to the vehicle by using a photogrammetric technique. Photogrammetry can process photographs obtained by an optical camera or a digital camera to obtain the position, shape, size, characteristics and mutual relations of the photographed objects.
Referring to table 1, an example of a lane image feature of an embodiment of the present application is shown. The lane image feature may specifically include at least one of the following features: lane feature points, lane feature lines, and lane feature areas.
The lane feature point may be a point-type lane image feature. The lane feature points may specifically include at least one of the following feature points:
end points of lane boundaries; and
an intersection between the lane boundary and a perpendicular to the lane boundary.
The end points of the lane boundaries may include: lane line end points, or road edge end points.
The lane feature line may be a line type lane image feature. The lane characteristic line may specifically include at least one of the following characteristic lines:
a lane line;
a roadside line;
contour lines of road edge facilities;
contour lines of above-road facilities; and
tunnel portal contour lines.
A contour line, also called an outline, is the boundary of the outer edge of an object: the boundary between one object and another object, or between the object and the background.
The lane feature region may be a lane image feature of a region type or a face type. The lane feature region may specifically include at least one of:
a zebra crossing region;
green belt areas; and
a vehicle area. The vehicle herein may refer to a vehicle on the road other than the vehicle on which the image pickup device is located.
TABLE 1
(Table 1: lane feature points, lane feature lines, and lane feature regions, as enumerated above.)
In step 202, the positioning data may be derived from a sensor with a general positioning accuracy, such as a GPS sensor, a GNSS sensor, or the like.
The lane position characteristics determined at step 202 may include: a vehicle location feature. The vehicle position feature is used to characterize the position of the vehicle.
Alternatively, the vehicle position characteristic may be determined based on the positioning data and the lane image characteristic obtained in step 201. The lane image feature can reflect the surrounding environment of the vehicle, so that the accuracy of the vehicle position feature can be improved.
The lane position characteristics determined in step 202 may further include: and the position characteristic corresponds to the lane image characteristic. Accordingly, step 202 may receive the lane image feature obtained in step 201, and determine a position feature corresponding to the lane image feature according to the positioning data.
Alternatively, the position feature corresponding to the lane image feature may be determined by a SLAM (Simultaneous Localization and Mapping) method. The principle of SLAM is that a robot starts moving from an unknown position in an unknown environment, localizes itself during the movement according to position estimates and the map, and at the same time builds an incremental map on the basis of its own localization, thereby achieving autonomous localization and navigation.
In step 203, the map data may be used as a basis for determining the lane number characteristics, so that the accuracy requirement on the map data is low in the embodiment of the present application, and the accuracy of the map data may be non-high accuracy, that is, ordinary accuracy.
Alternatively, step 203 may determine the lane number characteristic according to the lane position characteristic obtained in step 202. The lane number characteristic may be used to characterize the number of lanes in the road on which the vehicle is currently located or is about to be located.
Optionally, step 203 may also determine a lane direction feature from the map data. A road may be a one-way road or a two-way road. A two-way road has two opposite directions of travel, and the lane direction feature may indicate one of the two directions, i.e., the direction in which the vehicle travels.
Step 204 may fuse the lane image features, the lane position features, and the lane number features to obtain lane boundaries corresponding to the vehicle. A lane boundary may refer to a boundary of a lane, which may be a boundary between lanes or between lanes and other objects. The lane boundary may include: lane lines, road edges, etc.
The principle of determining the lane boundary corresponding to the vehicle in the embodiment of the application may be as follows: solving for the lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature and the lane number feature. The calculation may convert the lane features from the image coordinate system to the map coordinate system to determine the parameters (such as the lane boundary position feature, name, etc.) of the lane boundaries in the map coordinate system, and the target lane corresponding to the vehicle may further be determined based on the relative position between the vehicle position feature and the lane boundary position features.
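As a minimal sketch of the coordinate conversion described above, the following Python function projects an image-plane lane feature point into the map coordinate system. It assumes a flat ground plane, a pre-calibrated image-to-ground homography and a vehicle pose taken from the positioning data; these assumptions and the function names are illustrative and are not the specific fusion algorithm of the embodiment.

import numpy as np

def image_point_to_map(pixel_uv, H_image_to_ground, vehicle_xy, vehicle_heading_rad):
    # Project the pixel onto the ground plane (coordinates in the vehicle
    # frame, in metres); H_image_to_ground is an assumed calibration homography.
    u, v = pixel_uv
    p = H_image_to_ground @ np.array([u, v, 1.0])
    x_rel, y_rel = p[0] / p[2], p[1] / p[2]
    # Rotate into the map frame and translate by the vehicle position feature.
    c, s = np.cos(vehicle_heading_rad), np.sin(vehicle_heading_rad)
    x_map = vehicle_xy[0] + c * x_rel - s * y_rel
    y_map = vehicle_xy[1] + s * x_rel + c * y_rel
    return x_map, y_map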
According to an embodiment, the lane image feature may include information of all lane boundaries. In this case, the information of all lane boundaries can be determined according to the lane image features, that is, the parameters of all lane boundaries in the map coordinate system can be determined.
According to another embodiment, the image capture device is strongly affected by factors such as environment, weather and lighting, and the lane lines themselves may be complex or discontinuous (for example, at intersections), so the lane image feature may include information of only part of the lane boundaries, that is, the current lane boundary corresponding to the vehicle is incomplete.
For example, when the lane lines are discontinuous, the lane image feature easily includes information of only part of the lane boundaries. Reasons for lane line discontinuity may include: the lane line is stained, the intersection area carries no lane markings, or the collection range of the image capture device is limited. For instance, pedestrians and vehicles may block the road surface, or the turning angle may be large, so that the camera cannot capture all lane lines.
For another example, in weak light the capture clarity of the image capture device is reduced, so the lane image feature easily includes information of only part of the lane boundaries. Weak light conditions may include severe weather, night, and the like.
For another example, when the road is congested, pedestrians and vehicles block the road surface and the effective collection range of the image capture device is reduced, so the lane image feature easily includes information of only part of the lane boundaries. Road congestion may include congestion on ordinary roads, traffic jams, and the like.
When the current lane boundary corresponding to the vehicle is incomplete, the embodiment of the application may determine all lane boundaries, and thereby the target lane corresponding to the vehicle, according to the following technical solutions:
the technical scheme 1,
In technical solution 1, the process of determining the lane boundary corresponding to the vehicle in step 204 may specifically include: obtaining two lane feature points according to the lane image features; the two lane feature points belong to different first lane boundaries; determining the distance between the two lane feature points according to the lane position features; and determining a second lane boundary corresponding to the vehicle according to the distance between the two lane feature points and the lane number feature.
Technical solution 1 can determine the second lane boundary when two lane feature points on the first lane boundaries are known; since expansion of the lane boundaries can thus be achieved, all lane boundaries can be determined.
Referring to fig. 3, a schematic diagram of a road condition according to an embodiment of the present application is shown. The road condition may be derived from a road image; the vehicle may be traveling near an intersection, and the road may include a crosswalk 301 and several lanes 302. The lane feature points may be the end points PA and PB of two lane lines. In fig. 3, the end points PA and PB may be the starting end points of the lane lines, and the lane lines to which PA and PB belong are adjacent. In this case, the distance between PA and PB may be the width of one lane (lane width for short); since the widths of different lanes of the same road are usually the same, the unknown lane lines can be determined according to the number of lanes and the lane width.
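A minimal sketch of the expansion in technical solution 1 follows, assuming the lane boundaries are locally parallel and evenly spaced and that PA is known to lie on a particular boundary (the leftmost one by default). The point layout and the function name are assumptions made for the example.

import numpy as np

def expand_lane_boundaries(pa, pb, lane_count, pa_boundary_index=0):
    # pa, pb: map-frame positions of two feature points on adjacent lane
    # boundaries; their separation is taken as one lane width.
    pa, pb = np.asarray(pa, dtype=float), np.asarray(pb, dtype=float)
    step = pb - pa
    # A road with lane_count lanes is delimited by lane_count + 1 boundaries.
    return [pa + (i - pa_boundary_index) * step for i in range(lane_count + 1)]

For example, expand_lane_boundaries((0.0, 0.0), (3.5, 0.0), 4) yields five boundary points spaced 3.5 m apart for a four-lane road.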
Technical solution 2
In technical solution 2, the lane image feature may specifically include a contour line of a road peripheral facility, and the process of determining the lane boundary corresponding to the vehicle in step 204 may specifically include: determining the road width according to the contour line of the road peripheral facility and the lane position feature; and determining the lane boundary corresponding to the vehicle according to the road width and the lane number feature.
Technical solution 2 can realize expansion of the lane boundaries when the contour line of a road peripheral facility is known, and therefore all lane boundaries can be determined.
Referring to fig. 4, a schematic diagram of a road condition according to an embodiment of the present application is shown, where the road condition may be derived from a road image, and the road condition may include: a number of lane lines 401, over-road facilities 402 and roadside lines 403. Part of lane lines are shielded by vehicles on the road, and under the condition, the width of the road can be determined according to the contour lines of facilities above the road, and the width of the road can represent the width corresponding to all lanes; since the widths of different lanes in the same road are usually the same, the unknown lane lines can be determined according to the number of lanes and the width of the road.
The above-road facility 402 shown in fig. 4 is specifically a transportation portal frame, and it is understood that the above-road facility shown in fig. 4 is only an example, and actually, the above-road facility may also be a tunnel, and it is understood that the specific above-road facility is not limited in the embodiment of the present application.
In addition, the above-road facility is only an alternative embodiment, and actually, the road peripheral facility may further include: greenbelts, etc., for example, the road width may be determined according to greenbelts on both sides of the road. It is understood that the embodiments of the present application do not impose limitations on specific road perimeter facilities.
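A corresponding sketch of technical solution 2, assuming the two road edges have been located (for example from the contour line of an above-road gantry or the green belts on both sides) and that all lanes of the road have equal width, as stated above. The inputs and the function name are assumptions for the example.

import numpy as np

def boundaries_from_road_width(left_edge, right_edge, lane_count):
    # left_edge, right_edge: map-frame points on the two road edges at the
    # same longitudinal position; lane_count comes from the map data.
    left = np.asarray(left_edge, dtype=float)
    right = np.asarray(right_edge, dtype=float)
    road_vec = right - left
    lane_width = np.linalg.norm(road_vec) / lane_count
    direction = road_vec / np.linalg.norm(road_vec)
    return [left + i * lane_width * direction for i in range(lane_count + 1)]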
According to technical solution 1 and technical solution 2, when only part of the lane boundaries are captured in the road image, the lane boundaries are expanded; this can, to a certain extent, solve the problem of inaccurate lane boundaries caused by lighting, occlusion of the image capture device, and incomplete or discontinuous lane lines, and can improve the accuracy, reliability and continuous availability of the lane boundaries.
It can be understood that, a person skilled in the art may adopt any one or a combination of technical solution 1 and technical solution 2 according to the actual application requirements, and the embodiment of the present application does not impose a limitation on the specific process for determining the target lane corresponding to the vehicle.
Step 205 may determine a target lane corresponding to the vehicle according to the lane boundary obtained in step 204.
The principle of determining the target lane corresponding to the vehicle in the embodiment of the application can be as follows: and determining a target lane corresponding to the vehicle according to the relative position between the vehicle and the lane boundary, wherein the target lane is the lane where the vehicle is located.
In an optional embodiment of the present application, the process of determining the target lane corresponding to the vehicle in step 205 may specifically include: determining lane feature points and a first direction respectively corresponding to a plurality of lane boundaries; and determining the target lane corresponding to the vehicle according to the relation between the first direction and a second direction, where the second direction may be the direction from the lane feature point to a vehicle feature point.
Referring to fig. 5, a schematic diagram of a road condition according to an embodiment of the present application is shown. The road condition may be derived from a road image and may include: a number of lane lines 501, a vehicle 502 and roadside lines 503. A perpendicular line 504 across the road at the position of the vehicle 502 can be determined, and the lane feature points may be the intersection points of the perpendicular line 504 with the lane lines. Here the first direction may be the direction of a lane line, the second direction may be the direction of the connecting line between the corresponding feature point and the vehicle feature point, and the vehicle feature point may be the position of the image capture device. In this way, the target lane corresponding to the vehicle 502 may be determined according to the positional relationship (e.g., the included angle) between the first direction and the second direction; in fig. 5 the included angle corresponding to the 3rd lane is the smallest, so the target lane may be determined to be the 3rd lane.
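The angle test of fig. 5 can be sketched as follows: for each candidate lane, compare the lane-line direction (first direction) with the direction from that lane's feature point to the vehicle feature point (second direction) and pick the lane with the smallest included angle. The 2-D map-frame data layout and the function name are assumptions made for the example.

import numpy as np

def pick_target_lane(lane_ids, first_directions, feature_points, vehicle_point):
    # lane_ids[i], first_directions[i], feature_points[i] describe one
    # candidate lane; vehicle_point is the position of the image capture device.
    best_id, best_angle = None, float("inf")
    for lane_id, d1, fp in zip(lane_ids, first_directions, feature_points):
        d1 = np.asarray(d1, dtype=float)
        d2 = np.asarray(vehicle_point, dtype=float) - np.asarray(fp, dtype=float)
        cos_angle = abs(np.dot(d1, d2)) / (np.linalg.norm(d1) * np.linalg.norm(d2))
        angle = np.arccos(np.clip(cos_angle, 0.0, 1.0))  # included angle of the two lines
        if angle < best_angle:
            best_id, best_angle = lane_id, angle
    return best_id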
In practical applications, there are many uncertain factors when a vehicle travels through an intersection, which makes determination of the target lane difficult.
In view of the above situation, in an optional embodiment of the present application, the step 205 of determining the target lane corresponding to the vehicle may specifically include: determining the target lane corresponding to the vehicle according to the lane boundary and continuous motion information corresponding to the vehicle.
The continuous motion information can represent the motion condition of the vehicle in a time period, and according to the continuous motion information, the embodiment of the application can determine the target time when the vehicle enters the lane boundary, and determine the target lane according to the relative position between the vehicle and the lane boundary at the target time. The vehicle enters the lane boundary at the target time, so that the target lane is determined at the time, the situation that the target lane is determined by mistake under the condition that the vehicle does not enter the lane boundary can be avoided, and the accuracy of the target lane can be improved.
Referring to fig. 6, a schematic of a relationship between a vehicle and a lane boundary according to an embodiment of the present application is shown, where time T0 may be a time when all lane lines are determined, and if a target lane is determined at time T0, a target lane error may easily occur.
Time T1 may be the time when the vehicle enters the lane lines. In the embodiment of the present application, the motion vectors over the period from time T0 to time T1 may be accumulated, and time T1 may be determined according to the accumulated motion vector, so that the target lane can be determined accurately.
The continuous motion information of the embodiment of the application can be obtained through an inertial sensor. The inertial sensor may be an existing sensor on the vehicle, so no additional sensor cost is incurred. The inertial sensor may include an IMU (Inertial Measurement Unit), and the IMU may include an accelerometer, a gyroscope, and the like.
Alternatively, the continuous motion information may be determined by an INS (Inertial Navigation System). The principle by which the INS determines the continuous motion information may be as follows: the position and attitude at the current moment are calculated from the vehicle motion state changes measured by the inertial sensor and the position and attitude at the previous moment. Optionally, the INS may also use trip data provided by an odometer.
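A minimal dead-reckoning sketch of the accumulated motion vector from time T0 to time T1 described above: it integrates 2-D body-frame accelerations and a yaw rate from the inertial sensor. The flat-ground, simple Euler integration is a simplifying assumption for the example, not the INS algorithm of the embodiment.

import numpy as np

def accumulate_motion(imu_samples, initial_heading_rad=0.0):
    # imu_samples: iterable of (dt_seconds, (ax, ay) body-frame m/s^2, yaw_rate rad/s).
    position = np.zeros(2)
    velocity = np.zeros(2)
    heading = initial_heading_rad
    for dt, (ax, ay), yaw_rate in imu_samples:
        heading += yaw_rate * dt
        c, s = np.cos(heading), np.sin(heading)
        accel_map = np.array([c * ax - s * ay, s * ax + c * ay])
        velocity += accel_map * dt
        position += velocity * dt
    return position  # accumulated motion vector since T0, in map-frame metres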
In practical applications, the lane after the vehicle initially enters the road may be determined according to steps 201 to 205 included in fig. 2, that is, the target lane of the embodiment of the present application may include: an initial target lane. After the initial target lane is determined, real-time updating of the target lane may be performed using lane tracking and/or lane change detection methods.
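One simple way to sketch the lane-change-detection part of the real-time update is to track the signed lateral offset of the vehicle relative to the lane it last occupied and shift the lane index when a full lane width has been crossed. The equal-lane-width assumption and the function below are illustrative, not the patent's tracking method.

def update_target_lane(current_lane, lateral_offset_m, lane_width_m, lane_count):
    # current_lane: 1-based index counted from the left boundary of the road.
    # lateral_offset_m: signed lateral displacement since the last update,
    # positive toward the right (e.g. integrated from inertial sensor data).
    lanes_crossed = int(round(lateral_offset_m / lane_width_m))
    new_lane = current_lane + lanes_crossed
    return min(max(new_lane, 1), lane_count)  # clamp to the lanes of the road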
Intersection conditions are complex and changeable, and an intersection contains blank areas without lane markings, so the embodiment of the application describes the processing corresponding to the intersection condition in detail; it can be understood that the embodiment of the application can also be applied to road conditions other than the intersection condition.
In practical application, the embodiment of the application can output the information of the target lane in a visual mode and/or an auditory mode. The visual mode can display the information of the target lane through a screen, and the auditory mode can play the information of the target lane through a loudspeaker.
In summary, the data processing method of the embodiment of the application comprehensively utilizes the image data, the positioning data and the map data to realize the positioning of the lane where the vehicle is located. The image data can be used as a basis for determining the lane image characteristics; the positioning data can be used as a basis for determining lane position characteristics; the map data can be used as the basis for determining the number characteristics of the lanes; the lane image characteristic and the lane position characteristic can be fused so as to convert the lane characteristic from an image coordinate system to a map coordinate system; therefore, the lane boundary corresponding to the vehicle can be determined according to the lane characteristics and the lane number characteristics of the map coordinate system, and further the target lane corresponding to the vehicle can be determined according to the lane boundary, namely the lane in which the vehicle is located.
Because the image data of the embodiment of the application can be obtained by an image acquisition device such as a camera, and the accuracy requirements on the positioning data and the map data are low, the embodiment of the application can realize lane positioning at a relatively low cost.
Method embodiment two
Referring to fig. 7, a flowchart of steps of a second embodiment of the data processing method in the present application is shown, which specifically includes the following steps:
step 701, determining lane image characteristics corresponding to a vehicle according to a road image corresponding to the vehicle;
step 702, determining lane position characteristics corresponding to the vehicle according to the positioning data;
step 703, determining lane number characteristics corresponding to the vehicle according to the map data;
step 704, determining a lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature and the lane number feature;
step 705, under the condition that the current lane boundary corresponding to the vehicle is incomplete, determining the latest lane image feature and the latest lane position feature corresponding to the vehicle according to the road image corresponding to the vehicle and the current lane boundary;
step 706, determining a latest lane boundary corresponding to the vehicle according to the latest lane image feature, the latest lane position feature and the lane number feature corresponding to the vehicle;
and step 707, determining a target lane corresponding to the vehicle according to the latest lane boundary.
The lane boundary determining method and device can re-determine the lane boundary under the condition that the current lane boundary corresponding to the vehicle is incomplete, specifically, re-determine the lane image feature and the lane position feature according to the current lane boundary, and determine the latest lane boundary according to the re-determined lane image feature and the lane position feature. Wherein, the current lane boundary may refer to a lane boundary at a current time, the current time may refer to a device time in a case where the step is performed, and the current lane boundary may be updated as the current time is updated.
The current lane boundary can provide rich information for re-determining the lane image features, and therefore can be used as a basis for determining the lane image features, and particularly, the current lane boundary can provide a basis for secondary feature extraction of an image region. For example, the road image includes a discontinuous lane line L1, and step 704 may combine the lane image feature, the lane position feature, and the lane number feature to obtain a continuous lane line L1; and step 705 may further combine the lane image feature, the lane position feature, and the lane number feature to obtain a lane line L2 adjacent to the lane line L1, then step 706 may obtain more lane image features according to the continuous lane line L1 and the lane line L2, such as an end point feature of the lane line L2, or a feature of the lane line L3 adjacent to the lane line L2, and so on.
Since the latest lane image feature and the latest lane position feature are updated and more accurate features, the lane boundary is determined according to the latest lane image feature and the latest lane position feature, and the accuracy of the lane boundary can be improved.
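The loop structure of method embodiment two can be sketched as follows. The helper callables extract_features, solve_boundaries and is_complete stand for steps 705, 706 and the completeness check described above; they are hypothetical placeholders, not interfaces defined by the patent.

def refine_lane_boundaries(road_image, positioning_data, lane_count,
                           extract_features, solve_boundaries, is_complete,
                           max_rounds=3):
    # Initial pass: steps 701-704, with no known boundary yet.
    image_feat, position_feat = extract_features(road_image, positioning_data, None)
    boundaries = solve_boundaries(image_feat, position_feat, lane_count)
    for _ in range(max_rounds):
        if is_complete(boundaries, lane_count):
            break
        # Steps 705-706: secondary feature extraction guided by the current
        # (incomplete) lane boundary, then re-solve for the latest boundary.
        image_feat, position_feat = extract_features(road_image, positioning_data,
                                                     boundaries)
        boundaries = solve_boundaries(image_feat, position_feat, lane_count)
    return boundaries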
Method embodiment three
Referring to fig. 8, a flowchart illustrating steps of a third embodiment of the data processing method in the present application is shown, which may specifically include the following steps:
step 801, determining lane image characteristics corresponding to a vehicle according to a road image corresponding to the vehicle;
step 802, determining lane position characteristics corresponding to the vehicle according to the positioning data;
step 803, determining lane number characteristics corresponding to the vehicle according to the map data;
step 804, determining a lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature and the lane number feature;
step 805, determining a predicted lane boundary under the condition that the current lane boundary corresponding to the vehicle is incomplete;
step 806, determining the latest lane image feature and the latest lane position feature corresponding to the vehicle according to the road image corresponding to the vehicle, the current lane boundary and the predicted lane boundary;
step 807, determining a latest lane boundary corresponding to the vehicle according to the latest lane image feature, the latest lane position feature and the lane number feature corresponding to the vehicle;
and step 808, determining a target lane corresponding to the vehicle according to the latest lane boundary.
According to the embodiment of the application, the predicted lane boundary can be determined when the current lane boundary corresponding to the vehicle is incomplete. For example, the lane boundaries obtained by expansion according to the aforementioned technical solution 1 and/or technical solution 2 may be used as the predicted lane boundaries.
The predicted lane boundaries may be used as a basis for determining the latest lane image features. Alternatively, the extraction of the lane image feature may be performed anew depending on the predicted lane boundary.
For example, in the case of weak light, the image feature of the lane line L3 cannot be extracted from the road image at first, and the embodiment of the present application may predict the lane line L3 and mark the lane line L3 at the corresponding position of the road image, and then extract the image feature of the lane line L3 according to the marked road image with the lane line L3; the image feature of the lane line L3 is added again to the determination of the lane boundary, and since information in the determination of the lane boundary can be added, the accuracy of the lane boundary can be improved.
Similarly, when a lane line is discontinuous, the continuous lane line can be predicted. Details are not repeated here; the embodiments may refer to one another.
It should be noted that the latest lane image feature may be located to obtain the latest lane position feature. For example, the latest lane position features may include: and predicting the position characteristics corresponding to the lane boundaries.
In summary, according to the data processing method of the embodiment of the present application, the predicted lane boundary is determined when the current lane boundary corresponding to the vehicle is incomplete. The predicted lane boundary can be used as a basis for determining the latest lane image characteristics so as to obtain more lane image characteristics. Since information (such as the latest lane image feature and the latest lane position feature) in the determination process of the lane boundary can be increased, the accuracy of the lane boundary can be improved.
The embodiment of the application further provides a navigation method, which specifically comprises the following steps:
determining a lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature and the lane number feature corresponding to the vehicle; the lane image features are determined according to road images corresponding to the vehicles, the position features are determined according to positioning data, and the lane number features are determined according to map data;
determining a target lane corresponding to the vehicle according to the lane boundary;
and determining navigation information corresponding to the vehicle according to the target lane.
In practical applications, the navigation information is used to guide the vehicle to travel.
According to an embodiment, the navigation information may include: target lane information in voice form, or lane boundary lines corresponding to the target lane drawn on a map, so that the user can determine the target lane in which the vehicle is located.
According to another embodiment, the navigation information may include: a target lane based navigation route, etc.
In summary, the embodiment of the application can provide lane-level guidance in a navigation scenario; lane-level guidance can improve the accuracy and precision of navigation when entering a complex intersection, a multi-level interchange, or a multi-entrance road. In addition, lane-level positioning can provide basic data for AR navigation.
The embodiment of the application further provides a driving assistance method, which specifically comprises the following steps:
determining a lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature and the lane number feature corresponding to the vehicle; the lane image features are determined according to road images corresponding to the vehicles, the position features are determined according to positioning data, and the lane number features are determined according to map data;
determining a target lane corresponding to the vehicle according to the lane boundary;
and determining auxiliary driving information corresponding to the vehicle according to the target lane.
The driving assistance information may include: target lane broadcast information, lane keeping information, lane change information, and the like. For example, in the case where the road condition of the target lane is clear, lane keeping information may be output; as another example, lane change information may be output in the case where the target lane is not suitable for turning. It is to be understood that the embodiment of the present application is not limited to specific driving assistance information, and for example, the driving assistance information may further include: brake prompt information, etc.
According to the embodiment of the application, the auxiliary driving information can be provided according to the positioning of the lane level, so that the safety of the vehicle and the reasonability of the auxiliary driving information can be improved.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the embodiments are not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the embodiments. Further, those skilled in the art will also appreciate that the embodiments described in the specification are presently preferred and that no particular act is required of the embodiments of the application.
The embodiment of the application also provides a data processing device.
Referring to fig. 9, a block diagram of a data processing apparatus according to an embodiment of the present application is shown, which may specifically include the following modules:
the image processing module 901 is configured to determine lane image features corresponding to a vehicle according to a road image corresponding to the vehicle;
the positioning module 902 is configured to determine a lane position feature corresponding to the vehicle according to the positioning data;
the map processing module 903 is used for determining lane number characteristics corresponding to the vehicle according to the map data;
a lane boundary determining module 904, configured to determine a lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature, and the lane number feature; and
and a target lane determining module 905, configured to determine a target lane corresponding to the vehicle according to the lane boundary.
Optionally, the lane boundary determining module 904 may include:
the lane characteristic point determining module is used for obtaining two lane characteristic points according to the lane image characteristics; the two lane feature points belong to different first lane boundaries;
the distance determining module is used for determining the distance between the two lane feature points according to the lane position features; and
and the first boundary determining module is used for determining a second lane boundary corresponding to the vehicle according to the distance between the two lane feature points and the lane number feature.
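A minimal sketch of this derivation follows, assuming (for illustration only) that the two lane feature points lie on the outermost first lane boundaries and that all lanes share an equal width; the function name and the equal-width assumption are not prescribed by the embodiment.

```python
# Sketch: two lane feature points + their distance + lane number feature -> lane boundaries.
import numpy as np

def derive_lane_boundaries(p_left, p_right, lane_count):
    """p_left, p_right: 2-D map coordinates of two lane feature points belonging to
    different first lane boundaries; returns lane_count + 1 boundary positions."""
    p_left, p_right = np.asarray(p_left, float), np.asarray(p_right, float)
    road_width = np.linalg.norm(p_right - p_left)   # distance from the lane position features
    across = (p_right - p_left) / road_width        # unit vector across the road
    lane_width = road_width / lane_count            # lane number feature -> per-lane width
    return [p_left + i * lane_width * across for i in range(lane_count + 1)]

if __name__ == "__main__":
    boundaries = derive_lane_boundaries([0.0, 0.0], [15.0, 0.0], lane_count=4)
    print([tuple(b) for b in boundaries])   # a boundary every 3.75 m across the road
```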
Optionally, the lane image feature may include: a contour line of a road peripheral facility; in this case, the lane boundary determination module 904 may include:
the road width determining module is used for determining the road width according to the contour line of the peripheral facilities of the road and the lane position characteristics;
and the second boundary determining module is used for determining the lane boundary corresponding to the vehicle according to the road width and the lane number characteristics.
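The sketch below illustrates this road-width path under simple assumptions: each roadside-facility contour line is approximated locally by two points, and the vehicle position is taken from the lane position feature; these modelling choices are made only for the example.

```python
# Sketch: road width from roadside-facility contour lines, divided by the lane number feature.

def _dist_point_to_line(p, a, b):
    """Perpendicular distance from point p to the line through a and b (2-D)."""
    abx, aby = b[0] - a[0], b[1] - a[1]
    apx, apy = p[0] - a[0], p[1] - a[1]
    cross = abx * apy - aby * apx
    return abs(cross) / (abx ** 2 + aby ** 2) ** 0.5

def lane_width_from_contours(vehicle_pos, left_contour, right_contour, lane_count):
    road_width = (_dist_point_to_line(vehicle_pos, *left_contour)
                  + _dist_point_to_line(vehicle_pos, *right_contour))
    return road_width / lane_count

if __name__ == "__main__":
    left = ((0.0, -10.0), (0.0, 10.0))       # left roadside facility contour (two points)
    right = ((15.0, -10.0), (15.0, 10.0))    # right roadside facility contour
    print(lane_width_from_contours((2.0, 0.0), left, right, lane_count=4))  # -> 3.75
```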
Optionally, the target lane determination module 905 may include:
and the first target lane determining module is used for determining a target lane corresponding to the vehicle according to the lane boundary and the continuous motion information corresponding to the vehicle.
Optionally, the target lane determination module 905 may include:
the characteristic point and direction determining module is used for determining lane characteristic points and a first direction which correspond to a plurality of lane boundaries respectively;
the second target lane determining module is used for determining a target lane corresponding to the vehicle according to the relation between the first direction and the second direction; the second direction is the direction corresponding to the lane characteristic point and the vehicle characteristic point.
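One way to read the direction relation, sketched below purely as an assumption, is to take the sign of the 2-D cross product between the first direction (the boundary direction) and the second direction (from the vehicle feature point to the lane feature point): the sign tells on which side of each boundary the vehicle lies, and counting boundaries on one side yields the lane index.

```python
# Sketch: judge the target lane from the relation between the first and second directions.
# Interpreting the relation as a cross-product sign, and assuming the boundaries are
# ordered from left to right, are assumptions of this example.

def _cross(first_dir, second_dir):
    """2-D cross product; > 0 means the boundary lies to the vehicle's left here."""
    return first_dir[0] * second_dir[1] - first_dir[1] * second_dir[0]

def target_lane(vehicle_point, boundaries):
    """boundaries: list of (lane_feature_point, first_direction), ordered left to right.
    Returns a 1-based lane index, counting boundaries on the vehicle's left."""
    left_count = 0
    for feature_point, first_dir in boundaries:
        second_dir = (feature_point[0] - vehicle_point[0], feature_point[1] - vehicle_point[1])
        if _cross(first_dir, second_dir) > 0:
            left_count += 1
    return left_count

if __name__ == "__main__":
    # Three boundaries (two lanes), all pointing along +y; vehicle between boundaries 1 and 2.
    bounds = [((0.0, 5.0), (0.0, 1.0)), ((3.5, 5.0), (0.0, 1.0)), ((7.0, 5.0), (0.0, 1.0))]
    print(target_lane((1.5, 0.0), bounds))  # -> 1 (leftmost lane)
```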
Optionally, the image processing module 901 is further configured to, when the current lane boundary corresponding to the vehicle is incomplete, determine a latest lane image feature corresponding to the vehicle according to a road image corresponding to the vehicle and the current lane boundary;
the positioning module 902 is further configured to determine a latest lane position feature corresponding to the vehicle according to a road image corresponding to the vehicle and a current lane boundary when the current lane boundary corresponding to the vehicle is incomplete;
the lane boundary determining module 904 is further configured to determine a latest lane boundary corresponding to the vehicle according to the latest lane image feature, the latest lane position feature, and the lane number feature corresponding to the vehicle;
the target lane determining module 905 is further configured to determine a target lane corresponding to the vehicle according to the latest lane boundary.
Optionally, the lane boundary determining module 904 is further configured to determine a predicted lane boundary if a current lane boundary corresponding to the vehicle is incomplete;
the image processing module 901 is further configured to determine a latest lane image feature corresponding to a vehicle according to a road image corresponding to the vehicle, the current lane boundary, and the predicted lane boundary;
the positioning module 902 is further configured to determine a latest lane position feature according to a road image corresponding to a vehicle, the current lane boundary, and the predicted lane boundary;
the lane boundary determining module 904 is further configured to determine a latest lane boundary corresponding to the vehicle according to the latest lane image feature, the latest lane position feature, and the lane number feature corresponding to the vehicle;
the target lane determining module 905 is further configured to determine a target lane corresponding to the vehicle according to the latest lane boundary.
Optionally, the lane image feature may include at least one of the following features:
lane feature points, lane feature lines, and lane feature areas.
Optionally, the lane feature points may include at least one of the following feature points:
end points of lane boundaries; and
an intersection between the lane boundary and a perpendicular to the lane boundary.
Optionally, the lane characteristic line may include at least one of the following characteristic lines:
a lane line;
a roadside line;
the contour line of the road edge facility;
contour lines of above-road facilities; and
and a tunnel portal contour line.
Optionally, the lane feature region may include at least one of:
a zebra crossing region;
green belt areas; and
a vehicle area.
For the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and reference may be made to the partial description of the method embodiment for relevant points.
Referring to fig. 10, a data interaction schematic of a data processing device according to an embodiment of the present application is shown. The image processing module 901 is configured to perform image processing on a road image acquired by an image acquisition device; the image processing may include image recognition, feature extraction, and the like, and the image processing module 901 interacts with both the lane boundary determination module 904 and the positioning module 902. The image processing results may be input to the lane boundary determination module 904; in addition, the image processing results are also transmitted to the positioning module 902 to improve the accuracy of the lane position feature.
The positioning module 902 is used for determining a vehicle position feature and a position feature corresponding to the lane image feature; for example, the positioning module 902 may convert the image processing results into a map coordinate system.
Since the image recognition result can provide a basis for the positioning module 902, the positioning accuracy of the lane position feature can be improved. Additionally, the lane position features may also be fused with the image processing results as input to the lane boundary determination module 904. The positioning module 902 may further interact with the map processing module 903 to obtain road characteristics of a road where the vehicle is located, where the road characteristics may include: number of lanes, lane direction characteristics, etc.
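A minimal sketch of such a conversion is given below; the flat-ground (inverse perspective) projection, the pinhole intrinsics, and the planar vehicle pose are assumptions made for this example, since the embodiment does not prescribe a particular projection model.

```python
# Sketch: convert an image feature on the road surface into map coordinates.
import math

def pixel_to_vehicle(u, v, fx, fy, cx, cy, cam_height):
    """Project a ground-plane pixel into vehicle coordinates (x forward, y left),
    assuming a forward-looking camera at height cam_height above a flat road."""
    xr, yr = (u - cx) / fx, (v - cy) / fy          # normalized image coordinates
    if yr <= 0:
        raise ValueError("pixel is above the horizon under the flat-ground assumption")
    forward = cam_height / yr                       # intersect the viewing ray with the ground
    return forward, -forward * xr                   # forward distance, lateral offset (left positive)

def vehicle_to_map(x_fwd, y_left, veh_x, veh_y, veh_heading_rad):
    """Rigid 2-D transform from the vehicle frame to the map frame using positioning data."""
    c, s = math.cos(veh_heading_rad), math.sin(veh_heading_rad)
    return veh_x + c * x_fwd - s * y_left, veh_y + s * x_fwd + c * y_left

if __name__ == "__main__":
    x_fwd, y_left = pixel_to_vehicle(u=700, v=600, fx=1000, fy=1000, cx=640, cy=360, cam_height=1.4)
    print(vehicle_to_map(x_fwd, y_left, veh_x=300.0, veh_y=120.0, veh_heading_rad=math.pi / 2))
```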
The map processing module 903 is configured to determine road features of the road where the vehicle is located; these road features are fused with the image processing results and the lane position features in the lane boundary determination module 904.
The lane boundary determination module 904 and the target lane determination module 905 may be provided integrally or separately. The two modules are used for determining an initial target lane; after the initial target lane is determined, the target lane can be updated in real time by means of lane tracking and/or lane change detection.
In one embodiment of the present application, the data passed by the image processing module 901 to the lane boundary determination module 904 may include: lane image features, which may specifically include, but are not limited to, lane lines, road edges, feature points on both sides of the road, and the like. The data passed by the lane boundary determination module 904 to the image processing module 901 may include: current lane boundaries, lane position features, continuous motion information of the vehicle, the attitude of the image acquisition device, road features, and the like. The attitude of the image acquisition device can serve as a basis for image processing, and the attitude may include: head-up, head-down, and so on.
In an example of the present application, assuming that the number of lanes is 4 and one lane line is disturbed (for example, occluded), the image processing module 901 may attempt to extract an image feature corresponding to the disturbed lane line, and the lane boundary determination module 904 may determine the disturbed lane line according to the various fused data.
In one embodiment of the present application, the data communicated by the positioning module 902 to the lane boundary determination module 904 may include: the vehicle position feature, the continuous motion information of the vehicle, the attitude of the image acquisition device, the lane position feature, the lane change detection result, and the like. The data passed by the lane boundary determination module 904 to the positioning module 902 may include: current lane boundaries, the vehicle position feature, continuous motion information of the vehicle, the attitude of the image acquisition device, road features, and the like.
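To make the exchanged data concrete, the following dataclasses sketch one possible shape of these messages; all field names and types are assumptions introduced for illustration, not structures defined by the embodiment.

```python
# Illustrative message types for the data exchanged between modules 901, 902 and 904.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Point = Tuple[float, float]

@dataclass
class ImageToBoundary:                     # image processing module 901 -> module 904
    lane_lines: List[List[Point]]          # polylines of detected lane lines
    road_edges: List[List[Point]]          # road edge polylines
    side_feature_points: List[Point]       # feature points on both sides of the road

@dataclass
class BoundaryToImage:                     # lane boundary module 904 -> module 901
    current_boundaries: List[List[Point]]  # boundaries resolved so far
    lane_position_feature: Point
    continuous_motion: List[Point]         # recent motion vectors of the vehicle
    camera_attitude: str = "head-up"       # e.g. "head-up" or "head-down"
    road_features: Dict[str, float] = field(default_factory=dict)  # e.g. {"lane_count": 4}
```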
The image processing module 901 is configured to determine, according to a road image corresponding to a vehicle, lane image features corresponding to the vehicle;
the positioning module 902 is configured to determine a lane position feature corresponding to the vehicle according to the positioning data;
the map processing module 903 is used for determining lane number characteristics corresponding to the vehicle according to the map data;
The lane boundary determination module 904 performs data interaction with the image processing module 901, the positioning module 902, and the map processing module 903, so that the lane boundary can be determined more reliably; useful information circulates among the different modules, which can improve their performance and efficiency.
Optionally, the lane boundary determination module 904 may determine lane characteristics, such as the number of lanes, the lane type, and the lane direction of the road segment where the vehicle is located, according to the positioning result (that is, the lane position feature), and combine the image recognition result, the positioning result, and the lane characteristics. The image recognition result may first be preprocessed; the preprocessing may include gross error detection and elimination. Lane boundary calculation is then performed on the preprocessed data, and the calculation result may be information on the lane boundaries in a coordinate system unified with the positioning and map data. If all lane boundaries are determined, the next step of determining the target lane is entered; otherwise, the remaining lane boundaries may be predicted on the basis of the existing first lane boundaries, the predicted lane boundaries are returned to the image processing module 901, the image processing module 901 performs secondary extraction on the corresponding image areas according to the predicted lane boundaries, and the extracted results are added to the lane boundary calculation again until all lane boundaries have been determined. Returning the predicted lane boundaries allows more information to be applied to the determination of the lane boundaries, as shown in the sketch below.
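The skeleton below sketches that loop; the residual threshold, the dictionary-based boundary representation, and the reextract callback are placeholders (assumptions) standing in for the secondary extraction, gross-error elimination, and boundary calculation of the actual modules.

```python
# Skeleton of the iterative lane-boundary resolving loop (illustrative only).

def resolve_lane_boundaries(observations, lane_count, reextract, max_iterations=5):
    """observations: boundary observations in the unified positioning/map coordinate system,
    each {"index": boundary index, "offset": lateral position, "residual": fit residual}.
    reextract(predicted): callback asking the image processing module for a secondary
    extraction around the predicted boundaries; returns new observations."""
    boundaries = {}
    for _ in range(max_iterations):
        cleaned = [o for o in observations if abs(o["residual"]) < 0.5]   # gross error elimination
        for obs in cleaned:                                               # boundary calculation
            boundaries[obs["index"]] = obs["offset"]
        missing = [i for i in range(lane_count + 1) if i not in boundaries]
        if not missing:
            break                                                         # all boundaries determined
        known = sorted(boundaries.items())
        lane_width = (known[-1][1] - known[0][1]) / max(known[-1][0] - known[0][0], 1)
        predicted = {i: known[0][1] + (i - known[0][0]) * lane_width for i in missing}
        observations = reextract(predicted)          # secondary extraction on specific areas
    return boundaries

if __name__ == "__main__":
    initial = [{"index": 0, "offset": 0.0, "residual": 0.1},
               {"index": 1, "offset": 3.6, "residual": 0.2},
               {"index": 3, "offset": 10.9, "residual": 0.1}]
    # Pretend the secondary extraction confirms the predicted boundary 2.
    confirm = lambda pred: [{"index": i, "offset": v, "residual": 0.0} for i, v in pred.items()]
    print(resolve_lane_boundaries(initial, lane_count=3, reextract=confirm))
```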
Optionally, after all lane boundaries are determined, accumulation of the motion vectors of the vehicle may be started, since at that moment the vehicle may be changing lanes or just driving in a vacant area of an intersection; the target lane where the vehicle is located is then judged according to the accumulated motion vectors, which completes the determination of the initial target lane.
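A compact sketch of this accumulation step follows; the half-lane-width threshold and the per-frame lateral displacement representation are assumptions chosen only to illustrate the idea.

```python
# Sketch: accumulate lateral motion vectors to settle the initial target lane.

def update_lane_by_motion(initial_lane, lateral_displacements, lane_width):
    """lateral_displacements: per-frame lateral motion in metres (right positive),
    accumulated since the lane boundaries were determined."""
    accumulated = sum(lateral_displacements)
    if abs(accumulated) <= lane_width / 2:
        return initial_lane                       # no lane change detected
    return initial_lane + int(round(accumulated / lane_width))

if __name__ == "__main__":
    # The vehicle drifts about 3.4 m to the right while crossing an intersection: one lane change.
    print(update_lane_by_motion(initial_lane=2, lateral_displacements=[0.4] * 8 + [0.2], lane_width=3.5))
```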
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Embodiments of the application can be implemented as a system or apparatus employing any suitable hardware and/or software for the desired configuration. Fig. 11 schematically illustrates an example device 1300 that can be used to implement various embodiments described herein.
For one embodiment, fig. 11 illustrates an exemplary apparatus 1300, which apparatus 1300 may comprise: one or more processors 1302, a system control module (chipset) 1304 coupled to at least one of the processors 1302, system memory 1306 coupled to the system control module 1304, non-volatile memory (NVM)/storage 1308 coupled to the system control module 1304, one or more input/output devices 1310 coupled to the system control module 1304, and a network interface 1312 coupled to the system control module 1304. The system memory 1306 may include: instructions 1362, the instructions 1362 being executable by the one or more processors 1302.
Processor 1302 may include one or more single-core or multi-core processors, and processor 1302 may include any combination of general-purpose processors or special-purpose processors (e.g., graphics processors, application processors, baseband processors, etc.). In some embodiments, the device 1300 can function as a server, a target device, a wireless device, etc., as described in embodiments herein.
In some embodiments, device 1300 may include one or more machine-readable media (e.g., system memory 1306 or NVM/storage 1308) having instructions thereon and one or more processors 1302, which in combination with the one or more machine-readable media, are configured to execute the instructions to implement the modules included in the aforementioned means to perform the actions described in embodiments of the present application.
System control module 1304 for one embodiment may include any suitable interface controller to provide any suitable interface to at least one of processors 1302 and/or any suitable device or component in communication with system control module 1304.
System control module 1304 for one embodiment may include one or more memory controllers to provide an interface to system memory 1306. The memory controller may be a hardware module, a software module, and/or a firmware module.
System memory 1306 for one embodiment may be used to load and store data and/or instructions 1362. For one embodiment, system memory 1306 may include any suitable volatile memory, such as suitable DRAM (dynamic random access memory). In some embodiments, system memory 1306 may include: double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
System control module 1304 for one embodiment may include one or more input/output controllers to provide an interface to NVM/storage 1308 and input/output device(s) 1310.
NVM/storage 1308 for one embodiment may be used to store data and/or instructions 1382. NVM/storage 1308 may include any suitable non-volatile memory (e.g., flash memory, etc.) and/or may include any suitable non-volatile storage device(s), e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives, etc.
The NVM/storage 1308 may include storage resources that are physically part of the device on which the apparatus 1300 is installed or may be accessible by the device and not necessarily part of the device. For example, the NVM/storage 1308 may be accessed over a network via the network interface 1312 and/or through the input/output devices 1310.
Input/output device(s) 1310 for one embodiment may provide an interface for device 1300 to communicate with any other suitable device, and input/output devices 1310 may include communication components, audio components, sensor components, and so forth.
Network interface 1312 of one embodiment may provide an interface for device 1300 to communicate with one or more networks and/or with any other suitable apparatus, and device 1300 may communicate wirelessly with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols, such as to access a communication standard-based wireless network, such as WiFi, 2G, or 3G, or a combination thereof.
For one embodiment, at least one of the processors 1302 may be packaged together with logic for one or more controllers (e.g., memory controllers) of the system control module 1304. For one embodiment, at least one of the processors 1302 may be packaged together with logic for one or more controllers of the system control module 1304 to form a System In Package (SiP). For one embodiment, at least one of the processors 1302 may be integrated on the same die as the logic of one or more controllers of the system control module 1304. For one embodiment, at least one of the processors 1302 may be integrated on the same chip with logic for one or more controllers of the system control module 1304 to form a system on a chip (SoC).
In various embodiments, apparatus 1300 may include, but is not limited to: a computing device such as a desktop computing device or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, device 1300 may have more or fewer components and/or different architectures. For example, in some embodiments, device 1300 may include one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
If the display includes a touch panel, the display screen may be implemented as a touch screen display to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The present application also provides a non-transitory readable storage medium, where one or more modules (programs) are stored in the storage medium, and when the one or more modules are applied to an apparatus, the apparatus may be caused to execute instructions of the methods in the present application.
Provided in one example is an apparatus comprising: one or more processors; and one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the apparatus to perform a method as in the embodiments of the present application, which may include: the method shown in fig. 2, fig. 3, fig. 4, fig. 5, fig. 6, fig. 7, fig. 8, or fig. 9.
One or more machine-readable media are also provided in one example, having instructions stored thereon, which when executed by one or more processors, cause an apparatus to perform a method as in embodiments of the application, which may include: the method shown in fig. 2 or fig. 3 or fig. 4 or fig. 5 or fig. 6 or fig. 7 or fig. 8 or fig. 9.
The specific manner in which each module performs operations of the apparatus in the above embodiments has been described in detail in the embodiments related to the method, and will not be described in detail here, and reference may be made to part of the description of the method embodiments for relevant points.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the true scope of the embodiments of the application.
Finally, it should also be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing detailed description has provided a data processing method, a data processing apparatus, a device, and a machine-readable medium, which are provided by the present application, and specific examples are applied herein to explain the principles and embodiments of the present application, and the descriptions of the foregoing examples are only used to help understand the method and the core ideas of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (15)

1. A data processing method, comprising:
determining lane image characteristics corresponding to a vehicle according to a road image corresponding to the vehicle;
determining lane position characteristics corresponding to the vehicle according to the positioning data;
determining lane number characteristics corresponding to the vehicle according to the map data;
determining a lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature and the lane number feature;
determining a target lane corresponding to the vehicle according to the lane boundary;
wherein, the determining the target lane corresponding to the vehicle comprises:
determining lane characteristic points and a first direction respectively corresponding to a plurality of lane boundaries;
determining a target lane corresponding to the vehicle according to the relation between the first direction and the second direction; the second direction is the direction corresponding to the lane characteristic point and the vehicle characteristic point.
2. The method of claim 1, wherein the determining the lane boundary corresponding to the vehicle comprises:
obtaining two lane feature points according to the lane image features; the two lane feature points belong to different first lane boundaries;
determining the distance between the two lane feature points according to the lane position features;
and determining a second lane boundary corresponding to the vehicle according to the distance between the two lane feature points and the lane number feature.
3. The method of claim 1, wherein the lane image features comprise: a contour line of a road peripheral facility; and wherein the determining the lane boundary corresponding to the vehicle comprises:
determining the width of the road according to the contour line of the facilities around the road and the lane position characteristics;
and determining the lane boundary corresponding to the vehicle according to the road width and the lane number characteristics.
4. The method of claim 1, wherein the determining the target lane for the vehicle comprises:
and determining a target lane corresponding to the vehicle according to the lane boundary and the continuous motion information corresponding to the vehicle.
5. The method of claim 1, further comprising:
determining a latest lane image feature and a latest lane position feature corresponding to the vehicle according to a road image corresponding to the vehicle and the current lane boundary under the condition that the current lane boundary corresponding to the vehicle is incomplete;
determining a latest lane boundary corresponding to the vehicle according to the latest lane image feature, the latest lane position feature and the lane number feature corresponding to the vehicle;
and determining a target lane corresponding to the vehicle according to the latest lane boundary.
6. The method of claim 1, further comprising:
determining a predicted lane boundary under the condition that a current lane boundary corresponding to the vehicle is incomplete;
determining the latest lane image feature and the latest lane position feature corresponding to the vehicle according to the road image corresponding to the vehicle, the current lane boundary and the predicted lane boundary;
determining a latest lane boundary corresponding to the vehicle according to the latest lane image feature, the latest lane position feature and the lane number feature corresponding to the vehicle;
and determining a target lane corresponding to the vehicle according to the latest lane boundary.
7. The method according to any one of claims 1 to 6, wherein the lane image features comprise at least one of the following features:
lane feature points, lane feature lines, and lane feature areas.
8. The method of claim 7, wherein the lane feature points comprise at least one of the following feature points:
end points of lane boundaries; and
an intersection between the lane boundary and a perpendicular to the lane boundary.
9. The method of claim 7, wherein the lane characteristic line comprises at least one of the following characteristic lines:
a lane line;
a roadside line;
the contour line of the road edge facility;
contour lines of above-road facilities; and
and a tunnel portal contour line.
10. The method of claim 7, wherein the lane feature area comprises at least one of:
a zebra crossing region;
green belt areas; and
a vehicle area.
11. A data processing apparatus, comprising:
the image processing module is used for determining lane image characteristics corresponding to a vehicle according to a road image corresponding to the vehicle;
the positioning module is used for determining the lane position characteristics corresponding to the vehicle according to the positioning data;
the map processing module is used for determining the lane number characteristics corresponding to the vehicle according to the map data;
the lane boundary determining module is used for determining a lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature and the lane number feature; and
the target lane determining module is used for determining a target lane corresponding to the vehicle according to the lane boundary;
wherein the target lane determination module comprises:
the characteristic point and direction determining module is used for determining lane characteristic points and a first direction which correspond to a plurality of lane boundaries respectively;
the second target lane determining module is used for determining a target lane corresponding to the vehicle according to the relation between the first direction and the second direction; the second direction is the direction corresponding to the lane characteristic point and the vehicle characteristic point.
12. An apparatus, comprising:
one or more processors; and
one or more machine readable media having instructions stored thereon that, when executed by the one or more processors, cause data processing apparatus to perform the method recited by one or more of claims 1-10.
13. One or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause data processing apparatus to perform the method recited by one or more of claims 1-10.
14. A navigation method, comprising:
determining a lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature and the lane number feature corresponding to the vehicle; the lane image features are determined according to road images corresponding to the vehicles, the position features are determined according to positioning data, and the lane number features are determined according to map data;
determining a target lane corresponding to the vehicle according to the lane boundary;
determining navigation information corresponding to the vehicle according to the target lane;
wherein, the determining the target lane corresponding to the vehicle comprises:
determining lane characteristic points and a first direction respectively corresponding to a plurality of lane boundaries;
determining a target lane corresponding to the vehicle according to the relation between the first direction and the second direction; the second direction is the direction corresponding to the lane characteristic point and the vehicle characteristic point.
15. A driving assist method characterized by comprising:
determining a lane boundary corresponding to the vehicle according to the lane image feature, the lane position feature and the lane number feature corresponding to the vehicle; the lane image features are determined according to road images corresponding to the vehicles, the position features are determined according to positioning data, and the lane number features are determined according to map data;
determining a target lane corresponding to the vehicle according to the lane boundary;
determining auxiliary driving information corresponding to the vehicle according to the target lane;
wherein the determining the target lane corresponding to the vehicle comprises:
determining lane characteristic points and a first direction respectively corresponding to a plurality of lane boundaries;
determining a target lane corresponding to the vehicle according to the relation between the first direction and the second direction; the second direction is the direction corresponding to the lane characteristic point and the vehicle characteristic point.
CN201811519987.2A 2018-12-12 2018-12-12 Data processing method, device, equipment and machine readable medium Active CN111311902B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201811519987.2A CN111311902B (en) 2018-12-12 2018-12-12 Data processing method, device, equipment and machine readable medium
TW108130585A TW202033932A (en) 2018-12-12 2019-08-27 Data processing method, apparatus, device and machine readable medium
PCT/CN2019/123214 WO2020119567A1 (en) 2018-12-12 2019-12-05 Data processing method, apparatus, device and machine readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811519987.2A CN111311902B (en) 2018-12-12 2018-12-12 Data processing method, device, equipment and machine readable medium

Publications (2)

Publication Number Publication Date
CN111311902A CN111311902A (en) 2020-06-19
CN111311902B true CN111311902B (en) 2022-05-24

Family

ID=71076765

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811519987.2A Active CN111311902B (en) 2018-12-12 2018-12-12 Data processing method, device, equipment and machine readable medium

Country Status (3)

Country Link
CN (1) CN111311902B (en)
TW (1) TW202033932A (en)
WO (1) WO2020119567A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7315904B2 (en) * 2020-06-19 2023-07-27 トヨタ自動車株式会社 vehicle controller
CN111914651A (en) * 2020-07-01 2020-11-10 浙江大华技术股份有限公司 Method and device for judging driving lane and storage medium
CN112115219A (en) * 2020-08-31 2020-12-22 汉海信息技术(上海)有限公司 Position determination method, device, equipment and storage medium
CN114518120A (en) * 2020-11-18 2022-05-20 阿里巴巴集团控股有限公司 Navigation guidance method, road shape data generation method, apparatus, device and medium
CN113286096B (en) * 2021-05-19 2022-08-16 中移(上海)信息通信科技有限公司 Video identification method and system
CN115451982A (en) * 2021-06-09 2022-12-09 腾讯科技(深圳)有限公司 Positioning method and related device
CN113642533B (en) * 2021-10-13 2022-08-09 宁波均联智行科技股份有限公司 Lane level positioning method and electronic equipment
CN117058647B (en) * 2023-10-13 2024-01-23 腾讯科技(深圳)有限公司 Lane line processing method, device and equipment and computer storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10327869A1 (en) * 2003-06-18 2005-01-13 Siemens Ag Navigation system with lane references
CN105674992A (en) * 2014-11-20 2016-06-15 高德软件有限公司 Navigation method and apparatus

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002175599A (en) * 2000-12-05 2002-06-21 Hitachi Ltd Lane position estimating device for precedent vehicle or target
JP4377284B2 (en) * 2004-06-02 2009-12-02 株式会社ザナヴィ・インフォマティクス Car navigation system
JP4437556B2 (en) * 2007-03-30 2010-03-24 アイシン・エィ・ダブリュ株式会社 Feature information collecting apparatus and feature information collecting method
JP4886597B2 (en) * 2007-05-25 2012-02-29 アイシン・エィ・ダブリュ株式会社 Lane determination device, lane determination method, and navigation device using the same
JP4780534B2 (en) * 2009-01-23 2011-09-28 トヨタ自動車株式会社 Road marking line detection device
US9077958B2 (en) * 2010-08-30 2015-07-07 Honda Motor Co., Ltd. Road departure warning system
CN102184535B (en) * 2011-04-14 2013-08-14 西北工业大学 Method for detecting boundary of lane where vehicle is
US8989914B1 (en) * 2011-12-19 2015-03-24 Lytx, Inc. Driver identification based on driving maneuver signature
CN103942959B (en) * 2014-04-22 2016-08-24 深圳市宏电技术股份有限公司 A kind of lane detection method and device
KR101610502B1 (en) * 2014-09-02 2016-04-07 현대자동차주식회사 Apparatus and method for recognizing driving enviroment for autonomous vehicle
US9721471B2 (en) * 2014-12-16 2017-08-01 Here Global B.V. Learning lanes from radar data
US20160209219A1 (en) * 2015-01-15 2016-07-21 Applied Telemetrics Holdings Inc. Method of autonomous lane identification for a multilane vehicle roadway
US10013610B2 (en) * 2015-10-23 2018-07-03 Nokia Technologies Oy Integration of positional data and overhead images for lane identification
CN106056100B (en) * 2016-06-28 2019-03-08 重庆邮电大学 A kind of vehicle assisted location method based on lane detection and target following
CN106740841B (en) * 2017-02-14 2018-07-10 驭势科技(北京)有限公司 Method for detecting lane lines, device and mobile unit based on dynamic control
US10942525B2 (en) * 2017-05-09 2021-03-09 Uatc, Llc Navigational constraints for autonomous vehicles
KR101870229B1 (en) * 2018-02-12 2018-06-22 주식회사 사라다 System and method for determinig lane road position of vehicle
CN108957475A (en) * 2018-06-26 2018-12-07 东软集团股份有限公司 A kind of Method for Road Boundary Detection and device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10327869A1 (en) * 2003-06-18 2005-01-13 Siemens Ag Navigation system with lane references
CN105674992A (en) * 2014-11-20 2016-06-15 高德软件有限公司 Navigation method and apparatus

Also Published As

Publication number Publication date
CN111311902A (en) 2020-06-19
WO2020119567A1 (en) 2020-06-18
TW202033932A (en) 2020-09-16

Similar Documents

Publication Publication Date Title
CN111311902B (en) Data processing method, device, equipment and machine readable medium
EP3343172B1 (en) Creation and use of enhanced maps
CN107328411B (en) Vehicle-mounted positioning system and automatic driving vehicle
CN108253975B (en) Method and equipment for establishing map information and positioning vehicle
CN113554698B (en) Vehicle pose information generation method and device, electronic equipment and storage medium
US20200173803A1 (en) Vision augmented navigation
JP6241422B2 (en) Driving support device, driving support method, and recording medium for storing driving support program
CN108362295A (en) Vehicle route guides device and method
CN109489673A (en) Data-driven map updating system for automatic driving vehicle
JP2018084573A (en) Robust and efficient algorithm for vehicle positioning and infrastructure
CN110389580A (en) Method for planning the drift correction in the path of automatic driving vehicle
KR20130072437A (en) Apparatus and method for recognizing vehicle location using in-vehicle network and image sensor
CN109754636B (en) Parking space cooperative sensing identification and parking assistance method and device
CN110119138A (en) For the method for self-locating of automatic driving vehicle, system and machine readable media
JP5742558B2 (en) POSITION DETERMINING DEVICE, NAVIGATION DEVICE, POSITION DETERMINING METHOD, AND PROGRAM
US11443627B2 (en) Navigation system with parking space identification mechanism and method of operation thereof
JP2015102449A (en) Vehicle self position estimation apparatus and vehicle self position estimation method
CN114248778A (en) Positioning method and positioning device of mobile equipment
US20210048819A1 (en) Apparatus and method for determining junction
JP2007071539A (en) On-vehicle navigation device
JP2009109341A (en) Own-vehicle position recognizer, own-vehicle position recognition program, and navigation apparatus using the same
CN115950441A (en) Fusion positioning method and device for automatic driving vehicle and electronic equipment
JP5742559B2 (en) POSITION DETERMINING DEVICE, NAVIGATION DEVICE, POSITION DETERMINING METHOD, AND PROGRAM
CN114743395A (en) Signal lamp detection method, device, equipment and medium
CN102200444A (en) Real-time augmented reality device and method thereof

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201218

Address after: Room 603, 6 / F, Roche Plaza, 788 Cheung Sha Wan Road, Kowloon, China

Applicant after: Zebra smart travel network (Hong Kong) Limited

Address before: A four-storey 847 mailbox in Grand Cayman Capital Building, British Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant