CN117931962A - Lane-level event reminding method and device - Google Patents

Lane-level event reminding method and device

Info

Publication number
CN117931962A
CN117931962A
Authority
CN
China
Prior art keywords
lane
target
road
road condition
condition image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211268801.7A
Other languages
Chinese (zh)
Inventor
张敏
胡登
周琦
邢腾飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Didi Infinity Technology and Development Co Ltd
Original Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Didi Infinity Technology and Development Co Ltd filed Critical Beijing Didi Infinity Technology and Development Co Ltd
Priority to CN202211268801.7A priority Critical patent/CN117931962A/en
Publication of CN117931962A publication Critical patent/CN117931962A/en
Pending legal-status Critical Current

Landscapes

  • Traffic Control Systems (AREA)

Abstract

According to embodiments of the present disclosure, a method, apparatus, electronic device, computer storage medium, and computer program product for lane-level event alerting are provided. The method described herein comprises: acquiring at least one road condition image of a target road; identifying a lane-level event associated with a target lane in the target road based on the at least one road condition image; and providing a target device with descriptive information about the lane-level event, the descriptive information indicating at least an event type of the lane-level event and a lane position of the target lane in the target road. On this basis, embodiments of the present disclosure can determine lane-level events in a road in real time based on road condition images and cause associated devices to present reminders regarding those events, thereby assisting user travel more effectively.

Description

Lane-level event reminding method and device
Technical Field
Example embodiments of the present disclosure relate generally to the field of computers, and in particular, relate to a method, apparatus, electronic device, computer storage medium, and computer program product for lane-level event alerting.
Background
With the development of computer technology, various Internet services bring greater convenience to people's lives. Map and navigation services are among the services people use most frequently, and users generally wish to obtain auxiliary travel information through them.
In recent years, high-precision map and navigation services have been attracting increasing attention. People also increasingly desire to obtain information about relevant traffic events in real time, with richer descriptions of those events, to assist their travel.
Disclosure of Invention
In a first aspect of the present disclosure, a method for lane-level event alerting is provided. The method comprises the following steps: acquiring at least one road condition image of a target road; identifying a lane-level event associated with a target lane in a target road based on at least one road condition image; and providing the target device with descriptive information about the lane-level event, the descriptive information indicating at least an event type of the lane-level event and a lane position of the target lane in the target road.
In a second aspect of the present disclosure, an apparatus for lane-level event alerting is provided. The device comprises: the acquisition module is configured to acquire at least one road condition image of the target road; an identification module configured to identify a lane-level event associated with a target lane in a target road based on at least one road condition image; and a control module configured to provide the target device with descriptive information about the lane-level event, the descriptive information indicating at least an event type of the lane-level event and a lane position of the target lane in the target road.
In a third aspect of the present disclosure, there is provided an electronic device comprising: a memory and a processor; wherein the memory is for storing one or more computer instructions, wherein the one or more computer instructions are executable by the processor to implement a method according to the first aspect of the present disclosure.
In a fourth aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon one or more computer instructions, wherein the one or more computer instructions are executed by a processor to implement a method according to the first aspect of the present disclosure.
In a fifth aspect of the present disclosure, there is provided a computer program product comprising computer executable instructions which, when executed by a processor, implement a method according to the first aspect of the present disclosure.
Embodiments of the present disclosure can determine lane-level events (e.g., a lane traffic accident, lane traffic control, lane construction, etc.) occurring in a road based on road condition images of the road, and can provide descriptive information about those events to the users who need it, thereby improving the granularity of traffic information and assisting user travel more effectively.
Drawings
The above and other features, advantages and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, wherein like or similar reference numerals designate like or similar elements, and wherein:
FIG. 1 illustrates a schematic diagram of an example environment in which embodiments in accordance with the present disclosure may be implemented;
FIG. 2 illustrates a schematic diagram of an example process of lane-level event reminding according to some embodiments of the present disclosure;
FIG. 3 illustrates a schematic block diagram of an apparatus for lane-level event alerting in accordance with some embodiments of the present disclosure;
Fig. 4 illustrates a block diagram of an electronic device capable of implementing various embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
In describing embodiments of the present disclosure, the term "comprising" and its variants should be taken to be open-ended, i.e., "including, but not limited to." The term "based on" should be understood as "based at least in part on." The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment." The terms "first," "second," and the like may refer to different objects or to the same object. Other explicit and implicit definitions may also be included below.
As discussed above, high-precision map and navigation services are increasingly becoming a focus of attention. People increasingly desire to obtain information about relevant traffic events in real time, with richer descriptions of those events, to assist their travel.
Especially for map and navigation services, people are no longer satisfied with reminders of road-level events, such as congestion or an accident on a certain road; they increasingly expect more fine-grained, lane-level information to facilitate their travel decisions or the lane-level path adjustments implemented by a lane-level navigation system.
In view of this, embodiments of the present disclosure provide a scheme for lane-level information reminding. According to this scheme, first, at least one road condition image of the target road may be acquired. Further, lane-level events associated with a target lane in the target road, such as traffic accidents in the target lane, target lane traffic regulations, target lane construction, etc., may be identified based on the at least one road condition image.
Further, descriptive information about the lane-level event may be provided to a target device, the descriptive information indicating at least an event type of the lane-level event and a lane position of the target lane in the target road.
Thus, embodiments of the present disclosure can determine a lane-level event (e.g., a lane traffic accident, lane traffic control, lane construction, etc.) occurring in a road based on a road condition image of the road, and can provide descriptive information about the event to the users who need it, thereby improving the granularity of traffic information and assisting user travel more effectively.
Various example implementations of this scheme will be described in detail below with reference to the accompanying drawings.
Example Environment
FIG. 1 illustrates a block diagram of an example environment 100 in which embodiments of the present disclosure can be implemented. As shown in FIG. 1, the environment 100 includes a vehicle 120 traveling on a target road 110. Herein, the target road 110 is intended to represent a road section of a predetermined length, and does not necessarily represent the entirety of a road bearing the same name. For example, the target road 110 may be a 200-meter segment of a one-kilometer road named "XX Road."
In some implementations, an image capturing device 125 may be mounted on the vehicle 120 to capture a road condition image 150 of the target road 110. The image capturing device 125 may be, for example, an in-vehicle device mounted on the vehicle 120, such as a drive recorder.
Alternatively, the image capturing device 125 may also be a smart device with a camera, such as a smart phone, tablet, smart watch, or smart glasses, etc. Such a smart device may be appropriately installed, for example, for capturing a road condition image 150 of the target road 110.
In some embodiments, such a smart device may be a mobile device associated with the vehicle 120, such as a smart phone used by a driver or passenger in the vehicle 120. The vehicle 120 may also be, for example, a vehicle providing travel services, such as a ride-hailing vehicle or a taxi. The driver or passenger may trigger the capture or upload of the road condition image 150 through an application interface of the travel service, for example.
In some embodiments, the road condition image 150 of the target link 110 may also be captured by the road side device 140 associated with the target link 110. For example, the roadside apparatus 140 may be installed at one side of the target road 110 for capturing a road condition image 150 of a specific area in the target road 110.
Further, such road condition images 150 may be used to identify lane-level events associated with a target lane in the target road 110. Such lane-level events may include, for example, a traffic accident, construction, or traffic control occurring in a particular lane.
The identification of such lane-level events may be implemented, for example, by one or more of an in-vehicle device associated with the vehicle 120, a mobile device associated with the vehicle 120, the roadside device 140, and the cloud device 160, operating alone or in coordination.
Further, descriptive information about the lane-level event may be provided to the target device 170. It should be appreciated that although the figure shows the descriptive information being sent to the target device 170 via the cloud device 160, this is merely exemplary. In the case where the identification of the lane-level event is performed by the in-vehicle device, the mobile device, or the roadside device 140, that device may also transmit the descriptive information directly to the target device 170 without passing through the cloud device 160; or it may send the descriptive information to the cloud device 160 to be forwarded to the target device 170. For example, the descriptive information may indicate that a rear-end collision has occurred between vehicle 130-1 and vehicle 130-2 in the rightmost lane of the target road 110.
In some embodiments, the target device 170 may include any suitable electronic device, such as a smart phone, tablet, notebook, desktop, smart watch, smart glasses, in-vehicle device, or the like.
In some embodiments, the target device 170 may have, for example, a display for providing the graphical interface 180 to the user. As shown in fig. 1, in the graphical interface 180, the target device 170 may, for example, present the user with descriptive information 184 about the lane-level event to indicate the event type of the lane-level event (e.g., rear-end accident) and the lane position (e.g., rightmost lane) of the target lane in the road 110 where the lane-level event occurred.
In some embodiments, as shown in FIG. 1, the target device 170 may also associatively present at least a partial image 182 of the road condition image 150 to enable the user to more intuitively learn about the lane-level event.
In some embodiments, descriptive information 184 may also be presented to the user in other suitable manners, e.g., may be presented to the user as text information, voice information, image information, and/or video information.
In some embodiments, the target device 170 may also refrain from presenting the descriptive information and instead utilize it to conduct lane-level navigation planning or to control the travel of an autonomous vehicle. For example, the target device 170 may utilize a navigation application to generate or adjust a lane-level navigation path based on the descriptive information of the lane-level event. Alternatively, the target device 170 may generate or adjust the travel path of an autonomous vehicle associated with the target device based on that descriptive information.
The identification and provision process of the lane-level event will be described in detail below.
Example procedure
Fig. 2 illustrates a flow chart of a process 200 for lane-level information reminding in accordance with various embodiments of the present disclosure. The process 200 may be performed by suitable electronic devices, which may include, for example, an on-board device of the vehicle 120 as shown in fig. 1, a mobile device associated with the vehicle 120, a roadside device 140, a cloud device 160, or a combination of the above.
As shown in fig. 2, at block 210, the electronic device obtains at least one road condition image 150 of the target road 110.
In some embodiments, the road condition image 150 may be captured by an in-vehicle device onboard a first vehicle traveling in the target road. For example, an onboard trip recorder of the vehicle 120 may capture the road condition image 150.
Alternatively or additionally, the road condition image 150 may be captured by a mobile device associated with a second vehicle traveling in the target road 110. For example, a driver or passenger in the vehicle 120 may utilize a mobile device to capture the road condition image 150.
Alternatively or additionally, the road condition image 150 may also be captured by a road side device 140 associated with the target road 110.
In some embodiments, where the road condition image 150 needs to be transmitted over a network to a remote device for lane-level event identification, a first device for capturing the road condition image 150 may desensitize at least one item of sensitive information in the road condition image 150 and transmit a desensitized version of the road condition image 150 to the remote device for identifying lane-level events.
In some embodiments, such sensitive information may include, for example, vehicle identifications, human face portions, and/or other data in the road condition image 150 that needs to be desensitized according to legal regulations.
For example, the first device may identify a vehicle logo (e.g., license plate number) and/or a face portion in the road condition image 150 and desensitize the vehicle logo and/or face portion. For example, such desensitization processing may be a mosaic processing of the vehicle identification and/or the human face portion, replacement of the vehicle identification with a default vehicle identification that does not reveal actual information, replacement of the human face portion with a cartoon character, or the like.
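As a concrete illustration of the desensitization step, the sketch below pixelates a rectangular region of an image (e.g., a region previously located by a plate or face detector) by flattening each tile to its mean value. The function name, the grayscale list-of-rows image format, and the upstream detector are illustrative assumptions, not part of the disclosure.

```python
def pixelate_region(image, box, block=8):
    """Mosaic the region box = (x, y, w, h) of `image`, given as a list of
    rows of pixel values, by flattening each block x block tile to its mean.
    This hides a license plate or face while leaving the rest of the frame
    usable for lane-level event identification."""
    x, y, w, h = box
    out = [row[:] for row in image]  # copy so the original stays intact
    for by in range(y, y + h, block):
        for bx in range(x, x + w, block):
            ys = range(by, min(by + block, y + h))
            xs = range(bx, min(bx + block, x + w))
            mean = sum(out[r][c] for r in ys for c in xs) // (len(ys) * len(xs))
            for r in ys:
                for c in xs:
                    out[r][c] = mean  # every pixel in the tile gets the mean
    return out
```

In practice the box would come from a detector and the image would be a multi-channel array, but the tiling logic is the same.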
With continued reference to fig. 2, at block 220, the electronic device identifies a lane-level event associated with a target lane in the target road 110 based on the at least one road condition image 150.
In some embodiments, the electronic device may utilize an appropriate pattern recognition method to determine the associated lane-level event based on the at least one road condition image 150. For example, the electronic device may utilize an appropriate machine learning model to process at least one road condition image 150 and output a corresponding lane-level event.
Alternatively, the electronic device may identify particular visual elements in the at least one road condition image 150 using, for example, an image recognition algorithm, and determine the lane-level event based on the identified elements. For example, the electronic device may identify a warning triangle in the road condition image to determine that a traffic accident has occurred, or identify a traffic cone or a no-entry sign in the road condition image to determine that a specific lane is closed.
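A minimal sketch of this visual-element approach: a hypothetical object detector emits labels for a road condition image, and a rule table maps cue labels (warning triangle, traffic cone, no-entry sign) to event types. The label names and event-type strings are invented for illustration; a deployed system would use the classes of its own trained detector.

```python
# Hypothetical mapping from detected visual cues to lane-level event types.
EVENT_RULES = {
    "warning_triangle": "traffic_accident",  # triangle placed behind a stopped car
    "traffic_cone": "lane_closure",          # cones marking off a lane
    "no_entry_sign": "lane_closure",
    "excavator": "construction",
}

def classify_lane_event(detected_labels):
    """Map the object labels detected in one road condition image to a
    lane-level event type, or None when no known cue is present."""
    for label in detected_labels:
        if label in EVENT_RULES:
            return EVENT_RULES[label]
    return None
```

A machine-learning model, as mentioned above, could replace this rule table entirely; the table simply makes the cue-to-event logic explicit.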
It should be appreciated that the lane-level event may also be determined based on the road condition image 150 in other suitable manners, and the present disclosure is not intended to be limited to the manner in which the lane-level event is identified.
In some embodiments, the electronic device needs to determine the target lane where the lane-level event occurs. Specifically, the electronic device may identify a lane line in the at least one road condition image 150 and determine a lane position of the target lane in the target road based on the identified lane line.
For example, the electronic device may identify a lane line based on the at least one road condition image 150 and determine that the target road 110 includes two lanes according to the identified lane line, and may further determine that the target lane where the lane-level event occurs is the rightmost lane in the target road 110.
In some cases, at least one road condition image 150 may not cover all lanes of the target road 110, or some lane lines may be blocked. Accordingly, in the event that at least one road condition image 150 is acquired by a capture device associated with the vehicle 120, the electronic device may first determine the lane in which the vehicle 120 is located.
It should be appreciated that any suitable manner may be employed to determine the lane in which the vehicle 120 is currently located. For example, the vehicle 120 itself may have lane-level positioning capability that may determine that the vehicle 120 is currently traveling in the leftmost lane of the target road 110.
Further, the electronic device may determine a deviation of the lane in which the lane-level event is located relative to the lane in which the vehicle 120 is traveling based on the lane lines identified in the road condition image 150. For example, taking fig. 1 as an example, the electronic device may determine that a lane-level event occurred in a lane to the right of the lane in which the vehicle 120 is traveling, for example, based on the road condition image 150.
Further, the electronic device may determine the lane position of the target lane in the target road based on the lane in which the vehicle 120 is traveling and the determined deviation. For example, the electronic device may determine that the vehicle 120 is traveling in the left first lane and that the lane-level event occurs one lane to its right. Thus, the electronic device may determine that the target lane in which the lane-level event occurred is the left second lane.
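The ego-lane-plus-deviation logic above can be sketched as a small helper. The 1-based, left-counted indexing convention is an assumption chosen to match the "left first lane / left second lane" wording; the function name is illustrative.

```python
def target_lane_position(ego_lane, offset, lane_count):
    """Combine the capturing vehicle's own lane with the observed deviation.

    ego_lane:   1-based lane index of the vehicle, counted from the left
                (obtained, e.g., from the vehicle's lane-level positioning).
    offset:     lanes to the right (+) or left (-) at which the event was
                observed relative to the vehicle, from the identified lane lines.
    lane_count: total number of lanes identified in the target road.
    """
    target = ego_lane + offset
    if not 1 <= target <= lane_count:
        raise ValueError("observed event lies outside the identified lanes")
    return target
```

For the example in the text, a vehicle in the left first lane observing an event one lane to its right yields the left second lane.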
In some embodiments, the electronic device may also determine a target location for the lane-level event to occur. Specifically, the electronic device may determine a first location associated with a capture device for capturing at least one road condition image 150 and determine a target location for the lane-level event occurrence based on the first location. For example, the electronic device may acquire location information of an in-vehicle device, a mobile device, or a roadside device for capturing at least one road condition image 150.
In some embodiments, the capture device may be associated with a service vehicle providing travel services. For example, the capture device may be a driving recorder onboard a ride-hailing vehicle or taxi, or a mobile device used by a driver or passenger in such a vehicle.
Further, the electronic device may determine an order associated with the service vehicle and determine the first location of the capture device based on track information associated with the order. It will be appreciated that the service vehicle, or a terminal device associated with it, may periodically report location information as track information for the order, provided the user has consented and the law allows it.
Further, the electronic device may determine the relative position of the lane-level event with respect to the first location based on the at least one road condition image. For example, the electronic device may determine the relative position of the event with respect to the capture device based on the image coordinates of the portion of the at least one road condition image 150 corresponding to the event, and a conversion from those image coordinates to a spatial coordinate system centered on the capture device. Accordingly, the electronic device may determine the target location at which the lane-level event occurred based on the first location and the relative position.
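The final step, offsetting the capture device's geographic position by the event's relative position, might look like the following, assuming the relative position has already been expressed in east/north metres in the device-centered frame. The equirectangular approximation used here is adequate over the tens of metres separating a capture device from the event it photographs; the function name is an assumption.

```python
import math

def offset_position(lat, lon, east_m, north_m):
    """Shift a WGS-84 position (degrees) by a small local offset in metres,
    yielding the approximate geographic location of the observed event."""
    R = 6371000.0  # mean Earth radius in metres
    dlat = math.degrees(north_m / R)
    # longitude degrees shrink with latitude, hence the cos() correction
    dlon = math.degrees(east_m / (R * math.cos(math.radians(lat))))
    return lat + dlat, lon + dlon
```

For larger offsets or high-precision mapping, a proper geodesic library would replace this approximation.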
In this manner, embodiments of the present disclosure are able to determine a lane-level event occurring in a road based on a road condition image of the road, including the geographic location, lane position, event type, etc. of the event.
In some embodiments, considering that a lane-level event determined from a road condition image may be misidentified or lack descriptive information, the electronic device may also have the determined lane-level event audited.
In some embodiments, initial information of the lane-level event determined based on the road condition image 150 may be provided to an auditor for auditing. Such initial information may include, for example: an initial event type and/or an initial lane position, etc.
Further, to facilitate auditing by the auditor, the road condition image 150 and/or a desensitized version of the road condition image 150 may also be provided to the auditor for auditing by the auditor.
In some embodiments, the electronic device may receive audit information from an auditor. In some embodiments, the audit information may indicate confirmation or modification of the initial event type of the lane-level event. For example, the auditor may confirm that the event type (e.g., traffic accident) identified based on the road condition image 150 is accurate. Alternatively, the auditor may modify the event type identified based on the road condition image, for example, from traffic control to construction.
Alternatively or additionally, the audit information may also indicate confirmation or modification of the initial lane position with respect to the lane-level event. For example, the auditor may determine that the initial lane position (e.g., the rightmost lane) identified based on the road condition image 150 is accurate. Alternatively, the auditor can also modify the lane position of the lane-level event, for example, from the initial rightmost lane to the right second lane.
Alternatively or additionally, the audit information may also indicate additional descriptive information about the lane-level event, wherein the additional descriptive information is created by the auditor based on the at least one road condition image. For example, the auditor may determine more rich descriptive information about the lane-level event based on the road condition image 150 or a desensitized version of the road condition image 150.
For example, for a traffic accident, the auditor may further determine, based on the road condition image 150, that it is a rear-end collision, e.g., a three-vehicle rear-end collision. As yet another example, the auditor may add a more detailed description of the accident, e.g., "a black SUV collided with a bicycle."
Such additional descriptive information may further assist the user in knowing more about the lane-level event.
With continued reference to fig. 2, at block 230, the electronic device provides descriptive information about the lane-level event to the target device 170, wherein the descriptive information indicates at least an event type of the lane-level event and a lane position of the target lane in the target road 110.
In some embodiments, the electronic device may generate the descriptive information of the lane-level event from initial information automatically generated based on the road condition image 150 and/or audit information obtained from an auditor.
In some embodiments, the electronic device may provide descriptive information of lane-level events to all terminal devices (e.g., cell phones, tablet computers, notebook computers, desktop computers, vehicle-mounted devices, etc.) that use the map service or navigation service.
Alternatively, the electronic device may provide the descriptive information about the lane-level event only to specific target devices 170.
In some embodiments, the target device 170 that receives the descriptive information of the lane-level event may include, for example, a first terminal device whose map viewing interface corresponds to a geographic area containing the target location at which the lane-level event occurred. For example, when a user is viewing a specific area with a terminal device and a lane-level event occurs in that area, descriptive information about the event may be provided to that terminal device.
In some embodiments, the first terminal device may receive the descriptive information of the lane-level event only if the zoom level of the map viewing interface is greater than a threshold level.
For example, if the zoom level of the map is low, such as when the user is viewing a map of an entire city, such detailed lane-level information may be of limited use to the user. In contrast, if the zoom level is high, for example when the user is viewing the condition of a specific road, the electronic device may provide that terminal device with descriptive information about lane-level events occurring in the area.
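The zoom-level gating just described could be sketched as below. The threshold value and the function name are assumptions: the disclosure only requires that the zoom level exceed some threshold, without fixing a particular scale.

```python
LANE_EVENT_ZOOM_THRESHOLD = 14  # hypothetical: roughly street-level detail

def should_push_lane_event(viewport_contains_event, zoom_level):
    """Deliver lane-level descriptive information only when the user's map
    viewport covers the event location AND is zoomed in past the threshold,
    so city-scale views are not cluttered with lane detail."""
    return viewport_contains_event and zoom_level > LANE_EVENT_ZOOM_THRESHOLD
```

The same predicate could run on the server (to decide whether to push) or on the terminal device (to decide whether to render).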
In some embodiments, the target device 170 that receives the description information of the lane-level event may include, for example, a second terminal device, wherein the planned route or the remaining route in the navigation interface of the second terminal device approaches the target location where the lane-level event occurred.
For example, if the user is planning a navigation route using the terminal device and the planned navigation route approaches the target location at which the lane-level event occurs, the second terminal device may acquire descriptive information of the lane-level event associated with the route to be planned.
As another example, if the user is navigating using the terminal device and the remaining route of the navigation approaches the target location where the lane-level event occurs, the second terminal device may acquire the description information of the lane-level event.
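A sketch of the proximity check between a planned or remaining route and the event location, measuring haversine distance over sampled route points. The radius and function names are assumptions; a production system would measure distance to route segments rather than vertices.

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points in degrees."""
    R = 6371000.0
    la1, lo1, la2, lo2 = map(math.radians, (*p, *q))
    a = (math.sin((la2 - la1) / 2) ** 2
         + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def route_approaches_event(route, event, radius_m=200.0):
    """True if any point of the planned or remaining route comes within
    radius_m of the lane-level event's target location."""
    return any(haversine_m(p, event) <= radius_m for p in route)
```

When this returns True for a terminal device's route, the descriptive information of the event would be delivered to that device.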
Further, the target device 170 may provide descriptive information, for example, at a map viewing interface or a navigation interface, wherein the descriptive information includes at least one of: text information, voice information, image information, and video information. In some embodiments, the target device 170 may provide the user with descriptive information of the lane-level event, for example, if the zoom level of the map viewing interface is greater than a threshold level.
In some embodiments, as shown in FIG. 1, descriptive information 184 may be presented in association with a target location 186 at which a lane-level event occurs.
In some embodiments, as shown in FIG. 1, the target device 170 may also associatively present at least a partial image 182 of the road condition image 150 to enable the user to more intuitively learn about the lane-level event.
In some embodiments, such at least partial image 182 may include a desensitized version of the at least one road condition image 150 such that at least one item of sensitive information in the at least one road condition image 150 is desensitized presented at the target device 170.
In some embodiments, such sensitive information may include, for example, vehicle identifications, human face portions, and/or other data in the road condition image 150 that needs to be desensitized according to legal regulations.
In some embodiments, the electronic device may utilize the descriptive information in addition to or in lieu of presenting the descriptive information to conduct lane-level navigation planning or to control travel of the autonomous vehicle. For example, the target device 170 may utilize a navigation application to generate or adjust a lane-level navigation path based on descriptive information of lane-level events. For example, the target device 170 may adjust the lane-level navigation path to avoid the target lane where the lane-level event occurred.
As another example, the target device 170 may also generate or adjust a travel path of an autonomous vehicle associated with the target device based on the descriptive information of the lane-level event. For example, the target device 170 may control the autonomous vehicle to detour the lane where the lane-level event occurs.
In this manner, embodiments of the present disclosure can determine a lane-level event (e.g., a lane traffic accident, lane traffic control, lane construction, etc.) occurring in a road based on a road condition image of the road, and can provide descriptive information of the lane-level event to a user who needs it, thereby improving the granularity of traffic information and more effectively assisting the user's travel. Furthermore, such descriptive information can also assist automated lane-level navigation or autonomous-vehicle path planning.
Example Apparatus and Device
Embodiments of the present disclosure also provide corresponding apparatus for implementing the above-described methods or processes. Fig. 3 illustrates a schematic block diagram of an apparatus 300 for lane-level event alerting according to some embodiments of the present disclosure.
As shown in fig. 3, the apparatus 300 includes an acquisition module 310 configured to acquire at least one road condition image of a target road.
The apparatus 300 further includes an identification module 320 configured to identify a lane-level event associated with a target lane in the target road based on the at least one road condition image.
The apparatus 300 further comprises a providing module 330 configured to provide the target device with descriptive information about the lane-level event, the descriptive information indicating at least an event type of the lane-level event and a lane position of the target lane in the target road.
In some embodiments, the acquisition module 310 is configured to perform at least one of: acquiring a road condition image captured by a vehicle-mounted device carried by a first vehicle traveling on the target road; acquiring a road condition image captured by a mobile device associated with a second vehicle traveling on the target road; or acquiring a road condition image captured by a roadside device associated with the target road.
In some embodiments, the identification module 320 is further configured to: determine the lane position of the target lane in the target road based on the at least one road condition image.
In some embodiments, the identification module 320 is further configured to: mark lane lines in the at least one road condition image; and determine the lane position of the target lane in the target road based on the lane lines.
In some embodiments, the at least one road condition image is captured by a capture device associated with a target vehicle, and the identification module 320 is further configured to: determine a first lane in which the target vehicle travels; determine an offset of the lane in which the lane-level event is located relative to the first lane based on the identified lane lines; and determine the lane position of the target lane in the target road based on the first lane and the offset.
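The first-lane-plus-offset computation described above can be sketched as follows (0-based indexing from the leftmost lane is an assumption; the disclosure does not fix an indexing convention):

```python
def target_lane_index(ego_lane: int, offset: int, lane_count: int) -> int:
    """Resolve the absolute index of the lane where the event occurs.

    `ego_lane` is the lane the capturing vehicle travels in (0 = leftmost),
    and `offset` is how many lanes the event lies to the right (positive)
    or left (negative), as counted from the identified lane lines.
    """
    index = ego_lane + offset
    if not 0 <= index < lane_count:
        raise ValueError(f"offset {offset} points outside the road")
    return index
```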
In some embodiments, the providing module 330 is further configured to: determine a first location associated with a capture device for capturing the at least one road condition image; determine a target location at which the lane-level event occurs based on the first location; and associate the descriptive information with the target location for presentation by the target device.
In some embodiments, the providing module 330 is further configured to: determine, based on the at least one road condition image, a relative position of the location at which the lane-level event occurs with respect to the first location; and determine the target location based on the first location and the relative position.
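Converting the capture position plus the event's relative displacement into a target geographic position might look like the following sketch. It uses a flat-earth approximation, which is adequate over the short dashcam-to-event distances involved (function name and heading convention are assumptions):

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def offset_position(lat: float, lon: float,
                    forward_m: float, heading_deg: float) -> tuple[float, float]:
    """Project the event position from the capture device's GPS fix.

    (`lat`, `lon`) are in degrees; the fix is shifted by `forward_m` metres
    along `heading_deg` (0 = north, 90 = east) using a small-offset
    flat-earth approximation.
    """
    heading = math.radians(heading_deg)
    d_north = forward_m * math.cos(heading)
    d_east = forward_m * math.sin(heading)
    d_lat = math.degrees(d_north / EARTH_RADIUS_M)
    # Longitude degrees shrink with latitude, hence the cos(lat) factor.
    d_lon = math.degrees(d_east / (EARTH_RADIUS_M * math.cos(math.radians(lat))))
    return lat + d_lat, lon + d_lon
```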
In some embodiments, the capture device is associated with a service vehicle for providing travel services, and the providing module 330 is further configured to: determine the first location associated with the capture device based on track information of an order associated with the service vehicle.
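Determining the capture location from order track information could, for example, interpolate between the two trajectory fixes that bracket the image's capture timestamp (the (timestamp, lat, lon) track schema is an assumption; the disclosure only says the first location is determined from the order's track information):

```python
import bisect

def position_at(track: list[tuple[float, float, float]],
                capture_ts: float) -> tuple[float, float]:
    """Interpolate the vehicle position at the image capture timestamp.

    `track` is a time-sorted list of (timestamp, lat, lon) points from
    the order's trajectory.
    """
    times = [p[0] for p in track]
    i = bisect.bisect_left(times, capture_ts)
    if i == 0:                      # before the first fix
        return track[0][1], track[0][2]
    if i == len(track):             # after the last fix
        return track[-1][1], track[-1][2]
    t0, lat0, lon0 = track[i - 1]
    t1, lat1, lon1 = track[i]
    f = (capture_ts - t0) / (t1 - t0)
    return lat0 + f * (lat1 - lat0), lon0 + f * (lon1 - lon0)
```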
In some embodiments, the providing module 330 is further configured to: cause the target device to present at least a portion of the at least one road condition image.
In some embodiments, at least one item of sensitive information in the at least one road condition image is presented at the target device in desensitized form.
In some embodiments, the at least one road condition image is captured by a first device, and the acquisition module 310 is further configured to: receive, by a second device from the first device, a desensitized version of the at least one road condition image, wherein the second device is a remote device relative to the first device, and the desensitized version causes at least one item of sensitive information in the at least one road condition image to be desensitized.
In some embodiments, the at least one item of sensitive information includes at least one of: a vehicle identifier included in the at least one road condition image; or a face portion included in at least one road condition image.
In some embodiments, the identification module 320 is further configured to: the method includes obtaining audit information about the identified lane-level event, and generating descriptive information based on the audit information.
In some embodiments, the audit information indicates at least one of: a confirmation or modification of an initial event type of the lane-level event, the initial event type being determined based on the at least one road condition image; a confirmation or modification of an initial lane position of the lane-level event, the initial lane position being determined based on the at least one road condition image; or additional descriptive information about the lane-level event, the additional descriptive information being created by an auditor based on the at least one road condition image.
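Folding the auditor's confirmations or modifications into the final descriptive information can be sketched as a simple field-wise override (all field and function names are illustrative; the disclosure does not specify a data model):

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass(frozen=True)
class EventDescription:
    event_type: str      # e.g. "accident", "construction"
    lane_position: int   # index of the target lane in the road
    note: str = ""       # free-text detail

def apply_audit(initial: EventDescription,
                confirmed_type: Optional[str] = None,
                confirmed_lane: Optional[int] = None,
                extra_note: str = "") -> EventDescription:
    """Fold an auditor's confirmation/modification into the description.

    `None` means the auditor left that field as automatically identified;
    any non-None value overrides the initial (image-derived) value.
    """
    return replace(
        initial,
        event_type=confirmed_type if confirmed_type is not None else initial.event_type,
        lane_position=confirmed_lane if confirmed_lane is not None else initial.lane_position,
        note=extra_note or initial.note,
    )
```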
In some embodiments, the target device includes at least one of: a first terminal device whose map viewing interface corresponds to a geographic area that includes the target position where the lane-level event occurs; or a second terminal device whose navigation interface shows a planned route or remaining route passing the target position where the lane-level event occurs.
In some embodiments, the zoom level of the map viewing interface of the first terminal device is greater than the threshold level.
In some embodiments, the providing module 330 is further configured to: enabling the target device to provide descriptive information on a map viewing interface or a navigation interface, wherein the descriptive information comprises at least one of the following: text information, voice information, image information, and video information.
In some embodiments, the providing module 330 is further configured to: the navigation application of the target device is caused to generate or adjust a lane-level path based on the descriptive information of the lane-level event.
In some embodiments, the providing module 330 is further configured to: the method may include causing the target device to generate or adjust a travel path of an autonomous vehicle associated with the target device based on the descriptive information of the lane-level event.
The elements included in apparatus 300 may be implemented in various ways, including software, hardware, firmware, or any combination thereof. In some embodiments, one or more units may be implemented using software and/or firmware, such as machine-executable instructions stored on a storage medium. In addition to or in lieu of machine-executable instructions, some or all of the elements in apparatus 300 may be at least partially implemented by one or more hardware logic components. By way of example and not limitation, exemplary types of hardware logic components that can be used include Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
Fig. 4 illustrates a block diagram of an electronic device/server 400 in which one or more embodiments of the disclosure may be implemented. It should be understood that the electronic device/server 400 illustrated in fig. 4 is merely exemplary and should not be construed as limiting the functionality and scope of the embodiments described herein.
As shown in fig. 4, the electronic device/server 400 is in the form of a general-purpose electronic device. The components of electronic device/server 400 may include, but are not limited to, one or more processors or processing units 410, memory 420, storage 430, one or more communication units 440, one or more input devices 450, and one or more output devices 460. The processing unit 410 may be a real or virtual processor and is capable of performing various processes according to programs stored in the memory 420. In a multiprocessor system, multiple processing units execute computer-executable instructions in parallel to increase the parallel processing capabilities of electronic device/server 400.
The electronic device/server 400 typically includes a number of computer storage media. Such media may be any available media that is accessible by electronic device/server 400 and includes, but is not limited to, volatile and non-volatile media, removable and non-removable media. The memory 420 may be volatile memory (e.g., registers, cache, Random Access Memory (RAM)), non-volatile memory (e.g., Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory), or some combination thereof. Storage device 430 may be a removable or non-removable medium and may include machine-readable media such as flash drives, magnetic disks, or any other medium capable of storing information and/or data (e.g., training data) and accessible within electronic device/server 400.
The electronic device/server 400 may further include additional removable/non-removable, volatile/nonvolatile storage media. Although not shown in fig. 4, a magnetic disk drive for reading from or writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk may be provided. In these cases, each drive may be connected to a bus (not shown) by one or more data medium interfaces. Memory 420 may include a computer program product 425 having one or more program modules configured to perform the various methods or acts of the various embodiments of the present disclosure.
The communication unit 440 enables communication with other electronic devices through a communication medium. Additionally, the functionality of the components of the electronic device/server 400 may be implemented in a single computing cluster or in multiple computing machines capable of communicating over a communication connection. Thus, the electronic device/server 400 may operate in a networked environment using logical connections to one or more other servers, a network Personal Computer (PC), or another network node.
The input device 450 may be one or more input devices such as a mouse, keyboard, trackball, etc. The output device 460 may be one or more output devices such as a display, speakers, printer, etc. The electronic device/server 400 may also communicate, via the communication unit 440 as needed, with one or more external devices (not shown) such as storage devices and display devices, with one or more devices that enable a user to interact with the electronic device/server 400, or with any device (e.g., network card, modem, etc.) that enables the electronic device/server 400 to communicate with one or more other electronic devices. Such communication may be performed via an input/output (I/O) interface (not shown).
According to an exemplary implementation of the present disclosure, a computer-readable storage medium is provided, on which one or more computer instructions are stored, wherein the one or more computer instructions are executed by a processor to implement the method described above.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various implementations of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of implementations of the present disclosure has been provided for illustrative purposes, is not exhaustive, and is not limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various implementations described. The terminology used herein was chosen in order to best explain the principles of each implementation, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand each implementation disclosed herein.

Claims (23)

1. A method for lane-level event alerting, comprising:
Acquiring at least one road condition image of a target road;
Identifying a lane-level event associated with a target lane in the target road based on the at least one road condition image; and
Providing a target device with descriptive information about the lane-level event, the descriptive information indicating at least an event type of the lane-level event and a lane position of the target lane in the target road.
2. The method of claim 1, wherein acquiring at least one road condition image of a target road comprises at least one of:
Acquiring a road condition image captured by a vehicle-mounted device carried by a first vehicle traveling on the target road;
Acquiring a road condition image captured by a mobile device associated with a second vehicle traveling on the target road; or
Acquiring a road condition image captured by a roadside device associated with the target road.
3. The method of claim 1, further comprising:
Determining the lane position of the target lane in the target road based on the at least one road condition image.
4. The method of claim 3, wherein determining the lane position of the target lane in the target road comprises:
Marking a lane line in the at least one road condition image; and
Determining the lane position of the target lane in the target road based on the lane lines.
5. The method of claim 4, wherein the at least one road condition image is captured by a capture device associated with a target vehicle, and determining the lane position of the target lane in the target road based on the lane line comprises:
Determining a first lane in which the target vehicle travels;
Determining an offset of the lane in which the lane-level event is located relative to the first lane based on the identified lane line; and
Determining the lane position of the target lane in the target road based on the first lane and the offset.
6. The method of claim 1, further comprising:
Determining a first location associated with a capturing device for capturing the at least one road condition image;
determining a target location at which the lane-level event occurs based on the first location; and
The descriptive information is associated with the target location for presentation by the target device.
7. The method of claim 6, wherein determining a target location for the lane-level event to occur based on the first location comprises:
Determining a relative position of the lane-level event occurrence relative to the first position based on the at least one road condition image; and
Determining the target position based on the first position and the relative position.
8. The method of claim 6, wherein the capture device is associated with a service vehicle for providing travel services, and determining a first location associated with the capture device comprises:
Determining the first location associated with the capture device based on track information of an order associated with the service vehicle.
9. The method of claim 1, further comprising:
Causing the target device to present at least part of the at least one road condition image.
10. The method of claim 9, wherein at least one item of sensitive information in the at least one road condition image is presented at the target device in desensitized form.
11. The method of claim 1, wherein the at least one road condition image is captured by a first device, and the obtaining the at least one road condition image of the target road comprises:
Receiving, by a second device from the first device, a desensitized version of the at least one road condition image, wherein the second device is a remote device relative to the first device, and the desensitized version causes at least one item of sensitive information in the at least one road condition image to be desensitized.
12. The method of claim 10 or 11, wherein the at least one item of sensitive information comprises at least one of:
A vehicle identifier included in the at least one road condition image; or
A face portion included in the at least one road condition image.
13. The method of claim 1, further comprising:
Acquiring audit information about the identified lane-level event; and
Generating the descriptive information based on the audit information.
14. The method of claim 13, wherein the audit information indicates at least one of:
A confirmation or modification of an initial event type for the lane-level event, the initial event type being determined based on the at least one road condition image;
A confirmation or modification of an initial lane position with respect to the lane-level event, the initial lane position being determined based on the at least one road condition image; or
Additional descriptive information regarding the lane-level event, the additional descriptive information created by an auditor based on the at least one road condition image.
15. The method of claim 1, wherein the target device comprises at least one of:
A first terminal device, wherein a geographic area corresponding to a map viewing interface of the first terminal device comprises a target position where the lane-level event occurs; or
A second terminal device, wherein a planned route or a remaining route in a navigation interface of the second terminal device passes the target position where the lane-level event occurs.
16. The method of claim 15, wherein a zoom level of the map viewing interface of the first terminal device is greater than a threshold level.
17. The method of claim 1, further comprising:
Causing the target device to provide the descriptive information on a map viewing interface or a navigation interface, wherein the descriptive information comprises at least one of the following: text information, voice information, image information, and video information.
18. The method of claim 1, further comprising:
Causing a navigation application of the target device to generate or adjust a lane-level path based on the descriptive information of the lane-level event.
19. The method of claim 1, further comprising:
Causing the target device to generate or adjust a travel path of an autonomous vehicle associated with the target device based on the descriptive information of the lane-level event.
20. An apparatus for lane-level event alerting, comprising:
An acquisition module configured to acquire at least one road condition image of a target road;
An identification module configured to identify a lane-level event associated with a target lane in the target road based on the at least one road condition image; and
A providing module configured to provide a target device with descriptive information about the lane-level event, the descriptive information indicating at least an event type of the lane-level event and a lane position of the target lane in the target road.
21. An electronic device, comprising:
a memory and a processor;
Wherein the memory is for storing one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method of any one of claims 1 to 19.
22. A computer readable storage medium having stored thereon one or more computer instructions, wherein the one or more computer instructions are executed by a processor to implement the method of any of claims 1 to 19.
23. A computer program product comprising computer executable instructions which when executed by a processor implement the method of any one of claims 1 to 19.
CN202211268801.7A 2022-10-17 2022-10-17 Lane-level event reminding method and device Pending CN117931962A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211268801.7A CN117931962A (en) 2022-10-17 2022-10-17 Lane-level event reminding method and device

Publications (1)

Publication Number Publication Date
CN117931962A true CN117931962A (en) 2024-04-26

Family

ID=90759768



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination