Disclosure of Invention
An object of the present disclosure is to provide a driving assistance scheme capable of providing convenience for a user to drive a vehicle.
According to a first aspect of the present disclosure, there is provided a driving assistance method including: identifying environmental information around a vehicle; determining lane information of the vehicle; and generating, based on the environmental information and the lane information, traffic sign information for guiding driving behavior.
Optionally, the traffic sign information is lane-level sign information.
Optionally, the step of generating traffic sign information further comprises: generating, based on navigation information, the environmental information, and the lane information, traffic sign information corresponding to the navigation information.
Optionally, the step of generating traffic sign information corresponding to the navigation information comprises: if the navigation information is a prompt for going straight ahead, generating a straight-ahead sign for guiding the vehicle to go straight along the current lane.
Optionally, the length of the straight-ahead sign varies with the distance between the vehicle and the preceding vehicle.
Optionally, the step of generating traffic sign information corresponding to the navigation information comprises: if the navigation information is a prompt for turning, generating a turn sign for guiding the vehicle to travel from the current lane to the turning position and make the turn.
Optionally, the turn sign is a straight guide sign when the distance between the vehicle and the turning position is greater than a first distance; a combination of the straight guide sign and a turn guide sign when the distance is less than the first distance but greater than a second distance, the second distance being less than the first distance; and the turn guide sign alone when the distance is less than the second distance.
Optionally, the step of generating traffic sign information corresponding to the navigation information comprises: generating, when it is determined from the navigation information that the vehicle needs to travel from the current lane to a target lane, a merge sign for guiding the vehicle from the current lane to the target lane.
Optionally, the method further comprises: performing the step of generating traffic sign information for guiding driving behavior in response to the vehicle arriving at, or being about to arrive at, a predetermined scene.
Optionally, the predetermined scene comprises at least one of: a scene for which the navigation engine generates an enlarged intersection view; an overpass scene; a roundabout scene.
Optionally, the environmental information comprises at least one of: lane lines; vehicles; pedestrians; traffic signs; road surface obstacles.
Optionally, the traffic sign information comprises at least one of: a straight-ahead sign; a curve-following sign; a U-turn sign; a turn sign; a merge sign; a roundabout entry sign; a roundabout exit sign; a highway entry sign; a highway exit sign; an error sign.
Optionally, the method further comprises: the traffic sign information is visually presented.
Optionally, the step of visually presenting the traffic sign information comprises: displaying the traffic sign information superimposed on the live-action road.
Optionally, the step of displaying the traffic sign information superimposed on the live-action road comprises: acquiring a live-action road image; determining rendering parameter information based on the live-action road image; and performing image rendering using the rendering parameter information so that the traffic sign information is displayed superimposed on the live-action road.
Optionally, the step of rendering the image using the rendering parameter information includes: drawing the traffic sign information onto the live-action road image based on the rendering parameter information; and the method further comprises: presenting the rendered live-action road image on a vehicle-mounted display screen.
Optionally, the step of rendering the image using the rendering parameter information includes: drawing the traffic sign information on the vehicle-mounted display screen based on the rendering parameter information so that the traffic sign information displayed on the vehicle-mounted display screen matches the live-action road.
Optionally, the step of determining rendering parameter information includes: identifying image feature points in the live-action road image; determining, based on the image feature points, the relation between road picture size and actual distance in the live-action road image; and generating rendering parameter information based on that relation.
Optionally, the rendering parameter information comprises at least one of: position; rotation angle; size change; duration.
Optionally, the step of identifying environmental information around the vehicle includes: imaging the surroundings of the vehicle; and analyzing the resulting image to identify the environmental information around the vehicle.
Optionally, the step of determining the lane information of the vehicle includes: determining the lane information of the vehicle based on one or more of a signal positioning algorithm, a dead reckoning algorithm, and an environmental feature matching algorithm.
Optionally, the method further comprises: playing voice broadcast information consistent with the traffic sign information.
According to a second aspect of the present disclosure, there is also provided a driving assistance system including: an image acquisition module for imaging the surroundings of the vehicle to obtain an image containing environmental information around the vehicle; an image recognition module for analyzing the image generated by the image acquisition module to recognize the environmental information around the vehicle; a lane positioning module for determining lane information of the vehicle; and a sign generation module for generating traffic sign information for guiding driving behavior based on the environmental information and the lane information.
According to a third aspect of the present disclosure, there is also provided a computing device comprising: a processor; and a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform a method as set forth in the first aspect of the disclosure.
According to a fourth aspect of the present disclosure, there is also provided a non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform the method as set forth in the first aspect of the present disclosure.
The present disclosure generates traffic sign information for guiding driving behavior by means of lane-level positioning information and environmental information around the vehicle, thereby providing convenience for a user driving the vehicle or for unmanned driving. In addition, the scheme can be combined with a navigation tool (such as a navigation engine) to convert navigation information into more refined traffic sign information, so as to provide more accurate and convenient navigation service for users.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The present disclosure provides a driving assistance scheme in which traffic sign information for guiding driving behavior is generated by means of lane-level positioning information and environmental information around the vehicle. The traffic sign information is sign information that can guide driving behavior; for example, it may be sign information for guiding key driving actions such as going straight, turning, merging, making a U-turn, and entering a roundabout. The traffic sign information may be lane-level sign information so as to guide driving behavior at the finer lane level.
The driving assistance scheme can be combined with an existing navigation tool (such as a navigation engine) to convert navigation information into more refined traffic sign information, such as lane-level traffic sign information, so as to provide more accurate and convenient navigation service for users. For example, when the navigation engine generates the navigation prompt "go straight ahead for 100 meters" according to the vehicle position and the navigation path, a straight-ahead sign for guiding the vehicle straight along the current lane may be generated according to the recognized environmental information and the lane information of the vehicle; when the navigation engine generates a turn prompt, a turn sign for guiding the vehicle from the current lane toward the target lane may be generated according to the environmental information and lane information acquired in real time.
Moreover, when combined with an existing navigation tool (such as a navigation engine), the driving assistance scheme of the present disclosure can also, to a certain extent, avoid driving accidents caused by inaccurate navigation. For example, suppose the navigation information suggests traveling 300 meters ahead along the current road, but the road actually ends 100 meters ahead; if the user drove entirely according to the navigation, a traffic accident might result. With the driving assistance scheme of the present disclosure, traffic sign information for proceeding along the current lane is displayed, and when no road is detected ahead, the traffic sign information changes to an error sign to alert the user.
In addition, the driving assistance scheme of the present disclosure may also automatically generate traffic sign information for guiding driving behavior during driving, without depending on navigation information from a navigation engine. For example, it may be determined, from the environmental information and the lane information of the vehicle acquired in real time, whether the current driving behavior violates a traffic rule, and in case of a violation, traffic sign information for guiding the user to make the correct driving action may be generated. As another example, driving errors may be counted from the historical driving behavior of users (either a group of users or a single user) to determine scenes with high driving-error rates (such as complex scenes like overpasses and roundabouts); when the vehicle travels into such a scene, the driving assistance scheme of the present disclosure may be executed automatically to generate lane-level sign information for guiding driving behavior, so as to reduce the user's driving-error rate in that scene. As yet another example, special scenes may be defined according to expert experience, including but not limited to a scene for which the navigation engine generates an enlarged intersection view, an overpass scene, and a roundabout scene; when the vehicle travels into such a special scene, the driving assistance scheme of the present disclosure may be executed automatically to generate traffic sign information for guiding driving behavior, so as to reduce the user's driving-error rate in the special scene. The screening of these special scenes may be determined from expert experience in the early stage, and during execution the driver's error rate may be learned by machine learning so that special scenes are marked automatically.
Further, traffic sign information generated by the driving assistance scheme of the present disclosure can be displayed on vehicle-mounted screens such as a center control screen, an instrument panel, or a HUD, matched to the real road scene, thereby realizing AR display. This helps the user obtain a more intuitive and friendly navigation experience and alleviates the difficulty of interpreting traditional 2D or 3D navigation pictures.
The following further describes aspects of the present disclosure.
[Term Interpretation]
AR: augmented Reality (AR) is a technology for calculating the position and angle of a camera image in real time and adding corresponding images and videos.
Image recognition technology: a technique of using a computer to process, analyze, and understand images so as to recognize targets and objects of various different modes.
Lane positioning: positioning accurate to a specific lane, whereas traditional navigation positioning supports only road-level positioning and navigation.
[Driving Assistance System]
Fig. 1 is a schematic block diagram showing the structure of a driving assistance system according to an embodiment of the present disclosure. The driving assistance system 100 may implement the driving assistance scheme of the present disclosure by combining technologies such as image recognition, lane positioning, and image rendering. The driving assistance system 100 may be mounted in an in-vehicle operating system to provide a driving assistance service for a user or for unmanned driving; for example, it may be combined with a navigation module 140 to provide a more refined navigation service for the user.
FIG. 1 illustrates a number of modules that may be involved in some embodiments. It should be understood that not all of these modules are necessary to implement the technical solution of the present disclosure; as will be apparent from the detailed description of the embodiments below, some scenarios may involve only a subset of them.
The image acquisition module 110 is configured to image the surroundings of the vehicle to obtain an image containing environmental information around the vehicle; the image it captures may be a video or a picture. Optionally, the image acquisition module 110 may be a video signal acquisition module, which may include a right-side road surface acquisition module, a front main road acquisition module, and a left-side road surface acquisition module mounted on the vehicle. By way of example, the video signal acquisition module may be a vehicle-mounted camera that captures the road conditions ahead of and on both sides of the vehicle in real time to obtain a video stream containing environmental information around the vehicle.
The image captured by the image acquisition module 110 may be passed to the image recognition module 120, which may perform image recognition on it to recognize the environmental information around the vehicle. The environmental information mentioned here may be one or more kinds of road condition information, such as the road the vehicle is on and the lane lines, pedestrians, traffic signs, road surface obstacles, and the like on that road.
The present disclosure mainly generates lane-level sign information and therefore requires the image recognition module 120 to recognize lane lines in finer detail. Preferably, the recognized lane lines may include, but are not limited to, white single solid lines, yellow double solid lines, white dashed lines, yellow dashed lines, diversion lines, zebra crossings, stop lines, road speed limit markings, yellow dashed-solid lines, severely worn or occluded lines, and edge lines occluded by vehicles or pedestrians. The specific image recognition algorithm is not described here again.
The lane positioning module 130 is used to determine the lane information of the vehicle. It may determine the lane the vehicle is in based on the fusion of multiple positioning algorithms such as signal positioning (e.g., GPS signal positioning), dead reckoning, and environmental feature matching. In addition, the lane positioning module 130 may incorporate the lane lines identified by the image recognition module 120 into the positioning process to achieve more accurate lane positioning. Signal positioning, dead reckoning, and environmental feature matching are all existing positioning technologies, and the specific implementation of lane positioning based on multiple positioning technologies is not repeated here.
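For illustration only, the fusion described above can be sketched as follows. The function and parameter names, and the simple rule of preferring a camera-based lane-line observation over the coarse signal estimate, are assumptions of this sketch and not the disclosed algorithm:

```python
from typing import Optional

def locate_lane(gps_lateral_offset_m: float,
                lane_width_m: float,
                camera_lane_index: Optional[int]) -> int:
    """Return a 0-based lane index counted from the left road edge."""
    # Coarse estimate from signal positioning: lateral offset from the
    # left road edge divided by the nominal lane width.
    gps_lane = int(gps_lateral_offset_m // lane_width_m)
    # A camera-based lane-line observation, when available, is trusted
    # over the signal estimate because it directly sees the lane lines
    # bounding the vehicle.
    if camera_lane_index is not None:
        return camera_lane_index
    return gps_lane
```

A real system would weight the sources by their uncertainty rather than always preferring one.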
The sign generation module 150 is the core rule module of the driving assistance system 100 and is configured to generate traffic sign information for guiding driving behavior based on specific rules. The user may perform the corresponding driving action according to the traffic sign information, and the generated traffic sign information may also serve as decision input for unmanned driving.
Generation of sign information
The traffic sign information is sign information capable of guiding driving behavior, and may optionally be lane-level sign information that guides driving behavior at the lane level, such as sign information for guiding key driving actions including going straight, turning, merging, making a U-turn, entering and exiting a roundabout, and entering and exiting a highway. Specifically, the traffic sign information may include, but is not limited to, a straight-ahead sign, a curve-following sign, a U-turn sign, a turn sign, a merge sign, a roundabout entry sign, a roundabout exit sign, a highway entry sign, a highway exit sign, an error sign, and the like. Briefly: the straight-ahead sign indicates going straight ahead along the extending direction of the current lane; the curve-following sign indicates advancing along the curve of the current lane, and is generated when the lane is a curved lane with a certain curvature; the U-turn sign may indicate turning around from the current lane to a target lane; the turn sign may be a left-turn or right-turn sign; the merge sign may indicate merging from the current lane into a target lane, either to the left or to the right; the roundabout entry sign may be a guide sign generated when the vehicle enters or is about to enter a roundabout, and the roundabout exit sign a guide sign generated when it exits or is about to exit one; the highway entry sign may be a guide sign generated when entering or about to enter a highway section, and the highway exit sign when exiting or about to exit one; the error sign may be a lane error sign, or another error sign such as a route error sign.
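For illustration only, the sign categories enumerated above can be collected into a simple enumeration. The identifier names are assumptions of this sketch, not identifiers from the disclosure:

```python
from enum import Enum, auto

class TrafficSign(Enum):
    STRAIGHT_AHEAD = auto()    # go straight along the current lane
    CURVE_FOLLOWING = auto()   # follow the curvature of the current lane
    U_TURN = auto()            # turn around from the current lane
    TURN = auto()              # left or right turn at the turning position
    MERGE = auto()             # merge left or right into the target lane
    ROUNDABOUT_ENTRY = auto()
    ROUNDABOUT_EXIT = auto()
    HIGHWAY_ENTRY = auto()
    HIGHWAY_EXIT = auto()
    ERROR = auto()             # e.g. wrong lane or wrong route
```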
Generation mode one
In one embodiment of the present disclosure, the navigation module 140 may be used to obtain navigation information. For example, the navigation module 140 may be a vehicle-mounted navigation engine that performs navigation planning in real time based on the position and destination of the vehicle and generates navigation prompt information. Alternatively, the navigation module 140 may be connected to an external navigation engine (e.g., a mobile phone navigation engine) to receive navigation information generated by that engine. The navigation information may include route planning information and navigation prompt information generated based on the vehicle position.
The sign generation module 150 may generate traffic sign information corresponding to the navigation information according to the navigation information, the recognized environmental information, and the lane information of the vehicle. Since the navigation information may include route planning information and navigation prompt information, the traffic sign information may be generated with reference to either.
For example, it may be determined from the route planning information and the current lane information whether the vehicle is traveling in the correct lane, and when it is not, a merge sign for guiding the vehicle from the current lane to the correct lane may be generated. As another example, it may be determined from the route planning information and the current lane information whether the driving direction of the vehicle is wrong, and when it is, a U-turn sign for turning around from the current lane to the target lane may be generated. As yet another example, traffic sign information corresponding to the navigation prompt information may be generated from the prompt information produced in real time; the generated traffic sign information can then be regarded as a lane-level translation of the navigation prompt.
The following is an exemplary description of the generation process and animation display effect of several kinds of traffic sign information.
1. Straight-ahead sign
When the navigation information is a prompt for going straight ahead (such as "go straight for 300 m"), a straight-ahead sign for guiding the vehicle straight along the current lane may be generated. The length of the straight-ahead sign may change as the distance between the vehicle and the preceding vehicle changes.
As an example, when the navigation function is turned on, a straight guide line for guiding the vehicle straight ahead may be generated as the straight-ahead sign. The length of the straight guide line may vary with the distance to the preceding vehicle. The animation display rule of the straight guide line may be set as a fluid effect whose length extends beyond neither the horizon nor the tail of the preceding vehicle, with a gap of a few pixels left before that tail. When another sign (such as a turn sign or a merge sign) is triggered, the straight-ahead sign may be closed.
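For illustration only, the length rule above can be sketched as follows, with all lengths measured in screen pixels from the bottom of the frame. The parameter names and the 4-pixel gap are assumptions of this sketch:

```python
from typing import Optional

def guide_line_length(horizon_px: int,
                      lead_vehicle_tail_px: Optional[int],
                      gap_px: int = 4) -> int:
    """Length of the straight guide line, clamped so that it extends
    neither past the horizon nor past the preceding vehicle's tail."""
    if lead_vehicle_tail_px is None:
        return horizon_px                      # no car ahead: draw to horizon
    # Stop a few pixels short of the leading vehicle's tail, never past
    # the horizon, and never with a negative length.
    return min(horizon_px, max(0, lead_vehicle_tail_px - gap_px))
```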
2. Turn sign
When the navigation information is a prompt for turning (e.g., "turn right at the intersection ahead"), a turn sign for guiding the vehicle to travel from the current lane to the turning position and make the turn may be generated.
As an example, detection may be initiated when the navigation engine generates a turn prompt, and the corresponding turn sign is generated according to the distance between the vehicle and the turning position. The turn sign may be a straight guide sign when the vehicle is more than a first distance (e.g., 30 meters) from the turning position, a combination of the straight guide sign and a turn guide sign when the vehicle is less than the first distance but more than a second distance (e.g., 10 meters) from it, and the turn guide sign alone when the vehicle is less than the second distance from it, the second distance being less than the first distance. The straight guide sign may be a straight guide line, and the turn guide sign may be a turn guide arrow. The direction of the turn guide sign must be consistent with the navigation prompt direction, distinguishing left from right. The turn sign may be closed after the navigation engine detects that the turn is completed, when a following vehicle arrives, or when the turn signal lamp is switched off.
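For illustration only, the distance-threshold rule above can be sketched as follows. The thresholds follow the 30 m / 10 m example in the text; the function name and return labels are assumptions of this sketch:

```python
FIRST_DISTANCE_M = 30.0   # beyond this: straight guide line only
SECOND_DISTANCE_M = 10.0  # below this: turn guide arrow only

def turn_sign_form(distance_to_turn_m: float) -> str:
    """Select the form of the turn sign from the distance to the turn."""
    if distance_to_turn_m > FIRST_DISTANCE_M:
        return "straight_guide_line"
    if distance_to_turn_m > SECOND_DISTANCE_M:
        return "straight_guide_line+turn_guide_arrow"
    return "turn_guide_arrow"
```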
3. Merge sign
When it is determined from the navigation information that the vehicle needs to travel from the current lane to a target lane, a merge sign for guiding the vehicle from the current lane to the target lane may be generated. For example, when it is determined that the vehicle is in a lane that does not correspond to the navigation information and needs to travel to the target lane, the merge sign for traveling from the current lane to the target lane may be generated. Generation of the merge sign may be triggered when the distance from a solid line (a single solid line, a double solid line, or the solid side of a dashed-solid line) is more than 3 m. The trigger condition of the merge sign may be evaluated in various ways, which are not described here again.
When it is determined that merging is required, a merge sign visually guiding the user to perform the merging operation may be generated when there is no vehicle in the target lane, or when there is a vehicle in the target lane but its distance from the vehicle is greater than 10 m. For example, the merge sign may be a visual guide arrow extending at most to the tail of the preceding vehicle; when the target lane has a vehicle closer than 10 m, the merge sign may still be displayed normally while that vehicle is highlighted.
As shown in fig. 2, when the vehicle is far from the preceding vehicle in the target lane, the merge sign may be displayed at a specific position on a head-up display (HUD) so that it is visually superimposed on the live-action road. The figure shows two merge signs, merge sign 1 and merge sign 2. Merge sign 1 is a visual guide arrow with a short line distance used to guide the merging operation, while merge sign 2 is displayed from a farther position and extends to the tail of the preceding vehicle in the target lane, identifying that vehicle. Merge sign 1 and merge sign 2 may be shown in a color different from the road and highlighted, for example in highlighted green.
When it is detected that the vehicle enters an area whose lane line prohibits lane changing (a solid line, or the solid side of a dashed-solid line), the lane-change reminder may be omitted. After merging, the navigation engine may re-plan the route according to the lane the vehicle has driven into.
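For illustration only, the merge-sign trigger conditions described above can be sketched as follows. The function name, parameter names, and the paired boolean return value are assumptions of this sketch:

```python
from typing import Optional, Tuple

def merge_sign_state(dist_to_solid_line_m: float,
                     dist_to_target_lane_vehicle_m: Optional[float]
                     ) -> Tuple[bool, bool]:
    """Return (show_merge_sign, highlight_target_lane_vehicle)."""
    if dist_to_solid_line_m <= 3.0:
        return (False, False)   # too close to a solid line: do not trigger
    if dist_to_target_lane_vehicle_m is None:
        return (True, False)    # target lane empty: plain merge sign
    if dist_to_target_lane_vehicle_m > 10.0:
        return (True, False)    # enough headway in the target lane
    return (True, True)         # show the sign and highlight the vehicle
```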
So far, taking the straight-ahead sign, the turn sign, and the merge sign as examples, the generation mechanisms and display effects of these signs have been described by way of example. As noted above, the traffic sign information may further include various other signs such as a curve-following sign, a U-turn sign, a roundabout entry sign, a roundabout exit sign, a highway entry sign, a highway exit sign, and an error sign; lane-level signs of different categories may have corresponding generation mechanisms and display rules, which are not described in detail in this disclosure.
Generation mode two
In another embodiment of the present disclosure, the sign generation module 150 may also automatically generate traffic sign information for guiding driving behavior while the user drives, without depending on navigation information from a navigation engine. For example, the sign generation module 150 may determine, from the environmental information and lane information acquired in real time, whether the current driving behavior violates a traffic rule and, in case of a violation, generate traffic sign information guiding the user to the correct driving action. As another example, driving errors may be counted from the historical driving behavior of users (either a group of users or a single user) to determine scenes with high driving-error rates (such as complex scenes like overpasses and roundabouts); when the vehicle travels into such a scene, the sign generation module 150 may generate lane-level sign information for guiding driving behavior, so as to reduce the user's driving-error rate. As yet another example, special complex scenes may be defined according to expert experience, including but not limited to a scene for which the navigation engine generates an enlarged intersection view (e.g., a scene in which the 2D navigation shows an enlarged intersection), an overpass scene, and a roundabout scene; when the vehicle travels into such a special scene, the sign generation module 150 may generate lane-level sign information for guiding driving behavior to reduce the user's driving-error rate. The screening of these complex scenes may be determined from expert experience in the early stage, and during execution the driver's error rate may be learned by machine learning so that special scenes are marked automatically.
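For illustration only, marking high-error scenes from historical driving records can be sketched as follows. The record format, the 20% error-rate threshold, and the function name are assumptions of this sketch:

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def high_error_scenes(records: List[Tuple[str, bool]],
                      threshold: float = 0.2) -> List[str]:
    """records: (scene_name, driver_made_error) pairs.
    Return the scenes whose observed error rate exceeds the threshold."""
    totals: Dict[str, int] = defaultdict(int)
    errors: Dict[str, int] = defaultdict(int)
    for scene, made_error in records:
        totals[scene] += 1
        if made_error:
            errors[scene] += 1
    # A scene is marked when its historical error rate is above threshold.
    return [s for s in totals if errors[s] / totals[s] > threshold]
```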
Display of sign information
The generated traffic sign information may be displayed on the display module 170. The display module 170 may be a vehicle-mounted display module and may include, but is not limited to, vehicle-mounted display screens such as a center control screen, an instrument panel, and a HUD. Preferably, the display module 170 may display the traffic sign information superimposed on the live-action road, that is, in an AR display manner, to help the user obtain a more intuitive and friendly driving experience and to alleviate the difficulty of interpreting traditional 2D or 3D navigation pictures. Preferably, the traffic sign information may be superimposed on the corresponding lane in the live-action road, so that the user can perform the corresponding driving action directly based on what is seen, without any further cognitive conversion.
The traffic sign information may be drawn onto the display module 170 by the rendering module 160. Specifically, the rendering module 160 may first acquire the live-action road image from the image acquisition module 110 and determine rendering parameter information based on it. For example, the rendering module 160 may identify image feature points such as the road center axis and the road horizon in the live-action road image; based on these feature points, the relation between road picture size and actual distance in the image may be determined, for example by comparing the feature points against a distance feature library prepared in advance. Based on this relation, rendering parameter information may be generated, such as position, rotation angle, size change, and duration. Finally, the rendering module 160 may perform image rendering based on the generated rendering parameters to draw the traffic sign information onto the display module 170.
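For illustration only, the relation between road picture size and actual distance can be sketched as a mapping from road distance to image row. The hood-row and horizon-row feature points, the visible range, and the linear interpolation are all assumptions of this sketch; a real system would use the camera's calibrated projection rather than a linear approximation:

```python
def distance_to_row(distance_m: float,
                    hood_row_px: int = 700,     # image row just ahead of the car
                    horizon_row_px: int = 300,  # image row of the road horizon
                    visible_range_m: float = 120.0) -> int:
    """Map a road distance (0 .. visible_range_m) to an image row, so a
    sign for 'turn in N meters' can be placed on the picture."""
    frac = min(max(distance_m / visible_range_m, 0.0), 1.0)
    # Rows decrease toward the horizon as real distance increases.
    return round(hood_row_px - frac * (hood_row_px - horizon_row_px))
```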
It should be noted that a corresponding animation display effect and display rule may be set in advance for each type of traffic sign information. For example, the animation display effect and rule of the straight-ahead sign may be: the length of the straight-ahead sign changes with the distance to the vehicle ahead. The animation display effect and rule of the turn sign may be: the turn sign is a "straight guide line" when the distance to the turn is more than 30 meters, changes to a "straight guide line" plus a "turn guide arrow" when the distance is less than 30 meters, and changes to a "turn guide gesture" when the distance is less than 10 meters. The rendering module 160 may also consult the corresponding animation display effect and display rule when determining the rendering parameter information, and after determining the rendering parameters may render based on the display effect and rule corresponding to the traffic sign information to be rendered.
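The 30-meter / 10-meter turn-sign rule above can be sketched as a small state function. The element names and the default thresholds are taken from the example in the text; everything else is illustrative.

```python
def turn_sign_state(distance_to_turn_m: float,
                    far_threshold_m: float = 30.0,
                    near_threshold_m: float = 10.0) -> list:
    """Return the display elements of the turn sign at a given distance.

    Beyond 30 m: a straight guide line only; between 10 m and 30 m: the
    line plus a turn guide arrow; within 10 m: a turn guide gesture.
    """
    if distance_to_turn_m > far_threshold_m:
        return ["straight_guide_line"]
    if distance_to_turn_m > near_threshold_m:
        return ["straight_guide_line", "turn_guide_arrow"]
    return ["turn_guide_gesture"]
```

The rendering module would evaluate such a rule each frame as the distance to the turn shrinks, so the sign transitions smoothly through the three states.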
In the present disclosure, the rendering module 160 may draw the traffic sign information onto the live-action road image and present the rendered image on the display module 170 to implement AR display. Alternatively, as shown in fig. 2, the display module 170 may be a vehicle-mounted display screen (e.g., a HUD head-up display), and the rendering module 160 may draw the traffic sign information directly onto that screen so that the displayed information matches the real road, likewise implementing AR display. Optionally, because different display screens differ in display picture and screen size, image effect adaptation may be performed according to the characteristics of the current screen by a screen adaptation module (not shown in the figure); the adaptation process is not described here again.
As an example of the present disclosure, the sign generation module 150 may generate and schedule the various types of traffic sign information together with their display times, display positions, and the like. The rendering module 160 may obtain the generated traffic sign information sequence from the sign generation module 150, the sequence including queue times and traffic sign information. The rendering module 160 may render each traffic sign in the sequence according to rendering parameter information such as position, rotation angle, size change, and duration within the sequence, and finally display the rendered traffic sign information on the display module 170.
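The handoff between the sign generation module 150 and the rendering module 160 can be sketched as a time-ordered queue: the generator pushes (queue time, sign) entries, and the renderer pops every sign whose queue time has arrived. This is an illustrative sketch only; the class and method names are not from the disclosure.

```python
import heapq


class SignScheduler:
    """Time-ordered queue of traffic sign information.

    Sketches the traffic sign information sequence: the sign generation
    module pushes entries with a queue time, and the rendering module
    drains everything that is due at the current moment.
    """

    def __init__(self):
        self._queue = []
        self._counter = 0  # tie-breaker so equal queue times pop in insert order

    def push(self, queue_time: float, sign: str) -> None:
        heapq.heappush(self._queue, (queue_time, self._counter, sign))
        self._counter += 1

    def due(self, now: float) -> list:
        """Pop and return all signs whose queue time is <= `now`."""
        signs = []
        while self._queue and self._queue[0][0] <= now:
            signs.append(heapq.heappop(self._queue)[2])
        return signs
```

The rendering loop would call `due(now)` each frame and render whatever it returns with the per-sign rendering parameters.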
The prompt module 180 may play voice broadcast information consistent with the traffic sign information to help the user better understand it. Optionally, the prompt module 180 may integrate the navigation information and the traffic sign information into a single voice broadcast message to help the user better understand the navigation intent.
In summary, the driving assistance system of the present disclosure may be combined with a navigation engine to convert navigation information into finer, lane-level sign information for guiding driving behavior, and may combine the traffic sign information with the live-action road for AR display. The driving assistance system of the present disclosure can therefore be implemented as an AR navigation system, and the following modifications may be made to the original vehicle-mounted operating system to do so.
1) A fixed front-facing camera is added to support road image capture by the image acquisition module 110. The vehicle-mounted system must add the necessary camera signal acquisition and processing unit and configure the image picture size, frame rate, and image transmission format. Optionally, on head units that can bear a higher hardware cost, left and right fisheye cameras may also be installed to capture the left and right lane pictures, improving the safety of turning and merging guidance; in that case, the video signal processing units of the system must be increased accordingly.
2) Increased computing power. Real-time image recognition, rendering, and related work require the hardware computing power of the whole vehicle-mounted system to be further increased; the specific increase is computed dynamically according to the function points of the practical application. A computing-power resource allocation unit must be added to the vehicle-mounted system to allocate computing and image-processing resources.
3) An AR navigation mode switching control unit. The AR-effect navigation mode may default to an alternative navigation view mode in the vehicle-mounted operating system; after the user actively clicks the switching control unit, the default 2D navigation picture is switched to the AR navigation mode. In addition, in certain special complex scenes, including but not limited to scenes for which 2D navigation shows an enlarged intersection image, viaduct scenes, and roundabout scenes, the system may switch to the AR navigation mode automatically. The set of complex scenes may initially be determined from navigation expert experience, and during system operation the driver's driving error rate may be learned by machine learning so that complex scenes can be flagged automatically.
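The mode-switching decision described above can be sketched as a small policy function: the user's explicit choice wins, and otherwise the system falls back to expert-flagged complex scenes or a learned driving-error rate. The scene names, the error-rate threshold, and all identifiers are illustrative assumptions, not values from the disclosure.

```python
# Scenes flagged in advance from navigation expert experience (illustrative).
COMPLEX_SCENES = {"enlarged_intersection", "viaduct", "roundabout"}


def should_use_ar_mode(scene: str, user_requested_ar: bool,
                       learned_error_rate: float,
                       error_threshold: float = 0.3) -> bool:
    """Decide whether to show the AR navigation view.

    The user's explicit switch always enables AR; otherwise AR is
    enabled automatically in an expert-flagged complex scene or when
    the machine-learned driving error rate for this scene exceeds an
    (illustrative) threshold.
    """
    if user_requested_ar:
        return True
    return scene in COMPLEX_SCENES or learned_error_rate > error_threshold
```

In practice the learned error rate would itself mark new scenes as complex over time, which this sketch models by the second condition.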
[ METHOD FOR VEHICLE NAVIGATION ]
Fig. 3 is a schematic flowchart illustrating a driving assistance method according to an embodiment of the present disclosure. The method shown in fig. 3 may be implemented by the driving assistance system shown in fig. 1. The basic implementation process of the driving assistance method is described below; for the details involved, reference may be made to the description above, which is not repeated.
Referring to fig. 3, in step S310, environmental information around the vehicle is identified.
The environmental information around the vehicle may be one or more kinds of road condition information, such as the road on which the vehicle is located, lane lines on the road, pedestrians, traffic signs, and road surface obstacles. The surroundings of the vehicle may be imaged, and the resulting image analyzed, to identify the environmental information around the vehicle.
In step S320, the lane information where the vehicle is located is determined.
The lane information in which the vehicle is located may be determined based on one or more of a signal location algorithm, a dead reckoning algorithm, and an environmental feature matching algorithm.
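One simple way to combine the several positioning algorithms named above is a weighted vote over their per-algorithm lane estimates. This is a minimal sketch under that assumption; the disclosure does not specify a fusion method, and the algorithm names and weights here are purely illustrative.

```python
from collections import Counter


def fuse_lane_estimates(estimates, weights=None):
    """Fuse per-algorithm lane estimates by weighted vote.

    `estimates` maps an algorithm name (e.g. "signal_location",
    "dead_reckoning", "feature_matching") to the lane index it reports;
    `weights` maps algorithm names to confidence weights, defaulting to
    equal weight for any algorithm not listed.
    """
    votes = Counter()
    for algo, lane in estimates.items():
        votes[lane] += (weights or {}).get(algo, 1.0)
    # Return the lane with the largest total weight.
    return max(votes, key=votes.get)
```

A more confident source (e.g. environmental feature matching against a high-precision map) can be given a larger weight so it dominates when the sources disagree.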
In step S330, traffic sign information for guiding driving behavior is generated based on the environmental information and the lane information. For the traffic sign information and the generation process thereof, reference may be made to the above description, which is not repeated herein.
After the traffic sign information is obtained, it may also be visually displayed. For example, the traffic sign information may be displayed superimposed on the live-action road to realize AR display. For the display of the traffic sign information, reference may be made to the description above, which is not repeated here.
In addition, voice broadcast information consistent with the traffic sign information may be played. Also, the step of generating traffic sign information for guiding driving behavior may be performed in response to detecting that the vehicle has arrived, or is about to arrive, at a predetermined scene. The predetermined scene may be a scene with a high user driving error rate or a preset special scene, such as but not limited to a scene for which the navigation engine generates an enlarged intersection image, a viaduct scene, or a roundabout scene.
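The trigger condition above can be sketched as a simple predicate: generate lane-level signs only when the vehicle reaches, or comes within some distance of, a predetermined scene. The scene names and the trigger distance are illustrative assumptions; the disclosure does not fix these values.

```python
# Preset special scenes (illustrative names).
PRESET_SCENES = {"enlarged_intersection", "viaduct", "roundabout"}


def should_generate_signs(upcoming_scene, distance_to_scene_m,
                          trigger_distance_m=200.0):
    """Return True when sign generation should start.

    `upcoming_scene` is the next predetermined scene reported by the
    navigation engine (or None); generation starts once the vehicle is
    within `trigger_distance_m` of it (an illustrative default).
    """
    return (upcoming_scene in PRESET_SCENES
            and distance_to_scene_m <= trigger_distance_m)
```

Scenes learned to have a high driving error rate could simply be added to the same set at runtime.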
[ COMPUTING DEVICE ]
Fig. 4 is a schematic structural diagram of a computing device that can be used to implement the driving assistance method according to an embodiment of the present disclosure.
Referring to fig. 4, computing device 400 includes memory 410 and processor 420.
The processor 420 may be a multi-core processor or may include a plurality of processors. In some embodiments, processor 420 may include a general-purpose host processor and one or more special-purpose coprocessors, such as a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), or the like. In some embodiments, processor 420 may be implemented using custom circuits, such as an Application Specific Integrated Circuit (ASIC) or a Field Programmable Gate Array (FPGA).
The memory 410 may include various types of storage units, such as system memory, Read Only Memory (ROM), and permanent storage. The ROM may store static data or instructions required by the processor 420 or other modules of the computer. The permanent storage device may be a read-write storage device, and may be a non-volatile storage device that does not lose stored instructions and data even after the computer is powered off. In some embodiments, a mass storage device (e.g., a magnetic or optical disk, or flash memory) is employed as the permanent storage device. In other embodiments, the permanent storage may be a removable storage device (e.g., a floppy disk or optical drive). The system memory may be a read-write memory device or a volatile read-write memory device, such as dynamic random access memory, and may store instructions and data that some or all of the processors require at runtime. Further, the memory 410 may include any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash memory, programmable read-only memory) and magnetic and/or optical disks. In some embodiments, memory 410 may include a readable and/or writable removable storage device, such as a Compact Disc (CD), a read-only digital versatile disc (e.g., DVD-ROM, dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density optical disc, a flash memory card (e.g., SD card, mini SD card, Micro-SD card, etc.), a magnetic floppy disk, or the like. Computer-readable storage media do not contain carrier waves or transitory electronic signals transmitted by wireless or wired means.
The memory 410 has stored thereon executable code that, when processed by the processor 420, may cause the processor 420 to perform the driving assistance methods described above.
The driving assistance method, system, and computing device according to the present disclosure have been described in detail above with reference to the accompanying drawings.
The present disclosure combines image recognition, high-precision positioning, and the navigation technology of the vehicle-mounted system with AR display technology to realize lane-level drawing and display effects. Compared with this scheme, current products all differ considerably in ease of use, practicality, and user experience.
Specifically, manufacturers and university research institutes have brought AR navigation to market in various forms and schemes, with the following disadvantages: a) they can only display schematic icons and cannot accurately identify the specific lane, so they offer no great practical improvement over a traditional navigation map; moreover, owing to the limitations of lane positioning technology, products on the market can only prompt two functions, going straight and turning, while other driving behaviors that strongly depend on accurate lane information (such as merging, U-turns, and roundabouts) cannot be effectively guided or covered, which is the greatest limitation of current market schemes; b) the schematic icons are displayed in a relatively fixed manner and are mostly static (as in HUD AR schemes), so the reminding effect is weak; superimposed on a real scene they are hard for the user to notice, and they are no better than traditional navigation information at attracting the owner's attention and gaze, so the technical value cannot be effectively realized.
The lane positioning scheme of the present disclosure comprehensively utilizes image recognition, high-precision positioning, and navigation technology to realize more accurate AR navigation display and navigation sign prompts in more forms, thereby improving the owner's driving experience. The specific advantages include: 1) the types of AR navigation signs supported by current market products are greatly expanded, the expanded types including left and right merging, U-turns, entering and exiting roundabouts, entering and exiting highways, and the like; 2) the AR navigation forms already on the market (straight-ahead and turning) are displayed with higher precision, accurately attached to the current lane, helping the user understand the navigation function more intuitively and effectively.
Furthermore, the method according to the present disclosure may also be implemented as a computer program or computer program product comprising computer program code instructions for performing the above-mentioned steps defined in the above-mentioned method of the present disclosure.
Alternatively, the present disclosure may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) which, when executed by a processor of an electronic device (or computing device, server, etc.), causes the processor to perform the various steps of the above-described method according to the present disclosure.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.