CN112885132A - Intelligent unmanned dispatching system based on AI and automatic driving method - Google Patents

Intelligent unmanned dispatching system based on AI and automatic driving method

Info

Publication number
CN112885132A
CN112885132A (application CN202110112363.4A)
Authority
CN
China
Prior art keywords: information, vehicle, real, attribute information, attribute
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110112363.4A
Other languages
Chinese (zh)
Inventor
韦峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Lanniao Intelligent Parking Technology Industrialization Co ltd
Original Assignee
Anhui Lanniao Intelligent Parking Technology Industrialization Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Lanniao Intelligent Parking Technology Industrialization Co ltd filed Critical Anhui Lanniao Intelligent Parking Technology Industrialization Co ltd
Priority to CN202110112363.4A
Publication of CN112885132A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/09 - Arrangements for giving variable traffic instructions
    • G08G1/0962 - Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0967 - Systems involving transmission of highway information, e.g. weather, speed limits
    • G08G1/096708 - Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control
    • G08G1/096725 - Systems involving transmission of highway information, e.g. weather, speed limits where the received information might be used to generate an automatic action on the vehicle control where the received information generates an automatic action on the vehicle control
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/09 - Arrangements for giving variable traffic instructions
    • G08G1/0962 - Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0968 - Systems involving transmission of navigation instructions to the vehicle
    • G08G1/0969 - Systems involving transmission of navigation instructions to the vehicle having a display in the form of a map

Abstract

The invention is applicable to the field of computer technology and provides an AI-based unmanned intelligent scheduling system and an automatic driving method. The method comprises: acquiring the position information of a vehicle in real time; acquiring surrounding image information of the vehicle in real time; analyzing the surrounding image information to determine road condition information at the position of the vehicle and generate a real-time model map of the vehicle; and determining a target driving state of the vehicle according to the real-time model map and the position information of the vehicle, and issuing the target driving state to the vehicle. Because decisions are made from on-site road condition information, the vehicle can still drive automatically and normally in the field or on road sections for which the map has not been updated in time, which effectively solves the prior-art problem that a vehicle cannot automatically adjust its driving state according to the actual conditions in such scenarios.

Description

Intelligent unmanned dispatching system based on AI and automatic driving method
Technical Field
The invention relates to the technical field of computers, in particular to an AI-based unmanned intelligent scheduling system and an automatic driving method.
Background
An unmanned vehicle is a new type of intelligent vehicle. Each part of the vehicle is accurately controlled, calculated and analyzed by a control device (the vehicle-mounted "intelligent brain"), which sends instructions to the Electronic Control Unit (ECU) to operate the different devices in the vehicle, thereby realizing fully automatic operation and achieving unmanned driving.
In the prior art, especially in the field or on road sections with complex road conditions, an electronic map often fails to reflect the actual road conditions because it is not updated in time. The vehicle therefore cannot automatically adjust its driving state to the actual condition of the road section and can only rely on its sensors for obstacle avoidance, deceleration and the like, which easily causes problems such as vehicle-body instability and degrades the experience of drivers and passengers.
Disclosure of Invention
The embodiment of the invention aims to provide an AI-based unmanned intelligent scheduling system and an automatic driving method, so as to solve the technical problems of the prior art identified in the background section.
The embodiment of the invention is realized in such a way that an automatic driving method based on AI comprises the following steps:
acquiring the position information of the vehicle in real time;
acquiring surrounding image information of a vehicle in real time;
analyzing the surrounding image information to determine road condition information of the position where the vehicle is located and generate a real-time model map of the vehicle;
and determining the target driving state of the vehicle according to the real-time model map and the position information of the vehicle, and issuing the target driving state to the vehicle.
As a further scheme of the invention: the step of acquiring the surrounding image information of the vehicle in real time specifically includes:
acquiring a plurality of images, wherein the images are obtained by a plurality of cameras on a vehicle respectively;
a composite image of the surroundings of the vehicle is generated from the plurality of images.
As a still further scheme of the invention: the step of analyzing the surrounding image information to determine road condition information of the position where the vehicle is located, and generating a real-time model map of the vehicle by combining the position information of the vehicle specifically includes:
inputting peripheral image information and acquiring marking information in the peripheral image information, wherein the marking information at least comprises road characteristic information and road sign information;
inputting the marking information into a preset classification model for classification and identification, and outputting attribute information about the marking information, wherein the attribute information is used for representing the content attribute of the marking information;
and calibrating the labeling information and the attribute information on the peripheral image information, and generating a real-time model map with the labeling information and the attribute information by combining the position information of the vehicle.
As a still further scheme of the invention: the step of calibrating the labeling information and the attribute information on the peripheral image information to generate the real-time model map with the labeling information and the attribute information specifically comprises the following steps:
determining a visual angle azimuth according to the vehicle driving direction;
calibrating the marking information and the attribute information on the peripheral image information according to the view direction;
and generating a real-time model map with the labeling information and the attribute information according to the position information of the vehicle.
As a still further scheme of the invention: the step of determining a target driving state of the vehicle according to the real-time model map and the position information of the vehicle and issuing the target driving state to the vehicle specifically comprises the following steps:
according to the position information of the vehicle, determining marking information and attribute information on the current vehicle driving route;
analyzing the attribute information, and acquiring a decision scheme corresponding to the attribute information from a decision library;
and generating a target driving state of the vehicle according to the decision scheme corresponding to the labeling information, and issuing the target driving state to the vehicle to enable the vehicle to execute corresponding actions.
As a still further scheme of the invention: the step of analyzing the attribute information and obtaining a decision scheme corresponding to the attribute information from a decision library specifically includes:
analyzing the attribute information to determine characteristic information of the attribute information;
inputting the characteristic information of the attribute information into a decision base;
and outputting a decision scheme corresponding to the attribute information.
Another objective of an embodiment of the present invention is to provide an AI-based unmanned intelligent scheduling system, including:
the position determining module is used for acquiring the position information of the vehicle in real time;
the image acquisition module is used for acquiring the peripheral image information of the vehicle in real time;
the model map generation module is used for analyzing the peripheral image information to determine road condition information of the position where the vehicle is located and generate a real-time model map of the vehicle; and the running state determining module is used for determining the target running state of the vehicle according to the real-time model map and the position information of the vehicle and issuing the target running state to the vehicle.
As a further scheme of the invention: the image acquisition module includes:
a plurality of cameras for acquiring a plurality of images; and an image synthesizing unit for generating a composite picture of the surroundings of the vehicle from the plurality of images.
As a still further scheme of the invention: the model map generation module includes:
the label information generating unit is used for inputting peripheral image information and acquiring label information in the peripheral image information, wherein the label information at least comprises road characteristic information and road sign information;
the classification identification unit is used for inputting the marking information into a preset classification model for classification identification and outputting attribute information related to the marking information, wherein the attribute information is used for representing the content attribute of the marking information; and the calibration unit is used for calibrating the marking information and the attribute information on the peripheral image information and generating a real-time model map with the marking information and the attribute information by combining the position information of the vehicle.
As a still further scheme of the invention: the driving state determination module includes:
the information determining unit is used for determining marking information and attribute information on the current vehicle driving route according to the position information of the vehicle;
the decision generating unit is used for analyzing the attribute information and acquiring a decision scheme corresponding to the attribute information from a decision library; and
the target running state determining unit is used for generating a target running state of the vehicle according to the decision scheme corresponding to the labeling information, and issuing the target running state to the vehicle to enable the vehicle to execute corresponding actions.
Compared with the prior art, the invention has the following beneficial effects: the position information of the vehicle is acquired in real time; the surrounding image information of the vehicle is acquired in real time; the surrounding image information is analyzed to determine the road condition information at the position of the vehicle and generate a real-time model map of the vehicle; and the target driving state of the vehicle is determined according to the real-time model map and the position information of the vehicle and issued to the vehicle. Because decisions are made from on-site road condition information, the vehicle can still drive automatically and normally in the field or on road sections for which the map has not been updated in time, which effectively solves the prior-art problem that a vehicle cannot automatically adjust its driving state according to the actual conditions in such scenarios.
Drawings
Fig. 1 is a flowchart of an AI-based automatic driving method.
Fig. 2 is a flowchart of acquiring the surrounding image information of the vehicle in real time.
Fig. 3 is a flowchart for generating a real-time model map about a vehicle.
FIG. 4 is a flow chart for generating a real-time model map with annotation information and attribute information.
Fig. 5 is a flowchart of determining a target running state of the vehicle.
Fig. 6 is a flow chart of a decision scheme for obtaining the attribute information from the decision library.
Fig. 7 is a schematic structural diagram of an AI-based unmanned intelligent scheduling system.
Fig. 8 is a schematic structural diagram of an image acquisition module in an AI-based unmanned intelligent scheduling system.
Fig. 9 is a schematic structural diagram of a model map generation module in an AI-based unmanned intelligent scheduling system.
Fig. 10 is a schematic structural diagram of a driving state determination module in an AI-based unmanned intelligent scheduling system.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Specific implementations of the present invention are described in detail below with reference to specific embodiments.
As shown in fig. 1, a flowchart of an AI-based automatic driving method according to an embodiment of the present invention includes the following steps:
and S200, acquiring the position information of the vehicle in real time.
In the embodiment of the invention, the vehicle position information is acquired so that the target running state of the vehicle can be determined later. In practical applications, the position information may be obtained from a GPS receiver or another device carried by the vehicle itself; this embodiment is not specifically limited in this respect.
And S400, acquiring the surrounding image information of the vehicle in real time.
In the embodiment of the invention, the surrounding image information refers to images of the area around the vehicle. The image information at least needs to cover other vehicles, obstacles around the vehicle, and indication information on roads and signboards (such as deceleration lines, zebra crossings and speed limit signs), because information such as speed limit signs and obstacles requires corresponding coping decisions to ensure that the vehicle runs normally during automatic driving.
S600, analyzing the surrounding image information to determine road condition information of the position where the vehicle is located, and generating a real-time model map of the vehicle.
In practical application, according to the obtained surrounding image information, the embodiment of the invention can extract the other vehicles, the obstacles around the vehicle, and the indication information on roads and guideboards contained in the images, obtain the road condition information at the position of the vehicle, and further generate a real-time model map of the vehicle.
And S800, determining the target driving state of the vehicle according to the real-time model map and the position information of the vehicle, and issuing the target driving state to the vehicle.
In the embodiment of the invention, because the real-time model map contains other vehicles, obstacles around the vehicle, and indication information on roads and guideboards (such as deceleration lines, zebra crossings and speed limit signs), corresponding coping decisions can be determined from this information; for example, the vehicle automatically decelerates when it meets a deceleration line and automatically takes avoiding action when it meets an obstacle. In other words, the target running state of the vehicle is determined and then issued to the vehicle, so that the vehicle automatically executes the corresponding decision.
It should be noted that the order of S200 (acquiring the position information of the vehicle in real time) and S400 (acquiring the surrounding image information of the vehicle in real time) may be exchanged; this embodiment is not specifically limited in this respect.
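For readability only, the overall S200 to S800 flow can be summarised as the short Python sketch below. All names, types and values in it are hypothetical placeholders introduced for illustration; the patent does not prescribe any particular implementation.

```python
# Minimal sketch of one S200-S800 cycle; every name below is an illustrative
# placeholder, not part of the disclosed embodiment.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Position = Tuple[float, float]                 # e.g. latitude/longitude from GPS (S200)


@dataclass
class ModelMap:                                # real-time model map produced in S600
    annotations: Dict[str, str] = field(default_factory=dict)   # label -> attribute


@dataclass
class TargetDrivingState:                      # result of S800, issued to the vehicle
    speed_kmh: float
    actions: List[str]


def build_model_map(images: List[bytes], position: Position) -> ModelMap:
    """S600 stub: analyse the surrounding images and return a model map."""
    return ModelMap()


def decide_target_state(model_map: ModelMap, position: Position) -> TargetDrivingState:
    """S800 stub: derive the target driving state from the map and the position."""
    actions = ["decelerate"] if "deceleration sign" in model_map.annotations else []
    return TargetDrivingState(speed_kmh=30.0 if actions else 60.0, actions=actions)


def autopilot_cycle(position: Position, images: List[bytes]) -> TargetDrivingState:
    """One pass of S200 -> S400 -> S600 -> S800."""
    return decide_target_state(build_model_map(images, position), position)
```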
As shown in fig. 2, as a preferred embodiment of the present invention, the step of acquiring the image information of the periphery of the vehicle in real time specifically includes:
s401, a plurality of images are obtained, and the images are obtained by a plurality of cameras on a vehicle respectively.
In the embodiment of the invention, the cameras are distributed at least at the front, rear, left and right of the vehicle, so that image information all around the vehicle can be acquired and a panoramic image of the surroundings can be obtained, which facilitates later image processing.
S403, a composite image of the periphery of the vehicle is generated from the plurality of images.
In practical application, generating the composite image of the vehicle's surroundings from the plurality of images is essentially panoramic image synthesis as known in the prior art. In the present application, however, no blurring or similar operation is applied to the background during synthesis, because the subsequent steps need to search the background image for the corresponding identification information in order to generate decisions later.
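As a rough illustration of S401 and S403, the sketch below composes one surround image from several same-size camera frames by simple side-by-side concatenation (a stand-in for full panoramic stitching, which is prior art); consistent with the passage above, it applies no blurring, so background detail remains available for the later recognition steps. The frame sizes and the NumPy-based approach are assumptions made for the example.

```python
# Sketch of S401/S403: compose a surround image from several camera frames.
# Simple concatenation stands in for full panoramic stitching; no blurring or
# background suppression is applied, so sign and obstacle detail is preserved.
import numpy as np


def compose_surround_image(frames: list) -> np.ndarray:
    """Concatenate same-height frames (e.g. front, right, rear, left) side by side."""
    if not frames:
        raise ValueError("at least one camera frame is required")
    height = min(f.shape[0] for f in frames)
    return np.concatenate([f[:height] for f in frames], axis=1)   # full resolution kept


# Example with four dummy 480x640 RGB frames
frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(4)]
panorama = compose_surround_image(frames)    # shape: (480, 2560, 3)
```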
As shown in fig. 3, as another preferred embodiment of the present invention, the step of analyzing the surrounding image information to determine road condition information of the vehicle at the position, and generating a real-time model map of the vehicle by combining the position information of the vehicle specifically includes:
s601, inputting peripheral image information, and acquiring marking information in the peripheral image information, wherein the marking information at least comprises road characteristic information and road sign information.
In the embodiment of the present invention, the label information in the surrounding image information at least includes road characteristic information and road sign information. The road characteristic information may be potholes, deceleration strips, isolation guardrails, roadblocks and the like on the road, and the road sign information may be direction guideboards, speed limit signs, deceleration signs, double yellow lines and the like. Since such features affect the driving of the vehicle, the label information must be obtained first in this embodiment.
S603, inputting the marking information into a preset classification model for classification and identification, and outputting attribute information related to the marking information, wherein the attribute information is used for representing the content attribute of the marking information.
In the embodiment of the present invention, after the label information is obtained, an image of the labeled region (which may be cut from the original surrounding image) may be input into a preset classification model for classification and identification. The classification model may be an SVM classifier, a convolutional neural network model or the like, and this embodiment is not specifically limited here. The classification model outputs the attribute information of the label information. For example, after the image of a deceleration sign is input, it is identified as a "deceleration sign" from the image content, and "deceleration sign" is the attribute information of that piece of label information.
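A minimal sketch of the SVM option mentioned above follows: cropped annotation regions are flattened into feature vectors and classified into attribute labels. The label names, the toy training data and the use of scikit-learn are assumptions made only for illustration; a convolutional neural network could be used instead.

```python
# Sketch of S603 with an SVM classifier: map cropped annotation regions to
# attribute labels. The training data here is random and purely illustrative.
import numpy as np
from sklearn.svm import SVC

LABELS = ["deceleration sign", "speed limit sign", "zebra crossing"]  # example attributes


def make_features(crops):
    """Flatten fixed-size grayscale crops into normalised feature vectors."""
    return np.stack([c.astype(np.float32).ravel() / 255.0 for c in crops])


# Toy training set: 32x32 grayscale crops with random content.
rng = np.random.default_rng(0)
train_crops = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(30)]
train_labels = [LABELS[i % len(LABELS)] for i in range(30)]
classifier = SVC(kernel="rbf").fit(make_features(train_crops), train_labels)

# At run time, each cropped annotation region yields its attribute information.
attribute_information = classifier.predict(make_features([train_crops[0]]))[0]
```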
And S605, calibrating the marking information and the attribute information on the peripheral image information, and generating a real-time model map with the marking information and the attribute information by combining the position information of the vehicle.
In practical application, the embodiment of the invention calibrates the label information and the attribute information onto the surrounding image information. In one case of this embodiment, the label information may be marked with a red frame and the attribute information may be displayed as text or in another form, so that a real-time model map carrying the label information and the attribute information can be generated by combining the surrounding image information with the position information of the vehicle.
As shown in fig. 4, as another preferred embodiment of the present invention, the step of calibrating the annotation information and the attribute information on the peripheral image information to generate the real-time model map with the annotation information and the attribute information specifically includes:
and S6051, determining the visual angle azimuth according to the vehicle running direction.
In the embodiment of the invention, the driving direction of the vehicle is determined first; once the driving direction is known, the front, rear, left and right directions of the vehicle, i.e. the view-angle azimuths, can be determined in clockwise order.
And S6053, calibrating the annotation information and the attribute information on the peripheral image information according to the view angle and the direction.
In the embodiment of the invention, after the view-angle azimuths are determined, the label information and the attribute information are calibrated onto the corresponding view-angle azimuth according to the processing and analysis result of the surrounding image information, which completes the calibration.
S6055, based on the position information of the vehicle, generates a real-time model map having label information and attribute information.
In the embodiment of the invention, the position information of the vehicle, i.e. the position of the vehicle within the surrounding image information, is combined with the calibrated image to obtain the real-time model map carrying the label information and the attribute information.
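As an illustration of S6051 to S6055, the sketch below assigns each labelled item to a view-angle sector (front, right, rear, left, in clockwise order from the driving direction) and bundles the result with the vehicle position into a simple map structure. The bearing-based representation and all names are assumptions made for the example, not the patented data model.

```python
# Sketch of S6051-S6055: place annotations into view-angle sectors relative to
# the driving direction and combine them with the vehicle position into a
# simple real-time model map. All names and structures are illustrative.
from dataclasses import dataclass
from typing import List, Tuple

SECTORS = ["front", "right", "rear", "left"]          # clockwise from the heading


@dataclass
class Annotation:
    label: str            # e.g. "speed limit sign"
    attribute: str        # e.g. "limit 30 km/h"
    bearing_deg: float    # absolute bearing of the item, 0-360


def view_sector(heading_deg: float, bearing_deg: float) -> str:
    """S6051/S6053: map an absolute bearing into a sector around the heading."""
    relative = (bearing_deg - heading_deg) % 360.0
    return SECTORS[int(((relative + 45.0) % 360.0) // 90.0)]


def calibrate_model_map(position: Tuple[float, float], heading_deg: float,
                        annotations: List[Annotation]) -> dict:
    """S6055: real-time model map keyed by sector, carrying labels and attributes."""
    model_map = {"position": position, "sectors": {s: [] for s in SECTORS}}
    for a in annotations:
        sector = view_sector(heading_deg, a.bearing_deg)
        model_map["sectors"][sector].append((a.label, a.attribute))
    return model_map


demo = calibrate_model_map((31.82, 117.23), heading_deg=90.0,
                           annotations=[Annotation("deceleration sign", "decelerate", 100.0)])
```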
As shown in fig. 5, as another preferred embodiment of the present invention, the step of determining a target driving state of the vehicle according to the real-time model map and the position information of the vehicle, and issuing the target driving state to the vehicle specifically includes:
s801, according to the position information of the vehicle, marking information and attribute information on the current vehicle driving route are determined.
In the embodiment of the invention, the label information and the attribute information on the current driving route can be determined according to the current position information of the vehicle; when the driving route changes, the corresponding label information and attribute information change adaptively as well.
And S803, analyzing the attribute information, and acquiring a decision scheme corresponding to the attribute information from a decision library.
In the embodiment of the present invention, the attribute information of each piece of label information has a corresponding decision scheme. For example, the decision scheme corresponding to a "deceleration sign" is deceleration; the specific amount of deceleration is set according to the actual application and is not specifically limited here.
And S805, generating a target driving state of the vehicle according to the decision scheme corresponding to the marking information, and issuing the target driving state to the vehicle to enable the vehicle to execute corresponding actions.
In the embodiment of the invention, after the decision scheme is obtained, the target driving state of the vehicle can be generated. The target driving state determines what action the vehicle should perform when it reaches the corresponding position, for example automatic deceleration when reaching a deceleration strip, which effectively keeps the vehicle stable while driving.
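To make S801 to S805 concrete, the sketch below looks up a decision scheme for each attribute found along the current route and assembles the results into a target driving state ordered by distance. The decision library contents and the distance-based representation are illustrative assumptions only.

```python
# Sketch of S801-S805: attributes on the planned route -> decision schemes ->
# target driving state. The library contents are illustrative examples.
from dataclasses import dataclass
from typing import Dict, List, Tuple

DECISION_LIBRARY: Dict[str, str] = {       # attribute information -> decision scheme
    "deceleration sign": "decelerate",
    "obstacle": "avoid",
    "speed limit 30": "cap speed at 30 km/h",
}


@dataclass
class TargetDrivingState:
    actions: List[Tuple[float, str]]       # (distance along the route in metres, action)


def generate_target_state(route_annotations: List[Tuple[float, str]]) -> TargetDrivingState:
    """route_annotations: (distance_m, attribute) pairs found on the current route."""
    actions = [(distance, DECISION_LIBRARY[attribute])
               for distance, attribute in route_annotations
               if attribute in DECISION_LIBRARY]
    return TargetDrivingState(actions=sorted(actions))


state = generate_target_state([(300.0, "obstacle"), (120.0, "deceleration sign")])
# state.actions == [(120.0, 'decelerate'), (300.0, 'avoid')]
```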
As shown in fig. 6, as another preferred embodiment of the present invention, the step of analyzing the attribute information and obtaining a decision scheme corresponding to the attribute information from a decision library specifically includes:
s8031, analyzing the attribute information to determine the characteristic information of the attribute information.
In the embodiment of the present invention, the characteristic information of the attribute information may be determined by extracting keywords from the attribute information or by analyzing its semantics; the specific manner is not limited in this embodiment.
S8033, inputting the characteristic information of the attribute information into a decision base.
In the embodiment of the invention, the characteristic information of the attribute information is used as the input. Taking a "deceleration sign" as the attribute information, semantic analysis yields the characteristic information "decelerate", which is then input into the decision base to obtain the corresponding decision.
It should be noted that the essence of the decision base in the embodiment of the present invention may be a decision mapping base storing a plurality of mapping relationships.
And S8035, outputting a decision scheme corresponding to the attribute information.
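The decision mapping base described above can be pictured as a plain lookup structure; the sketch below pairs a toy keyword extractor (standing in for the semantic analysis of S8031) with such a mapping. The keyword rules and decision texts are invented purely for illustration.

```python
# Sketch of S8031-S8035: extract a characteristic keyword from the attribute
# information and look it up in a decision mapping base. Rules and decisions
# are illustrative placeholders for the semantic analysis described above.
from typing import Optional

DECISION_MAPPING_BASE = {
    "decelerate": "reduce speed to the configured safe value",
    "stop": "brake to a standstill before the marked line",
    "avoid": "plan a short lateral offset around the obstacle",
}

KEYWORD_RULES = {                 # attribute phrase fragment -> characteristic keyword
    "deceleration": "decelerate",
    "speed bump": "decelerate",
    "stop line": "stop",
    "obstacle": "avoid",
}


def extract_feature(attribute: str) -> Optional[str]:
    """S8031: toy keyword extraction in place of full semantic analysis."""
    text = attribute.lower()
    return next((kw for fragment, kw in KEYWORD_RULES.items() if fragment in text), None)


def decision_for(attribute: str) -> Optional[str]:
    """S8033/S8035: feed the feature into the mapping base and return the scheme."""
    feature = extract_feature(attribute)
    return DECISION_MAPPING_BASE.get(feature) if feature else None


assert decision_for("deceleration sign ahead") == "reduce speed to the configured safe value"
```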
As shown in fig. 7, an embodiment of the present invention further provides an AI-based unmanned intelligent scheduling system, which includes a position determining module 100, an image obtaining module 200, a model map generating module 300, and a driving state determining module 400, where the position determining module 100 is configured to obtain position information of a vehicle in real time; the image acquisition module 200 is used for acquiring the surrounding image information of the vehicle in real time; the model map generation module 300 is configured to analyze the surrounding image information to determine road condition information of a location where the vehicle is located, and generate a real-time model map of the vehicle; the driving state determining module 400 is configured to determine a target driving state of the vehicle according to the real-time model map and the position information of the vehicle, and issue the target driving state to the vehicle.
In the embodiment of the invention, the vehicle position information is obtained so that the target running state of the vehicle can be determined later; in practice it may be obtained from a GPS receiver or another device installed on the vehicle. The surrounding image information refers to images of the area around the vehicle and at least needs to cover other vehicles, obstacles around the vehicle, and indication information on roads and guideboards (such as deceleration lines, zebra crossings and speed limit signs); information such as speed limit signs and obstacles requires corresponding coping decisions to ensure that the vehicle runs normally during automatic driving. From the obtained surrounding image information, the other vehicles, obstacles and indication information contained in the images can be extracted, the road condition information at the position of the vehicle can be obtained, and a real-time model map of the vehicle can then be generated.
As shown in fig. 8, as a preferred embodiment of the present invention, the image obtaining module 200 includes a plurality of cameras 201 and an image synthesizing unit 202, where the cameras 201 are used for obtaining a plurality of images and the image synthesizing unit 202 is configured to generate a composite picture of the surroundings of the vehicle from the plurality of images.
In this embodiment, the cameras are distributed at least at the front, rear, left and right of the vehicle, so that image information all around the vehicle can be acquired and a panoramic image of the surroundings can be obtained, which facilitates later image processing. Generating the composite image of the vehicle's surroundings from the plurality of images is essentially panoramic image synthesis as known in the prior art, but in the present application no blurring or similar operation is applied to the background during synthesis, because the subsequent steps need to search the background image for the corresponding identification information in order to generate decisions later.
As shown in fig. 9, as a preferred embodiment of the present invention, the model map generating module 300 includes a label information generating unit 301, a classification identifying unit 302, and a calibrating unit 303, where the label information generating unit 301 is configured to input peripheral image information and obtain label information in the peripheral image information, where the label information at least includes road characteristic information and road sign information; the classification identification unit 302 is configured to input the label information into a preset classification model for classification identification, and output attribute information about the label information, where the attribute information is used to represent a content attribute of the label information; the calibration unit 303 is configured to calibrate the label information and the attribute information on the peripheral image information, and generate a real-time model map with the label information and the attribute information by combining the position information of the vehicle.
As shown in fig. 10, as a preferred embodiment of the present invention, the driving state determining module 400 includes an information determining unit 401, a decision generating unit 402 and a target driving state determining unit 403, where the information determining unit 401 is configured to determine labeling information and attribute information on a driving route of a current vehicle according to position information of the vehicle; the decision generating unit 402 is configured to analyze the attribute information and obtain a decision scheme corresponding to the attribute information from a decision library; the target driving state determining unit 403 is configured to generate a target driving state of the vehicle according to the decision scheme corresponding to the labeled information, and issue the target driving state to the vehicle, so that the vehicle executes a corresponding action.
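Purely as a structural illustration of Figs. 7 to 10, the modules can be pictured as the following class skeleton; the method bodies are stubs and every name is a hypothetical placeholder rather than the claimed implementation.

```python
# Structural sketch of the scheduling system of Figs. 7-10; all bodies are stubs.
class PositionDeterminingModule:                  # module 100
    def get_position(self):
        return (0.0, 0.0)                         # e.g. read from an on-board GPS


class ImageAcquisitionModule:                     # module 200: cameras 201 + synthesizer 202
    def get_surround_image(self):
        return b""                                # composed surround picture


class ModelMapGenerationModule:                   # module 300: units 301-303
    def generate(self, surround_image, position):
        return {"position": position, "annotations": []}


class DrivingStateDeterminationModule:            # module 400: units 401-403
    def determine(self, model_map, position):
        return {"actions": []}                    # target driving state for the vehicle


class SchedulingSystem:
    """Wires the four modules together in the order of Fig. 7."""

    def __init__(self):
        self.position_module = PositionDeterminingModule()
        self.image_module = ImageAcquisitionModule()
        self.map_module = ModelMapGenerationModule()
        self.state_module = DrivingStateDeterminationModule()

    def run_once(self):
        position = self.position_module.get_position()
        image = self.image_module.get_surround_image()
        model_map = self.map_module.generate(image, position)
        return self.state_module.determine(model_map, position)
```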
The embodiment of the invention discloses an AI-based automatic driving method and, based on this method, provides an AI-based intelligent unmanned scheduling system. The method acquires the position information of the vehicle in real time; acquires the surrounding image information of the vehicle in real time; analyzes the surrounding image information to determine the road condition information at the position of the vehicle and generate a real-time model map of the vehicle; and determines the target driving state of the vehicle according to the real-time model map and the position information of the vehicle and issues it to the vehicle. Because decisions are made from on-site road condition information, the vehicle can still drive automatically and normally in the field or on road sections for which the map has not been updated in time, which effectively solves the prior-art problem that a vehicle cannot automatically adjust its driving state according to the actual conditions in such scenarios.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An AI-based autopilot method comprising the steps of:
acquiring the position information of the vehicle in real time;
acquiring surrounding image information of a vehicle in real time;
analyzing the surrounding image information to determine road condition information of the position where the vehicle is located and generate a real-time model map of the vehicle;
and determining the target driving state of the vehicle according to the real-time model map and the position information of the vehicle, and issuing the target driving state to the vehicle.
2. The AI-based automatic driving method according to claim 1, wherein the step of acquiring the image information of the surroundings of the vehicle in real time specifically comprises:
acquiring a plurality of images, wherein the images are obtained by a plurality of cameras on a vehicle respectively;
a composite image of the surroundings of the vehicle is generated from the plurality of images.
3. The AI-based automatic driving method according to claim 1, wherein the step of analyzing the surrounding image information to determine road condition information at a location of the vehicle and generating a real-time model map of the vehicle in combination with the location information of the vehicle comprises:
inputting peripheral image information and acquiring marking information in the peripheral image information, wherein the marking information at least comprises road characteristic information and road sign information;
inputting the marking information into a preset classification model for classification and identification, and outputting attribute information about the marking information, wherein the attribute information is used for representing the content attribute of the marking information;
and calibrating the labeling information and the attribute information on the peripheral image information, and generating a real-time model map with the labeling information and the attribute information by combining the position information of the vehicle.
4. The AI-based automatic driving method according to claim 3, wherein the step of calibrating the label information and the attribute information to the surrounding image information to generate the real-time model map with the label information and the attribute information specifically comprises:
determining a visual angle azimuth according to the vehicle driving direction;
calibrating the marking information and the attribute information on the peripheral image information according to the view direction;
and generating a real-time model map with the labeling information and the attribute information according to the position information of the vehicle.
5. The AI-based automatic driving method according to claim 1, wherein the step of determining a target driving status of the vehicle according to the real-time model map and the position information of the vehicle, and issuing the target driving status to the vehicle specifically comprises:
according to the position information of the vehicle, determining marking information and attribute information on the current vehicle driving route;
analyzing the attribute information, and acquiring a decision scheme corresponding to the attribute information from a decision library;
and generating a target driving state of the vehicle according to the decision scheme corresponding to the labeling information, and issuing the target driving state to the vehicle to enable the vehicle to execute corresponding actions.
6. The AI-based automatic driving method according to claim 5, wherein the step of analyzing the attribute information and obtaining a decision-making scheme corresponding to the attribute information from a decision-making library specifically comprises:
analyzing the attribute information to determine characteristic information of the attribute information;
inputting the characteristic information of the attribute information into a decision base;
and outputting a decision scheme corresponding to the attribute information.
7. An AI-based unmanned intelligent scheduling system, comprising:
the position determining module is used for acquiring the position information of the vehicle in real time;
the image acquisition module is used for acquiring the peripheral image information of the vehicle in real time;
the model map generation module is used for analyzing the peripheral image information to determine road condition information of the position where the vehicle is located and generate a real-time model map of the vehicle; and the running state determining module is used for determining the target running state of the vehicle according to the real-time model map and the position information of the vehicle and issuing the target running state to the vehicle.
8. An AI-based unmanned intelligent scheduling system according to claim 7, wherein the image acquisition module comprises:
a plurality of cameras for acquiring a plurality of images; and an image synthesizing unit for generating a composite picture of the surroundings of the vehicle from the plurality of images.
9. An AI-based unmanned intelligent scheduling system according to claim 7, wherein the model map generation module comprises:
the label information generating unit is used for inputting peripheral image information and acquiring label information in the peripheral image information, wherein the label information at least comprises road characteristic information and road sign information;
the classification identification unit is used for inputting the marking information into a preset classification model for classification identification and outputting attribute information related to the marking information, wherein the attribute information is used for representing the content attribute of the marking information; and
the calibration unit is used for calibrating the marking information and the attribute information on the peripheral image information and generating a real-time model map with the marking information and the attribute information by combining the position information of the vehicle.
10. An AI-based unmanned intelligent scheduling system according to claim 7, wherein the driving status determination module comprises:
the information determining unit is used for determining marking information and attribute information on the current vehicle driving route according to the position information of the vehicle;
the decision generating unit is used for analyzing the attribute information and acquiring a decision scheme corresponding to the attribute information from a decision library; and the target running state determining unit is used for generating a target running state of the vehicle according to the decision scheme corresponding to the marking information, and issuing the target running state to the vehicle to enable the vehicle to execute corresponding actions.
Application CN202110112363.4A, filed 2021-01-27 (priority 2021-01-27): Intelligent unmanned dispatching system based on AI and automatic driving method. Status: Pending. Publication: CN112885132A.

Priority Applications (1)

Application Number: CN202110112363.4A (published as CN112885132A); Priority Date: 2021-01-27; Filing Date: 2021-01-27; Title: Intelligent unmanned dispatching system based on AI and automatic driving method

Applications Claiming Priority (1)

Application Number: CN202110112363.4A (published as CN112885132A); Priority Date: 2021-01-27; Filing Date: 2021-01-27; Title: Intelligent unmanned dispatching system based on AI and automatic driving method

Publications (1)

Publication Number: CN112885132A; Publication Date: 2021-06-01

Family

ID=76053399

Family Applications (1)

Application Number: CN202110112363.4A (CN112885132A, Pending); Priority Date: 2021-01-27; Filing Date: 2021-01-27; Title: Intelligent unmanned dispatching system based on AI and automatic driving method

Country Status (1)

Country Link
CN (1) CN112885132A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007110944A (en) * 2005-10-19 2007-05-10 Mkv Platech Co Ltd Sprinkler tube
US20180211119A1 (en) * 2017-01-23 2018-07-26 Ford Global Technologies, Llc Sign Recognition for Autonomous Vehicles
CN109429518A (en) * 2017-06-22 2019-03-05 百度时代网络技术(北京)有限公司 Automatic Pilot traffic forecast based on map image
KR20190029192A (en) * 2017-09-12 2019-03-20 현대자동차주식회사 Automatic Driving control apparatus, vehicle having the same and method for controlling the same
CN108334084A (en) * 2018-01-24 2018-07-27 北京墨丘科技有限公司 Automatic driving mode determines method, apparatus, electronic equipment and storage medium
CN110920604A (en) * 2018-09-18 2020-03-27 阿里巴巴集团控股有限公司 Driving assistance method, driving assistance system, computing device, and storage medium
CN111383473A (en) * 2018-12-29 2020-07-07 安波福电子(苏州)有限公司 Self-adaptive cruise system based on traffic sign speed limit indication
CN111223294A (en) * 2019-11-12 2020-06-02 维特瑞交通科技有限公司 Intelligent vehicle guiding control method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jesse Levinson et al., "Traffic light mapping, localization, and state detection for autonomous vehicles", Robotics and Automation (ICRA), 31 May 2011, pages 5784-5791, XP055478036, DOI: 10.1109/ICRA.2011.5979714 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114035576A (en) * 2021-11-09 2022-02-11 北京赛目科技有限公司 Method and device for determining driving path
CN114035576B (en) * 2021-11-09 2023-09-08 北京赛目科技股份有限公司 Driving path determining method and device
CN114093191A (en) * 2021-11-25 2022-02-25 济南亚跃信息技术有限公司 Unmanned intelligent scheduling system and automatic driving method

Similar Documents

Publication Publication Date Title
CN106980813B (en) Gaze generation for machine learning
CN113196357B (en) Method and system for controlling autonomous vehicles in response to detecting and analyzing strange traffic signs
US11610411B2 (en) Driver assistance system and method for displaying traffic information
DE102016217645B4 (en) Method for providing information about a probable driving intention of a vehicle
US20210089794A1 (en) Vehicle system and method for detecting objects and object distance
US11620419B2 (en) Systems and methods for identifying human-based perception techniques
US10942519B2 (en) System and method for navigating an autonomous driving vehicle
CN111386563B (en) Teacher data generation device
CN113358125B (en) Navigation method and system based on environment target detection and environment target map
CN112885132A (en) Intelligent unmanned dispatching system based on AI and automatic driving method
JP7028228B2 (en) Display system, display control device and display control program
DE112018004953T5 (en) INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING PROCESS, PROGRAM AND MOVING BODY
US20200318989A1 (en) Route guidance apparatus and method
GB2510698A (en) Driver assistance system
US20230161034A1 (en) Point cloud registration for lidar labeling
CN110940349A (en) Method for planning a trajectory of a vehicle
DE112018004904T5 (en) INFORMATION PROCESSING DEVICE, SELF-POSITION ESTIMATE AND PROGRAM
DE112021002953T5 (en) INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD AND PROGRAM
DE112021001882T5 (en) INFORMATION PROCESSING ESTABLISHMENT, INFORMATION PROCESSING METHOD AND PROGRAM
Beck et al. Automated vehicle data pipeline for accident reconstruction: New insights from LiDAR, camera, and radar data
CN115520100A (en) Automobile electronic rearview mirror system and vehicle
Vivan et al. No cost autonomous vehicle advancements in CARLA through ROS
Zhang et al. Research on performance test method of lane departure warning system with PreScan
CN116564116A (en) Intelligent auxiliary driving guiding system and method driven by digital twin
CN115273003A (en) Traffic sign recognition and navigation decision method and system combining character positioning

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination