CN111661054B - Vehicle control method, device, electronic device and storage medium - Google Patents
- Publication number
- CN111661054B (Application CN202010382009.9A)
- Authority
- CN
- China
- Prior art keywords
- signal lamps
- vehicle
- image information
- group
- road image
- Prior art date
- Legal status
- Active
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/18—Propelling the vehicle
- B60W30/18009—Propelling the vehicle related to particular drive situations
- B60W30/18159—Traversing an intersection
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W60/00—Drive control systems specially adapted for autonomous road vehicles
- B60W60/001—Planning or execution of driving tasks
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2552/00—Input parameters relating to infrastructure
- B60W2552/53—Road markings, e.g. lane marker or crosswalk
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W2555/00—Input parameters relating to exterior conditions, not covered by groups B60W2552/00, B60W2554/00
- B60W2555/60—Traffic rules, e.g. speed limits or right of way
Abstract
The invention discloses a vehicle control method, a vehicle control apparatus, an electronic device and a storage medium. The method comprises the following steps: acquiring currently collected road image information, wherein the road image information comprises image information of at least one group of signal lamps; determining the lighting color of each group of signal lamps and its corresponding lane based on the road image information; determining the target lane in which the vehicle is currently located based on the current position of the vehicle; and controlling the vehicle based on the lighting color of the group of signal lamps corresponding to the target lane. The method and apparatus can accurately identify the group of signal lamps corresponding to the lane in which the vehicle is currently located, provide an accurate basis for controlling the vehicle, and meet users' requirements for safe driving.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a vehicle control method and apparatus, an electronic device, and a storage medium.
Background
Unmanned vehicles can replace drivers, reducing the occurrence of traffic accidents, and can complete special operations in a driver's place, so they are receiving more and more attention. For such a vehicle to travel safely, the signal lamps at each intersection must be identified accurately and in real time.
In the related art, when multiple groups of signal lamps appear in an image captured by the vehicle-mounted camera, it cannot be accurately determined which group of signal lamps the current vehicle should obey. This affects driving and fails to meet users' requirements for safe vehicle operation.
Disclosure of Invention
In view of the above, the present invention provides a vehicle control method, a vehicle control apparatus, an electronic device and a storage medium to solve the above technical problems.
In order to achieve the above purpose, the invention adopts the following technical solution:
according to a first aspect of an embodiment of the present invention, there is provided a vehicle control method including:
acquiring currently acquired road image information, wherein the road image information comprises image information of at least one group of signal lamps;
determining the lighting colors of the signal lamps in each group and the corresponding lanes based on the road image information;
determining a target lane in which a vehicle is currently located based on a current position of the vehicle;
and controlling the vehicle based on the lighting color in a group of signal lamps corresponding to the target lane.
In an embodiment, the road image information includes road image information collected by a binocular camera carried by the vehicle, and the road image information further includes distances between each group of signal lamps of the at least one group of signal lamps and the binocular camera.
In an embodiment, the determining the lighting colors of the groups of signal lamps and the corresponding lanes based on the road image information includes:
processing the road image information based on a pre-constructed deep learning model to obtain the lighting colors of the signal lamps and the coordinates of the signal lamps in each group in an image coordinate system;
and determining the corresponding lane of each group of signal lamps based on the coordinates of each group of signal lamps in the image coordinate system.
In an embodiment, the determining the lanes corresponding to the groups of signal lights based on the coordinates of the groups of signal lights in the image coordinate system includes:
converting the coordinates of the groups of signal lamps in the image coordinate system to a map coordinate system based on the corresponding relation between the image coordinate system and the map coordinate system to obtain the coordinates of the groups of signal lamps in the map coordinate system;
and matching the coordinates of the groups of signal lamps in a map coordinate system with a pre-constructed high-precision map to obtain the lanes corresponding to the groups of signal lamps.
In one embodiment, the controlling the vehicle based on the lighting color of a group of signal lights corresponding to the target lane includes:
if the lighting color is a color representing no passing, controlling the vehicle to stop;
if the lighting color is a color representing a warning, controlling the vehicle to stop or drive through the current intersection based on the position of the vehicle relative to the stop line of the intersection;
and if the lighting color is a color indicating that the vehicle is allowed to pass, controlling the vehicle to run through the current intersection.
According to a second aspect of the embodiment of the present invention, there is provided a vehicle control apparatus including:
the system comprises an image information acquisition module, a signal light acquisition module and a signal light processing module, wherein the image information acquisition module is used for acquiring currently acquired road image information which comprises image information of at least one group of signal lights;
the color lane determining module is used for determining the lighting colors of the signal lamps of each group and the corresponding lanes based on the road image information;
the target lane determining module is used for determining a target lane where the vehicle is located currently based on the current position of the vehicle;
and the vehicle control module is used for controlling the vehicle based on the lighting colors in a group of signal lamps corresponding to the target lane.
In an embodiment, the road image information includes road image information collected by a binocular camera carried by the vehicle, and the road image information further includes distances between each group of signal lamps of the at least one group of signal lamps and the binocular camera.
In one embodiment, the color lane determination module includes:
the color coordinate determination unit is used for processing the road image information based on a pre-constructed deep learning model to obtain the lighting colors of the signal lamps in each group and coordinates in an image coordinate system;
and the corresponding lane determining unit is used for determining the lane corresponding to each group of signal lamps based on the coordinates of each group of signal lamps in the image coordinate system.
In an embodiment, the corresponding lane determining unit is further configured to:
converting the coordinates of the groups of signal lamps in the image coordinate system to a map coordinate system based on the corresponding relation between the image coordinate system and the map coordinate system to obtain the coordinates of the groups of signal lamps in the map coordinate system;
and matching the coordinates of the groups of signal lamps in a map coordinate system with a pre-constructed high-precision map to obtain the lanes corresponding to the groups of signal lamps.
In one embodiment, the vehicle control module includes:
a first control unit for controlling the vehicle to stop when the lighting color is a color indicating no-pass;
a second control unit for controlling the vehicle to stop or to travel through the current intersection based on the position of the vehicle relative to the stop line of the intersection when the lighting color is a color indicating a warning;
and the third control unit is used for controlling the vehicle to drive through the current intersection when the lighting color is a color representing permission of passing.
According to a third aspect of embodiments of the present invention, there is provided an electronic apparatus, including:
a processor;
a memory configured to store processor-executable instructions;
wherein the processor is configured to:
acquiring currently acquired road image information, wherein the road image information comprises image information of at least one group of signal lamps;
determining the lighting colors of the signal lamps in each group and the corresponding lanes based on the road image information;
determining a target lane in which a vehicle is currently located based on a current position of the vehicle;
and controlling the vehicle based on the lighting color in a group of signal lamps corresponding to the target lane.
According to a fourth aspect of embodiments of the present invention, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements:
acquiring currently acquired road image information, wherein the road image information comprises image information of at least one group of signal lamps;
determining the lighting colors of the signal lamps in each group and the corresponding lanes based on the road image information;
determining a target lane in which a vehicle is currently located based on a current position of the vehicle;
and controlling the vehicle based on the lighting color in a group of signal lamps corresponding to the target lane.
Compared with the prior art, the invention acquires currently collected road image information that includes image information of at least one group of signal lamps, determines the lighting color of each group of signal lamps and its corresponding lane based on that image information, determines the target lane in which the vehicle is located based on the vehicle's current position, and then controls the vehicle based on the lighting color of the group of signal lamps corresponding to the target lane. Because both the lighting colors and the corresponding lanes are determined from the currently collected road image information, and the target lane is determined from the vehicle's current position, the group of signal lamps corresponding to the vehicle's current lane can be accurately identified. This provides an accurate basis for controlling the vehicle and meets users' requirements for safe driving.
Drawings
FIG. 1 shows a flow chart of a vehicle control method according to an exemplary embodiment of the invention;
fig. 2 shows a flow chart of how to determine the lighting colors of the groups of signal lights and the corresponding lanes based on the road image information according to an exemplary embodiment of the present invention;
fig. 3 shows a flowchart of how to determine the lanes corresponding to the groups of signal lights based on the coordinates of the groups of signal lights in the image coordinate system according to an exemplary embodiment of the present invention;
fig. 4 shows a block diagram of a vehicle control apparatus according to an exemplary embodiment of the invention;
fig. 5 is a block diagram showing a structure of a vehicle control apparatus according to another example embodiment of the invention;
fig. 6 shows a block diagram of an electronic device according to an exemplary embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to specific embodiments shown in the drawings. These embodiments are not intended to limit the present invention, and structural, methodological, or functional changes made by those of ordinary skill in the art in light of these embodiments are intended to be within the scope of the present invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms actual, predicted, etc. may be used herein to describe various structures, these structures should not be limited by these terms. These terms are only used to distinguish one type of structure from another.
Fig. 1 shows a flowchart of a vehicle control method according to an exemplary embodiment of the invention. The method of this embodiment may be applied to a terminal device (e.g., a vehicle-mounted terminal, a smart phone, a tablet computer or a notebook computer), or to a server (e.g., a single server or a server cluster formed by multiple servers). As shown in fig. 1, the method comprises the following steps S101-S104:
in step S101, the currently acquired road image information is acquired.
The road image information comprises image information of at least one group of signal lamps.
In this embodiment, during the current driving process of the vehicle, the currently collected road image information may be acquired, and this image information may include image information of one or more groups of signal lamps. For example, it may include image information of the group of signal lamps at each intersection in the vehicle's current driving environment.
In one embodiment, the set of signal lights may be a set of lights composed of three red, yellow and green colored lights, or may be a single light capable of producing red, yellow or green colored light.
In this embodiment, during the current driving process of the vehicle, the road image information currently collected by the vehicle-mounted camera may be acquired. For example, the road image information includes road image information collected by a binocular camera carried by the vehicle, and further includes the distance between each group of signal lamps in the at least one group of signal lamps and the binocular camera.
It should be noted that although the road image information may be collected by a vehicle-mounted binocular camera, a developer may select another camera device capable of collecting distance information, such as a depth camera, according to actual business needs; this embodiment does not limit this.
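The patent does not specify how the binocular camera derives the per-lamp distances. A conventional approach, sketched here under that assumption, recovers depth from stereo disparity via z = f·B/d; the function name and parameter values are illustrative only:

```python
def distance_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from stereo disparity: z = f * B / d.

    focal_px: camera focal length in pixels; baseline_m: separation of the
    two lenses in meters; disparity_px: horizontal pixel shift of the lamp
    between the left and right images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

# A lamp shifted 20 px between the views of a camera with f = 1000 px and
# a 0.4 m baseline sits about 20 m away.
print(distance_from_disparity(1000.0, 0.4, 20.0))  # → 20.0
```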
In step S102, the lighting colors of the groups of signal lights and the corresponding lanes are determined based on the road image information.
In this embodiment, after the currently acquired road image information is acquired, the lighting colors and the corresponding lanes of the groups of signal lamps may be determined based on the road image information.
For example, after obtaining the road image information currently collected by a vehicle-mounted camera on a vehicle, the road image information may be processed based on an image processing technique in the related art, and then the lighting colors of each group of signal lamps may be obtained. Meanwhile, the lanes corresponding to the groups of signal lamps can be determined based on road signs or buildings contained in the image information.
In another embodiment, the manner of determining the lighting colors and the corresponding lanes of the groups of signal lamps based on the road image information may also be referred to the following embodiment shown in fig. 2, which is not described in detail herein.
In step S103, a target lane in which the vehicle is currently located is determined based on the current position of the vehicle.
In this embodiment, in the process of vehicle driving, the current position of the vehicle can also be acquired in real time.
For example, the current position of the vehicle may be determined based on a vehicle-mounted GPS receiver, and the target lane in which the vehicle is currently located may be determined based on that position and the coordinates of the lanes in the map.
In one embodiment, after the current position of the vehicle is determined via GPS, it may be matched against a pre-constructed high-precision map to obtain the target lane in which the vehicle is currently located.
It should be noted that, in addition to GPS-based positioning, a developer may select other positioning methods to determine the vehicle's current position according to actual business needs, which is not limited in this embodiment.
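As an illustration of this lane-matching step, the sketch below matches a GPS fix against hypothetical lane centerlines by nearest distance. The `LANES` table and lane ids are invented for the example; a real high-precision map stores far richer lane geometry:

```python
import math

# Hypothetical lane table: lane id -> centerline sample points (x, y) in
# map coordinates, two parallel lanes 3.5 m apart.
LANES = {
    "lane_1": [(0.0, 0.0), (0.0, 10.0), (0.0, 20.0)],
    "lane_2": [(3.5, 0.0), (3.5, 10.0), (3.5, 20.0)],
}

def match_target_lane(position, lanes=LANES):
    """Return the lane whose centerline sample is nearest to the GPS fix."""
    px, py = position
    def nearest_dist(points):
        return min(math.hypot(px - x, py - y) for x, y in points)
    return min(lanes, key=lambda lane: nearest_dist(lanes[lane]))

print(match_target_lane((3.2, 12.0)))  # → lane_2
```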
In step S104, the vehicle is controlled based on the lighting color in a group of signal lights corresponding to the target lane.
In this embodiment, after the lighting colors and the corresponding lanes of the groups of signal lamps are determined based on the road image information, and the current target lane of the vehicle is determined based on the current position of the vehicle, a group of signal lamps corresponding to the current target lane of the vehicle may be determined, and the vehicle may be controlled based on the lighting colors of the group of signal lamps corresponding to the target lane.
For example, when the lighting color is a color indicating no-pass, such as a red light, the vehicle may be controlled to stop.
Alternatively, when the lighting color is a color indicating permission of passage, such as a green light, the vehicle may be controlled to travel through the current intersection.
Alternatively, when the lighting color is a color indicating a warning, such as a yellow light, the vehicle may be controlled to stop or to travel through the current intersection based on the position of the vehicle relative to the stop line of the intersection. For example, if the vehicle has already crossed the stop line when the yellow light turns on, it may be controlled to continue through the current intersection; conversely, if it has not yet crossed the stop line when the yellow light turns on, it may be controlled to stop.
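The three control rules above can be collected into a single decision function. This is a minimal sketch of the described logic, not the patent's implementation; the color strings and function name are assumptions:

```python
def decide_action(light_color: str, crossed_stop_line: bool) -> str:
    """Map the lit color of the matched signal group to a vehicle action.

    Mirrors the rules above: red -> stop; green -> proceed; yellow ->
    proceed only if the stop line has already been crossed.
    """
    if light_color == "red":
        return "stop"
    if light_color == "green":
        return "proceed"
    if light_color == "yellow":
        return "proceed" if crossed_stop_line else "stop"
    raise ValueError(f"unrecognized light color: {light_color}")

print(decide_action("yellow", crossed_stop_line=True))   # → proceed
print(decide_action("yellow", crossed_stop_line=False))  # → stop
```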
As can be seen from the above technical solution, in this embodiment the currently collected road image information, which includes image information of at least one group of signal lamps, is acquired; the lighting color of each group of signal lamps and its corresponding lane are determined based on that image information; the target lane in which the vehicle is located is determined based on the vehicle's current position; and the vehicle is then controlled based on the lighting color of the group of signal lamps corresponding to the target lane. Because the lighting colors and corresponding lanes are determined from the currently collected road image information, and the target lane is determined from the vehicle's current position, the group of signal lamps corresponding to the vehicle's current lane can be accurately identified, providing an accurate basis for controlling the vehicle and meeting users' requirements for safe driving.
Fig. 2 shows a flow chart of how to determine the lighting colors of the groups of signal lights and the corresponding lanes based on the road image information according to an exemplary embodiment of the present invention; the present embodiment is exemplified by how to determine the lighting colors of the groups of signal lamps and the corresponding lanes based on the road image information on the basis of the above embodiments. As shown in fig. 2, the determining the lighting colors and the corresponding lanes of the groups of signal lamps based on the road image information in step S102 may include the following steps S201 to S202:
in step S201, the road image information is processed based on a pre-constructed deep learning model, and the lighting colors of the groups of signal lamps and the coordinates in the image coordinate system are obtained.
In this embodiment, after the currently acquired road image information is acquired, the road image information may be processed based on a pre-constructed deep learning model, so as to obtain the lighting colors of the signal lamps and the coordinates of the signal lamps in the image coordinate system.
For example, a deep learning model for processing road image information may be constructed in advance, the input of the deep learning model may be the road image information, and the output thereof may be the lighting colors of each group of signal lamps included in the image and the coordinates in the image coordinate system.
It should be noted that, for the training mode of the deep learning model, reference may be made to explanations and descriptions in the related art, for example, model training is performed based on calibrated sample image information, and the present embodiment does not limit this.
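The patent leaves the model architecture and training open. Purely to illustrate the expected model outputs (a lighting color plus image-coordinate bounding box per signal group), the sketch below substitutes a crude rule-based color classifier for the trained deep learning model; the thresholds and detection format are invented:

```python
def classify_lamp_color(mean_rgb):
    """Crude stand-in for the model's color head: pick the lit color from
    the mean RGB of a detected lamp crop. Yellow lights up both the red
    and green channels; red and green each dominate a single channel."""
    r, g, b = mean_rgb
    if r > 150 and g > 150:
        return "yellow"
    return "red" if r >= g else "green"

# Hypothetical detections, one entry per signal group, with a bounding
# box (x1, y1, x2, y2) in image (pixel) coordinates.
detections = [
    {"bbox": (420, 80, 460, 200), "mean_rgb": (210, 40, 30)},
    {"bbox": (700, 85, 740, 205), "mean_rgb": (30, 200, 60)},
]
for det in detections:
    det["color"] = classify_lamp_color(det["mean_rgb"])

print([d["color"] for d in detections])  # → ['red', 'green']
```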
In step S202, the lanes corresponding to the groups of signal lights are determined based on the coordinates of the groups of signal lights in the image coordinate system.
In this embodiment, after the road image information is processed based on the pre-constructed deep learning model to obtain the lighting colors and the coordinates of the groups of signal lamps in the image coordinate system, the lanes corresponding to the groups of signal lamps may be determined based on the coordinates of the groups of signal lamps in the image coordinate system.
For example, the corresponding map coordinates may be determined based on the coordinates of each group of signal lamps in the image coordinate system, and then the lane corresponding to each group of signal lamps may be determined based on the map coordinates of each group of signal lamps.
In another embodiment, the above-mentioned manner of determining the lanes corresponding to the groups of signal lights based on the coordinates of the groups of signal lights in the image coordinate system can be referred to the embodiment shown in fig. 3, which will not be described in detail first.
According to the above technical solution, the road image information is processed by the pre-constructed deep learning model to obtain the lighting color of each group of signal lamps and its coordinates in the image coordinate system, and the lane corresponding to each group is then determined from those coordinates. In this way, both the lighting colors and the corresponding lanes can be determined from the road image information, enabling the subsequent step of controlling the vehicle based on the lighting color of the group of signal lamps corresponding to the target lane, providing an accurate basis for vehicle control, and meeting users' requirements for safe driving.
Fig. 3 shows a flowchart of how to determine the lanes corresponding to the groups of signal lights based on the coordinates of the groups of signal lights in the image coordinate system according to an exemplary embodiment of the present invention; the present embodiment takes an example of how to determine the lanes corresponding to the groups of signal lights based on the coordinates of the groups of signal lights in the image coordinate system on the basis of the above embodiments. As shown in fig. 3, the determining the lanes corresponding to the groups of signal lights based on the coordinates of the groups of signal lights in the image coordinate system in step S202 may include the following steps S301 to S302:
in step S301, based on the correspondence between the image coordinate system and the map coordinate system, the coordinates of the groups of signal lamps in the image coordinate system are converted into the map coordinate system, so as to obtain the coordinates of the groups of signal lamps in the map coordinate system.
In this embodiment, after the road image information is processed based on the pre-constructed deep learning model to obtain the lighting colors and the coordinates of the groups of signal lamps in the image coordinate system, the coordinates of the groups of signal lamps in the image coordinate system may be converted into the map coordinate system based on the corresponding relationship between the image coordinate system and the map coordinate system to obtain the coordinates of the groups of signal lamps in the map coordinate system.
It should be noted that, the determination manner of the correspondence between the image coordinate system and the map coordinate system may refer to explanations and descriptions in related technologies, for example, calibration is performed on the image coordinate system and the map coordinate system to obtain the correspondence between the image coordinate system and the map coordinate system, and the present embodiment does not limit this.
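One common realization of such a calibrated correspondence, for a (near-)planar scene, is a 3×3 homography applied in homogeneous coordinates. The patent does not prescribe this form; the matrix values below are a toy calibration chosen for illustration:

```python
def apply_homography(H, point):
    """Map an image pixel (u, v) into map coordinates through a calibrated
    3x3 homography H: multiply in homogeneous coordinates, then divide by
    the third component."""
    u, v = point
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)

# Toy calibration: scale pixels by 0.01 and shift the map origin.
H = [[0.01, 0.0, 5.0],
     [0.0, 0.01, 10.0],
     [0.0, 0.0, 1.0]]
print(apply_homography(H, (400, 200)))  # → (9.0, 12.0)
```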
In step S302, coordinates of each group of traffic lights in the map coordinate system are matched with a pre-constructed high-precision map, so as to obtain lanes corresponding to each group of traffic lights.
In this embodiment, after obtaining the coordinates of each group of signal lamps in the map coordinate system, a pre-constructed high-precision map may be obtained, where the high-precision map includes pre-labeled signal lamps, and then the coordinates of each group of signal lamps in the map coordinate system may be matched with the high-precision map, so as to obtain lanes corresponding to each group of signal lamps.
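This matching step can be sketched as a nearest-neighbor lookup against the pre-labeled signal lamps in the map. The `LABELED_LAMPS` annotation, distance threshold, and lane ids below are illustrative assumptions, not data from the patent:

```python
import math

# Hypothetical HD-map annotation: each pre-labeled signal group carries
# its map coordinates and the lane it governs.
LABELED_LAMPS = [
    {"pos": (9.0, 12.0), "lane": "lane_1"},
    {"pos": (12.5, 12.0), "lane": "lane_2"},
]

def lane_for_lamp(lamp_map_pos, labeled=LABELED_LAMPS, max_dist=2.0):
    """Match a detected lamp's map coordinates to the nearest labeled lamp
    and return the lane that lamp governs; None if nothing is close enough."""
    x, y = lamp_map_pos
    best = min(labeled, key=lambda l: math.hypot(x - l["pos"][0], y - l["pos"][1]))
    if math.hypot(x - best["pos"][0], y - best["pos"][1]) > max_dist:
        return None
    return best["lane"]

print(lane_for_lamp((9.3, 11.8)))  # → lane_1
```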
As can be seen from the foregoing technical solution, in this embodiment the coordinates of each group of signal lamps in the image coordinate system are converted into the map coordinate system based on the correspondence between the two coordinate systems, and the resulting map coordinates are matched against a pre-constructed high-precision map to obtain the lane corresponding to each group of signal lamps. In this way, the lane corresponding to each group can be determined from its image coordinates, enabling the subsequent step of controlling the vehicle based on the lighting color of the group of signal lamps corresponding to the target lane, providing an accurate basis for vehicle control, and meeting users' requirements for safe driving.
Fig. 4 shows a block diagram of a vehicle control apparatus according to an exemplary embodiment of the invention; the device of the embodiment can be applied to a terminal device (such as a vehicle-mounted terminal, a smart phone, a tablet computer, a notebook computer, or the like), or can be applied to a server (such as a server or a server cluster formed by multiple servers, or the like). As shown in fig. 4, the apparatus includes: an image information acquisition module 110, a color lane determination module 120, a target lane determination module 130, and a vehicle control module 140, wherein:
the image information acquiring module 110 is configured to acquire currently acquired road image information, where the road image information includes image information of at least one group of signal lamps;
a color lane determining module 120, configured to determine lighting colors of the groups of signal lamps and corresponding lanes based on the road image information;
a target lane determination module 130 for determining a target lane in which a vehicle is currently located based on a current location of the vehicle;
and a vehicle control module 140, configured to control the vehicle based on the lighting color of a group of signal lights corresponding to the target lane.
As can be seen from the above technical solution, in this embodiment the currently collected road image information, which includes image information of at least one group of signal lamps, is acquired; the lighting color of each group of signal lamps and its corresponding lane are determined based on that image information; the target lane in which the vehicle is located is determined based on the vehicle's current position; and the vehicle is then controlled based on the lighting color of the group of signal lamps corresponding to the target lane. Because the lighting colors and corresponding lanes are determined from the currently collected road image information, and the target lane is determined from the vehicle's current position, the group of signal lamps corresponding to the vehicle's current lane can be accurately identified, providing an accurate basis for controlling the vehicle and meeting users' requirements for safe driving.
Fig. 5 is a block diagram showing the structure of a vehicle control apparatus according to another exemplary embodiment of the invention. The device of this embodiment can be applied to a terminal device (such as a vehicle-mounted terminal, a smart phone, a tablet computer, or a notebook computer) or to a server (such as a single server or a server cluster formed by multiple servers). The image information obtaining module 210, the color lane determining module 220, the target lane determining module 230, and the vehicle control module 240 have the same functions as the image information acquisition module 110, the color lane determination module 120, the target lane determination module 130, and the vehicle control module 140 in the embodiment shown in fig. 4, and are not repeated herein. As shown in fig. 5, the road image information may include road image information collected by a binocular camera carried by the vehicle, and may further include the distance between each group of signal lamps of the at least one group and the binocular camera.
In one embodiment, the color lane determination module 220 includes:
a color coordinate determination unit 221, configured to process the road image information based on a pre-constructed deep learning model to obtain the lighting colors of each group of signal lamps and the coordinates of each group of signal lamps in an image coordinate system;
and a corresponding lane determination unit 222, configured to determine the lanes corresponding to each group of signal lamps based on the coordinates of each group of signal lamps in the image coordinate system.
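The coordinate extraction performed by the unit above can be sketched as follows. The detection output format and the function name `centers_from_detections` are illustrative assumptions: a real deep learning model would emit labeled bounding boxes, from which each group's image-coordinate center is derived in this way.

```python
def centers_from_detections(detections):
    """Derive each signal group's center in the image coordinate system
    from the bounding boxes a detection model would emit.

    detections: list of (color, (x_min, y_min, x_max, y_max)) tuples.
    Returns a list of (color, (center_x, center_y)) tuples.
    """
    result = []
    for color, (x0, y0, x1, y1) in detections:
        # The box center serves as the group's image coordinate.
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        result.append((color, (cx, cy)))
    return result
```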
In an embodiment, the corresponding lane determination unit 222 may be further configured to:
convert the coordinates of each group of signal lamps in the image coordinate system to a map coordinate system, based on the correspondence between the image coordinate system and the map coordinate system, to obtain the coordinates of each group of signal lamps in the map coordinate system;
and match the coordinates of each group of signal lamps in the map coordinate system with a pre-constructed high-precision map to obtain the lanes corresponding to each group of signal lamps.
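A minimal sketch of the conversion and matching steps above, under two illustrative assumptions: the image-to-map correspondence is given as a calibrated 3x3 homography, and the high-precision map is reduced to one reference point per lane. Both are simplifications of what the patent describes.

```python
def image_to_map(point, h):
    """Apply a 3x3 homography (row-major nested tuples) that maps
    image coordinates to map coordinates; `h` is assumed to have been
    calibrated offline from the image/map correspondence."""
    x, y = point
    xs = h[0][0] * x + h[0][1] * y + h[0][2]
    ys = h[1][0] * x + h[1][1] * y + h[1][2]
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return (xs / w, ys / w)  # homogeneous normalization


def match_lane(map_point, lane_reference_points):
    """Match a signal group's map coordinate to the nearest lane in a
    toy high-precision map: {lane_id: (x, y) reference point}."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    return min(lane_reference_points,
               key=lambda lane: dist2(map_point, lane_reference_points[lane]))
```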
In an embodiment, the vehicle control module 240 may include:
a first control unit 241, configured to control the vehicle to stop when the lighting color is a color indicating that passage is prohibited;
a second control unit 242, configured to control the vehicle to stop or to travel through the current intersection, based on the position of the vehicle relative to the stop line of the intersection, when the lighting color is a color indicating a warning;
and a third control unit 243, configured to control the vehicle to travel through the current intersection when the lighting color is a color indicating that passage is permitted.
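The three control units can be sketched as a single dispatch function. The concrete colors ("red", "yellow", "green") and the warning-light rule based on a signed distance to the stop line are assumptions for illustration; the patent specifies behavior only in terms of prohibition, warning, and permission colors.

```python
def decide_action(color, distance_to_stop_line=None):
    """Sketch of control units 241-243.

    distance_to_stop_line: signed distance in meters; positive means
    the vehicle has not yet reached the stop line (an assumed
    convention, not specified by the patent).
    """
    if color == "red":      # color indicating that passage is prohibited
        return "stop"
    if color == "green":    # color indicating that passage is permitted
        return "proceed"
    if color == "yellow":   # color indicating a warning
        # Stop if the vehicle has not yet reached the stop line;
        # otherwise continue through the intersection.
        if distance_to_stop_line is not None and distance_to_stop_line > 0:
            return "stop"
        return "proceed"
    return "stop"           # conservative default for unknown colors
```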
Since the device embodiments substantially correspond to the method embodiments, the relevant points may be found in the partial description of the method embodiments. The apparatus embodiments described above are merely illustrative: units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the invention, and one of ordinary skill in the art can understand and implement this without inventive effort.
The embodiment of the vehicle control device of the invention can be applied to network equipment. The device embodiments may be implemented by software, by hardware, or by a combination of the two. Taking the software implementation as an example, the apparatus is formed, as a logical device, by the processor of the device in which it is located reading the corresponding computer program instructions from nonvolatile memory into memory for execution. In terms of hardware, fig. 6 shows a hardware structure diagram of the electronic device in which the vehicle control apparatus of the invention is located. In addition to the processor, network interface, memory, and nonvolatile memory shown in fig. 6, the device may include other hardware, such as a forwarding chip responsible for processing messages; in terms of hardware structure, the device may also be a distributed device, possibly including multiple interface cards, so that message processing can be extended at the hardware level.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the following vehicle control method is implemented:
acquiring currently acquired road image information, wherein the road image information comprises image information of at least one group of signal lamps;
determining the lighting colors of the signal lamps in each group and the corresponding lanes based on the road image information;
determining a target lane in which a vehicle is currently located based on a current position of the vehicle;
and controlling the vehicle based on the lighting color in a group of signal lamps corresponding to the target lane.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.
Claims (8)
1. A vehicle control method characterized by comprising:
acquiring currently collected road image information, wherein the road image information comprises image information of at least one group of signal lamps, the road image information comprises road image information collected by a binocular camera carried by the vehicle, and the road image information further comprises the distance between each group of signal lamps in the at least one group of signal lamps and the binocular camera;
determining the lighting colors of the signal lamps in each group and the corresponding lanes based on the road image information;
determining a target lane in which a vehicle is currently located based on a current position of the vehicle;
controlling the vehicle based on the lighting color in a group of signal lamps corresponding to the target lane;
the determining the lighting colors and the corresponding lanes of the groups of signal lamps based on the road image information comprises:
processing the road image information based on a pre-constructed deep learning model to obtain the lighting colors of each group of signal lamps and the coordinates of each group of signal lamps in an image coordinate system;
and determining the corresponding lane of each group of signal lamps based on the coordinates of each group of signal lamps in the image coordinate system.
2. The method of claim 1, wherein the determining the lane corresponding to each group of signal lamps based on the coordinates of each group of signal lamps in an image coordinate system comprises:
converting the coordinates of the groups of signal lamps in the image coordinate system to a map coordinate system based on the corresponding relation between the image coordinate system and the map coordinate system to obtain the coordinates of the groups of signal lamps in the map coordinate system;
and matching the coordinates of the groups of signal lamps in a map coordinate system with a pre-constructed high-precision map to obtain the lanes corresponding to the groups of signal lamps.
3. The method of claim 1, wherein the controlling the vehicle based on the lighting color in a set of signal lights corresponding to the target lane comprises:
if the lighting color is a color representing no passing, controlling the vehicle to stop;
if the lighting color is a color representing a warning, controlling the vehicle to stop or drive through the current intersection based on the position of the vehicle relative to the stop line of the intersection;
and if the lighting color is a color indicating that the vehicle is allowed to pass, controlling the vehicle to run through the current intersection.
4. A vehicle control apparatus characterized by comprising:
the image information acquisition module is used for acquiring currently collected road image information, wherein the road image information comprises image information of at least one group of signal lamps, the road image information comprises road image information collected by a binocular camera carried by the vehicle, and the road image information further comprises the distance between each group of signal lamps in the at least one group of signal lamps and the binocular camera;
the color lane determining module is used for determining the lighting colors of the signal lamps of each group and the corresponding lanes based on the road image information;
the target lane determining module is used for determining a target lane where the vehicle is located currently based on the current position of the vehicle;
the vehicle control module is used for controlling the vehicle based on the lighting colors of a group of signal lamps corresponding to the target lane;
the color lane determination module, comprising:
the color coordinate determination unit is used for processing the road image information based on a pre-constructed deep learning model to obtain the lighting colors of the signal lamps in each group and coordinates in an image coordinate system;
and the corresponding lane determining unit is used for determining the lane corresponding to each group of signal lamps based on the coordinates of each group of signal lamps in the image coordinate system.
5. The apparatus of claim 4, wherein the corresponding lane determination unit is further configured to:
converting the coordinates of the groups of signal lamps in the image coordinate system to a map coordinate system based on the corresponding relation between the image coordinate system and the map coordinate system to obtain the coordinates of the groups of signal lamps in the map coordinate system;
and matching the coordinates of the groups of signal lamps in a map coordinate system with a pre-constructed high-precision map to obtain the lanes corresponding to the groups of signal lamps.
6. The apparatus of claim 4, wherein the vehicle control module comprises:
a first control unit for controlling the vehicle to stop when the lighting color is a color indicating no-pass;
a second control unit for controlling the vehicle to stop or to travel through the current intersection, based on the position of the vehicle relative to the stop line of the intersection, when the lighting color is a color indicating a warning;
and the third control unit is used for controlling the vehicle to drive through the current intersection when the lighting color is a color representing permission of passing.
7. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory configured to store processor-executable instructions;
wherein the processor is configured to:
acquiring currently acquired road image information, wherein the road image information comprises image information of at least one group of signal lamps, the road image information comprises road image information acquired by a binocular camera carried by a vehicle, and the road image information also comprises distances between each group of signal lamps in the at least one group of signal lamps and the binocular camera;
determining the lighting colors of the signal lamps in each group and the corresponding lanes based on the road image information;
determining a target lane in which a vehicle is currently located based on a current position of the vehicle;
controlling the vehicle based on the lighting color in a group of signal lamps corresponding to the target lane;
the determining the lighting colors and the corresponding lanes of the groups of signal lamps based on the road image information comprises:
processing the road image information based on a pre-constructed deep learning model to obtain the lighting colors of each group of signal lamps and the coordinates of each group of signal lamps in an image coordinate system;
and determining the corresponding lane of each group of signal lamps based on the coordinates of each group of signal lamps in the image coordinate system.
8. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements:
acquiring currently acquired road image information, wherein the road image information comprises image information of at least one group of signal lamps, the road image information comprises road image information acquired by a binocular camera carried by a vehicle, and the road image information also comprises distances between each group of signal lamps in the at least one group of signal lamps and the binocular camera;
determining the lighting colors of the signal lamps in each group and the corresponding lanes based on the road image information;
determining a target lane in which a vehicle is currently located based on a current position of the vehicle;
controlling the vehicle based on the lighting color in a group of signal lamps corresponding to the target lane;
the determining the lighting colors and the corresponding lanes of the groups of signal lamps based on the road image information comprises:
processing the road image information based on a pre-constructed deep learning model to obtain the lighting colors of each group of signal lamps and the coordinates of each group of signal lamps in an image coordinate system;
and determining the corresponding lane of each group of signal lamps based on the coordinates of each group of signal lamps in the image coordinate system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010382009.9A CN111661054B (en) | 2020-05-08 | 2020-05-08 | Vehicle control method, device, electronic device and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010382009.9A CN111661054B (en) | 2020-05-08 | 2020-05-08 | Vehicle control method, device, electronic device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111661054A CN111661054A (en) | 2020-09-15 |
CN111661054B true CN111661054B (en) | 2022-03-04 |
Family
ID=72383084
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010382009.9A Active CN111661054B (en) | 2020-05-08 | 2020-05-08 | Vehicle control method, device, electronic device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111661054B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112699773B (en) * | 2020-12-28 | 2023-09-01 | 阿波罗智联(北京)科技有限公司 | Traffic light identification method and device and electronic equipment |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105608417A (en) * | 2015-12-15 | 2016-05-25 | 福州华鹰重工机械有限公司 | Traffic signal lamp detection method and device |
CN107992829A (en) * | 2017-12-05 | 2018-05-04 | 武汉中海庭数据技术有限公司 | A kind of traffic lights track level control planning extracting method and device |
CN108305475A (en) * | 2017-03-06 | 2018-07-20 | 腾讯科技(深圳)有限公司 | A kind of traffic lights recognition methods and device |
CN108335510A (en) * | 2018-03-21 | 2018-07-27 | 北京百度网讯科技有限公司 | Traffic lights recognition methods, device and equipment |
DE102018004667A1 (en) * | 2018-06-12 | 2018-12-20 | Daimler Ag | Method for determining a direction of a traffic light system |
CN109849922A (en) * | 2018-12-25 | 2019-06-07 | 青岛中汽特种汽车有限公司 | A method of the view-based access control model information for intelligent vehicle is merged with GIS information |
CN111079680A (en) * | 2019-12-23 | 2020-04-28 | 北京三快在线科技有限公司 | Temporary traffic signal lamp detection method and device and automatic driving equipment |
Also Published As
Publication number | Publication date |
---|---|
CN111661054A (en) | 2020-09-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110146097B (en) | Method and system for generating automatic driving navigation map, vehicle-mounted terminal and server | |
CN110796007B (en) | Scene recognition method and computing device | |
CN107031656B (en) | Virtual sensor data generation for wheel immobilizer detection | |
US11248925B2 (en) | Augmented road line detection and display system | |
CN107430815B (en) | Method and system for displaying parking area | |
CN113724520B (en) | Vehicle-road cooperative information processing method and device, electronic equipment and storage medium | |
CN111079680B (en) | Temporary traffic signal lamp detection method and device and automatic driving equipment | |
CN111046709A (en) | Vehicle lane level positioning method and system, vehicle and storage medium | |
CN111959499A (en) | Vehicle control method and device | |
US11963066B2 (en) | Method for indicating parking position and vehicle-mounted device | |
CN109387208B (en) | Map data processing method, device, equipment and medium | |
CN112349101B (en) | High-precision map generation method, and method and system for identifying traffic lights | |
CN110335484B (en) | Method and device for controlling vehicle to run | |
CN114945802A (en) | System, apparatus and method for identifying and updating design applicability of autonomous vehicles | |
CN109050530A (en) | A kind of cruise acceleration-controlled system and method | |
CN110599853A (en) | Intelligent teaching system and method for driving school | |
CN112198877B (en) | Control method and system of unmanned vehicle based on 5G network | |
CN111240224A (en) | Multifunctional simulation system for vehicle automatic driving technology | |
CN111661054B (en) | Vehicle control method, device, electronic device and storage medium | |
CN108242163B (en) | Driver assistance system, motor vehicle, method and medium for outputting traffic information | |
CN108346294B (en) | Vehicle identification system, method and device | |
CN112629547A (en) | Method and apparatus for creating positioning map | |
CN111433779A (en) | System and method for identifying road characteristics | |
CN113428081A (en) | Traffic safety control method, vehicle-mounted device and readable storage medium | |
CN112113593A (en) | Method and system for testing sensor configuration of vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||