CN116853282A - Vehicle control method, device, computer equipment and storage medium - Google Patents

Vehicle control method, device, computer equipment and storage medium

Info

Publication number
CN116853282A
CN116853282A (application CN202311022422.4A)
Authority
CN
China
Prior art keywords
vehicle
dimensional scene
scene map
target
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311022422.4A
Other languages
Chinese (zh)
Inventor
刘飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jidu Technology Co Ltd
Original Assignee
Beijing Jidu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jidu Technology Co Ltd filed Critical Beijing Jidu Technology Co Ltd
Priority to CN202311022422.4A priority Critical patent/CN116853282A/en
Publication of CN116853282A publication Critical patent/CN116853282A/en
Pending legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/10Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to vehicle motion
    • B60W40/105Speed
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146Display means

Abstract

The method begins acquiring first environmental information outside the vehicle when the vehicle enters a parking preparation state (i.e., before parking). After the vehicle enters a parked state from the parking preparation state, it builds a three-dimensional scene map of the target parking area where the vehicle is located from the first environmental information and displays the map, thereby providing the user with navigation information for the target parking area and helping the user leave it.

Description

Vehicle control method, device, computer equipment and storage medium
Technical Field
The disclosure relates to the technical field of vehicles, and in particular relates to a vehicle control method, a vehicle control device, computer equipment and a storage medium.
Background
As vehicle ownership increases, parking lot structures become increasingly complex and finding a parking space becomes gradually harder, so users often need to park their vehicles in unfamiliar places (such as underground parking lots and open-air parking lots).
Because of the complex structure of a parking lot, and because the user may be unfamiliar with it, it is often difficult for the user to drive out of the parking lot.
Disclosure of Invention
The embodiment of the disclosure at least provides a vehicle control method, a vehicle control device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a vehicle control method, including:
acquiring first environmental information outside a vehicle by using a sensor deployed on the vehicle in response to the vehicle entering a parking ready state;
responding to the vehicle entering a parked state from the parking preparation state, and establishing a three-dimensional scene map corresponding to a target parking area where the vehicle is currently located based on the first environmental information; the three-dimensional scene map is used for providing navigation information for driving out of the target parking area;
and displaying the three-dimensional scene map.
In the above aspect, the first environmental information outside the vehicle is acquired when the vehicle enters the parking preparation state (i.e., before parking). After the vehicle enters the parked state from the parking preparation state, the first environmental information is used to build a three-dimensional scene map of the target parking area where the vehicle is located, and the map is displayed, providing the user with navigation information for the target parking area and helping the user leave it.
In an alternative embodiment, the presenting the three-dimensional scene map includes:
and in response to the vehicle entering a start state from the parked state, displaying the three-dimensional scene map.
According to this embodiment, whether the user intends to leave the target parking area can be inferred from whether the vehicle enters the started state from the parked state; the three-dimensional scene map is then displayed to provide navigation information precisely when the user needs to leave the target parking area.
In an alternative embodiment, the first environmental information includes multiple frames of collected data of at least one sensor;
based on the first environmental information, the method for establishing the three-dimensional scene map corresponding to the target parking area where the vehicle is currently located comprises the following steps:
identifying a plurality of first objects in the target parking area from the multi-frame acquired data;
screening any first object based on the target times of identifying the first object from the multi-frame acquired data and the total frame number of the acquired data to obtain a second object;
and based on the pose information of the second object, establishing a three-dimensional scene map corresponding to the target parking area.
According to the embodiment, the object in the target parking area can be identified from the acquired data of the vehicle sensor, so that a three-dimensional scene map containing more details is established, and more information is provided for a user.
In an alternative embodiment, the screening the first object to obtain the second object based on the target number of times the first object is identified from the multi-frame acquired data and the total frame number of the acquired data includes:
determining a ratio between the target number of times and the total frame number when the total frame number of the acquired data is greater than or equal to a first target number;
taking the first object as the second object when the ratio is greater than or equal to a target ratio;
and under the condition that the total frame number of the acquired data is smaller than a second target number, if the target number is larger than or equal to a reference number, taking the first object as the second object.
According to the embodiment, the first object can be filtered through the total frame number of the acquired data and the target times of identifying the first object, so that irrelevant objects identified by the vehicle are removed, and the accuracy of the three-dimensional scene map is improved.
In an alternative embodiment, the identifying a plurality of first objects in the target parking area from the multi-frame collected data includes:
for any frame of acquisition data, identifying at least one third object from the acquisition data and pose information of the third object;
determining a fourth object matched with any third object in other acquired data except the current acquired data based on pose information of the third object;
and taking the third object and the fourth object as the identification result of the first object in different acquired data.
In the above embodiment, detections that are actually the same object but were recognized as different objects can be merged, improving the accuracy of the three-dimensional scene map.
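One hedged way to sketch this cross-frame matching is to merge detections whose poses fall within a distance threshold of an existing track; the 2-D pose layout, the dictionary keys, and the one-metre threshold below are illustrative assumptions, not part of the disclosure.

```python
import math

def merge_detections(frames, distance_threshold=1.0):
    """Group per-frame detections into object tracks by pose proximity,
    approximating the third-object/fourth-object matching described above.
    `frames` is a list of per-frame detection lists; each detection is a
    dict with a 'pose' (x, y) entry. Threshold and layout are assumptions."""
    tracks = []  # each track: {'pose': (x, y), 'hits': int}
    for detections in frames:
        for det in detections:
            x, y = det['pose']
            for track in tracks:
                tx, ty = track['pose']
                # Same physical object if the new detection lies close
                # enough to an existing track's pose.
                if math.hypot(x - tx, y - ty) <= distance_threshold:
                    track['hits'] += 1
                    break
            else:
                tracks.append({'pose': (x, y), 'hits': 1})
    return tracks
```

The per-track `hits` count then doubles as the "target number of times" used later when screening first objects.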
In an alternative embodiment, after the three-dimensional scene map is displayed, the method further comprises:
acquiring second environmental information outside the vehicle by using a sensor deployed on the vehicle;
and updating the displayed three-dimensional scene map based on the second environment information.
According to the embodiment, after the vehicle is restarted, the three-dimensional scene map is updated by using the second environmental information acquired after the vehicle is restarted, so that the three-dimensional scene map is corrected and expanded.
In an alternative embodiment, the vehicle is determined to enter a park ready state by:
determining that the vehicle enters a parking preparation state when the current speed of the vehicle is lower than a target speed and/or the vehicle enters a preset parking place; the parking place includes the target parking area.
According to this embodiment, whether the vehicle enters the parking preparation state can be judged from the vehicle's speed and/or position, so that the acquisition of environmental information and the building of the three-dimensional scene map are triggered accurately.
In an optional implementation manner, the building a three-dimensional scene map corresponding to the target parking area based on the pose information of the second object includes:
based on the acquired data corresponding to the second object, carrying out semantic recognition on the second object, and determining type information of the second object;
based on the type information of the second object and the pose information of the second object, establishing a three-dimensional scene map corresponding to the target parking area; navigation information based on type information and pose information of at least one second object is indicated in the three-dimensional scene map.
According to this embodiment, semantic analysis can be performed on the identified second objects to obtain their type information, and that type information is used to generate a three-dimensional scene map that indicates the type of each second object, increasing the information content of the map and improving the navigation effect.
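A minimal sketch of how type information and pose information might be combined into map entries follows; the field names, the navigation-relevant type set, and the data layout are all assumptions, since the patent does not prescribe a format.

```python
from dataclasses import dataclass

@dataclass
class MapObject:
    """One entry of the three-dimensional scene map: a screened (second)
    object annotated with its semantic type and pose. Field names are
    illustrative."""
    type_info: str  # e.g. "exit_sign", "pillar", "ramp" (assumed labels)
    pose: tuple     # (x, y, z, yaw) in the map frame (assumed layout)

def build_scene_map(second_objects):
    """Assemble map entries from (type_info, pose) pairs produced by
    semantic recognition of the second objects."""
    return [MapObject(type_info=t, pose=p) for t, p in second_objects]

# Assumed set of types that carry navigation information for driving out.
NAVIGATION_TYPES = {"exit_sign", "ramp", "lane_arrow"}

def navigation_objects(scene_map):
    """Filter the map entries whose type is relevant to navigation."""
    return [o for o in scene_map if o.type_info in NAVIGATION_TYPES]
```

Navigation indications in the displayed map would then be derived from the filtered entries' poses.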
In an optional implementation manner, the building a three-dimensional scene map corresponding to the target parking area where the vehicle is currently located based on the first environmental information includes:
the first environment information is sent to a cloud server;
receiving a three-dimensional scene map of a parking place returned by the cloud server based on the first environmental information, wherein the parking place comprises the target parking area; the three-dimensional scene map is generated based on environmental information collected by a plurality of vehicles.
According to this embodiment, the cloud server can be used to generate the three-dimensional scene map. Because the cloud server builds the map from environmental information collected by multiple vehicles, it can supply environmental information that the current vehicle has not itself collected, so the generated three-dimensional scene map carries richer information.
In a second aspect, an embodiment of the present disclosure further provides a vehicle control apparatus, including:
an acquisition module, configured to acquire first environmental information outside a vehicle by using a sensor deployed on the vehicle in response to the vehicle entering a parking preparation state;
the map building module is used for responding to the situation that the vehicle enters a parked state from the parking preparation state, and building a three-dimensional scene map corresponding to a target parking area where the vehicle is currently located based on the first environmental information; the three-dimensional scene map is used for providing navigation information for driving out of the target parking area;
and the display module is used for displaying the three-dimensional scene map.
In an alternative embodiment, the display module is specifically configured to:
and in response to the vehicle entering a start state from the parked state, displaying the three-dimensional scene map.
In an alternative embodiment, the first environmental information includes multiple frames of collected data of at least one sensor;
the map building module is specifically used for:
identifying a plurality of first objects in the target parking area from the multi-frame acquired data;
screening any first object based on the target times of identifying the first object from the multi-frame acquired data and the total frame number of the acquired data to obtain a second object;
and based on the pose information of the second object, establishing a three-dimensional scene map corresponding to the target parking area.
In an optional implementation manner, the mapping module is configured to, when screening the first object based on the target number of times the first object is identified from the multi-frame acquired data and the total frame number of the acquired data, obtain a second object:
determining a ratio between the target number of times and the total frame number when the total frame number of the acquired data is greater than or equal to a first target number;
taking the first object as the second object when the ratio is greater than or equal to a target ratio;
and under the condition that the total frame number of the acquired data is smaller than a second target number, if the target number is larger than or equal to a reference number, taking the first object as the second object.
In an alternative embodiment, the mapping module, when identifying a plurality of first objects in the target parking area from the multi-frame collected data, is configured to:
for any frame of acquisition data, identifying at least one third object from the acquisition data and pose information of the third object;
determining a fourth object matched with any third object in other acquired data except the current acquired data based on pose information of the third object;
and taking the third object and the fourth object as the identification result of the first object in different acquired data.
In an alternative embodiment, after the three-dimensional scene map is displayed, the mapping module is further configured to:
acquiring second environmental information outside the vehicle by using a sensor deployed on the vehicle;
and updating the displayed three-dimensional scene map based on the second environment information.
In an alternative embodiment, the apparatus further comprises a determining module configured to:
determining that the vehicle enters a parking preparation state when the current speed of the vehicle is lower than a target speed and/or the vehicle enters a preset parking place; the parking place includes the target parking area.
In an optional implementation manner, the mapping module is configured to, when establishing a three-dimensional scene map corresponding to the target parking area based on pose information of the second object:
based on the acquired data corresponding to the second object, carrying out semantic recognition on the second object, and determining type information of the second object;
based on the type information of the second object and the pose information of the second object, establishing a three-dimensional scene map corresponding to the target parking area; navigation information based on type information and pose information of at least one second object is indicated in the three-dimensional scene map.
In an optional implementation manner, the mapping module is configured to, when establishing a three-dimensional scene map corresponding to the target parking area where the vehicle is currently located based on the first environmental information:
the first environment information is sent to a cloud server;
receiving a three-dimensional scene map of a parking place returned by the cloud server based on the first environmental information, wherein the parking place comprises the target parking area; the three-dimensional scene map is generated based on environmental information collected by a plurality of vehicles.
In a third aspect, an optional implementation of the present disclosure further provides a computer device including a processor and a memory, where the memory stores machine-readable instructions executable by the processor; when the machine-readable instructions are executed by the processor, the steps of the first aspect, or of any possible implementation of the first aspect, are performed.
In a fourth aspect, an alternative implementation of the present disclosure further provides a computer readable storage medium having stored thereon a computer program which when executed performs the steps of the first aspect, or any of the possible implementation manners of the first aspect.
For a description of the effects of the vehicle control apparatus, the computer device, and the computer-readable storage medium, reference is made to the description of the vehicle control method above; it is not repeated here.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the aspects of the disclosure.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments are briefly described below. The drawings, which are incorporated in and constitute a part of the specification, show embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It is to be understood that the following drawings illustrate only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope; a person of ordinary skill in the art may obtain other related drawings from these drawings without inventive effort.
FIG. 1 illustrates a flow chart of a vehicle control method provided by some embodiments of the present disclosure;
FIG. 2 illustrates a schematic diagram of a vehicle control apparatus provided by some embodiments of the present disclosure;
fig. 3 illustrates a schematic diagram of a computer device provided by some embodiments of the present disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. The components of the disclosed embodiments generally described and illustrated herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
It has been found that users often need to park a vehicle in an unfamiliar location; some locations have complex structures, and it may be difficult for the user to return to the vehicle and drive out of the parking location.
Based on the above research, the present disclosure provides a vehicle control method: acquisition of first environmental information outside the vehicle begins when the vehicle enters a parking preparation state (i.e., before parking); after the vehicle enters a parked state from the parking preparation state, a three-dimensional scene map of the target parking area where the vehicle is located is built from the first environmental information; and when the vehicle enters a started state from the parked state, the three-dimensional scene map is displayed, providing the user with navigation information for driving out of the target parking area and helping the user leave it.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
To aid understanding of the present embodiment, a vehicle control method disclosed in an embodiment of the present disclosure is first described in detail. The execution subject of the vehicle control method provided in the embodiments of the present disclosure is generally a computer device with a certain computing capability, for example a terminal device or other processing device; the terminal device may be a vehicle-mounted terminal or the like. In some possible implementations, the vehicle control method may be implemented by a processor invoking computer-readable instructions stored in a memory.
The vehicle control method provided in the embodiment of the present disclosure will be described below by taking an execution subject as an in-vehicle terminal as an example.
Referring to fig. 1, a flowchart of a vehicle control method according to an embodiment of the disclosure is shown, where the method includes steps S101 to S103, where:
s101, responding to a vehicle entering a parking preparation state, and acquiring first environment information outside the vehicle by using a sensor deployed on the vehicle.
In this step, a user usually performs some preparatory actions before parking, such as decelerating or driving toward a parking area, so whether the vehicle has entered the parking preparation state can be determined from the vehicle's speed and/or position.
For example, it may be determined that the vehicle enters a parking-ready state upon determining that the current vehicle speed of the vehicle is lower than the target speed.
The target speed may be determined from the current user's vehicle speed during a period before historical parking operations, so that a matched target speed is obtained for users with different driving habits. Alternatively, it may be determined based on the vehicle speeds of multiple test users before their parking operations.
By way of example, the target speed may be set to 30 km/h.
Since there are many situations in which a user decelerates without intending to park, the vehicle's position information can also be used to determine whether the vehicle is in the parking preparation state.
For example, whether the vehicle enters a preset parking place may be determined according to the position information of the vehicle, and if the vehicle enters the preset parking place, it may be determined that the vehicle enters a parking preparation state.
In one possible embodiment, the road or the area may be classified in advance, the parking area where the three-dimensional scene map needs to be generated is set as a target class, and then it is determined whether the vehicle enters the parking place according to the road class of the road on which the vehicle is currently traveling.
In some situations the user may simply drive through a preset parking place, so relying on position information alone to decide whether the vehicle has entered a preset parking place may not be accurate enough; the determination can therefore use the position information and the vehicle speed together.
For example, if both the position information and the vehicle speed are used, it may be determined that the vehicle enters the parking preparation state when the current vehicle speed is lower than the target speed and the vehicle has entered a preset parking place. This improves the accuracy of the determination while keeping the computation simple and fast, reduces misjudgments, and avoids carrying out the subsequent map-building steps unnecessarily.
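As a rough illustration of the combined speed-and-position check just described, the following Python sketch treats preset parking places as axis-aligned bounding boxes; the geofence representation and function names are assumptions, and only the 30 km/h figure is taken from the text's example.

```python
TARGET_SPEED_KMH = 30.0  # example threshold mentioned in the text

def in_parking_place(position, parking_places):
    """Return True if `position` (x, y) falls inside any preset parking
    place, modelled here as (x_min, y_min, x_max, y_max) boxes."""
    x, y = position
    return any(x_min <= x <= x_max and y_min <= y <= y_max
               for (x_min, y_min, x_max, y_max) in parking_places)

def is_parking_ready(speed_kmh, position, parking_places):
    """The stricter variant: the vehicle is parking-ready only when it is
    slow enough AND inside a preset parking place."""
    return speed_kmh < TARGET_SPEED_KMH and in_parking_place(position, parking_places)
```

A slow drive-by outside any parking place, or a fast pass through one, would both be rejected by this check.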
In order to further improve the accuracy of this determination, more vehicle state information can be used, such as the change in vehicle speed, the change in throttle, whether a sensor has captured a sign of a parking place, and the distance between the vehicle and the parking place.
For example, a trained machine learning model may be used to determine whether the vehicle is in a park ready state using the relevant information. The machine learning model can learn the association relation between various information and whether the vehicle enters a parking preparation state, thereby improving the accuracy of judging whether the vehicle enters the parking preparation state.
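The patent does not specify a model architecture; as one hypothetical illustration, a logistic score over the listed signals could look like the following, where every weight and the bias are invented placeholders that a real system would learn from data.

```python
import math

# Assumed feature weights for a toy logistic model over the signals the
# text lists; real values would come from training on driving data.
WEIGHTS = {
    "speed_drop_kmh": 0.08,   # recent deceleration
    "throttle_release": 1.2,  # 0..1, how far the throttle was released
    "sign_detected": 2.0,     # 1.0 if a parking-place sign was captured
    "proximity": 1.5,         # e.g. 1 / (1 + distance_to_parking_place_m)
}
BIAS = -3.0

def parking_ready_score(features):
    """Logistic score in (0, 1); a score above 0.5 is treated here as
    indicating the parking preparation state."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))
```

The dispatch of the score against a threshold can itself be guarded by the trigger conditions described next, so the model only runs in plausible scenes.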
In order to reduce the calculation amount, a trigger condition may be set for determining whether the vehicle enters a parking preparation state, and after the vehicle satisfies the trigger condition, a determination may be made as to whether the vehicle enters the parking preparation state.
By setting the triggering conditions, a large number of irrelevant scenes can be filtered, and the calculated amount of the vehicle is effectively reduced.
Illustratively, the triggering condition may be as follows: the current speed of the vehicle is lower than the preset speed, the vehicle is located near a parking place, etc.
The target parking area where the vehicle collects the environmental information may be at least a part of the preset parking place, or may be the entire area of the preset parking place.
After determining that the vehicle has entered the parking preparation state, first environmental information outside the vehicle can be acquired using sensors deployed on the vehicle; for example, the vehicle may begin acquisition after entering the parking preparation state and buffer the acquired environmental information until the vehicle enters the parked state.
The first environmental information may include multiple frames of collected data of at least one sensor.
When the first environmental information is acquired and cached, an expiry duration (e.g., 20 s) can be set for the cached environmental information. When a piece of environmental information has been stored longer than the expiry duration, it can be deleted from the cache; when the vehicle enters the parked state, the currently cached environmental information is taken as the first environmental information. In this way, the time difference between the first frame of the first environmental information and the moment the vehicle enters the parked state does not exceed the expiry duration, which improves the relevance of the first environmental information to the parking place.
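The caching behaviour described above can be sketched as a small time-to-live buffer; the class layout and the injectable clock are illustrative choices, not part of the disclosure.

```python
import time
from collections import deque

EXPIRY_SECONDS = 20.0  # the example expiry duration from the text

class FrameBuffer:
    """Caches sensor frames while the vehicle is in the parking preparation
    state; frames older than the expiry duration are dropped, so the
    snapshot taken on entering the parked state holds only recent data."""

    def __init__(self, expiry=EXPIRY_SECONDS, clock=time.monotonic):
        self._expiry = expiry
        self._clock = clock          # injectable for testing
        self._frames = deque()       # (timestamp, frame) pairs, oldest first

    def push(self, frame):
        self._frames.append((self._clock(), frame))
        self._evict()

    def _evict(self):
        now = self._clock()
        while self._frames and now - self._frames[0][0] > self._expiry:
            self._frames.popleft()

    def snapshot(self):
        """Called when the vehicle enters the parked state: returns the
        cached frames as the first environmental information."""
        self._evict()
        return [frame for _, frame in self._frames]
```

Because eviction happens on both push and snapshot, the oldest retained frame is never more than the expiry duration behind the parked-state event.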
The sensors deployed on the vehicle may include camera devices, radar devices, positioning devices, and the like.
S102, responding to the vehicle entering a parked state from the parking preparation state, and establishing a three-dimensional scene map corresponding to a target parking area where the vehicle is currently located based on the first environmental information; the three-dimensional scene map is used for providing navigation information for driving out of the target parking area.
In the step, after the vehicle enters a parked state from a parking preparation state, the scene of the target parking area is rebuilt based on the acquired first environmental information to obtain a three-dimensional scene map corresponding to the target parking area, and the three-dimensional scene map is displayed when needed to guide a user to drive away from the target parking area.
In one possible embodiment, the vehicle may be determined to have entered the parked state from the parking preparation state when the vehicle's gear selector enters park (P gear). At this point the vehicle is stationary and the user can leave it. Since the user will drive away from the target parking area only after returning to the vehicle and starting it, the three-dimensional scene map can be built after the vehicle enters P gear and before the user returns.
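The state transitions described so far (enter parking preparation → acquire, enter parked → build, enter started → display) can be summarized as a small dispatch table; the state names and action labels below are illustrative, not taken from the patent.

```python
from enum import Enum, auto

class VehicleState(Enum):
    DRIVING = auto()
    PARKING_READY = auto()
    PARKED = auto()       # e.g. gear selector in P
    STARTED = auto()

# Assumed mapping from state transitions to the actions described in the
# text; the patent describes the actions, not this exact dispatch table.
ACTIONS = {
    (VehicleState.DRIVING, VehicleState.PARKING_READY): "start_acquiring",
    (VehicleState.PARKING_READY, VehicleState.PARKED): "build_scene_map",
    (VehicleState.PARKED, VehicleState.STARTED): "display_scene_map",
}

def on_transition(old, new):
    """Return the action label to trigger for a state transition, or None
    if the transition carries no action."""
    return ACTIONS.get((old, new))
```

Map building is thus anchored between the parked and started transitions, matching the window in which the user is away from the vehicle.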
When building the three-dimensional scene map, the vehicle-mounted terminal can identify each first object in the target parking area from the multiple frames of acquired data. Because the first environmental information is acquired in real time during the parking process, some identified objects may be irrelevant to the navigation function, and some may be identified in error (for example, an identified object that does not actually exist). Therefore, for each first object, the target number of times the first object is identified in the multi-frame acquired data and the total frame number of the acquired data can be determined. Based on the target number and the total frame number, it can then be judged whether the first object persists nearby and affects navigation, so that the first objects are screened to obtain the second objects.
In this way, recognition objects that are not related to navigation can be filtered such that only the second object related to navigation is contained in the three-dimensional scene map.
For example, in the case where the total frame number of the acquired data is greater than or equal to a first target number, the ratio between the target number of times and the total frame number may be determined; in the case where the ratio is greater than or equal to a target ratio, the first object is taken as a second object. In this way, when the parking process is long, whether an object needs to be displayed can be judged from the proportion of frames in which the object is detected.
In the case where the total frame number of the acquired data is smaller than a second target number, the first object may be taken as a second object if the target number of times is greater than or equal to a reference number. Thus, when the parking process is short, whether an object needs to be displayed in the three-dimensional scene map can still be judged accurately.
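The two-branch screening rule above can be sketched as follows. All threshold names and default values here are illustrative assumptions; the disclosure only names the first target number, second target number, target ratio, and reference number without fixing their values.

```python
# Illustrative sketch of the two-branch object-screening rule.
# All default threshold values are assumptions for the example.
def keep_object(target_count, total_frames,
                first_target_number=100, second_target_number=100,
                target_ratio=0.5, reference_number=10):
    """Decide whether a detected first object is kept as a second object.

    target_count: number of frames in which the object was identified.
    total_frames: total frame count of the acquired data.
    """
    if total_frames >= first_target_number:
        # Long parking process: require a minimum detection ratio.
        return target_count / total_frames >= target_ratio
    if total_frames < second_target_number:
        # Short parking process: require a minimum absolute count.
        return target_count >= reference_number
    return False
```

With the (assumed) default of equal first and second target numbers, every total frame count falls into exactly one branch, so each candidate object receives a definite keep/discard decision.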
When identifying the first objects from the multi-frame acquired data, the following steps may be performed:
for any frame of acquisition data, identifying at least one third object from the acquisition data and pose information of the third object; determining a fourth object matched with any third object in other acquired data except the current acquired data based on pose information of the third object; and taking the third object and the fourth object as the identification result of the first object in different acquired data.
During object recognition, recognition errors may occur in which what is actually the same object is recognized as different objects in different frames of acquired data. For example, a deceleration strip is recognized in the i-th frame and marked as deceleration strip a, and a "new" deceleration strip is recognized in the (i+1)-th frame and marked as deceleration strip b; in reality, deceleration strip a and deceleration strip b are the same deceleration strip, so they need to be merged.
To this end, for any frame of acquired data, any recognized third object can be selected from that frame, and whether a fourth object recognized in another frame of acquired data matches the third object is judged according to the pose information of the third object. If they match, the third object and the fourth object are the same object, and they can be taken as the recognition results of one first object in different frames of acquired data.
The matching of the third object and the fourth object may be determined from their pose information. For example, if the similarity of the pose information of the third object and the fourth object is higher than a preset similarity, it may be determined that the third object and the fourth object match and are the same object.
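A minimal sketch of this cross-frame merging follows. The disclosure only requires that pose similarity exceed a preset threshold; the distance-based similarity function, the greedy grouping strategy, and the 0.5 threshold are all assumptions for illustration.

```python
# Illustrative sketch: detections whose pose similarity exceeds a preset
# threshold are merged into one first object across frames. The similarity
# function and grouping strategy are assumptions; the patent only requires
# "similarity of pose information higher than a preset similarity".
import math


def pose_similarity(pose_a, pose_b):
    # Map the Euclidean distance between (x, y, z) positions into (0, 1];
    # identical poses give similarity 1.0.
    dist = math.dist(pose_a, pose_b)
    return 1.0 / (1.0 + dist)


def merge_detections(frames, preset_similarity=0.5):
    """frames: list of per-frame detection lists; each detection is an
    (x, y, z) pose. Returns groups of detections treated as one object."""
    groups = []  # each group collects poses believed to be the same object
    for detections in frames:
        for pose in detections:
            for group in groups:
                if pose_similarity(group[0], pose) >= preset_similarity:
                    group.append(pose)  # matches an existing object
                    break
            else:
                groups.append([pose])  # a new object
    return groups
```

In the deceleration-strip example above, a strip seen at nearly the same pose in frames i and i+1 would fall into a single group rather than producing two objects.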
Therefore, through this screening step, erroneously identified objects among the second objects can be significantly reduced, effectively improving the accuracy of the three-dimensional scene map.
After the second objects are obtained, the three-dimensional scene map can be built using their pose information. When building the map, a simultaneous localization and mapping (SLAM) technique can be adopted. The three-dimensional scene map may include a three-dimensional model of each detected second object, placed at the corresponding position in the map according to the pose information corresponding to that model.
Since the three-dimensional scene map is used for navigation, displaying only the three-dimensional models of the identified second objects may not be sufficient to meet the navigation requirement. Therefore, in some possible embodiments, semantic recognition may be performed on each second object based on its corresponding acquired data to determine the type information of the second object.
The type information of the second object may include at least one of a car stopper (stopper), a deceleration strip, other vehicles, an exit, a location mark, a road, an obstacle, and the like.
Thus, the three-dimensional scene map corresponding to the target parking area can be built based on the type information and pose information corresponding to the second objects. The built three-dimensional scene map can indicate the type information of each second object, and can also display navigation information based on the type information and pose information of at least one second object, so as to guide the user's driving route.
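One way this pairing of type and pose per second object might be represented is sketched below. The enumeration values mirror the type examples listed in the description; the entry layout and the pose tuple format are assumptions for illustration.

```python
# Illustrative sketch of map entries combining type and pose information.
# ObjectType values mirror the examples in the description; MapEntry and
# the (x, y, z, yaw) pose format are assumptions.
import enum
from dataclasses import dataclass


class ObjectType(enum.Enum):
    CAR_STOPPER = "car_stopper"
    DECELERATION_STRIP = "deceleration_strip"
    OTHER_VEHICLE = "other_vehicle"
    EXIT = "exit"
    LOCATION_MARK = "location_mark"
    ROAD = "road"
    OBSTACLE = "obstacle"


@dataclass
class MapEntry:
    object_type: ObjectType
    pose: tuple  # (x, y, z, yaw) in map coordinates


def build_scene_map(second_objects):
    """second_objects: iterable of (type_name, pose) pairs produced after
    semantic recognition. Returns map entries placed at their poses."""
    return [MapEntry(ObjectType(type_name), pose)
            for type_name, pose in second_objects]
```

A renderer could then draw each entry's three-dimensional model at its pose and, for entries such as EXIT or ROAD, derive the navigation route mentioned in the description.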
Because the three-dimensional scene map is built from environmental data acquired over a period of time before the vehicle parks, it may not completely cover the whole parking place. Therefore, the vehicle-mounted terminal may send the first environmental information to a cloud server, and the cloud server searches for and returns a three-dimensional scene map of the parking place.
The cloud server can acquire three-dimensional scene maps historically generated by multiple vehicles. If these are three-dimensional scene maps of the same parking place, they can be merged into a three-dimensional scene map with wider coverage and stored; this map is returned to a vehicle when the vehicle requests it from the cloud server.
The cloud server can also acquire environmental information historically collected by multiple vehicles and, based on it, generate a larger three-dimensional scene map with wider coverage for subsequent vehicle requests.
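The cloud-side merging behavior might be sketched as follows. The disclosure does not specify a merge algorithm, so the set-union merge keyed by a parking-place identifier is purely an assumption for illustration.

```python
# Illustrative sketch of cloud-side map merging: maps for the same parking
# place are combined into one wider-coverage map keyed by place id. The
# set-union merge is an assumption; the patent does not specify how maps
# are combined.
def merge_maps(stored, incoming, place_id):
    """stored: dict mapping place_id -> set of map entries.
    incoming: set of entries from one vehicle's uploaded map.
    Merges incoming entries into the stored map and returns the result."""
    merged = stored.get(place_id, set()) | incoming
    stored[place_id] = merged
    return merged
```

Each vehicle's contribution widens the stored map, which the server can then return to the next vehicle that requests the same parking place.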
S103, displaying the three-dimensional scene map.
In this step, the three-dimensional scene map may be displayed through the vehicle-mounted terminal to provide relevant information about the target parking area, which can be used to guide the user in leaving the parking place.
The three-dimensional scene map may indicate a route for leaving the parking place, together with prompts for objects along the route that may affect the vehicle's travel.
Because the three-dimensional scene map is generated from environmental information collected before the vehicle parks, the data in the map is limited. While the three-dimensional scene map is being displayed, a sensor deployed on the vehicle can therefore acquire second environmental information outside the vehicle, and this second environmental information is used to update the three-dimensional scene map so that it can display more content and provide a better navigation service.
In this step, the three-dimensional scene map may be displayed immediately after it is generated, or it may be displayed after the vehicle is detected to enter the start state from the parked state.
If the three-dimensional scene map is displayed immediately after it is generated, information about the target parking area can be shown to the user right away, so that the user gains some knowledge of the target parking area before leaving the vehicle, which is convenient for the user's activities after leaving it.
As for display when the vehicle enters the start state from the parked state: the vehicle first enters the parked state and is switched off, and the user may leave it. When the user needs the vehicle again, it re-enters the start state; at that moment, the user's intention is usually to drive out of the target parking area. Displaying the three-dimensional scene map with its navigation information at exactly the time the user intends to exit the target parking area thus meets the user's needs.
In one possible implementation, the two display modes above can be combined: the three-dimensional scene map is displayed immediately after it is generated, and displayed again when the vehicle enters the parked state and then re-enters the start state. This increases the coverage of the map display and makes it convenient for the user to obtain navigation information.
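The combined display policy described above can be sketched as a small state tracker. The state names and event log are assumptions for the example; the disclosure only specifies the two display triggers.

```python
# Illustrative sketch of the combined display policy: show the map once
# immediately after generation, and again on the parked -> started
# transition. State names and the event log are assumptions.
class DisplayPolicy:
    def __init__(self):
        self.state = "parked"
        self.shown_events = []  # records each time the map is displayed

    def on_map_generated(self):
        # First trigger: display immediately after the map is built.
        self.shown_events.append("after_generation")

    def on_state_change(self, new_state):
        # Second trigger: display again when the vehicle re-enters the
        # start state from the parked state.
        if self.state == "parked" and new_state == "started":
            self.shown_events.append("on_restart")
        self.state = new_state
```

Tracking both triggers rather than either one alone matches the description's goal of improving display coverage for the user.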
According to the vehicle control method provided by the embodiments of the present disclosure, when the vehicle enters the parking preparation state (i.e., before parking), first environmental information outside the vehicle is acquired. After the vehicle enters the parked state from the parking preparation state, a three-dimensional scene map of the target parking area where the vehicle is located is built using the first environmental information and displayed, providing the user with navigation information for the target parking area and helping the user leave it.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
Based on the same inventive concept, the embodiments of the present disclosure further provide a vehicle control device corresponding to the vehicle control method, and since the principle of solving the problem by the device in the embodiments of the present disclosure is similar to that of the vehicle control method in the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and the repetition is omitted.
Referring to fig. 2, a schematic diagram of a vehicle control apparatus according to an embodiment of the disclosure is shown, where the apparatus includes:
an acquisition module 210 for acquiring first environmental information outside a vehicle with a sensor disposed on the vehicle in response to the vehicle entering a parking ready state;
a mapping module 220, configured to, in response to the vehicle entering a parked state from the parking ready state, establish a three-dimensional scene map corresponding to a target parking area where the vehicle is currently located based on the first environmental information; the three-dimensional scene map is used for providing navigation information for driving out of the target parking area;
and the display module 230 is configured to display the three-dimensional scene map.
In an alternative embodiment, the display module 230 is specifically configured to:
and in response to the vehicle entering a start state from the parked state, displaying the three-dimensional scene map.
In an alternative embodiment, the first environmental information includes multiple frames of collected data of at least one sensor;
the mapping module 220 is specifically configured to:
identifying a plurality of first objects in the target parking area from the multi-frame acquired data;
screening any first object based on the target times of identifying the first object from the multi-frame acquired data and the total frame number of the acquired data to obtain a second object;
and based on the pose information of the second object, establishing a three-dimensional scene map corresponding to the target parking area.
In an alternative embodiment, the mapping module 220 is configured to, when screening the first object based on the target number of times the first object is identified from the multiple frames of collected data and the total frame number of the collected data, obtain a second object:
determining a ratio between the target number of times and the total frame number when the total frame number of the acquired data is greater than or equal to a first target number;
taking the first object as the second object when the ratio is greater than or equal to a target ratio;
And under the condition that the total frame number of the acquired data is smaller than a second target number, if the target number is larger than or equal to a reference number, taking the first object as the second object.
In an alternative embodiment, the mapping module 220 is configured to, when identifying a plurality of first objects in the target parking area from the multi-frame collected data:
for any frame of acquisition data, identifying at least one third object from the acquisition data and pose information of the third object;
determining a fourth object matched with any third object in other acquired data except the current acquired data based on pose information of the third object;
and taking the third object and the fourth object as the identification result of the first object in different acquired data.
In an alternative embodiment, after the three-dimensional scene map is displayed, the mapping module 220 is further configured to:
acquiring second environmental information outside the vehicle by using a sensor deployed on the vehicle;
and updating the displayed three-dimensional scene map based on the second environment information.
In an alternative embodiment, the apparatus further comprises a determining module configured to:
determining that the vehicle enters a parking preparation state when the current speed of the vehicle is lower than a target speed and/or the vehicle enters a preset parking place; the parking place includes the target parking area.
In an optional implementation manner, the mapping module 220 is configured to, when establishing the three-dimensional scene map corresponding to the target parking area based on the pose information of the second object:
based on the acquired data corresponding to the second object, carrying out semantic recognition on the second object, and determining type information of the second object;
based on the type information of the second object and the pose information of the second object, establishing a three-dimensional scene map corresponding to the target parking area; navigation information based on type information and pose information of at least one second object is indicated in the three-dimensional scene map.
In an optional implementation manner, the mapping module 220 is configured to, when establishing, based on the first environmental information, a three-dimensional scene map corresponding to a target parking area where the vehicle is currently located:
The first environment information is sent to a cloud server;
receiving a three-dimensional scene map of a parking place returned by the cloud server based on the first environmental information, wherein the parking place comprises the target parking area; the three-dimensional scene map is generated based on environmental information collected by a plurality of vehicles.
The process flow of each module in the apparatus and the interaction flow between the modules may be described with reference to the related descriptions in the above method embodiments, which are not described in detail herein.
The embodiment of the disclosure further provides a computer device, as shown in fig. 3, which is a schematic structural diagram of the computer device provided by the embodiment of the disclosure, including:
a processor 31 and a memory 32; the memory 32 stores machine readable instructions executable by the processor 31, the processor 31 being configured to execute the machine readable instructions stored in the memory 32, the machine readable instructions when executed by the processor 31, the processor 31 performing the steps of:
acquiring first environmental information outside a vehicle by using a sensor deployed on the vehicle in response to the vehicle entering a parking ready state;
responding to the vehicle entering a parked state from the parking preparation state, and establishing a three-dimensional scene map corresponding to a target parking area where the vehicle is currently located based on the first environmental information; the three-dimensional scene map is used for providing navigation information for driving out of the target parking area;
And displaying the three-dimensional scene map.
In an alternative embodiment, the presenting the three-dimensional scene map in the instructions executed by the processor 31 includes:
and in response to the vehicle entering a start state from the parked state, displaying the three-dimensional scene map.
In an alternative embodiment, the first environmental information includes multiple frames of collected data of at least one sensor in the instructions executed by the processor 31;
based on the first environmental information, the method for establishing the three-dimensional scene map corresponding to the target parking area where the vehicle is currently located comprises the following steps:
identifying a plurality of first objects in the target parking area from the multi-frame acquired data;
screening any first object based on the target times of identifying the first object from the multi-frame acquired data and the total frame number of the acquired data to obtain a second object;
and based on the pose information of the second object, establishing a three-dimensional scene map corresponding to the target parking area.
In an alternative embodiment, in the instructions executed by the processor 31, the screening the first object to obtain the second object based on the target number of times the first object is identified from the multiple frames of collected data and the total frame number of the collected data includes:
Determining a ratio between the target number of times and the total frame number when the total frame number of the acquired data is greater than or equal to a first target number;
taking the first object as the second object when the ratio is greater than or equal to a target ratio;
and under the condition that the total frame number of the acquired data is smaller than a second target number, if the target number is larger than or equal to a reference number, taking the first object as the second object.
In an alternative embodiment, in the instructions executed by the processor 31, the identifying a plurality of first objects in the target parking area from the multi-frame collected data includes:
for any frame of acquisition data, identifying at least one third object from the acquisition data and pose information of the third object;
determining a fourth object matched with any third object in other acquired data except the current acquired data based on pose information of the third object;
and taking the third object and the fourth object as the identification result of the first object in different acquired data.
In an alternative embodiment, the instructions executed by the processor 31 further include, after the presenting the three-dimensional scene map:
Acquiring second environmental information outside the vehicle by using a sensor deployed on the vehicle;
and updating the displayed three-dimensional scene map based on the second environment information.
In an alternative embodiment, the vehicle is determined to enter a park ready state by:
determining that the vehicle enters a parking preparation state when the current speed of the vehicle is lower than a target speed and/or the vehicle enters a preset parking place; the parking place includes the target parking area.
In an optional implementation manner, in the instructions executed by the processor 31, the creating a three-dimensional scene map corresponding to the target parking area based on pose information of the second object includes:
based on the acquired data corresponding to the second object, carrying out semantic recognition on the second object, and determining type information of the second object;
based on the type information of the second object and the pose information of the second object, establishing a three-dimensional scene map corresponding to the target parking area; navigation information based on type information and pose information of at least one second object is indicated in the three-dimensional scene map.
In an optional implementation manner, in the instructions executed by the processor 31, the creating a three-dimensional scene map corresponding to the target parking area where the vehicle is currently located based on the first environmental information includes:
the first environment information is sent to a cloud server;
receiving a three-dimensional scene map of a parking place returned by the cloud server based on the first environmental information, wherein the parking place comprises the target parking area; the three-dimensional scene map is generated based on environmental information collected by a plurality of vehicles.
The memory 32 includes a memory 321 and an external memory 322; the memory 321 is also referred to as an internal memory, and is used for temporarily storing operation data in the processor 31 and data exchanged with an external memory 322 such as a hard disk, and the processor 31 exchanges data with the external memory 322 via the memory 321.
The specific execution process of the above instruction may refer to the steps of the vehicle control method described in the embodiments of the present disclosure, and will not be described herein.
The disclosed embodiments also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the vehicle control method described in the above method embodiments. Wherein the storage medium may be a volatile or nonvolatile computer readable storage medium.
The embodiments of the present disclosure further provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to perform the steps of the vehicle control method described in the foregoing method embodiments; for details, reference may be made to the foregoing method embodiments, which will not be described herein again.
Wherein the above-mentioned computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in essence or a part contributing to the prior art or a part of the technical solution, or in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present disclosure. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Finally, it should be noted that the foregoing examples are merely specific embodiments of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that, within the technical scope disclosed herein, anyone familiar with the art may still modify the technical solutions described in the foregoing embodiments, easily conceive of changes, or make equivalent substitutions for some of their technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure and shall be included within its protection scope. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. A vehicle control method characterized by comprising:
acquiring first environmental information outside a vehicle by using a sensor deployed on the vehicle in response to the vehicle entering a parking ready state;
responding to the vehicle entering a parked state from the parking preparation state, and establishing a three-dimensional scene map corresponding to a target parking area where the vehicle is currently located based on the first environmental information; the three-dimensional scene map is used for providing navigation information for driving out of the target parking area;
And displaying the three-dimensional scene map.
2. The method of claim 1, wherein the presenting the three-dimensional scene map comprises:
and in response to the vehicle entering a start state from the parked state, displaying the three-dimensional scene map.
3. The method of claim 1, wherein the first environmental information comprises multi-frame acquisition data of at least one sensor;
based on the first environmental information, the method for establishing the three-dimensional scene map corresponding to the target parking area where the vehicle is currently located comprises the following steps:
identifying a plurality of first objects in the target parking area from the multi-frame acquired data;
screening any first object based on the target times of identifying the first object from the multi-frame acquired data and the total frame number of the acquired data to obtain a second object;
and based on the pose information of the second object, establishing a three-dimensional scene map corresponding to the target parking area.
4. The method of claim 3, wherein the screening the first object based on the target number of times the first object is identified from the multi-frame acquisition data and the total number of frames of the acquisition data to obtain a second object comprises:
Determining a ratio between the target number of times and the total frame number when the total frame number of the acquired data is greater than or equal to a first target number;
taking the first object as the second object when the ratio is greater than or equal to a target ratio;
and under the condition that the total frame number of the acquired data is smaller than a second target number, if the target number is larger than or equal to a reference number, taking the first object as the second object.
5. A method according to claim 3, wherein said identifying a plurality of first objects within said target parking area from said plurality of frames of collected data comprises:
for any frame of acquisition data, identifying at least one third object from the acquisition data and pose information of the third object;
determining a fourth object matched with any third object in other acquired data except the current acquired data based on pose information of the third object;
and taking the third object and the fourth object as the identification result of the first object in different acquired data.
6. The method of claim 1, wherein after displaying the three-dimensional scene map, the method further comprises:
Acquiring second environmental information outside the vehicle by using a sensor deployed on the vehicle;
and updating the displayed three-dimensional scene map based on the second environment information.
7. The method of claim 1, wherein the vehicle is determined to enter a park ready state by:
determining that the vehicle enters a parking preparation state when the current speed of the vehicle is lower than a target speed and/or the vehicle enters a preset parking place; the parking place includes the target parking area.
8. The method according to claim 3, wherein the creating a three-dimensional scene map corresponding to the target parking area based on pose information of the second object includes:
based on the acquired data corresponding to the second object, carrying out semantic recognition on the second object, and determining type information of the second object;
based on the type information of the second object and the pose information of the second object, establishing a three-dimensional scene map corresponding to the target parking area; navigation information based on type information and pose information of at least one second object is indicated in the three-dimensional scene map.
9. The method of claim 1, wherein the creating a three-dimensional scene map corresponding to the target parking area in which the vehicle is currently located based on the first environmental information comprises:
the first environment information is sent to a cloud server;
receiving a three-dimensional scene map of a parking place returned by the cloud server based on the first environmental information, wherein the parking place comprises the target parking area; the three-dimensional scene map is generated based on environmental information collected by a plurality of vehicles.
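The client-side flow of claim 9 is a round trip: upload the first environmental information, receive a crowd-sourced map of the parking place. The stub below is a hedged sketch; `FakeCloudServer` stands in for the real cloud service, and every field name is an illustrative assumption.

```python
class FakeCloudServer:
    """Stand-in for the cloud server; a real one would fuse environmental
    information collected by a plurality of vehicles."""
    def request_map(self, env_info):
        return {"place": "parking_place_1",
                "contributing_vehicles": 12,
                "anchored_at": env_info["position"]}

def fetch_scene_map(env_info, server):
    # Send the first environmental information; receive the 3-D scene map.
    return server.request_map(env_info)

result = fetch_scene_map({"position": (10.0, 4.0)}, FakeCloudServer())
print(result["contributing_vehicles"])  # 12
```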
10. A vehicle control apparatus characterized by comprising:
an acquisition module, configured to acquire first environmental information outside a vehicle by using a sensor deployed on the vehicle, in response to the vehicle entering a parking preparation state;
the map building module is used for responding to the situation that the vehicle enters a parked state from the parking preparation state, and building a three-dimensional scene map corresponding to a target parking area where the vehicle is currently located based on the first environmental information; the three-dimensional scene map is used for providing navigation information for driving out of the target parking area;
and the display module is used for displaying the three-dimensional scene map.
11. A computer device, comprising: a processor and a memory storing machine-readable instructions executable by the processor, the processor being configured to execute the machine-readable instructions stored in the memory; wherein the machine-readable instructions, when executed by the processor, perform the steps of the vehicle control method according to any one of claims 1 to 9.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a computer device, performs the steps of the vehicle control method according to any one of claims 1 to 9.
CN202311022422.4A 2023-08-14 2023-08-14 Vehicle control method, device, computer equipment and storage medium Pending CN116853282A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311022422.4A CN116853282A (en) 2023-08-14 2023-08-14 Vehicle control method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311022422.4A CN116853282A (en) 2023-08-14 2023-08-14 Vehicle control method, device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116853282A true CN116853282A (en) 2023-10-10

Family

ID=88228760

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311022422.4A Pending CN116853282A (en) 2023-08-14 2023-08-14 Vehicle control method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116853282A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117109592A (en) * 2023-10-18 2023-11-24 北京集度科技有限公司 Vehicle navigation method, device, computer equipment and storage medium
CN117109592B (en) * 2023-10-18 2024-01-12 北京集度科技有限公司 Vehicle navigation method, device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
US20180313724A1 (en) Testing method and apparatus applicable to driverless vehicle
CN109937343A (en) Appraisal framework for the prediction locus in automatic driving vehicle traffic forecast
CN107449433A (en) The feedback cycle for being used for vehicle observation based on map
CN116853282A (en) Vehicle control method, device, computer equipment and storage medium
WO2020007589A1 (en) Training a deep convolutional neural network for individual routes
CN111127651A (en) Automatic driving test development method and device based on high-precision visualization technology
US20200202208A1 (en) Automatic annotation and generation of data for supervised machine learning in vehicle advanced driver assistance systems
JP2018189457A (en) Information processing device
CN108286973B (en) Running data verification method and device and hybrid navigation system
CN107257913B (en) Method for updating parking lot information in navigation system and navigation system
KR102106029B1 (en) Method and system for improving signage detection performance
JP2004038489A (en) Vehicle operation control device, system, and method
WO2022067295A1 (en) Architecture for distributed system simulation timing alignment
US20210048819A1 (en) Apparatus and method for determining junction
US11120687B2 (en) Systems and methods for utilizing a machine learning model to identify public parking spaces and for providing notifications of available public parking spaces
CN113548040B (en) Parking method, parking device, vehicle and storage medium
US11928406B2 (en) Systems and methods for creating infrastructure models
CN114048626A (en) Traffic flow simulation scene construction method and system
CN113762030A (en) Data processing method and device, computer equipment and storage medium
JP2022056153A (en) Temporary stop detection device, temporary stop detection system, and temporary stop detection program
CN116762094A (en) Data processing method and device
CN114035576B (en) Driving path determining method and device
CN110942603A (en) Vehicle collision alarm method and device, vehicle and server
CN117109592B (en) Vehicle navigation method, device, computer equipment and storage medium
US20230024799A1 (en) Method, system and computer program product for the automated locating of a vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination