CN111856963A - Parking simulation method and device based on vehicle-mounted looking-around system

Info

Publication number
CN111856963A (application CN201910364960.9A; granted as CN111856963B)
Authority
CN
China
Prior art keywords
parking
target
virtual
scene
vehicle
Prior art date
Legal status
Granted
Application number
CN201910364960.9A
Other languages
Chinese (zh)
Other versions
CN111856963B
Inventor
柴长坤
Current Assignee
Beijing Momenta Technology Co Ltd
Original Assignee
Beijing Chusudu Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Chusudu Technology Co ltd
Priority to CN201910364960.9A
Publication of CN111856963A
Application granted; publication of CN111856963B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 17/00 Systems involving the use of models or simulators of said systems
    • G05B 17/02 Systems involving the use of models or simulators of said systems electric
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 25/00 Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01M TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M 17/00 Testing of vehicles
    • G01M 17/007 Wheeled or endless-tracked vehicles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/586 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of parking space
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/14 Traffic control systems for road vehicles indicating individual free spaces in parking areas
    • G08G 1/141 Traffic control systems for road vehicles indicating individual free spaces in parking areas with means giving the indication of available parking spaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources


Abstract

Embodiments of the invention disclose a parking simulation method and device based on a vehicle-mounted looking-around system. The method comprises the following steps: acquiring a plurality of first images of a target virtual vehicle captured for a virtual parking scene; stitching the plurality of first images to obtain a top-view stitched image; identifying preset visual features, which include at least parking spaces, from the top-view stitched image; identifying target features matching the visual features in an automatic driving navigation electronic map; determining the current pose of the target virtual vehicle according to the positions of the target features in the automatic driving navigation electronic map and the positions of the visual features in the top-view stitched image; determining, from the identified parking spaces, a target parking space corresponding to the target virtual vehicle; and controlling the target virtual vehicle to drive to the target parking space based on the current pose, the position of the target parking space and a preset parking algorithm, thereby simulating the parking process based on the vehicle-mounted looking-around system and providing a basis for evaluating that process.

Description

Parking simulation method and device based on vehicle-mounted looking-around system
Technical Field
The invention relates to the technical field of intelligent driving, and in particular to a parking simulation method and device based on a vehicle-mounted looking-around (surround-view) system.
Background
In the field of automatic driving technology, the safety of vehicles and people during autonomous operation is a constant concern.

To ensure that safety, it is important to verify the availability and consistency of the various algorithms involved in automatic driving, which may include vehicle positioning algorithms and parking algorithms, before those algorithms are actually applied.

Verifying these algorithms requires results produced by running them, so obtaining and using such results to be verified is essential. How to provide a method for simulating an automatic driving process, such as a parking process, in order to obtain those results to be verified is therefore a problem that needs to be solved.
Disclosure of Invention
The invention provides a parking simulation method and device based on a vehicle-mounted looking-around system, which are used for realizing the simulation of a parking process based on the vehicle-mounted looking-around system and providing an evaluation basis for the evaluation of the parking process based on the vehicle-mounted looking-around system. The specific technical scheme is as follows.
In a first aspect, an embodiment of the present invention provides a parking simulation method based on a vehicle-mounted looking-around system, including:
acquiring a plurality of first images of a target virtual vehicle captured for a virtual parking scene, wherein the plurality of first images are: images obtained by a plurality of virtual cameras of the target virtual vehicle shooting different directions of the virtual parking scene at the current moment;
stitching the plurality of first images to obtain a top-view stitched image;
identifying preset visual features from the top-view stitched image, wherein the preset visual features at least comprise a parking space;
identifying target features matching the visual features in an automatic driving navigation electronic map; determining the current pose of the target virtual vehicle in the virtual parking scene according to the positions of the target features in the automatic driving navigation electronic map and the positions of the visual features in the top-view stitched image;
determining a target parking space corresponding to the target virtual vehicle from the identified parking spaces;
and controlling the target virtual vehicle to run to the target parking space based on the current pose, the position of the target parking space and a preset parking algorithm.
Optionally, the preset parking algorithm includes a preset parking path planning algorithm and a preset parking control algorithm;
the step of controlling the target virtual vehicle to travel to the target parking space based on the current pose, the position of the target parking space and a preset parking algorithm comprises the following steps:
determining a parking path from the target virtual vehicle to the target parking space based on the current pose, the position of the target parking space and the preset parking path planning algorithm;
and controlling the target virtual vehicle to run to the target parking space based on the parking path and the preset parking control algorithm.
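The patent leaves both "preset" algorithms abstract. Purely as an illustrative sketch of what could stand behind them, and not as the patent's own method, the following plans a naively interpolated path in (x, y, heading) and follows it with a textbook pure-pursuit steering law; all function names and parameter values are assumptions.

```python
import math

def plan_parking_path(current_pose, spot_pose, n=50):
    """Linearly interpolate poses (x, y, heading) from the current pose to the
    target parking spot -- a naive stand-in for the unspecified 'preset parking
    path planning algorithm' (no obstacle or turning-radius handling)."""
    x0, y0, h0 = current_pose
    x1, y1, h1 = spot_pose
    return [(x0 + (x1 - x0) * t, y0 + (y1 - y0) * t, h0 + (h1 - h0) * t)
            for t in (i / (n - 1) for i in range(n))]

def pure_pursuit_step(pose, path, lookahead=1.0, wheelbase=2.7):
    """One control step: pick the first waypoint beyond the lookahead distance
    and return a steering angle toward it (bicycle-model pure pursuit)."""
    x, y, heading = pose
    target = next(((px, py) for px, py, _ in path
                   if math.hypot(px - x, py - y) >= lookahead), path[-1][:2])
    # bearing of the target point relative to the vehicle heading
    alpha = math.atan2(target[1] - y, target[0] - x) - heading
    # classic pure-pursuit curvature-to-steering conversion
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)
```

A real planner would respect obstacles and the vehicle's minimum turning radius; the sketch only illustrates the planner/controller split described above.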
Optionally, after the step of controlling the target virtual vehicle to travel to the target parking space based on the parking path and the preset parking control algorithm, the method further includes:
displaying the parking path for a user to evaluate the preset parking path planning algorithm; and/or,
obtaining parking control data generated in the process that the target virtual vehicle drives to the target parking space, wherein the parking control data comprises at least one of the following data: virtual throttle data, virtual brake data, virtual vehicle driving direction data, virtual gear data, wheel pulse data and wheel speed data;
and displaying the parking control data for a user to evaluate the preset parking control algorithm.
Optionally, after the step of stitching the plurality of first images to obtain the top-view stitched image, the method further includes:
obtaining a top-view shot of the target virtual vehicle taken at the current moment by a scene camera, where the scene camera is: a virtual camera arranged in the virtual parking scene to shoot the target virtual vehicle from above; and evaluating the mounting positions of the plurality of virtual cameras based on the top-view shot and the top-view stitched image;
and/or the step of identifying preset visual features from the top-view stitched image includes:
identifying the preset visual features from the top-view stitched image using a preset visual feature recognition algorithm; and after that step, the method further includes:
obtaining the real pose of the target virtual vehicle in the virtual parking scene at the current moment; determining a scene view-angle range corresponding to the target virtual vehicle based on the real pose and the view-angle ranges of the virtual cameras; determining, from the scene visual features in the virtual parking scene, those falling within the scene view-angle range as target scene visual features; and evaluating the preset visual feature recognition algorithm based on the recognized preset visual features and the target scene visual features;
and/or after the step of determining the current pose of the target virtual vehicle in the virtual parking scene according to the positions of the target features in the automatic driving navigation electronic map and the positions of the visual features in the top-view stitched image, the method further includes:
obtaining the real pose of the target virtual vehicle in the virtual parking scene at the current moment; calculating a difference pose between the real pose and the current pose; and evaluating the reliability of the current pose based on the difference pose.
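As a minimal sketch of the difference-pose evaluation just described, assuming poses are (x, y, heading) tuples and with purely illustrative reliability tolerances:

```python
import math

def pose_difference(true_pose, est_pose):
    """Difference pose between the ground-truth pose and the estimated
    current pose: translation error in metres plus heading error wrapped
    to [-pi, pi)."""
    dx, dy = est_pose[0] - true_pose[0], est_pose[1] - true_pose[1]
    dh = (est_pose[2] - true_pose[2] + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(dx, dy), dh

trans_err, rot_err = pose_difference((10.0, 5.0, 0.10), (10.05, 4.97, 0.12))
reliable = trans_err < 0.10 and abs(rot_err) < math.radians(2.0)  # assumed tolerances
```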
Optionally, before the step of obtaining a plurality of first images of the target virtual vehicle captured for the virtual parking scene, the method further includes:
constructing to obtain the virtual parking scene based on scene construction operation triggered by a user;
and constructing and obtaining the target virtual vehicle based on the vehicle construction operation triggered by the user.
Optionally, the step of constructing the virtual parking scene based on the scene construction operation triggered by the user is implemented by any one of the following two implementation manners:
the first implementation mode comprises the following steps:
outputting a first display interface displaying scene identifications of a plurality of models of a preset virtual parking scene based on a first scene construction operation triggered by a user;
after a user selection operation is detected, constructing the virtual parking scene corresponding to the scene identification of the selected model carried by the selection operation;
the second implementation mode comprises the following steps:
outputting an initial virtual parking scene based on a second scene construction operation triggered by the user, and outputting a second display interface displaying a plurality of scene construction models required for building on the initial virtual parking scene, where the scene construction models include at least one of the following: a lane line model, a parking space model, a deceleration strip model, a zebra crossing model, an arrow model, a number model, a floor tile model and a traffic sign model;
displaying the scene construction models selected and adjusted by the user in the initial virtual parking scene based on the selection operation and the adjustment operation of the user on each scene construction model to construct and obtain the virtual parking scene;
the step of constructing and obtaining the target virtual vehicle based on the vehicle construction operation triggered by the user is realized by any one of the following two realization modes:
The first implementation mode comprises the following steps:
outputting a third display interface displaying vehicle identifications of models of a plurality of preset virtual vehicles based on the first vehicle construction operation triggered by the user, wherein each preset virtual vehicle at least corresponds to a plurality of virtual cameras;
constructing and obtaining a target virtual vehicle based on the selection operation of the user on the vehicle identification of the model of the target virtual vehicle;
the second implementation mode comprises the following steps:
outputting an initial virtual vehicle based on a second vehicle construction operation triggered by a user, and outputting a configuration interface showing a plurality of parameters for configuring virtual sensors corresponding to the initial virtual vehicle, wherein the virtual sensors corresponding to the initial virtual vehicle at least comprise a plurality of virtual cameras;
constructing and obtaining the target virtual vehicle based on the configuration operation of the user on the parameters of the virtual sensor corresponding to the initial virtual vehicle, wherein the configuration operation carries: a user configured parameter value for a parameter of the virtual sensor.
Optionally, the step of determining the current pose of the target virtual vehicle in the virtual parking scene according to the positions of the target features in the automatic driving navigation electronic map and the positions of the visual features in the top-view stitched image includes:
calculating the mapping positions at which the visual features map into the automatic driving navigation electronic map, according to the value of an estimated pose and the positions of the visual features in the top-view stitched image;
calculating a first error between the mapping positions and the actual positions of the target features in the automatic driving navigation electronic map;
judging whether the first error is smaller than a specified threshold;
when the first error is greater than or equal to the specified threshold, adjusting the value of the estimated pose and recalculating the mapping positions according to the adjusted value of the estimated pose and the positions of the visual features in the top-view stitched image;
when the first error is smaller than the specified threshold, determining the current pose of the target virtual vehicle in the virtual parking scene according to the current value of the estimated pose;
or, the step of determining the current pose of the target virtual vehicle in the virtual parking scene according to the positions of the target features in the automatic driving navigation electronic map and the positions of the visual features in the top-view stitched image includes:
calculating the projection positions at which the target features project into the top-view stitched image, according to the value of the estimated pose and the positions of the target features in the automatic driving navigation electronic map;
calculating a second error between the projection positions of the target features and the actual positions of the visual features in the top-view stitched image;
judging whether the second error is smaller than the specified threshold;
when the second error is greater than or equal to the specified threshold, adjusting the value of the estimated pose and recalculating the projection positions according to the adjusted value of the estimated pose and the positions of the target features in the automatic driving navigation electronic map;
and when the second error is smaller than the specified threshold, determining the current pose of the target virtual vehicle in the virtual parking scene according to the current value of the estimated pose.
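A compact sketch of this loop, under stated assumptions: features are treated as matched 2D points, the pose is a planar (x, y, yaw), the motion-model prior of the next optional step supplies the initial value, and SciPy's least-squares solver stands in for the unspecified adjust-and-recalculate rule:

```python
import numpy as np
from scipy.optimize import least_squares

def predict_pose(prev_pose, v, yaw_rate, dt):
    """Dead-reckoning prior from wheel-speed / IMU data: a simple unicycle
    stand-in for the motion model described below."""
    x, y, yaw = prev_pose
    return np.array([x + v * np.cos(yaw) * dt,
                     y + v * np.sin(yaw) * dt,
                     yaw + yaw_rate * dt])

def se2_map(pose, pts):
    # map 2D feature points from the vehicle (stitched-image) frame into
    # the map frame for pose = (x, y, yaw)
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    return pts @ np.array([[c, s], [-s, c]]) + np.array([x, y])

def refine_pose(initial_pose, image_pts, map_pts):
    """Adjust the estimated pose until the visual features observed in the
    top-view stitched image land on their matched target features in the
    map; the solver's convergence test plays the role of the 'first error
    below the specified threshold' check."""
    residual = lambda p: (se2_map(p, image_pts) - map_pts).ravel()
    return least_squares(residual, initial_pose).x
```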
Optionally, before the step of calculating the mapping positions at which the visual features map into the automatic driving navigation electronic map according to the value of the estimated pose and the positions of the visual features in the top-view stitched image, the method further includes:
calculating the estimated pose of the target virtual vehicle at the current moment by taking the pose of the target virtual vehicle at the previous moment as a reference and combining it with a motion model, and then calculating the mapping positions as above; the previous moment is the moment immediately preceding the current moment, and the motion model is determined from data collected by an inertial measurement unit and/or a wheel speed meter of the target virtual vehicle;
in that case the step of calculating the mapping positions according to the value of the estimated pose and the positions of the visual features in the top-view stitched image includes:
taking that estimated pose as the initial value of the estimated pose;
calculating the mapping positions of the visual features in the automatic driving navigation electronic map according to the current value of the estimated pose and the positions of the visual features in the top-view stitched image;
and the step of calculating the projection positions of the target features in the top-view stitched image according to the value of the estimated pose and the positions of the target features in the automatic driving navigation electronic map includes:
taking that estimated pose as the initial value of the estimated pose;
calculating the projection positions of the target features in the top-view stitched image according to the current value of the estimated pose and the positions of the target features in the automatic driving navigation electronic map.

In a second aspect, an embodiment of the present invention provides a parking simulation apparatus based on a vehicle-mounted looking-around system, including:
a first obtaining module configured to obtain a plurality of first images of a target virtual vehicle captured for a virtual parking scene, where the plurality of first images are: images obtained by a plurality of virtual cameras of the target virtual vehicle shooting different directions of the virtual parking scene at the current moment;
a stitching module configured to stitch the plurality of first images to obtain a top-view stitched image;
an identification module configured to identify preset visual features from the top-view stitched image, wherein the preset visual features include at least a parking space;
an identification determination module configured to identify target features matching the visual features in an automatic driving navigation electronic map, and to determine the current pose of the target virtual vehicle in the virtual parking scene according to the positions of the target features in the automatic driving navigation electronic map and the positions of the visual features in the top-view stitched image;
A first determination module configured to determine a target parking space corresponding to the target virtual vehicle from the identified parking spaces;
and the control module is configured to control the target virtual vehicle to run to the target parking space based on the current pose, the position of the target parking space and a preset parking algorithm.
Optionally, the preset parking algorithm includes a preset parking path planning algorithm and a preset parking control algorithm; the control module is specifically configured to: determining a parking path from the target virtual vehicle to the target parking space based on the current pose, the position of the target parking space and the preset parking path planning algorithm; and controlling the target virtual vehicle to run to the target parking space based on the parking path and the preset parking control algorithm.
Optionally, the apparatus further comprises:
a first display module configured to display the parking path after the target virtual vehicle has been controlled to drive to the target parking space based on the parking path and the preset parking control algorithm, so that a user can evaluate the preset parking path planning algorithm; and/or,
A second obtaining module configured to obtain parking control data generated during driving of the target virtual vehicle to the target parking space, wherein the parking control data includes at least one of: virtual throttle data, virtual brake data, virtual vehicle driving direction data, virtual gear data, wheel pulse data and wheel speed data;
a second display module configured to display the parking control data for a user to evaluate the preset parking control algorithm.
Optionally, the apparatus further comprises:
a third obtaining module configured to obtain, after the plurality of first images have been stitched into the top-view stitched image, a top-view shot of the target virtual vehicle taken at the current moment by a scene camera, where the scene camera is: a virtual camera arranged in the virtual parking scene to shoot the target virtual vehicle from above; and a first evaluation module configured to evaluate the mounting positions of the plurality of virtual cameras based on the top-view shot and the top-view stitched image;
and/or the identification module is specifically configured to identify the preset visual features from the top-view stitched image using a preset visual feature recognition algorithm; and the device further comprises: a fourth obtaining module configured to obtain the real pose of the target virtual vehicle in the virtual parking scene at the current moment after the preset visual features have been identified from the top-view stitched image; a second determination module configured to determine a scene view-angle range corresponding to the target virtual vehicle based on the real pose and the view-angle ranges of the plurality of virtual cameras; a third determination module configured to determine, from the scene visual features in the virtual parking scene, those falling within the scene view-angle range as target scene visual features; and a second evaluation module configured to evaluate the preset visual feature recognition algorithm based on the recognized preset visual features and the target scene visual features;
and/or the device further comprises:
a fifth obtaining module configured to obtain the real pose of the target virtual vehicle in the virtual parking scene at the current moment after the current pose of the target virtual vehicle in the virtual parking scene has been determined according to the positions of the target features in the automatic driving navigation electronic map and the positions of the visual features in the top-view stitched image; a calculation module configured to calculate a difference pose between the real pose and the current pose; and a third evaluation module configured to evaluate the reliability of the current pose based on the difference pose.
Optionally, the apparatus further comprises:
a first construction module configured to construct the virtual parking scene, based on a scene construction operation triggered by a user, before the plurality of first images of the target virtual vehicle captured for the virtual parking scene are obtained;
a second construction module configured to construct the target virtual vehicle based on the user-triggered vehicle construction operation.
Optionally, the first construction module is specifically configured to: output a first display interface displaying scene identifications of a plurality of preset virtual parking scene models based on a first scene construction operation triggered by the user; and, after a user selection operation is detected, construct the virtual parking scene corresponding to the scene identification of the selected model carried by the selection operation; or it is specifically configured to: output an initial virtual parking scene based on a second scene construction operation triggered by the user, and output a second display interface displaying a plurality of scene construction models required for building on the initial virtual parking scene, where the scene construction models include at least one of the following: a lane line model, a parking space model, a deceleration strip model, a zebra crossing model, an arrow model, a number model, a floor tile model and a traffic sign model; and display the scene construction models selected and adjusted by the user in the initial virtual parking scene, based on the user's selection and adjustment operations on each scene construction model, to construct the virtual parking scene;
the second construction module is specifically configured to: output a third display interface displaying vehicle identifications of a plurality of preset virtual vehicle models based on the first vehicle construction operation triggered by the user, where each preset virtual vehicle corresponds to at least a plurality of virtual cameras; and construct the target virtual vehicle based on the user's selection operation on the vehicle identification of the target virtual vehicle's model; or it is specifically configured to: output an initial virtual vehicle based on a second vehicle construction operation triggered by the user, and output a configuration interface showing a plurality of parameters for configuring the virtual sensors corresponding to the initial virtual vehicle, where those virtual sensors include at least a plurality of virtual cameras; and construct the target virtual vehicle based on the user's configuration operation on the parameters of the virtual sensors corresponding to the initial virtual vehicle, the configuration operation carrying: the parameter values configured by the user for the parameters of the virtual sensors.
Optionally, the identification determining module includes:
a first calculation unit configured to calculate, according to the value of an estimated pose and the positions of the visual features in the top-view stitched image, the mapping positions at which the visual features map into the automatic driving navigation electronic map;
A second calculation unit configured to calculate a first error between the mapped position and an actual position of the target feature in the automatic driving navigation electronic map;
a first judgment unit configured to judge whether the first error is smaller than a specified threshold;
a first adjusting unit configured to adjust a value of the estimated pose and trigger the first calculating unit when the first error is greater than or equal to the specified threshold;
a first determination unit configured to determine a current pose of the target virtual vehicle in the target virtual parking scene according to a current value of the estimated pose when the first error is smaller than the specified threshold;
alternatively, the identification determination module includes:
a third calculation unit, configured to calculate a projection position of the target feature projected into the top view mosaic according to a value of the estimated pose and a position of the target feature in the automatic driving navigation electronic map;
a fourth calculation unit configured to calculate a second error between the projected position of the target feature and an actual position of the visual feature in the top-view mosaic;
A second determination unit configured to determine whether the second error is smaller than a specified threshold;
a second adjusting unit configured to adjust a value of the estimated pose and trigger the third calculating unit when the second error is greater than or equal to the specified threshold;
a second determination unit configured to determine a current pose of the target virtual vehicle in the target virtual parking scene according to the current value of the estimated pose when the second error is less than the specified threshold.
Optionally, the identification determining module further includes:
a fifth calculating unit configured to calculate, before the mapping positions of the visual features in the automatic driving navigation electronic map are calculated according to the value of the estimated pose and the positions of the visual features in the top-view stitched image, the estimated pose of the target virtual vehicle at the current moment by taking the pose of the target virtual vehicle at the previous moment as a reference and combining a motion model, and then to trigger the first calculating unit; the previous moment is the moment immediately preceding the current moment, and the motion model is determined from data collected by an inertial measurement unit and/or a wheel speed meter of the target virtual vehicle;
the first calculating unit being specifically configured to:
take that estimated pose as the initial value of the estimated pose;
and calculate the mapping positions of the visual features in the automatic driving navigation electronic map according to the current value of the estimated pose and the positions of the visual features in the top-view stitched image;
and the third calculating unit being specifically configured to:
take that estimated pose as the initial value of the estimated pose;
and calculate the projection positions of the target features in the top-view stitched image according to the current value of the estimated pose and the positions of the target features in the automatic driving navigation electronic map.
As can be seen from the above, the parking simulation method and device based on the vehicle-mounted looking-around system provided in the embodiments of the present invention can obtain a plurality of first images of the target virtual vehicle captured for the virtual parking scene, where the plurality of first images are: images obtained by a plurality of virtual cameras of the target virtual vehicle shooting different directions of the virtual parking scene at the current moment; stitch the plurality of first images into a top-view stitched image; identify preset visual features, including at least a parking space, from the top-view stitched image; identify target features matching the visual features in the automatic driving navigation electronic map; determine the current pose of the target virtual vehicle in the virtual parking scene according to the positions of the target features in the automatic driving navigation electronic map and the positions of the visual features in the top-view stitched image; determine a target parking space corresponding to the target virtual vehicle from the identified parking spaces; and control the target virtual vehicle to drive to the target parking space based on the current pose, the position of the target parking space and a preset parking algorithm.
By applying the embodiment of the invention, the looking-around system of the cameras is simulated by using the plurality of virtual cameras, the environment around the target virtual vehicle in the virtual parking scene is shot by the camera looking-around system, the position of the features in the image in the virtual parking scene and the positions of the features in the map are utilized by combining the automatic driving navigation electronic map, the pose of the target virtual vehicle at the current moment is determined, and the simulation of vehicle positioning in the parking process based on the vehicle-mounted looking-around system is realized. And then, determining a target parking space, controlling the target virtual vehicle to run to the target parking space based on the current pose, the position of the target parking space and a preset parking algorithm, and realizing the simulation of the parking process based on the vehicle-mounted looking-around system. The simulation of the parking process based on the vehicle-mounted looking-around system is realized through the simulation system, and an evaluation basis is provided for the evaluation of the parking process based on the vehicle-mounted looking-around system. Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
The innovation points of the embodiment of the invention comprise:
1. the method comprises the steps of simulating a looking-around system of a camera by using a plurality of virtual cameras, shooting the environment around a target virtual vehicle in a virtual parking scene according to the looking-around system of the camera, determining the current time pose of the target virtual vehicle by combining an automatic driving navigation electronic map and using the positions of the features in the image and the positions of the features in the map in the virtual parking scene, and realizing the simulation of vehicle positioning in the parking process based on a vehicle-mounted looking-around system. And then determining a target parking space, controlling the target virtual vehicle to run to the target parking space based on the current pose, the position of the target parking space and a preset parking algorithm, and realizing the simulation of the parking process based on the vehicle-mounted looking-around system. The simulation of the parking process based on the vehicle-mounted looking-around system is realized through the simulation system, and an evaluation basis is provided for the evaluation of the parking process based on the vehicle-mounted looking-around system. By implementing the embodiment of the invention, the positioning of the target virtual vehicle can be completed only by using the visual information, and the environmental information around the target virtual vehicle can be obtained by single image information acquisition based on the all-around camera setting scheme, so that the positioning precision of the target virtual vehicle is higher.
2. The parking path is displayed, so that a user can evaluate a preset parking path planning algorithm, and an evaluation basis is provided for the evaluation of the preset parking path planning algorithm in the parking process based on the vehicle-mounted looking-around system; and/or obtaining and displaying parking control data generated in the process that the target virtual vehicle drives to the target parking space, so that a user can evaluate a preset parking control algorithm, and an evaluation basis is provided for the evaluation of the preset parking control algorithm in the parking process based on the vehicle-mounted looking-around system.
3. A top-view shot of the target virtual vehicle, taken at the current moment by a scene camera, is obtained, and the mounting positions of the plurality of virtual cameras are then evaluated based on the top-view shot and the top-view stitched image; and/or the scene view-angle range corresponding to the target virtual vehicle at the current moment is determined, the target scene visual features falling within that range are determined from the scene visual features in the virtual parking scene, and the preset visual feature recognition algorithm is then evaluated against the recognized preset visual features and the target scene visual features, assessing whether false detections or missed detections of visual features occur and indicators such as recognition accuracy; and/or the real pose of the target virtual vehicle is obtained and the reliability of the current pose is evaluated against it, realizing an evaluation of the accuracy of the positioning algorithm.
4. The virtual parking scene and the target virtual vehicle can be independently constructed by the user, the diversification of the parking scene is realized, the requirements of the user can be met, and the use experience of the user is improved.
5. The method comprises the steps of establishing a motion model by utilizing data collected by a virtual inertia measurement unit and a wheel speed meter, estimating the estimated pose of a target virtual vehicle at the current moment by combining the pose of the target virtual vehicle at the previous moment, and iteratively adjusting the value of the pose of the target virtual vehicle by taking the estimated pose as an initial value until a positioning result with higher precision is obtained, so that the accuracy of real-time positioning of the target virtual vehicle can be further improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is to be understood that the drawings in the following description are merely exemplary of some embodiments of the invention. For a person skilled in the art, without inventive effort, further figures can be obtained from these figures.
Fig. 1 is a schematic flow chart of a parking simulation method based on a vehicle-mounted looking-around system according to an embodiment of the present invention;
Fig. 2 is another schematic flow chart of a parking simulation method based on a vehicle-mounted looking-around system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a constructed virtual parking scenario;
fig. 4 is a schematic structural diagram of a parking simulation device based on a vehicle-mounted looking-around system according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the embodiments and drawings of the present invention are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The invention provides a parking simulation method and device based on a vehicle-mounted looking-around system, which are used for realizing the simulation of a parking process based on the vehicle-mounted looking-around system and providing an evaluation basis for the evaluation of the parking process based on the vehicle-mounted looking-around system. The following are detailed below.
Fig. 1 is a schematic flow chart of a parking simulation method based on a vehicle-mounted looking-around system, which can be applied to an electronic device. In particular, the method may be applied to a master controller in an electronic device. The main controller may be a Central Processing Unit (CPU), etc. The method may comprise the steps of:
s101: and acquiring a plurality of first images of the target virtual vehicle, which are acquired aiming at the virtual parking scene.
The plurality of first images are: images obtained by the target virtual vehicle's multiple virtual cameras shooting different directions of the virtual parking scene at the current moment.
In the embodiment of the invention, the electronic device can be pre-installed with a simulation system, which can be understood as computer application software that implements the simulation of the parking process based on the vehicle-mounted looking-around system. The virtual parking scene may include: parking spaces, driveways, floor tiles, speed bumps, sidewalks, obstacles, numbers, traffic signs, different weather, various buildings, and the like. The sidewalk may also be referred to as a zebra crossing, and the traffic signs may include traffic signboards.
The electronic device may obtain, from the simulation system, the plurality of first images of the target virtual vehicle captured for the virtual parking scene, i.e. the images obtained by the target virtual vehicle's multiple virtual cameras shooting different directions of the virtual parking scene at the current moment. Each virtual camera can acquire images at a certain frequency and transmit the acquired images to the electronic device.
The plurality of virtual cameras capture images in different directions of the virtual parking scene. In one arrangement, the virtual cameras may be disposed at the front, rear, left and right of the target virtual vehicle, with the viewing range of each virtual camera at least covering the ground below it. In the embodiment of the invention, at the current moment each virtual camera shoots at least one first image, so the number of first images is at least the number of virtual cameras.
As an alternative embodiment, the virtual camera may be a fisheye camera and/or a pinhole camera. The field of view (FOV) of a fisheye camera is large, so the first image shot by a single fisheye camera can cover as much of the virtual vehicle's surroundings as possible, increasing the amount of information in the first image.
In one implementation, to bring the simulation system closer to reality and make the simulation evaluation more accurate, the simulation system may include a virtual environment engine and a vehicle dynamics model. The virtual environment engine is used to create the virtual environment, i.e. the virtual parking scene, and can be understood as an application program capable of creating virtual environments; for example, it may be the Unreal Engine (UE). The vehicle dynamics model can be a model close to real automobile behavior; for a specific vehicle type, it can model the mapping from accelerator and brake control quantities to vehicle acceleration and deceleration, and from steering wheel angle to vehicle steering force, among other parameters. A vehicle dynamics model is understood to be a combination of mathematical functions that form the behavior of the vehicle, used to create a virtual vehicle and to control it as a function of input parameters so that the virtual vehicle can drive in the virtual parking scene. The virtual vehicle created by the vehicle dynamics model includes the virtual sensors onboard the virtual vehicle. The vehicle dynamics model can be a mathematical model validated against real-vehicle tests, whose behavior is then closer to that of a real vehicle.
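The patent's dynamics model is fitted to real-vehicle tests; as a simplified stand-in that only illustrates the input-to-motion mapping (pedal positions and steering angle to pose), here is a kinematic bicycle model with purely assumed gains:

```python
import math
from dataclasses import dataclass

@dataclass
class VehicleState:
    x: float = 0.0      # position in the virtual scene (m)
    y: float = 0.0
    yaw: float = 0.0    # heading (rad)
    v: float = 0.0      # longitudinal speed (m/s)

def step(state, throttle, brake, steer, dt=0.02,
         wheelbase=2.7, accel_gain=3.0, brake_gain=6.0):
    """Advance a simplified kinematic bicycle model one tick. The linear
    pedal-to-acceleration gains are illustrative placeholders, not the
    patent's fitted dynamics."""
    a = accel_gain * throttle - brake_gain * brake
    state.v = max(0.0, state.v + a * dt)
    state.x += state.v * math.cos(state.yaw) * dt
    state.y += state.v * math.sin(state.yaw) * dt
    state.yaw += state.v / wheelbase * math.tan(steer) * dt
    return state
```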
In one case, the target virtual vehicle may be provided with other virtual sensors in addition to the virtual camera, wherein the other virtual sensors may include, but are not limited to, an ultrasonic device, an Inertial Measurement Unit (IMU), a wheel speed meter, and the like.
S102: stitching the plurality of first images to obtain a top-view stitched image.
The electronic device may project each first image onto the road plane according to a certain mapping rule and stitch the plurality of first images together using the overlapping areas that may exist between them, obtaining a top-view stitched image of the environmental information around the target virtual vehicle, for example a top-view stitched image containing 360° of environmental information centered on the target virtual vehicle. Alternatively, the electronic device may prestore stitching rules for the first images acquired by the virtual cameras and directly stitch the plurality of first images based on those rules to obtain the top-view stitched image.
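A minimal sketch of the first variant, assuming each virtual camera comes with a pre-calibrated camera-to-ground homography (standing in for the "mapping rule") and using a naive per-pixel maximum where warped images overlap:

```python
import cv2
import numpy as np

def stitch_top_view(images, homographies, out_size=(800, 800)):
    """Warp each camera image onto the road plane with its pre-calibrated
    homography, then fuse the warps into one top-view stitched image. The
    per-pixel maximum is a simple stand-in for whatever blending the
    stitching rules prescribe."""
    canvas = np.zeros((out_size[1], out_size[0], 3), dtype=np.uint8)
    for img, H in zip(images, homographies):
        warped = cv2.warpPerspective(img, H, out_size)
        canvas = np.maximum(canvas, warped)  # naive fusion in overlap areas
    return canvas
```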
S103: the preset visual features are identified from the top-view mosaic.
Wherein the preset visual features at least comprise parking spaces.
In the embodiment of the invention, after the electronic device obtains the top-view stitched image, the preset visual features can be identified from it by a preset visual feature recognition algorithm, such as a deep-learning or image-segmentation recognition algorithm. In an optional implementation, a pre-established semantic feature detection model can be used to detect the visual features in the top-view stitched image; the semantic feature detection model is a neural network model trained with sample images annotated at least with the visual features as model input. The neural network model may be: a convolutional neural network model, a support vector machine, or another deep-learning-based neural network model.
In the embodiment of the invention, the visual features can be image semantic features that have particular meanings and have been found, through experience, to help with vehicle positioning. As an optional implementation, the visual features may be traffic signs, parking spaces, lanes, speed bumps, sidewalks and the like in the virtual parking scene; the embodiment of the present invention does not limit the specific types of visual features. The first images acquired by the virtual cameras may contain several, one or none of the visual features.
In an optional implementation, before the step of detecting the visual features in the top-view stitched image using the pre-established semantic feature detection model, the method further includes establishing that model. From one perspective, the process may include: obtaining an initial semantic feature detection model comprising a feature extraction layer and a feature classification layer; obtaining a plurality of sample images, each containing one or more sample features, the visual features being a subset of the sample features; and obtaining calibration information for each sample image, each piece of calibration information comprising the calibrated position information and calibrated type information of each sample object in the corresponding sample image. The calibration information may be annotated manually, for example: a worker marks each sample object in each sample image with a rectangular frame, the frame representing the position of the sample object in the image and serving as the calibrated position information, and labels the type of each sample object as the calibrated type information. Alternatively, the calibration information may be produced by the electronic device through a dedicated program.
The electronic device then inputs the plurality of sample images into the feature extraction layer of the initial semantic feature detection model to obtain the image features corresponding to each sample object in each sample image; inputs those image features into the feature classification layer to obtain the predicted position information and predicted type information corresponding to each sample feature in each sample image; and matches each piece of predicted position information with its corresponding calibrated position information and each piece of predicted type information with its corresponding calibrated type information. If the matching succeeds, a pre-established semantic feature detection model comprising the feature extraction layer and the feature classification layer is obtained; if it fails, the parameters of the two layers are adjusted and the step of inputting the sample images into the feature extraction layer is repeated until the matching succeeds and the pre-established model is obtained.
Here the sample images correspond to prediction information, and each piece of calibration information corresponds to a piece of prediction information; the prediction information includes the predicted position information and predicted type information.
The matching process may be: calculating, with a preset loss function, a first loss value between each piece of predicted position information and its calibrated position information and a second loss value between each piece of predicted type information and its calibrated type information, then judging whether the first loss value is below a first preset loss threshold and the second loss value below a second preset loss threshold. If both loss values are below their thresholds, the initial semantic feature detection model is deemed converged, i.e. training is finished and the pre-established semantic feature detection model is obtained. If either is not, the parameters of the feature extraction layer and feature classification layer are adjusted in the direction that reduces the two loss values, and the step of inputting the sample images into the feature extraction layer is repeated until both loss values fall below their thresholds and the model is deemed converged, yielding the pre-established semantic feature detection model.
After the pre-established semantic feature detection model is obtained, the visual features contained in the image can be detected in real time by using the pre-established semantic feature detection model.
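A condensed sketch of that training loop in PyTorch, with assumptions made explicit: the detection model is taken to return (predicted boxes, predicted class logits), and Smooth-L1 / cross-entropy losses are assumed stand-ins for the unspecified "preset loss function":

```python
import torch

def train_detector(model, loader, loc_thresh=0.05, cls_thresh=0.05,
                   max_epochs=100, lr=1e-3):
    """Train until the position loss (predicted vs. calibrated positions)
    and the type loss (predicted vs. calibrated types) both fall below
    their preset thresholds, mirroring the convergence test above."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loc_loss_fn = torch.nn.SmoothL1Loss()
    cls_loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(max_epochs):
        loc_sum, cls_sum, batches = 0.0, 0.0, 0
        for images, gt_boxes, gt_types in loader:
            pred_boxes, pred_logits = model(images)   # assumed model interface
            loss_loc = loc_loss_fn(pred_boxes, gt_boxes)
            loss_cls = cls_loss_fn(pred_logits, gt_types)
            opt.zero_grad()
            (loss_loc + loss_cls).backward()
            opt.step()
            loc_sum += loss_loc.item()
            cls_sum += loss_cls.item()
            batches += 1
        if loc_sum / batches < loc_thresh and cls_sum / batches < cls_thresh:
            break  # both preset loss thresholds satisfied: model converged
    return model
```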
From another perspective, the neural network model is as follows: the network structure adopts an Encoder-Decoder model and mainly comprises two parts: an encoded (Encoder) part and a decoded (Decoder) part.
In the embodiment of the invention, the spliced image, namely the top view spliced image is input into the network, wherein the coding part of the network mainly extracts the characteristics of the image through convolution and pooling layers. The network adjusts the network parameters through the training of marked large-scale samples so as to encode the accurate semantic features and non-semantic features of the network. After extracting features through convolution twice, the coding network carries out down-sampling through pooling. The structure of cascading four two-layer convolutions plus one layer of pooling enables the receptive field of the neurons at the top layer of the coding network to cover semantic elements of different scales in the present example.
The decoding network is a symmetric structure with the encoding network, where the pooling layer of the encoding network is changed to an upsampling layer. And in the decoding part, the feature extracted by coding is amplified to the size of an original image through four times of upsampling, so that pixel semantic classification is realized. The up-sampling is realized by deconvolution, which can obtain most information of the input data, but still cause partial information loss, so that we introduce the characteristics of the bottom layer to supplement the details lost in the decoding process. The bottom layer features mainly come from convolutional layers with different scales in the coding network, and the features extracted from the convolutional layers of the coding network on the same scale can be combined with deconvolution to generate a more accurate feature map. The network training mainly adopts cross entropy to measure the difference between the predicted value and the actual value of the network, and the cross entropy formula is as follows:
$$C = -\frac{1}{n}\sum_{x}\left[\,y\ln a + (1-y)\ln(1-a)\,\right]$$
Wherein y is the label value of an image element, namely whether a pixel of the image is a semantic element or a non-semantic element; generally 1 represents a semantic element and 0 a non-semantic element. n is the total number of pixels in the image, x is the input, and a is the output of the neuron, where $a = \sigma(z)$ and $z = \sum_j w_j x_j + b$. This loss can overcome the problem of slow updating of the network weights. After the training of the network model is completed, in actual use the network predicts each pixel of an input image and outputs an attribute value of 0 or 1 for each pixel; a connected block of image elements marked as 1 is a meaningful semantic image structure, so that semantic segmentation of the image is realized. The network structure is specially designed for extracting the semantic features of the spliced image, which ensures the accuracy of semantic feature extraction, and this belongs to one of the invention points. In addition, the first images are spliced and the image semantic features are extracted from the top-view spliced image rather than from the first images one by one, which can improve the extraction efficiency of the image semantic features; this also belongs to one of the invention points.
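For illustration, the following is a minimal sketch of the encoder-decoder structure described above, written in PyTorch; the layer widths, kernel sizes and activation choices are assumptions, not values given in this document.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two convolution layers, as in each encoding block described above.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class EncoderDecoderSeg(nn.Module):
    """Four 'two convs + one pooling' encoder blocks, a symmetric decoder
    with deconvolution up-sampling, and skip connections from the encoder."""
    def __init__(self, in_ch=3, base=16):
        super().__init__()
        chs = [base, base * 2, base * 4, base * 8]
        self.encs = nn.ModuleList()
        prev = in_ch
        for c in chs:
            self.encs.append(double_conv(prev, c))
            prev = c
        self.pool = nn.MaxPool2d(2)
        self.bottom = double_conv(chs[-1], chs[-1] * 2)
        self.ups, self.decs = nn.ModuleList(), nn.ModuleList()
        prev = chs[-1] * 2
        for c in reversed(chs):
            self.ups.append(nn.ConvTranspose2d(prev, c, 2, stride=2))
            self.decs.append(double_conv(c * 2, c))  # x2: skip features concatenated
            prev = c
        self.head = nn.Conv2d(chs[0], 1, 1)  # per-pixel semantic / non-semantic logit

    def forward(self, x):
        skips = []
        for enc in self.encs:
            x = enc(x)
            skips.append(x)      # lower-layer features that supplement the decoder
            x = self.pool(x)
        x = self.bottom(x)
        for up, dec, skip in zip(self.ups, self.decs, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))
        return torch.sigmoid(self.head(x))  # trained with binary cross entropy
```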
The electronic equipment can use the trained neural network model to identify the visual features from the top-view mosaic, so that the preset visual features can be extracted from the image quickly and accurately.
S104: identifying target features matched with the visual features from the automatic driving navigation electronic map; and determining the current pose of the target virtual vehicle in the virtual parking scene according to the position of the target feature in the automatic driving navigation electronic map and the position of the visual feature in the overlooking splicing map.
As an alternative implementation, the position of each image semantic feature in the automatic driving navigation electronic map may be represented by using absolute coordinates in a preset coordinate system based on the virtual parking scene, and a specific coordinate value may be determined when the virtual parking scene is established. As another alternative, the position of each image semantic feature in the electronic map for automatic driving navigation may also be represented by using relative coordinates, that is, the relative position of each image semantic feature with respect to a preset origin of coordinates, where the origin of coordinates may be set according to requirements. For example, when the electronic map for automatic driving navigation is constructed, the entrance of the parking lot may be set as the origin of coordinates, and for each image semantic feature used for constructing the electronic map for automatic driving navigation, the relative position of the image semantic feature with respect to the entrance of the parking lot is measured.
In the embodiment of the invention, for a certain image semantic feature in the automatic driving navigation electronic map, when the target virtual vehicle passes the position of that feature, a virtual camera of the target virtual vehicle may shoot a first image containing it. Subsequently, when the electronic device identifies a target feature matching a visual feature in the top-view mosaic, image matching algorithms such as the Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF) may be used to identify, from the electronic map for automatic driving navigation, the target feature matching the visual feature in the top-view mosaic; the specific matching algorithm is not limited in the embodiment of the present invention.
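For illustration, such a match can be sketched with OpenCV's SIFT implementation and Lowe's ratio test; the ratio threshold and the treatment of the map as a rasterized image are assumptions made purely for this sketch.

```python
import cv2

def match_features(mosaic_img, map_img, ratio=0.75):
    """Match visual features in the top-view mosaic against map features
    using SIFT descriptors and a ratio test (threshold value assumed)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(mosaic_img, None)
    kp2, des2 = sift.detectAndCompute(map_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in candidates if m.distance < ratio * n.distance]
    # Each match pairs a mosaic position with a map position for localization.
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
```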
In the embodiment of the invention, the pose of the target virtual vehicle at the current moment comprises the position and the pose of the target virtual vehicle at the current moment. It can be understood that, if the positions of the semantic features of the images in the electronic map for automatic driving navigation are expressed by absolute coordinates, the position of the target virtual vehicle at the current moment is correspondingly expressed by the absolute coordinates; if the positions of the semantic features of the images in the electronic map for automatic driving navigation are identified by using relative coordinates, the position of the target virtual vehicle at the current moment is correspondingly represented by using the relative coordinates.
For a certain target feature in the automatic driving navigation electronic map, the matched visual feature is actually the projection of that target feature onto the imaging plane of the virtual camera (namely, onto the first image obtained by shooting), and the specific projection position is determined by the pose of the target virtual vehicle when the virtual camera shoots the first image. In view of this, the pose of the target virtual vehicle in the virtual parking scene at the current time can be determined according to the position of the target feature in the automatic driving navigation electronic map and the position of the visual feature in the top-view mosaic. Furthermore, when a plurality of visual features are identified from the top-view mosaic, the electronic device may calculate a final pose of the target virtual vehicle at the current time based on the position of each of those visual features and the position of the target feature in the automatic driving navigation electronic map that each visual feature matches.
S105: and determining a target parking space corresponding to the target virtual vehicle from the identified parking spaces.
Wherein one or more parking spaces can be identified from the top-view mosaic. If a single parking space is identified from the top-view mosaic, that parking space can directly serve as the target parking space corresponding to the target virtual vehicle. If a plurality of parking spaces are identified from the top-view mosaic, in one case one parking space can be selected at random from the identified parking spaces as the target parking space corresponding to the target virtual vehicle; in another case, to facilitate parking, the target parking space corresponding to the target virtual vehicle may be determined from the identified parking spaces based on the current pose. Specifically, a parking space located in front of or behind the driving direction of the target virtual vehicle may be determined from the identified parking spaces based on the current pose and taken as the target parking space. "In front of" and "behind" are relative terms: "behind" may refer to the identified parking space closest to the target virtual vehicle in the direction opposite to travel, and "in front of" may refer to the identified parking space closest to the target virtual vehicle in the direction of travel. Taking a parking space located in front of or behind the target virtual vehicle as the target parking space is only an example and does not limit the embodiment of the present invention; a parking space on the left or right of the target virtual vehicle may also be determined as the target parking space corresponding to the target virtual vehicle.
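A minimal sketch of the pose-based selection just described, assuming a 2-D vehicle pose (x, y, yaw) and parking-space centers in scene coordinates; all names and the planar simplification are assumptions.

```python
import math

def pick_target_slot(pose, slots, ahead=True):
    """Pick the identified parking space closest to the vehicle in the
    driving direction (ahead=True) or opposite to it (ahead=False).

    pose:  (x, y, yaw) of the target virtual vehicle in scene coordinates.
    slots: list of (x, y) parking-space centers identified in the mosaic.
    """
    x, y, yaw = pose
    cos_y, sin_y = math.cos(yaw), math.sin(yaw)
    best, best_dist = None, float("inf")
    for sx, sy in slots:
        # Longitudinal offset of the slot in the vehicle frame.
        forward = (sx - x) * cos_y + (sy - y) * sin_y
        if (forward > 0) != ahead:
            continue  # slot lies on the wrong side for this query
        dist = math.hypot(sx - x, sy - y)
        if dist < best_dist:
            best, best_dist = (sx, sy), dist
    return best
```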
S106: and controlling the target virtual vehicle to run to the target parking space based on the current pose, the position of the target parking space and a preset parking algorithm.
The electronic device can directly obtain the position of the target parking space from the automatic driving navigation electronic map, namely the spatial position of the target parking space in the virtual parking scene. Or the position of the target parking space, that is, the spatial position of the target parking space in the virtual parking scene, may be determined according to the current pose of the target virtual vehicle, the position of the target parking space in the overlook mosaic, and the depth information corresponding to the target parking space obtained in advance.
Subsequently, the electronic device may control the target virtual vehicle to travel to the target parking space based on the current pose, the position of the target parking space, and a preset parking algorithm. In an optional implementation manner, the preset parking algorithm includes a preset parking path planning algorithm and a preset parking control algorithm; the S106 may include:
determining a parking path from the target virtual vehicle to the target parking space based on the current pose, the position of the target parking space and a preset parking path planning algorithm; and controlling the target virtual vehicle to run to the target parking space based on the parking path and a preset parking control algorithm.
The electronic device determines a parking path along which the target virtual vehicle travels to the target parking space based on a preset parking path planning algorithm, taking the current pose as the initial pose of the target virtual vehicle in the parking process and the position of the target parking space as the final position of the target virtual vehicle in the parking process. The embodiment of the present invention does not limit the specific type of the preset parking path planning algorithm, which may be any existing algorithm for planning the parking path of the target virtual vehicle.
Then, the electronic equipment controls the target virtual vehicle to travel to the target parking space based on the parking path and a preset parking control algorithm. The embodiment of the present invention likewise does not limit the specific type of the preset parking control algorithm, which may be any existing algorithm that calculates parking control data from a parking path so that the target virtual vehicle travels to the target parking space according to the parking control data.
In one implementation, the electronic device may calculate parking control data according to a parking path, and then input the parking control data into the simulation system, and the simulation system configures the parking control data to the target virtual vehicle, so that the target virtual vehicle travels to a target parking space in the simulation system according to the parking control data. Wherein the parking control data includes at least one of the following data: virtual throttle data, virtual brake data, virtual vehicle direction of travel data, virtual gear data, wheel pulse data, and wheel speed data.
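For concreteness, the parking control data enumerated above might be grouped as in the following sketch; the field types, units and the simulation-system interface are assumptions, not an API defined by this embodiment.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ParkingControlData:
    """One control sample passed to the simulation system; every field is
    optional, matching the 'at least one of the following data' wording."""
    virtual_throttle: Optional[float] = None        # normalized 0..1 (assumed)
    virtual_brake: Optional[float] = None           # normalized 0..1 (assumed)
    virtual_steering_angle: Optional[float] = None  # driving-direction data, rad
    virtual_gear: Optional[str] = None              # e.g. "D", "R", "P"
    wheel_pulse_count: Optional[int] = None         # wheel pulse data
    wheel_speed: Optional[float] = None             # m/s (assumed unit)

def apply_to_simulation(sim, vehicle_id, data: ParkingControlData):
    # The simulation system configures the control data to the target
    # virtual vehicle; 'sim.configure' is a hypothetical interface.
    sim.configure(vehicle_id, data)
```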
In another implementation manner, the electronic device may first obtain the current parking control data and the ground information at the current time, calculate new parking control data based on the current parking control data, the preset visual features and the parking path, and then input the parking control data into the simulation system; the simulation system configures the parking control data to the target virtual vehicle, so that the target virtual vehicle travels to the target parking space in the simulation system according to the parking control data. The ground information may be, for example, an obstacle identified from the top-view mosaic.
By applying the embodiment of the invention, a camera looking-around system is simulated by using the plurality of virtual cameras, and the environment around the target virtual vehicle in the virtual parking scene is shot by this camera looking-around system; in combination with the automatic driving navigation electronic map, the positions of the features in the image within the virtual parking scene and the positions of those features in the map are used to determine the pose of the target virtual vehicle at the current moment, thereby realizing the simulation of vehicle positioning in a parking process based on a vehicle-mounted looking-around system. Then a target parking space is determined based on the current pose; a parking path required for the target virtual vehicle to travel to the target parking space is determined based on the current pose, the position of the target parking space and a preset parking path planning algorithm; and the target virtual vehicle is controlled to travel to the target parking space based on the parking path and a preset parking control algorithm, realizing parking path determination and parking simulation in a parking process based on a vehicle-mounted looking-around system. The simulation of the parking process based on the vehicle-mounted looking-around system is thus realized through the simulation system, providing an evaluation basis for the evaluation of the parking process based on the vehicle-mounted looking-around system.
In another embodiment of the present invention, as shown in fig. 2, the method may include the steps of:
S201: and acquiring a plurality of first images of the target virtual vehicle, which are acquired aiming at the virtual parking scene.
Wherein, the plurality of first images are: the multiple virtual cameras of the target virtual vehicle capture the resulting images at the current time and in different directions of the virtual parking scene.
S202: and splicing the plurality of first images to obtain a top spliced image.
S203: the preset visual features are identified from the top-view mosaic.
Wherein the preset visual features at least comprise parking spaces.
S204: identifying target features matched with the visual features from the automatic driving navigation electronic map; and determining the current pose of the target virtual vehicle in the virtual parking scene according to the position of the target feature in the automatic driving navigation electronic map and the position of the visual feature in the overlooking splicing map.
S205: and determining a target parking space corresponding to the target virtual vehicle from the identified parking spaces.
S206: and determining a parking path from the target virtual vehicle to the target parking space based on the current pose, the position of the target parking space and a preset parking path planning algorithm.
S207: and controlling the target virtual vehicle to run to the target parking space based on the parking path and a preset parking control algorithm.
S208: and displaying the parking path for the user to evaluate a preset parking path planning algorithm.
S209: and obtaining parking control data generated in the process that the target virtual vehicle drives to the target parking space.
Wherein the parking control data includes at least one of: virtual throttle data, virtual brake data, virtual vehicle driving direction data, virtual gear data, wheel pulse data and wheel speed data.
S210: and displaying the parking control data to allow a user to evaluate a preset parking control algorithm.
Wherein S201 is the same as S101 shown in fig. 1, S202 is the same as S102 shown in fig. 1, S203 is the same as S103 shown in fig. 1, S204 is the same as S104 shown in fig. 1, S205 is the same as S105 shown in fig. 1, and S206 and S207 are an implementation manner of S106 shown in fig. 1, and are not repeated herein.
In the embodiment of the invention, after the parking process based on the vehicle-mounted looking-around system is simulated, the generated intermediate information based on the simulation process can be obtained, and then the related algorithm in the parking process based on the vehicle-mounted looking-around system is evaluated based on the intermediate information. The intermediate information may include parking control data generated based on the generated parking path and in the process of controlling the target virtual vehicle to travel to the target parking space, and then the parking path and the parking control data are displayed so that a user can evaluate a preset parking path planning algorithm and evaluate a preset parking control algorithm.
The user can check the parking path to see whether it would cause the target virtual vehicle to collide with obstacles such as other virtual vehicles, virtual walls and virtual lamp posts. The user can also check the parking control data to determine whether the fluctuation of the vehicle speed during the parking of the target virtual vehicle is excessive, for example whether the difference between the maximum and minimum vehicle speeds exceeds a preset vehicle speed threshold, and so on.
When the parking path and the parking control data are displayed, the parking path and the parking control data can be displayed on the same display interface or different display interfaces.
In another embodiment, it is possible to display only the parking route or only the parking control data.
In another embodiment of the present invention, after the step of stitching the plurality of first images to obtain the top-view mosaic, the method may further include: obtaining a top-view shot image taken by a scene camera shooting the target virtual vehicle from above at the current moment, wherein the scene camera is a virtual camera arranged in the virtual parking scene for shooting the target virtual vehicle from a top view; and evaluating the installation positions of the plurality of virtual cameras based on the top-view shot image and the top-view mosaic.
In another embodiment of the present invention, the step of identifying the preset visual feature from the top-view mosaic includes: recognizing preset visual features from the overlook spliced graph by using a preset visual feature recognition algorithm;
after the step of identifying preset visual features from the top-view mosaic, the method may further comprise: acquiring the real pose of the target virtual vehicle in the virtual parking scene at the current moment;
determining a scene visual angle range corresponding to the target virtual vehicle based on the real pose and the visual angle ranges of the plurality of virtual cameras; based on the scene visual angle range, determining scene visual characteristics existing in the scene visual angle range from the scene visual characteristics in the virtual parking scene as target scene visual characteristics; and evaluating a preset visual feature recognition algorithm based on the recognized preset visual features and the visual features of the target scene.
In another embodiment of the present invention, after the step of determining the current pose of the target virtual vehicle in the virtual parking scene according to the position of the target feature in the automatic driving navigation electronic map and the position of the visual feature in the top view mosaic, the method may further comprise:
Acquiring the real pose of the target virtual vehicle in the virtual parking scene at the current moment; calculating a difference pose between the real pose and the current pose; and evaluating the reliability of the current pose based on the difference pose.
In one implementation, a virtual camera for top-view shooting, which may be referred to as a scene camera, may be preset in the virtual parking scene. After the top-view mosaic is obtained by stitching, a top-view shot image obtained when the scene camera shoots the target virtual vehicle from above at the current time may be obtained. The top-view shot image contains the environment around the position of the target virtual vehicle at the current time, and the top-view mosaic theoretically contains that same environment. By comparing the content contained in the top-view shot image with the content contained in the top-view mosaic, it can be evaluated whether the installation positions of the plurality of virtual cameras allow an environment image covering the periphery of the target virtual vehicle to be acquired; once it is established that they do, it can further be evaluated whether the stitching rule used to stitch the plurality of first images is appropriate.
In one implementation, the simulation system can provide the pose of the target virtual vehicle in the virtual parking scene in real time. The electronic device can obtain the real pose of the target virtual vehicle in the virtual parking scene at the current moment, and obtain the view-angle range of each of the plurality of virtual cameras of the target virtual vehicle; based on the real pose and these view-angle ranges, the electronic device can determine the range of the scene that the plurality of virtual cameras can capture at the current moment, namely the scene view-angle range corresponding to the target virtual vehicle. The electronic device can then determine, from the scene visual features in the virtual parking scene, the scene visual features lying within the scene view-angle range as target scene visual features, and compare each recognized preset visual feature with the target scene visual features. When every preset visual feature is successfully matched with a target scene visual feature and vice versa, it can be determined that the result of the preset visual feature recognition algorithm has high accuracy. When the preset visual features include features that do not belong to the target scene visual features, a false detection has occurred, and the accuracy of the result of the preset visual feature recognition algorithm is not high. When the preset visual features are fewer than the target scene visual features, a missed detection can be determined, and the accuracy of the result of the preset visual feature recognition algorithm is likewise not high. The target scene visual features may include: traffic signs, parking spaces, lanes, speed bumps, sidewalks and the like in the virtual parking scene.
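The comparison just described amounts to counting matches, false detections and missed detections; a minimal sketch is given below, assuming a user-supplied match_fn that decides whether a recognized feature corresponds to a scene visual feature (for example, same type and a position error below some tolerance).

```python
def evaluate_recognition(detected, ground_truth, match_fn):
    """Compare recognized features against the target scene visual features
    inside the scene view-angle range; 'match_fn' is assumed, not defined
    by this document."""
    matched_gt = set()
    false_detections = 0
    for det in detected:
        hit = next((i for i, gt in enumerate(ground_truth)
                    if i not in matched_gt and match_fn(det, gt)), None)
        if hit is None:
            false_detections += 1  # detected but not in the scene: false detection
        else:
            matched_gt.add(hit)
    missed = len(ground_truth) - len(matched_gt)  # in scene but not detected
    precision = (len(matched_gt) / len(detected)) if detected else 1.0
    recall = (len(matched_gt) / len(ground_truth)) if ground_truth else 1.0
    return {"false_detections": false_detections, "missed": missed,
            "precision": precision, "recall": recall}
```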
In an implementation manner, the simulation system may provide the pose of the target virtual vehicle in the virtual parking scene in real time. The electronic device may obtain the real pose of the target virtual vehicle in the virtual parking scene at the current time, compare the real pose with the determined current pose, and calculate a difference pose between the real pose and the current pose, so as to evaluate the reliability of the current pose: the larger the difference between the real pose and the current pose, namely the larger the difference pose, the lower the reliability of the current pose, and correspondingly the lower the accuracy of the result obtained by the corresponding positioning algorithm.
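A minimal sketch of the difference-pose calculation, assuming planar (x, y, yaw) poses; a full SE(3) comparison would follow the same pattern with a rotation-matrix difference.

```python
import math

def pose_difference(real_pose, current_pose):
    """Difference pose between the real pose provided by the simulation
    system and the current pose determined by localization."""
    dx = current_pose[0] - real_pose[0]
    dy = current_pose[1] - real_pose[1]
    dyaw = (current_pose[2] - real_pose[2] + math.pi) % (2 * math.pi) - math.pi
    translation_error = math.hypot(dx, dy)   # metres
    heading_error = abs(dyaw)                # radians, wrapped to [-pi, pi]
    return translation_error, heading_error
```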
In another embodiment of the present invention, before S101, the method may further include:
constructing to obtain a virtual parking scene based on scene construction operation triggered by a user;
and constructing to obtain the target virtual vehicle based on the vehicle construction operation triggered by the user.
In this embodiment, in order to improve the user experience, a standardized virtual parking scene construction tool is provided, with which a parking map can be customized by the user. For the parking process based on the vehicle-mounted looking-around system, the virtual sensors that may be used are provided, for example: virtual cameras for collecting images, including fisheye cameras, pinhole cameras and the like; and a virtual ultrasonic device, an IMU and a wheel speed meter for collecting distance information, displacement information and running speed information, together with parameter functions for customizing the virtual sensors. Control interfaces for the virtual vehicle are also provided, for example: a control interface for throttle data, a control interface for brake data, a control interface for the virtual vehicle driving direction, a control interface for gear data, and the like.
In this embodiment, through the provided functions, the electronic device may construct and obtain a virtual parking scene and a target virtual vehicle through corresponding operations of a user. The virtual parking scene is customized by a user, and the target virtual vehicle is customized by the user.
In another embodiment of the present invention, the step of constructing the virtual parking scene based on the scene construction operation triggered by the user is implemented by any one of the following two implementation manners:
the first implementation mode comprises the following steps:
outputting a first display interface displaying scene identifications of a plurality of models of a preset virtual parking scene based on a first scene construction operation triggered by a user;
the scene identifier may be a number or a letter, or a text that may represent a scene feature of the preset virtual parking scene model, or a thumbnail of the preset virtual parking scene model.
After detecting the user selection operation, constructing and obtaining a virtual parking scene corresponding to the scene identification carried by the selection operation based on the scene identification of the model of the selected virtual parking scene carried by the selection operation;
the second implementation mode comprises the following steps:
outputting an initial virtual parking scene based on a second scene construction operation triggered by a user, and outputting a second display interface displaying a plurality of scene construction models required for constructing the initial virtual parking scene, wherein the scene construction models at least comprise at least one of the following models: the traffic sign comprises a lane line model, a parking space model, a deceleration strip model, a zebra crossing model, an arrow model, a digital model, a floor tile model and a traffic sign model;
And displaying the scene construction model selected and adjusted by the user in the initial virtual parking scene based on the selection operation and the adjustment operation of the user on each scene construction model so as to construct and obtain the virtual parking scene.
In an implementation manner, a plurality of preset virtual parking scene models can be prestored in the simulation system. After detecting a first scene construction operation triggered by a user, the electronic device can output a first display interface displaying the scene identifications of the plurality of preset virtual parking scene models; the user can select one preset virtual parking scene model according to his own requirements, namely select the scene identification of one preset virtual parking scene model with a mouse, a stylus or a finger. After detecting the user's selection operation, the electronic device constructs, in the simulation system, the virtual parking scene corresponding to the scene identification carried by the selection operation, based on the scene identification of the selected virtual parking scene model carried by that operation. Constructing the virtual parking scene corresponding to the carried scene identification can be understood as rendering the virtual parking scene based on the selected virtual parking scene model, the virtual parking scene being a three-dimensional scene.
In another implementation, a plurality of scene construction models required for constructing an initial virtual parking scene may be prestored in the simulation system, where the scene construction models include, but are not limited to, at least one of the following models: a lane line model, a parking space model, a deceleration strip model, a zebra crossing model, an arrow model, a digital model, a floor tile model and a traffic sign model. After detecting a second scene construction operation triggered by the user, the electronic device outputs an initial virtual parking scene based on that operation, and outputs a second display interface displaying the plurality of scene construction models required for constructing the initial virtual parking scene. The user can then choose from the plurality of scene construction models according to his own requirements, and place the required scene construction models in the initial virtual parking scene so as to construct the virtual parking scene. After detecting the user's selection operation on each scene construction model, the electronic device places each selected scene construction model in the initial virtual parking scene based on the selection operation, and after detecting the user's adjustment operation on each selected scene construction model, adjusts each selected model in the initial virtual parking scene, for example its position and size, so as to construct the virtual parking scene in the simulation system.
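One way such placement and adjustment operations might be recorded is as a list of model placements; the schema below is purely illustrative and is not an interface defined by this embodiment.

```python
from dataclasses import dataclass, field

@dataclass
class ModelPlacement:
    """One scene construction model placed in the initial virtual parking
    scene; names and units are assumptions for illustration."""
    model_type: str          # e.g. "parking_space", "lane_line", "speed_bump"
    x: float                 # position in scene coordinates, metres
    y: float
    yaw: float = 0.0         # orientation, radians
    scale: float = 1.0       # size adjustment applied by the user

@dataclass
class VirtualParkingScene:
    placements: list = field(default_factory=list)

    def add(self, placement: ModelPlacement):
        # Corresponds to the user selecting a model and adjusting it.
        self.placements.append(placement)

scene = VirtualParkingScene()
scene.add(ModelPlacement("parking_space", x=12.0, y=3.5, yaw=1.5708))
scene.add(ModelPlacement("zebra_crossing", x=5.0, y=0.0))
```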
In another embodiment of the present invention, the step of constructing the target virtual vehicle based on the vehicle construction operation triggered by the user is implemented by any one of the following two implementation manners:
the first implementation mode comprises the following steps:
outputting a third display interface for displaying vehicle identifications of models of a plurality of preset virtual vehicles based on first vehicle construction operation triggered by a user, wherein each preset virtual vehicle at least corresponds to a plurality of virtual cameras;
The vehicle identifier may be a number or a letter, or a text that may represent the model of a preset virtual vehicle, or a thumbnail of the preset virtual vehicle.
Constructing and obtaining a target virtual vehicle based on the selection operation of a user on the vehicle identification of the model of the target virtual vehicle;
the second implementation mode comprises the following steps:
outputting an initial virtual vehicle based on a second vehicle construction operation triggered by a user, and outputting a configuration interface showing a plurality of parameters for configuring the virtual sensors corresponding to the initial virtual vehicle, wherein the virtual sensors corresponding to the initial virtual vehicle at least comprise a plurality of virtual cameras;
constructing and obtaining a target virtual vehicle based on configuration operation of a user on parameters of a virtual sensor corresponding to the initial virtual vehicle, wherein the configuration operation carries: a parameter value configured by the user for a parameter of the virtual sensor.
In one implementation manner, models of a plurality of preset virtual vehicles may be prestored in the simulation system. After detecting a first vehicle construction operation triggered by a user, the electronic device outputs a third display interface displaying the vehicle identifications of the models of the plurality of preset virtual vehicles; the user may select one preset virtual vehicle model according to his own needs, namely select the vehicle identification of one preset virtual vehicle model with a mouse, a stylus or a finger, and the electronic device constructs the target virtual vehicle upon detecting the user's selection operation on the vehicle identification of the model of the target virtual vehicle. Constructing the target virtual vehicle may be understood as rendering the target virtual vehicle in the virtual parking scene, the target virtual vehicle being a three-dimensional virtual vehicle. The target virtual vehicle may be provided with a plurality of virtual sensors, for example: a plurality of virtual cameras for collecting images; and a virtual ultrasonic device, an IMU, a wheel speed meter and the like for collecting distance information, displacement information and running speed information. The plurality of virtual cameras can be respectively arranged at the front, rear, left and right of the target virtual vehicle, so that they can acquire images of the surroundings of the position where the target virtual vehicle is located. The parameters of the plurality of virtual sensors arranged in the target virtual vehicle are preset with reference to the sensor parameters of the real vehicle corresponding to the target virtual vehicle.
In another implementation manner, in order to better meet the requirements of the user, when detecting a second vehicle construction operation triggered by the user, the electronic device outputs an initial virtual vehicle based on the second vehicle construction operation triggered by the user, and outputs a configuration interface showing a plurality of parameters for configuring a virtual sensor corresponding to the initial virtual vehicle, at this time, the user may configure the parameters of the virtual sensor corresponding to the initial virtual vehicle on the configuration interface. For example: setting the posture of the virtual camera, setting the acquisition frequency of the IMU sensor and the like. Furthermore, after the electronic device detects the configuration operation of the user on the parameters of the virtual sensors corresponding to the initial virtual vehicle, the target virtual vehicle is constructed and obtained based on the parameter values carried in the configuration operation and configured by the user on the parameters of the virtual sensors.
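A configuration of the kind described above (setting the posture of a virtual camera, setting the acquisition frequency of the IMU sensor) might look like the following sketch; all field names, units, defaults and the spawn interface are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class VirtualCameraConfig:
    """Pose and field of view of one virtual camera of the looking-around
    system; four such cameras (front, rear, left, right) are assumed."""
    mount: str              # "front" | "rear" | "left" | "right"
    x: float                # mounting position in the vehicle frame, metres
    y: float
    z: float
    roll: float = 0.0       # mounting attitude ("posture"), radians
    pitch: float = 0.0
    yaw: float = 0.0
    fov_deg: float = 190.0  # fisheye field of view (assumed default)

@dataclass
class VirtualImuConfig:
    rate_hz: float = 100.0  # acquisition frequency set by the user

def build_target_vehicle(sim, cameras: List[VirtualCameraConfig],
                         imu: VirtualImuConfig):
    # 'sim.spawn_vehicle' is a hypothetical simulation-system interface;
    # the configuration operation carries the user's parameter values.
    return sim.spawn_vehicle(cameras=cameras, imu=imu)
```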
As shown in fig. 3, a schematic diagram of a virtual parking scene is constructed, as shown in fig. 3, the virtual parking scene includes a target virtual vehicle, a parking space constructed by a parking space model, an obstacle such as a stationary vehicle constructed by a vehicle model, a floor tile constructed by a floor tile model, a zebra crossing constructed by a zebra crossing model, and the like.
In another embodiment of the present invention, the step of determining the current pose of the target virtual vehicle in the target virtual parking scene according to the position of the target feature in the automatic driving navigation electronic map and the position of the visual feature in the top view mosaic map may include:
calculating a mapping position of the visual feature mapped to the automatic driving navigation electronic map according to the value of the estimated pose and the position of the visual feature in the overlooking splicing map;
calculating a first error between the mapping position and the actual position of the target feature in the automatic driving navigation electronic map;
judging whether the first error is smaller than a specified threshold value;
when the first error is larger than or equal to a specified threshold value, adjusting the value of the estimated pose, and calculating the mapping position of the visual feature mapped to the automatic driving navigation electronic map according to the value of the estimated pose and the position of the visual feature in the overlooking splicing map;
and when the first error is smaller than a specified threshold value, determining the current pose of the target virtual vehicle in the target virtual parking scene according to the current value of the estimated pose.
In the embodiment of the present invention, when the electronic device calculates the mapping position of the visual feature for the first time, an initial value of the estimated pose at the current time may be obtained first. The initial value of the estimated pose may be obtained by the electronic device through estimation based on the pose of the target virtual vehicle at the moment preceding the current moment and data collected by an IMU (inertial measurement unit) and/or a wheel speed meter of the target virtual vehicle; alternatively, any value may be used as the initial value of the estimated pose.
After the electronic device obtains the initial value of the estimated pose, the pose of the vehicle at the current moment can be determined according to the position of the target feature in the automatic driving navigation electronic map and the position of the visual feature in the target image. Specifically, the value of the estimated pose can be taken as the pose information P_i of the vehicle at moment i, and P_i is adjusted through continuous iteration until the error between the mapping position of the visual feature in the map and the actual position of the target feature is minimal; the value of P_i at which the error is minimal is determined as the pose of the vehicle at the current moment.
In an implementation manner, after the electronic device obtains the initial value of the estimated pose, the pose change of the target virtual vehicle between the current time and the previous time can be determined according to the value of the estimated pose and the pose of the target virtual vehicle at the previous time; the position of the visual feature in the top-view mosaic corresponding to the previous time is obtained, and the depth information of the visual feature is determined by a triangulation algorithm based on the pose change of the target virtual vehicle between the current time and the previous time, the position of the visual feature in the top-view mosaic corresponding to the previous time, and its position in the top-view mosaic corresponding to the current time. The depth information represents the distance between the visual feature and the virtual camera, which can also be regarded as the distance between the visual feature and the target virtual vehicle. The electronic device then calculates the mapping position of the visual feature in the automatic driving navigation electronic map based on the value of the estimated pose, the position of the visual feature in the top-view mosaic, and the depth information of the visual feature.
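A minimal sketch of the triangulation step, assuming a pinhole virtual camera with intrinsic matrix K and (R, t) poses at the two moments; this is an illustrative stand-in, not the exact computation of the embodiment.

```python
import numpy as np
import cv2

def triangulate_feature(K, pose_prev, pose_curr, pt_prev, pt_curr):
    """Recover the depth of a visual feature observed at two moments.

    K:                   3x3 virtual-camera intrinsic matrix (assumed)
    pose_prev/pose_curr: (R, t) pairs mapping world to camera, R 3x3, t 3x1
    pt_prev/pt_curr:     pixel positions of the feature at the two moments
    """
    P1 = K @ np.hstack(pose_prev)   # 3x4 projection matrix, previous moment
    P2 = K @ np.hstack(pose_curr)   # 3x4 projection matrix, current moment
    pts1 = np.asarray(pt_prev, float).reshape(2, 1)
    pts2 = np.asarray(pt_curr, float).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4x1
    X = (X_h[:3] / X_h[3]).ravel()                    # world point
    R2, t2 = pose_curr
    depth = (R2 @ X + t2.ravel())[2]                  # z in current camera frame
    return X, depth
```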
Alternatively, it may be: the target virtual vehicle is provided with a laser sensor, the depth information of the visual features can be acquired through the laser sensor, the electronic equipment acquires the depth information of the visual features acquired through the laser sensor, and the mapping position of the visual features mapped to the automatic driving navigation electronic map is calculated based on the values of the estimated pose, the positions of the visual features in the overlooking splicing map and the depth information of the visual features.
The process of calculating the mapping position of the visual feature in the automatic driving navigation electronic map based on the value of the estimated pose, the position of the visual feature in the top-view mosaic and the depth information of the visual feature may be as follows: a first device position of the visual feature in the device coordinate system corresponding to the top-view mosaic at the current moment is determined based on the position of the visual feature in the top-view mosaic, the depth information of the visual feature, and a first conversion relation between the image coordinate system and the device coordinate system corresponding to the top-view mosaic at the current moment; the mapping position of the visual feature in the automatic driving navigation electronic map is then determined based on the value of the estimated pose and the first device position. The first conversion relation is a preset conversion relation.
A first error between the mapping position and the actual position of the target feature in the automatic driving navigation electronic map is calculated, and it is judged whether the first error is smaller than a specified threshold. When the first error is greater than or equal to the specified threshold, the error is considered large; the value of the estimated pose is adjusted on the principle of reducing the first error, and the step of calculating the mapping position of the visual feature in the automatic driving navigation electronic map according to the value of the estimated pose and the position of the visual feature in the top-view mosaic is executed again. When the first error is smaller than the specified threshold, the error may be considered acceptable and the positioning accuracy high, and the current pose of the target virtual vehicle in the target virtual parking scene is determined according to the current value of the estimated pose.
In another embodiment of the present invention, the step of determining the current pose of the target virtual vehicle in the target virtual parking scene according to the position of the target feature in the automatic driving navigation electronic map and the position of the visual feature in the top view mosaic map may include:
calculating the projection position of the target feature projected to the overlooking splicing map according to the value of the estimated pose and the position of the target feature in the automatic driving navigation electronic map;
Calculating a second error between the projected position of the target feature and the actual position of the visual feature in the top-down mosaic;
judging whether the second error is smaller than a specified threshold value;
when the second error is larger than or equal to the designated threshold value, adjusting the value of the estimated pose, and calculating the projection position of the target feature projected to the overlook mosaic according to the value of the estimated pose and the position of the target feature in the automatic driving navigation electronic map;
and when the second error is smaller than the specified threshold, determining the current pose of the target virtual vehicle in the target virtual parking scene according to the current value of the estimated pose.
Similarly, in the process of calculating the projection position of the target feature in the top-view mosaic according to the value of the estimated pose and the position of the target feature in the automatic driving navigation electronic map, the distance information, namely the depth information, between the target feature and the target virtual vehicle or the virtual camera can be obtained first; a second device position of the target feature in the device coordinate system corresponding to the top-view mosaic at the current moment is determined based on the value of the estimated pose and the position of the target feature in the automatic driving navigation electronic map; and the projection position of the target feature in the top-view mosaic is then calculated based on the second device position, the obtained depth information, and the first conversion relation. The depth information between the target feature and the target virtual vehicle or the virtual camera is obtained in the same way as the depth information of the visual feature, which is not repeated here.
In one implementation, the above steps can be specifically expressed as the following mathematical model:
$$P_i = \arg\min\left(\left\|X_{ij} - f(P_i, A_j)\right\|\right)$$
wherein P_i is the pose of the target virtual vehicle at time i, A_j is the position of the j-th target feature in the automatic driving navigation electronic map, X_ij is the position, in the top-view mosaic, of the visual feature matching the j-th target feature, and f(·,·) is the projection equation that projects A_j under P_i so that the projection result takes the same form of expression as X_ij. In this way the error between the observation mapped from the current value of the estimated pose and the actual observation can be obtained, and the virtual camera pose (namely the pose of the target virtual vehicle) and the observation are optimized by nonlinear optimization to iteratively reduce the error, so as to obtain the maximum-likelihood pose. That is to say, in the embodiment of the present invention, the estimated pose value obtained by estimation from the pose of the target virtual vehicle at the previous time and the data collected by the IMU and/or the wheel speed meter of the target virtual vehicle may be used as the pose P_i of the target virtual vehicle at time i, and P_i is adjusted through continuous iteration until the error between the projection position of the target feature in the top-view mosaic and the actual position of the visual feature is minimal; the value of P_i at which the error is minimal is determined as the pose of the target virtual vehicle at the current moment. As this mathematical model shows, after the estimation range of the positioning pose of the target virtual vehicle is determined, a numerical optimization algorithm is further adopted to determine a higher-precision positioning pose, which also belongs to one of the invention points.
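A minimal numerical sketch of this iterative error reduction, assuming a planar pose (x, y, yaw), point features, and SciPy's nonlinear least-squares solver; the simplified planar projection f below is an assumption, not the projection equation of the embodiment.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(pose, map_points, mosaic_points):
    """Stacked errors || X_ij - f(P_i, A_j) || for a planar pose.
    f transforms each map feature A_j into the vehicle (mosaic) frame."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, s], [-s, c]])          # world -> vehicle rotation
    predicted = (map_points - np.array([x, y])) @ R.T
    return (mosaic_points - predicted).ravel()

def estimate_pose(initial_pose, map_points, mosaic_points):
    # initial_pose comes from the motion model (IMU / wheel-speed estimate);
    # the solver iteratively reduces the error, as described above.
    result = least_squares(residuals, initial_pose,
                           args=(map_points, mosaic_points))
    return result.x  # maximum-likelihood pose under Gaussian noise
```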
In another embodiment of the present invention, before the step of calculating a mapping position of the visual feature to the electronic map for automated driving navigation according to the value of the estimated pose and the position of the visual feature in the top view mosaic, the method may further include:
calculating the estimated pose of the virtual vehicle at the current moment by taking the pose of the target virtual vehicle at the last moment as a reference and combining a motion model, and executing the step of calculating the mapping position of the visual feature mapped to the automatic driving navigation electronic map according to the value of the estimated pose and the position of the visual feature in the overlooking splicing map; the last moment is: at the adjacent time before the current time, the motion model is determined by data collected by an inertia measurement unit and/or a wheel speed meter of the target virtual vehicle;
And the step of calculating the mapping position of the visual feature to the automatic driving navigation electronic map according to the value of the estimated pose and the position of the visual feature in the overlooking splicing map can comprise the following steps:
taking the value of the estimated pose as an initial value of the estimated pose;
calculating a mapping position of the visual feature mapped to the automatic driving navigation electronic map according to the current value of the estimated pose and the position of the visual feature in the overlooking splicing map;
and the step of calculating the projection position of the target feature in the overlooking mosaic according to the value of the estimated pose and the position of the target feature in the automatic driving navigation electronic map can comprise the following steps of:
taking the value of the estimated pose as an initial value of the estimated pose;
and calculating the projection position of the target feature projected to the overlooking splicing map according to the current value of the estimated pose and the position of the target feature in the automatic driving navigation electronic map.
In the embodiment of the invention, the electronic device may calculate the pose of the target virtual vehicle periodically at a certain frequency, and the motion model may be determined by data collected by an IMU and/or a wheel speed meter of the target virtual vehicle. A six-axis IMU can measure the three-axis acceleration and angular rate of the target virtual vehicle, and the wheel speed meter can measure the wheel rotating speed of the vehicle; starting from the pose of the target virtual vehicle at the previous moment, the measurement data of the IMU and/or the wheel speed meter are integrated over time, and the estimated pose of the target virtual vehicle in the virtual parking scene at the current moment can be calculated. In this embodiment, the invention constructs the motion model based on data acquired by the IMU and/or data acquired by the wheel speed meter. Compared with a uniform-velocity model (namely, assuming the vehicle moves at the same relative speed at two adjacent moments), the precision of this motion model is higher, so that the actual motion of the vehicle can be better represented. However, since the integration error accumulates gradually over time, in order to further improve the accuracy of the final positioning pose of the target virtual vehicle, the present invention can determine the estimation range of the positioning pose of the target virtual vehicle according to the estimated pose calculated by the motion model, and then determine a higher-precision positioning pose within that range. This also belongs to one of the inventions of the present invention.
Therefore, after the electronic equipment obtains the visual characteristics, a motion model can be established according to the measurement data of the inertia measurement unit and/or the wheel speed meter, the estimated pose of the vehicle at the current moment is calculated by combining the motion model, the value of the estimated pose is used as an initial value of the estimated pose, and the value of the estimated pose is adjusted through continuous iteration to finally determine the vehicle positioning pose with higher precision.
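A minimal dead-reckoning sketch of such a motion model, assuming a planar unicycle model driven by the wheel speed meter and the IMU yaw rate; the state layout and units are assumptions.

```python
import math

def propagate_pose(pose, wheel_speed, yaw_rate, dt):
    """Integrate wheel-speed and IMU yaw-rate measurements over one time
    step to predict the estimated pose at the current moment from the
    pose at the previous moment (planar unicycle model).

    pose:        (x, y, yaw) at the previous moment
    wheel_speed: forward speed from the wheel speed meter, m/s
    yaw_rate:    angular rate about the vertical axis from the IMU, rad/s
    dt:          time elapsed since the previous moment, s
    """
    x, y, yaw = pose
    x += wheel_speed * math.cos(yaw) * dt
    y += wheel_speed * math.sin(yaw) * dt
    yaw += yaw_rate * dt
    return (x, y, yaw)  # initial value for the iterative pose refinement
```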
Corresponding to the above method embodiment, an embodiment of the present invention provides a parking simulation apparatus based on a vehicle-mounted looking-around system, as shown in fig. 4, including: a first obtaining module 410 configured to obtain a plurality of first images of a target virtual vehicle captured for a virtual parking scene, wherein the plurality of first images are: the virtual parking system comprises a plurality of virtual cameras of the target virtual vehicle, a plurality of parking sensors and a plurality of parking sensors, wherein the virtual cameras of the target virtual vehicle shoot images in different directions of the virtual parking scene at the current moment; a stitching module 420 configured to stitch the plurality of first images to obtain a top stitching map; an identifying module 430 configured to identify a preset visual feature from the top-view mosaic, wherein the preset visual feature includes at least a parking space; an identification determination module 440 configured to identify a target feature matching the visual feature from an electronic map of automated driving navigation; determining the current pose of the target virtual vehicle in the virtual parking scene according to the position of the target feature in the automatic driving navigation electronic map and the position of the visual feature in the overlooking splicing map; a first determining module 450, configured to determine a target parking space corresponding to the target virtual vehicle from the identified parking spaces; a control module 460 configured to control the target virtual vehicle to travel to the target parking space based on the current pose, the position of the target parking space, and a preset parking algorithm.
By applying the embodiment of the invention, the looking-around system of the cameras is simulated by using the plurality of virtual cameras, the environment around the target virtual vehicle in the virtual parking scene is shot by the camera looking-around system, the position of the features in the image in the virtual parking scene and the positions of the features in the map are utilized by combining the automatic driving navigation electronic map, the pose of the target virtual vehicle at the current moment is determined, and the simulation of vehicle positioning in the parking process based on the vehicle-mounted looking-around system is realized. And then, determining a target parking space, controlling the target virtual vehicle to drive to the target parking space based on the current pose, the position of the target parking space and a preset parking algorithm, and realizing parking path determination and parking simulation in the parking process based on the vehicle-mounted looking-around system. The simulation of the parking process based on the vehicle-mounted looking-around system is realized through the simulation system, and an evaluation basis is provided for the evaluation of the parking process based on the vehicle-mounted looking-around system.
In another embodiment of the present invention, the preset parking algorithm includes a preset parking path planning algorithm and a preset parking control algorithm;
the control module 460 is specifically configured to: determining a parking path from the target virtual vehicle to the target parking space based on the current pose, the position of the target parking space and the preset parking path planning algorithm; and controlling the target virtual vehicle to run to the target parking space based on the parking path and the preset parking control algorithm.
In another embodiment of the present invention, the apparatus further comprises: the first display module is configured to display the parking path after the target virtual vehicle is controlled to drive to the target parking space based on the parking path and the preset parking control algorithm, so that a user can evaluate the preset parking path planning algorithm; and/or a second obtaining module configured to obtain parking control data generated during driving of the target virtual vehicle to the target parking space, wherein the parking control data includes at least one of: virtual throttle data, virtual brake data, virtual vehicle driving direction data, virtual gear data, wheel pulse data and wheel speed data; a second display module configured to display the parking control data for a user to evaluate the preset parking control algorithm.
In another embodiment of the present invention, the apparatus further comprises:
a third obtaining module, configured to obtain a top view shot diagram of a scene camera shooting a top view of the target virtual vehicle at the current time after the top view mosaic diagram is obtained by stitching the plurality of first images, where the scene camera is: the virtual camera is arranged in the virtual parking scene and used for shooting the target virtual vehicle in a overlooking mode; a first evaluation module configured to evaluate the installation positions of the plurality of virtual cameras based on the top shot view and the top mosaic view.
In another embodiment of the present invention, the identification module is specifically configured to identify a preset visual feature from the top-view mosaic by using a preset visual feature identification algorithm;
the device further comprises: a fourth obtaining module configured to obtain a real pose of the target virtual vehicle at the virtual parking scene at the current moment after the preset visual features are identified from the top-view mosaic; a second determining module configured to determine a scene view range corresponding to the target virtual vehicle based on the real pose and the view ranges of the plurality of virtual cameras; the third determining module is configured to determine scene visual features existing in the scene visual angle range from the scene visual features in the virtual parking scene based on the scene visual angle range, and the scene visual features are used as target scene visual features; a second evaluation module configured to evaluate the preset visual feature recognition algorithm based on the recognized preset visual feature and the target scene visual feature.
In another embodiment of the present invention, the apparatus further comprises: a fifth obtaining module configured to obtain a real pose of the target virtual vehicle in the virtual parking scene at a current moment after determining a current pose of the target virtual vehicle in the virtual parking scene according to the position of the target feature in the automatic driving navigation electronic map and the position of the visual feature in the look-down mosaic; a calculation module configured to calculate a difference pose between the true pose and the current pose; a third evaluation module configured to evaluate the confidence level of the current pose based on the difference pose.
In another embodiment of the present invention, the apparatus further comprises:
a first construction module configured to construct the virtual parking scene based on a user-triggered scene construction operation before the plurality of first images of the target virtual vehicle captured for the virtual parking scene are obtained; and a second construction module configured to construct the target virtual vehicle based on a user-triggered vehicle construction operation.
In another embodiment of the present invention, the first construction module is specifically configured to: output a first display interface displaying scene identifications of a plurality of preset virtual parking scene models based on a first scene construction operation triggered by a user; and, after a user selection operation is detected, construct the virtual parking scene corresponding to the scene identification of the selected virtual parking scene model carried by the selection operation; or is specifically configured to: output an initial virtual parking scene based on a second scene construction operation triggered by the user, and output a second display interface displaying a plurality of scene construction models required for building on the initial virtual parking scene, wherein the scene construction models comprise at least one of the following: a lane line model, a parking space model, a deceleration strip model, a zebra crossing model, an arrow model, a number model, a floor tile model, and a traffic sign model; and display the scene construction models selected and adjusted by the user in the initial virtual parking scene, based on the user's selection and adjustment operations on each scene construction model, to construct the virtual parking scene;
The second construction module is specifically configured to: output a third display interface displaying vehicle identifications of a plurality of preset virtual vehicle models based on a first vehicle construction operation triggered by the user, wherein each preset virtual vehicle corresponds to at least a plurality of virtual cameras; and construct the target virtual vehicle based on the user's selection operation on the vehicle identification of the target virtual vehicle's model; or is specifically configured to: output an initial virtual vehicle based on a second vehicle construction operation triggered by the user, and output a configuration interface showing a plurality of parameters for configuring the virtual sensors corresponding to the initial virtual vehicle, wherein the virtual sensors comprise at least a plurality of virtual cameras; and construct the target virtual vehicle based on the user's configuration operation on the parameters of the virtual sensors, wherein the configuration operation carries user-configured parameter values for those parameters.
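A hedged sketch of the configuration flow handled by the second construction module: user-supplied parameter values are applied to an initial virtual vehicle to build the target virtual vehicle. The dataclass fields are assumptions about what a virtual surround-view camera might expose.

from dataclasses import dataclass, field

@dataclass
class VirtualCamera:
    name: str
    position_m: tuple            # mounting point in the vehicle frame (x, y, z)
    yaw_pitch_roll_deg: tuple    # mounting orientation
    fov_deg: float = 190.0       # surround-view rigs typically use fisheye lenses

@dataclass
class TargetVirtualVehicle:
    model_id: str
    cameras: list = field(default_factory=list)

def build_vehicle(model_id, camera_params):
    # Apply the user-configured parameter values carried by the configuration operation.
    return TargetVirtualVehicle(model_id, [VirtualCamera(**p) for p in camera_params])

vehicle = build_vehicle("sedan-01", [
    {"name": "front", "position_m": (3.7, 0.0, 0.6), "yaw_pitch_roll_deg": (0, -30, 0)},
    {"name": "rear", "position_m": (-0.9, 0.0, 0.8), "yaw_pitch_roll_deg": (180, -30, 0)},
])
print(len(vehicle.cameras))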
In another embodiment of the present invention, the identification determination module 440 includes:
a first calculation unit configured to calculate, according to a value of an estimated pose and the position of the visual feature in the top-view mosaic, a mapping position of the visual feature mapped into the automatic driving navigation electronic map; a second calculation unit configured to calculate a first error between the mapping position and the actual position of the target feature in the automatic driving navigation electronic map; a first judgment unit configured to judge whether the first error is smaller than a specified threshold; a first adjusting unit configured to adjust the value of the estimated pose and trigger the first calculation unit when the first error is greater than or equal to the specified threshold; and a first determination unit configured to determine the current pose of the target virtual vehicle in the target virtual parking scene according to the current value of the estimated pose when the first error is smaller than the specified threshold;
Alternatively, the identification determination module 440 includes: a third calculation unit configured to calculate, according to the value of the estimated pose and the position of the target feature in the automatic driving navigation electronic map, a projection position of the target feature projected into the top-view mosaic; a fourth calculation unit configured to calculate a second error between the projection position of the target feature and the actual position of the visual feature in the top-view mosaic; a second judgment unit configured to judge whether the second error is smaller than a specified threshold; a second adjusting unit configured to adjust the value of the estimated pose and trigger the third calculation unit when the second error is greater than or equal to the specified threshold; and a second determination unit configured to determine the current pose of the target virtual vehicle in the target virtual parking scene according to the current value of the estimated pose when the second error is smaller than the specified threshold.
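Numerically, the first alternative above amounts to iterative pose refinement. The sketch below uses a Gauss-Newton step with a numerical Jacobian to adjust a 2-D pose (x, y, theta) until the first error falls below the specified threshold; Gauss-Newton is one possible solver, as the module only requires that the estimate be adjusted and the first calculation unit retriggered.

import numpy as np

def map_features(pose, pts):
    # Rigid 2-D transform of mosaic features (N, 2) by pose (x, y, theta).
    x, y, th = pose
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return pts @ R.T + np.array([x, y])

def refine_pose(feat_mosaic, feat_map, init_pose, threshold=1e-3, max_iter=50):
    pose = np.asarray(init_pose, dtype=float)
    for _ in range(max_iter):
        r = (feat_map - map_features(pose, feat_mosaic)).ravel()   # first error
        if np.sqrt(np.mean(r ** 2)) < threshold:
            break
        J = np.zeros((r.size, 3))                                  # numerical Jacobian
        for k in range(3):
            d = np.zeros(3)
            d[k] = 1e-6
            J[:, k] = ((feat_map - map_features(pose + d, feat_mosaic)).ravel() - r) / 1e-6
        pose += np.linalg.lstsq(J, -r, rcond=None)[0]              # adjust the estimate
    return pose

true_pose = np.array([1.0, 2.0, 0.3])
corners = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 6.0], [0.0, 6.0]])  # e.g. a parking space
print(refine_pose(corners, map_features(true_pose, corners), [0.0, 0.0, 0.0]))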
In another embodiment of the present invention, the identification determination module 440 further comprises:
a fifth calculation unit configured to, before the mapping position of the visual feature in the automatic driving navigation electronic map is calculated according to the value of the estimated pose and the position of the visual feature in the top-view mosaic, calculate the estimated pose of the target virtual vehicle at the current moment using a motion model, with the pose of the target virtual vehicle at the previous moment as a reference, and then trigger the first calculation unit; the previous moment is the moment temporally adjacent to and before the current moment, and the motion model is determined from data collected by an inertial measurement unit and/or a wheel speed sensor of the target virtual vehicle;
The first calculation unit is specifically configured to: take the value of the estimated pose as an initial value of the estimated pose; and calculate the mapping position of the visual feature in the automatic driving navigation electronic map according to the current value of the estimated pose and the position of the visual feature in the top-view mosaic;
and the third calculation unit is specifically configured to: take the value of the estimated pose as an initial value of the estimated pose; and calculate the projection position of the target feature projected into the top-view mosaic according to the current value of the estimated pose and the position of the target feature in the automatic driving navigation electronic map.
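The fifth calculation unit's prediction can be pictured as simple dead reckoning. The unicycle model below, fed by a wheel-speed reading v and an IMU yaw rate w over a time step dt, is an assumed concrete form of the motion model, which the embodiment defines only as derived from inertial measurement unit and/or wheel speed sensor data.

import math

def predict_pose(prev_pose, v, w, dt):
    # Dead-reckon from the previous moment's pose to an estimated current pose.
    x, y, th = prev_pose
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + w * dt)

# e.g. 1.5 m/s forward with a gentle left yaw over a 0.1 s step
print(predict_pose((1.0, 2.0, 0.3), v=1.5, w=0.2, dt=0.1))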
The above apparatus embodiment corresponds to the method embodiment and has the same technical effects; for a detailed description, refer to the method embodiment, which is not repeated here.
Those of ordinary skill in the art will understand that: the figures are merely schematic representations of one embodiment, and the blocks or flow diagrams in the figures are not necessarily required to practice the present invention.
Those of ordinary skill in the art will understand that: modules in the devices in the embodiments may be distributed in the devices in the embodiments according to the description of the embodiments, or may be located in one or more devices different from the embodiments with corresponding changes. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A parking simulation method based on a vehicle-mounted looking-around system is characterized by comprising the following steps:
acquiring a plurality of first images of a target virtual vehicle captured for a virtual parking scene, wherein the plurality of first images are images shot, at the current moment, by a plurality of virtual cameras of the target virtual vehicle in different directions of the virtual parking scene;
stitching the plurality of first images to obtain a top-view mosaic;
identifying a preset visual feature from the top-view mosaic, wherein the preset visual feature at least comprises a parking space;
identifying target features matched with the visual features from an automatic driving navigation electronic map; determining the current pose of the target virtual vehicle in the virtual parking scene according to the position of the target feature in the automatic driving navigation electronic map and the position of the visual feature in the top-view mosaic;
determining a target parking space corresponding to the target virtual vehicle from the identified parking spaces;
and controlling the target virtual vehicle to travel to the target parking space based on the current pose, the position of the target parking space, and a preset parking algorithm.
2. The method of claim 1, wherein the preset parking algorithm comprises a preset parking path planning algorithm and a preset parking control algorithm;
the step of controlling the target virtual vehicle to travel to the target parking space based on the current pose, the position of the target parking space and a preset parking algorithm comprises the following steps:
Determining a parking path from the target virtual vehicle to the target parking space based on the current pose, the position of the target parking space and the preset parking path planning algorithm;
and controlling the target virtual vehicle to travel to the target parking space based on the parking path and the preset parking control algorithm.
3. The method according to claim 2, wherein after the step of controlling the target virtual vehicle to travel to the target parking space based on the parking path and the preset parking control algorithm, the method further comprises:
displaying the parking path for a user to evaluate the preset parking path planning algorithm; and/or,
obtaining parking control data generated while the target virtual vehicle travels to the target parking space, wherein the parking control data comprises at least one of the following: virtual throttle data, virtual brake data, virtual vehicle driving direction data, virtual gear data, wheel pulse data, and wheel speed data;
and displaying the parking control data for a user to evaluate the preset parking control algorithm.
4. The method of claim 1, wherein after the step of stitching the plurality of first images to obtain the top-view mosaic, the method further comprises:
obtaining a top-view shot of the target virtual vehicle captured by a scene camera at the current moment, wherein the scene camera is a virtual camera arranged in the virtual parking scene to shoot the target virtual vehicle from a top-down view; and evaluating the installation positions of the plurality of virtual cameras based on the top-view shot and the top-view mosaic;
and/or, the step of identifying preset visual features from the top-view mosaic comprises:
identifying preset visual features from the top-view mosaic by using a preset visual feature recognition algorithm; and after the step of identifying preset visual features from the top-view mosaic, the method further comprises:
obtaining the real pose of the target virtual vehicle in the virtual parking scene at the current moment; determining a scene view-angle range corresponding to the target virtual vehicle based on the real pose and the view-angle ranges of the plurality of virtual cameras; determining, based on the scene view-angle range, the scene visual features falling within that range from among the scene visual features in the virtual parking scene, as target scene visual features; and evaluating the preset visual feature recognition algorithm based on the identified preset visual features and the target scene visual features;
and/or, after the step of determining the current pose of the target virtual vehicle in the virtual parking scene according to the position of the target feature in the automatic driving navigation electronic map and the position of the visual feature in the top-view mosaic, the method further comprises:
obtaining the real pose of the target virtual vehicle in the virtual parking scene at the current moment; calculating a difference pose between the real pose and the current pose; and evaluating the confidence of the current pose based on the difference pose.
5. The method of claim 1, wherein prior to the step of obtaining a plurality of first images of a target virtual vehicle captured for a virtual parking scene, the method further comprises:
constructing the virtual parking scene based on a scene construction operation triggered by a user;
and constructing the target virtual vehicle based on a vehicle construction operation triggered by the user.
6. The method of claim 5, wherein the step of constructing the virtual parking scene based on the user-triggered scene construction operation is implemented in either of the following two ways:
The first implementation mode comprises the following steps:
outputting a first display interface displaying scene identifications of a plurality of models of a preset virtual parking scene based on a first scene construction operation triggered by a user;
after a user selection operation is detected, constructing the virtual parking scene corresponding to the scene identification of the selected virtual parking scene model carried by the selection operation;
the second implementation mode comprises the following steps:
outputting an initial virtual parking scene based on a second scene construction operation triggered by the user, and outputting a second display interface displaying a plurality of scene construction models required for building on the initial virtual parking scene, wherein the scene construction models comprise at least one of the following: a lane line model, a parking space model, a deceleration strip model, a zebra crossing model, an arrow model, a number model, a floor tile model, and a traffic sign model;
displaying the scene construction models selected and adjusted by the user in the initial virtual parking scene based on the selection operation and the adjustment operation of the user on each scene construction model to construct and obtain the virtual parking scene;
the step of constructing the target virtual vehicle based on the user-triggered vehicle construction operation is implemented in either of the following two ways:
the first implementation mode comprises the following steps:
outputting a third display interface displaying vehicle identifications of models of a plurality of preset virtual vehicles based on the first vehicle construction operation triggered by the user, wherein each preset virtual vehicle at least corresponds to a plurality of virtual cameras;
constructing and obtaining a target virtual vehicle based on the selection operation of the user on the vehicle identification of the model of the target virtual vehicle;
the second implementation mode comprises the following steps:
outputting an initial virtual vehicle based on a second vehicle construction operation triggered by a user, and outputting a configuration interface showing a plurality of parameters for configuring virtual sensors corresponding to the initial virtual vehicle, wherein the virtual sensors corresponding to the initial virtual vehicle at least comprise a plurality of virtual cameras;
constructing and obtaining the target virtual vehicle based on the configuration operation of the user on the parameters of the virtual sensor corresponding to the initial virtual vehicle, wherein the configuration operation carries: a user configured parameter value for a parameter of the virtual sensor.
7. The method of any one of claims 1-6, wherein the step of determining the current pose of the target virtual vehicle in the target virtual parking scene according to the position of the target feature in the automatic driving navigation electronic map and the position of the visual feature in the top-view mosaic comprises:
calculating a mapping position of the visual feature mapped into the automatic driving navigation electronic map according to a value of an estimated pose and the position of the visual feature in the top-view mosaic;
calculating a first error between the mapping position and the actual position of the target feature in the automatic driving navigation electronic map;
judging whether the first error is smaller than a specified threshold;
when the first error is greater than or equal to the specified threshold, adjusting the value of the estimated pose, and returning to the step of calculating the mapping position of the visual feature mapped into the automatic driving navigation electronic map according to the value of the estimated pose and the position of the visual feature in the top-view mosaic;
when the first error is smaller than the specified threshold, determining the current pose of the target virtual vehicle in the target virtual parking scene according to the current value of the estimated pose;
or, the step of determining the current pose of the target virtual vehicle in the target virtual parking scene according to the position of the target feature in the automatic driving navigation electronic map and the position of the visual feature in the top-view mosaic comprises:
calculating a projection position of the target feature projected into the top-view mosaic according to a value of an estimated pose and the position of the target feature in the automatic driving navigation electronic map;
calculating a second error between the projection position of the target feature and the actual position of the visual feature in the top-view mosaic;
judging whether the second error is smaller than a specified threshold;
when the second error is greater than or equal to the specified threshold, adjusting the value of the estimated pose, and returning to the step of calculating the projection position of the target feature projected into the top-view mosaic according to the value of the estimated pose and the position of the target feature in the automatic driving navigation electronic map;
and when the second error is smaller than the specified threshold, determining the current pose of the target virtual vehicle in the target virtual parking scene according to the current value of the estimated pose.
8. The method of claim 7, wherein before the step of calculating the mapping position of the visual feature mapped into the automatic driving navigation electronic map according to the value of the estimated pose and the position of the visual feature in the top-view mosaic, the method further comprises:
calculating the estimated pose of the target virtual vehicle at the current moment using a motion model, with the pose of the target virtual vehicle at the previous moment as a reference, and then proceeding to calculate the mapping position of the visual feature mapped into the automatic driving navigation electronic map according to the value of the estimated pose and the position of the visual feature in the top-view mosaic; the previous moment is the moment temporally adjacent to and before the current moment, and the motion model is determined from data collected by an inertial measurement unit and/or a wheel speed sensor of the target virtual vehicle;
and the step of calculating the mapping position of the visual feature mapped into the automatic driving navigation electronic map according to the value of the estimated pose and the position of the visual feature in the top-view mosaic comprises:
taking the value of the estimated pose as an initial value of the estimated pose;
calculating the mapping position of the visual feature in the automatic driving navigation electronic map according to the current value of the estimated pose and the position of the visual feature in the top-view mosaic;
and the step of calculating the projection position of the target feature projected into the top-view mosaic according to the value of the estimated pose and the position of the target feature in the automatic driving navigation electronic map comprises:
taking the value of the estimated pose as an initial value of the estimated pose;
and calculating the projection position of the target feature projected into the top-view mosaic according to the current value of the estimated pose and the position of the target feature in the automatic driving navigation electronic map.
9. A parking simulation device based on a vehicle-mounted looking-around system is characterized by comprising:
a first obtaining module configured to obtain a plurality of first images of a target virtual vehicle captured for a virtual parking scene, wherein the plurality of first images are images shot, at the current moment, by a plurality of virtual cameras of the target virtual vehicle in different directions of the virtual parking scene;
a stitching module configured to stitch the plurality of first images to obtain a top-view mosaic;
an identification module configured to identify a preset visual feature from the top-view mosaic, wherein the preset visual feature includes at least a parking space;
an identification determination module configured to identify target features matched with the visual features from an automatic driving navigation electronic map, and determine the current pose of the target virtual vehicle in the virtual parking scene according to the position of the target feature in the automatic driving navigation electronic map and the position of the visual feature in the top-view mosaic;
a first determination module configured to determine a target parking space corresponding to the target virtual vehicle from the identified parking spaces;
and a control module configured to control the target virtual vehicle to travel to the target parking space based on the current pose, the position of the target parking space, and a preset parking algorithm.
10. The apparatus of claim 9, wherein the preset parking algorithm comprises a preset parking path planning algorithm and a preset parking control algorithm;
The control module is specifically configured to:
determine a parking path from the target virtual vehicle to the target parking space based on the current pose, the position of the target parking space, and the preset parking path planning algorithm;
and control the target virtual vehicle to travel to the target parking space based on the parking path and the preset parking control algorithm.
CN201910364960.9A 2019-04-30 2019-04-30 Parking simulation method and device based on vehicle-mounted looking-around system Active CN111856963B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910364960.9A CN111856963B (en) 2019-04-30 2019-04-30 Parking simulation method and device based on vehicle-mounted looking-around system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910364960.9A CN111856963B (en) 2019-04-30 2019-04-30 Parking simulation method and device based on vehicle-mounted looking-around system

Publications (2)

Publication Number Publication Date
CN111856963A true CN111856963A (en) 2020-10-30
CN111856963B CN111856963B (en) 2024-02-20

Family

ID=72965080

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910364960.9A Active CN111856963B (en) 2019-04-30 2019-04-30 Parking simulation method and device based on vehicle-mounted looking-around system

Country Status (1)

Country Link
CN (1) CN111856963B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005118339A1 (en) * 2004-06-02 2005-12-15 Robert Bosch Gmbh Method and device for assisting the performance of a parking maneuver of a vehicle
CN101008566A (en) * 2007-01-18 2007-08-01 上海交通大学 Intelligent vehicular vision device based on ground texture and global localization method thereof
CN102407848A (en) * 2010-09-21 2012-04-11 高强 Controller system with automatic parking and intelligent driving functions
CN108090456A (en) * 2017-12-27 2018-05-29 北京初速度科技有限公司 A kind of Lane detection method and device
CN108537197A (en) * 2018-04-18 2018-09-14 吉林大学 A kind of lane detection prior-warning device and method for early warning based on deep learning
CN109165582A (en) * 2018-08-09 2019-01-08 河海大学 A kind of detection of avenue rubbish and cleannes appraisal procedure

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Yonggang: "Road Traffic Control Technology and Application", China People's Public Security University Press *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112526976B (en) * 2020-12-07 2022-08-16 广州小鹏自动驾驶科技有限公司 Simulation test method and system for automatic parking controller
CN112526976A (en) * 2020-12-07 2021-03-19 广州小鹏自动驾驶科技有限公司 Simulation test method and system for automatic parking controller
CN112572419A (en) * 2020-12-22 2021-03-30 英博超算(南京)科技有限公司 Improve car week blind area monitored control system of start security of riding instead of walk
CN112572419B (en) * 2020-12-22 2021-11-30 英博超算(南京)科技有限公司 Improve car week blind area monitored control system of start security of riding instead of walk
CN114972494A (en) * 2021-02-26 2022-08-30 魔门塔(苏州)科技有限公司 Map construction method and device for memorizing parking scene
CN112967405A (en) * 2021-03-23 2021-06-15 深圳市商汤科技有限公司 Pose updating method, device and equipment of virtual object and storage medium
WO2022205356A1 (en) * 2021-04-01 2022-10-06 深圳市大疆创新科技有限公司 Automatic parking method, electronic device and computer-readable storage medium
WO2023273683A1 (en) * 2021-06-29 2023-01-05 广州小鹏汽车科技有限公司 Display method, vehicle-mounted terminal, vehicle and storage medium
CN114120701A (en) * 2021-11-25 2022-03-01 北京经纬恒润科技股份有限公司 Parking positioning method and device
WO2023123704A1 (en) * 2021-12-28 2023-07-06 魔门塔(苏州)科技有限公司 Automatic parking path planning method and apparatus, medium, and device
CN114379544A (en) * 2021-12-31 2022-04-22 北京华玉通软科技有限公司 Automatic parking system, method and device based on multi-sensor pre-fusion
CN114494439A (en) * 2022-01-25 2022-05-13 襄阳达安汽车检测中心有限公司 Camera pose calibration method, device, equipment and medium in HIL simulation test
CN114494439B (en) * 2022-01-25 2023-08-15 襄阳达安汽车检测中心有限公司 Camera pose calibration method, device, equipment and medium in HIL simulation test
CN115534935A (en) * 2022-12-02 2022-12-30 广汽埃安新能源汽车股份有限公司 Vehicle running control method and device, electronic equipment and computer readable medium

Also Published As

Publication number Publication date
CN111856963B (en) 2024-02-20

Similar Documents

Publication Publication Date Title
CN111856963B (en) Parking simulation method and device based on vehicle-mounted looking-around system
CN110136199B (en) Camera-based vehicle positioning and mapping method and device
US11094112B2 (en) Intelligent capturing of a dynamic physical environment
US11656620B2 (en) Generating environmental parameters based on sensor data using machine learning
US10832478B2 (en) Method and system for virtual sensor data generation with depth ground truth annotation
US11670087B2 (en) Training data generating method for image processing, image processing method, and devices thereof
CN111169468B (en) Automatic parking system and method
CN109461211A (en) Semantic vector map constructing method, device and the electronic equipment of view-based access control model point cloud
CN112508985B (en) SLAM loop detection improvement method based on semantic segmentation
CN110136058B (en) Drawing construction method based on overlook spliced drawing and vehicle-mounted terminal
CN101894366A (en) Method and device for acquiring calibration parameters and video monitoring system
CN112798811B (en) Speed measurement method, device and equipment
CN113903011B (en) Semantic map construction and positioning method suitable for indoor parking lot
US11755917B2 (en) Generating depth from camera images and known depth data using neural networks
CN111091038A (en) Training method, computer readable medium, and method and apparatus for detecting vanishing points
CN110986945B (en) Local navigation method and system based on semantic altitude map
CN106446785A (en) Passable road detection method based on binocular vision
CN115115859A (en) Long linear engineering construction progress intelligent identification and analysis method based on unmanned aerial vehicle aerial photography
CN112699748B (en) Human-vehicle distance estimation method based on YOLO and RGB image
Golovnin et al. Video processing method for high-definition maps generation
US20230401748A1 (en) Apparatus and methods to calibrate a stereo camera pair
KR102316818B1 (en) Method and apparatus of updating road network
CN116259001A (en) Multi-view fusion three-dimensional pedestrian posture estimation and tracking method
CN115565072A (en) Road garbage recognition and positioning method and device, electronic equipment and medium
CN116762094A (en) Data processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20220303
Address after: 100083 unit 501, block AB, Dongsheng building, No. 8, Zhongguancun East Road, Haidian District, Beijing
Applicant after: BEIJING MOMENTA TECHNOLOGY Co.,Ltd.
Address before: Room 28, 4/F, block A, Dongsheng building, No. 8, Zhongguancun East Road, Haidian District, Beijing 100089
Applicant before: BEIJING CHUSUDU TECHNOLOGY Co.,Ltd.
GR01 Patent grant
GR01 Patent grant