CN117173354A - Map element generation method, map element generation device, computer device and storage medium - Google Patents

Map element generation method, map element generation device, computer device and storage medium

Info

Publication number
CN117173354A
CN117173354A (application CN202311140093.3A)
Authority
CN
China
Prior art keywords
image
determining
area
map
area image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311140093.3A
Other languages
Chinese (zh)
Inventor
张振林
卢晓昀
陈佩文
岑益挺
张灿
王东科
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Automotive Innovation Co Ltd
Original Assignee
China Automotive Innovation Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Automotive Innovation Co Ltd filed Critical China Automotive Innovation Co Ltd
Priority to CN202311140093.3A
Publication of CN117173354A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The present application relates to the field of artificial intelligence technologies, and in particular, to a map element generation method, apparatus, computer device, and storage medium. The method comprises the following steps: acquiring an area image of a target area and positioning parameters corresponding to the area image, the area image comprising at least one image element; performing angle analysis on the area image according to the positioning parameters to determine the shooting angle of the area image; for each image element, performing position conversion processing on the image element based on the shooting angle to determine the position information of the image element in the area image; and generating, in the area map of the target area, map elements corresponding to the image elements according to the position information of the image elements. The application enables map drawing to be completed at low labor cost and with high efficiency.

Description

Map element generation method, map element generation device, computer device and storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a map element generating method, apparatus, computer device, and storage medium.
Background
With the continued innovation of artificial intelligence technology, more and more industries (e.g., the automotive industry and the map industry) are beginning to develop toward the field of artificial intelligence, and digital map technology is being developed. The digital map includes a plurality of map elements, and the process of constructing and updating the digital map is often accompanied by the generation of the map elements.
In the traditional technology, a professional draws map elements based on the acquired related data to construct and update the digital map. This process relies on human-machine interaction, and prolonged human-machine interaction consumes considerable computing resources, so the efficiency is low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a map element generation method, apparatus, computer device, and storage medium that can improve efficiency.
In a first aspect, the present application provides a map element generation method. The method comprises the following steps:
acquiring an area image of a target area and positioning parameters corresponding to the area image; the regional image comprises at least one image element;
performing angle analysis on the area image according to the positioning parameters, and determining the shooting angle of the area image;
for each image element, performing position conversion processing on the image element based on the shooting angle, and determining the position information of the image element in the area image;
map elements corresponding to the image elements are generated in the area map of the target area according to the position information of the image elements.
In one embodiment, performing angle analysis on the area image according to the positioning parameter, determining a shooting angle of the area image includes:
Determining a first extended focus of the area image according to the positioning parameters;
extracting respective pixel distribution of the image elements in the region image, and determining a second extended focus of the region image based on the pixel distribution;
and performing angle analysis on the region image based on the first extended focus and the second extended focus, and determining the shooting angle of the region image.
In one embodiment, performing angle analysis on the area image based on the first extended focus and the second extended focus, determining a photographing angle of the area image includes:
determining a focus difference of the first extended focus and the second extended focus;
and according to the size relation between the focus difference value and the difference value threshold value, carrying out angle analysis on the area image, and determining the shooting angle of the area image.
In one embodiment, determining the first extended focus of the region image based on the positioning parameters includes:
predicting the pose of the region image according to the positioning parameters, and determining the longitude and latitude values and the pose orientation of sampling equipment of the region image;
and determining a first extension focus of the regional image according to the longitude and latitude values and the pose orientation.
In one embodiment, determining the first extended focus of the area image according to the longitude and latitude values and the pose orientation includes:
Coordinate conversion is carried out on the longitude and latitude values, and coordinate values of sampling equipment for collecting the regional images are obtained;
determining a first pitching angle change value of the sampling equipment according to the coordinate value;
determining a second pitching angle change value of the sampling equipment according to the pose orientation;
and determining a first extended focus of the area image according to the first pitching angle change value and the second pitching angle change value.
In one embodiment, determining the first extended focus of the area image based on the first pitch angle variation value and the second pitch angle variation value includes:
determining a first extended focus variation value of the area image according to the first pitching angle variation value and the second pitching angle variation value;
and determining the first extended focus of the area image according to the first extended focus change value of the area image.
In one embodiment, generating map elements corresponding to the image elements in the area map of the target area according to the position information of the image elements includes:
determining a mapping position of the image element in a regional map of the target region based on the position information of the image element;
and mapping the image elements to the regional map of the target region based on the mapping positions, and generating map elements corresponding to the image elements.
In a second aspect, the application further provides a map element generation device. The device comprises:
the acquisition module is used for acquiring an area image of the target area and positioning parameters corresponding to the area image; the regional image comprises at least one image element;
the first determining module is used for carrying out angle analysis on the area image according to the positioning parameters and determining the shooting angle of the area image;
the second determining module is used for carrying out position conversion processing on the image elements based on the shooting angles for each image element and determining the position information of the image elements in the regional image;
and the generation module is used for generating map elements corresponding to the image elements in the regional map of the target region according to the position information of the image elements.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the map element generation method according to any of the embodiments of the first aspect described above when the processor executes the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a map element generation method as in any of the embodiments of the first aspect described above.
According to the map element generation method, the map element generation device, the computer equipment and the storage medium, the regional image is subjected to angle analysis according to the positioning parameters by acquiring the regional image and the positioning parameters corresponding to the regional image, and further, the shooting angle corresponding to the regional image is determined; further, position information of the image element in the area image is determined based on the imaging angle corresponding to the area image, and a map element corresponding to the image element is generated in the area map of the target area. Therefore, in the process of constructing and updating the regional map of the target region, the regional image of the target region does not need to be manually subjected to data processing, and the map elements corresponding to the image elements in the target region can be automatically generated according to the regional image and the positioning parameters of the target region, so that the map drawing can be completed with low labor cost and high efficiency.
Drawings
FIG. 1 is an application environment diagram of a map element generation method according to an embodiment of the present application;
FIG. 2 is a flowchart of a map element generation method according to an embodiment of the present application;
FIG. 3 is a wiring diagram for calculating position information of an image element according to an embodiment of the present application;
fig. 4 is a flowchart of determining a shooting angle of an area image according to an embodiment of the present application;
FIG. 5 is a flowchart for determining map elements according to an embodiment of the present application;
FIG. 6 is a flowchart for determining a lane line map element according to an embodiment of the present application;
FIG. 7 is a flowchart of another map element generation method according to an embodiment of the present application;
FIG. 8 is a block flow diagram of determining map elements according to an embodiment of the present application;
fig. 9 is a block diagram of a first map element generating apparatus according to an embodiment of the present application;
fig. 10 is a block diagram of a second map element generating apparatus according to an embodiment of the present application;
fig. 11 is a block diagram of a third map element generating apparatus according to an embodiment of the present application;
fig. 12 is a block diagram of a fourth map element generating apparatus according to an embodiment of the present application;
fig. 13 is a block diagram of a fifth map element generating apparatus according to an embodiment of the present application;
fig. 14 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In the description of the present application, reference to the terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification, and the features thereof, may be combined by those skilled in the art without contradiction.
The map element generation method provided by the embodiments of the application can be applied to the application environment shown in fig. 1, in which the acquisition device 102 communicates with the server 104 over a network. The data storage system may store data that the server 104 needs to process; it may be integrated on the server 104 or located on a cloud or other network server. In the process of generating map elements, the server 104: acquires an area image of a target area and the positioning parameters corresponding to the area image from the acquisition device 102; performs angle analysis on the area image according to the positioning parameters to determine the shooting angle of the area image; for each image element, performs position conversion processing on the image element based on the shooting angle to determine the position information of the image element in the area image; and generates map elements corresponding to the image elements in the area map of the target area according to the position information of the image elements. The acquisition device 102 is a device with an image acquisition function, and may be, but is not limited to, a personal computer, an image acquisition device, a notebook computer, a smartphone, a tablet computer, an Internet of Things device, or a portable wearable device. An Internet of Things device may be a smart speaker, smart television, smart air conditioner, smart vehicle device, or the like; a portable wearable device may be a smart watch, smart bracelet, headset, or the like. The server 104 may be implemented as a stand-alone server or as a server cluster of multiple servers.
In one embodiment, as shown in fig. 2, a map element generating method is provided, which is illustrated by taking the application of the method to the server in fig. 1 as an example, and may include the following steps:
step 201, obtaining an area image of the target area and positioning parameters corresponding to the area image.
The area image refers to image data of each road in the target area acquired by the acquisition device, and the positioning parameters corresponding to the area image refer to the inertial measurement data and coordinate positioning data of the acquisition device at the time the area image was acquired. Further, the acquisition device refers to a device with an image acquisition function, such as a camera or a video camera. In a particular embodiment, the acquisition device may be a device for crowdsourced data acquisition. Crowdsourced data acquisition refers to data acquisition by crowdsourcing devices at user terminals: the acquired crowdsourced data is uploaded to a server, and the server then constructs and updates data information according to the received crowdsourced data, so that it can provide better service to the user terminals based on the updated data information. The crowdsourcing device may be, for example, an in-vehicle terminal, an aircraft, a portable wearable device, and so forth.
The region image includes at least one image element. Image elements refer to assets in a target area that convey guidance, restriction, warning, or instructional information in text or symbols, and may include, by way of example and not limitation: lane lines, traffic lights, traffic signs, road arrows, crosswalks, etc.
When the region image of the target region needs to be acquired, the image of each road in the target region may be acquired by the acquisition device, so as to acquire the region image of the target region. For example, acquiring an area image of the target area may be achieved by controlling the vehicle or the drone by mounting the acquisition device to the vehicle or the drone.
In one embodiment of the application, a vehicle with a collection device is controlled to run on each road of a target area, and then the collection device is controlled to collect images of each road of the target area in the running process of the vehicle, so that an area image of the target area is obtained.
Further, when the regional image of the target region is acquired by the acquisition device, the inertial measurement unit and the global positioning unit are arranged in the acquisition device, so that the inertial measurement data of the acquisition device when acquiring the regional image can be acquired according to the inertial measurement unit, and the coordinate positioning data of the acquisition device when acquiring the regional image can be acquired according to the global positioning unit.
As another implementation manner, when the region image of the target region and the positioning parameters corresponding to the region image need to be acquired, whether the region image of the target region and the positioning parameters corresponding to the region image are included in the image database may also be searched, and if so, the region image of the target region and the positioning parameters corresponding to the region image may be acquired from the image database.
Specifically, the region image and the positioning parameters corresponding to the region image contained in the image database are screened based on the region ID (Identity document, identity number) of the target region, and whether the region image and the positioning parameters corresponding to the region image of the target region are contained in the image database is judged; and if the target region is included, extracting a region image of the target region and positioning parameters corresponding to the region image from an image database. The image database contains at least one region image of the candidate region and positioning parameters corresponding to the region image.
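The database lookup described above can be sketched as follows, assuming a simple dictionary keyed by region ID; the `image_db` layout and the function name are illustrative assumptions, not the patent's actual storage schema:

```python
def find_region_data(image_db, region_id):
    """Look up an area image and its positioning parameters by region ID.

    image_db is assumed to map a region ID to a tuple of
    (area_image, positioning_params).
    """
    entry = image_db.get(region_id)
    if entry is None:
        return None  # not in the database: fall back to live acquisition
    return entry
```

If the lookup returns `None`, the server would proceed with acquiring the area image directly from the acquisition device, as described earlier.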
Step 202, angle analysis is carried out on the area image according to the positioning parameters, and the shooting angle of the area image is determined.
The shooting angle of the area image refers to the shooting angle of the acquisition equipment when the acquisition equipment acquires the area image. The shooting angle can be an included angle between the center line of the shooting visual angle and the ground, and can also be an included angle between the center line of the shooting visual angle and the vertical line of the ground.
When the shooting angle of the area image needs to be determined, the shooting angle of the area image may be determined by determining the first extended focus and the second extended focus of the area image, and performing angle analysis on the area image through the first extended focus and the second extended focus.
The extended focus (focus of expansion, FOE) is a key point corresponding to an image element in the region image. The first extended focus (foe_prediction) is the extended focus determined according to the longitude and latitude values and the pose orientation of the sampling device that acquired the region image; the second extended focus (foe_observe) is the extended focus determined from the pixel distribution of the image elements.
In one embodiment of the application, the focus difference value of the first extended focus and the second extended focus can be determined, then the focus difference value is assigned according to the magnitude relation between the focus difference value and the difference value threshold, and then the angle analysis is performed on the region image according to the assigned focus difference value, the first extended focus and the second extended focus, so as to determine the shooting angle of the region image.
If the focus difference value is larger than the difference value threshold value, the focus difference value is assigned to be zero; and if the focus difference value is smaller than or equal to the difference value threshold value, assigning the focus difference value as an absolute value of the operation result of the first extended focus and the second extended focus difference value.
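A minimal sketch of this thresholded angle analysis, assuming both foci are given by their y-pixel coordinates; the fusion of the two foci and the pinhole relation tan(θ) = (foe_y − cy) / f used to recover the angle are illustrative assumptions, not the patent's exact formulas:

```python
import math

def shooting_angle(foe_prediction_y, foe_observe_y, cy, f, diff_threshold):
    # Thresholded focus difference, per the rule in the text: a large
    # disagreement between the two foci is assigned zero, otherwise the
    # absolute value of the difference is kept.
    diff = abs(foe_prediction_y - foe_observe_y)
    focus_diff = 0.0 if diff > diff_threshold else diff
    # Correct the observed focus by the retained difference, then recover
    # the pitch of the optical axis via the (assumed) pinhole relation.
    foe_y = foe_observe_y + focus_diff
    return math.atan2(foe_y - cy, f)
```

With a large disagreement the difference is zeroed out, so the angle falls back to the one implied by the observed focus alone.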
Further, when the shooting angle of the area image needs to be determined, the characteristic information of the image elements in the area image can be obtained; and comparing the feature information of the area image with the feature information in the analysis database to determine a target image with the highest feature information similarity of the feature information and the area image in the analysis database, and taking the known shooting angle of the target image as the shooting angle of the area image.
The analysis database stores feature information and shooting angles of at least one candidate image; the target image is the candidate image whose feature information has the highest similarity to that of the region image.
Wherein, the characteristic information of the image element can include, but is not limited to: angle of the image element, position of the image element, etc.
Further, when the shooting angle of the regional image needs to be determined, an image pickup device with a gyroscope can be arranged in the same horizontal plane of the acquisition device, wherein the image pickup device can acquire the video of the target region in real time; comparing the region image with the region video acquired by the camera equipment frame by frame, and determining a frame image with the same shooting angle as the region image in the region video; the shooting angle of the image pickup device when shooting the frame image is acquired, wherein the shooting angle is the shooting angle of the area image. Wherein the photographing angle of the image pickup apparatus when photographing the frame image can be read according to the gyroscope of the image pickup apparatus.
In step 203, for each image element, position conversion processing is performed on the image element based on the shooting angle, and position information of the image element in the area image is determined.
When the position information of the image element in the region image needs to be determined, a first triangle formed by the image element and the optical center of the acquisition device, and a second triangle formed by the optical center of the acquisition device and the position of the image element in the region image, can be constructed; since the first triangle and the second triangle are similar triangles, the position information of the image element in the region image is determined according to the shooting angle, the first triangle and the second triangle.
In one embodiment of the present application, taking an image element as a lane line for illustration, a first triangle of the image element and the optical center of the collecting device is constructed, and a second triangle of the optical center of the collecting device and the position information of the image element in the area image is constructed, wherein the first triangle and the second triangle are as shown in fig. 3, and determining the position information of the image element in the area image according to the shooting angle, the first triangle and the second triangle may include the following:
As shown in fig. 3, O is the position of the optical center of the acquisition device, O' is the center of the image plane of the acquisition device, OO' is the optical axis, l is the ground, l' is a straight line parallel to the ground l and passing through the optical center, OA is parallel to the image plane A'B', B is a lane line point in the target area, OD is the height Hc of the acquisition device above the ground, and θ is the shooting angle of the area image.
First, according to the calculation process (1), O' a is determined based on the acquisition device altitude and the shooting angle of the area image.
O'A = Hc / cos(θ)    (1)
where O'A refers to the distance from point O' to point A, Hc refers to the height of the acquisition device above the ground, and θ refers to the shooting angle of the area image.
And, by the calculation process (2), determining AB based on the similarity relationship of the first triangle and the second triangle.
AB = OA / A'B' * OA'
   = Hc * sqrt(pow(f, 2) + pow(foe_y - cy, 2)) / cos(θ) / delta_y    (2)
where f refers to the focal length of the acquisition device when acquiring the area image, OA refers to the distance from the optical center of the acquisition device to point A, A'B' refers to the distance from point A' to point B', OA' refers to the distance from point O to point A', foe_y is the y-coordinate of the first extended focus, and delta_y is the y-coordinate of the imaged point minus foe_y.
Further, according to the calculation process (3), the three-dimensional coordinate Y value corresponding to the image element is calculated based on O'A, AB and the shooting angle of the region image.
O'N = O'A − NA = O'A − AB * sin(θ)    (3)
Wherein O ' N refers to the three-dimensional coordinate Y value corresponding to the image element, O ' A refers to the distance from the point O ' to the point A, NA refers to the distance from the point N to the point A, AB refers to the distance from the point A to the point B, and θ refers to the shooting angle of the region image.
According to the calculation formula (4), the three-dimensional coordinate Z value corresponding to the image element is calculated based on the AB and the shooting angle of the area image.
NB = AB * cos(θ) (4)
Where NB refers to a three-dimensional coordinate Z value corresponding to an image element, AB refers to a distance from a point a to a point B, and θ refers to a shooting angle of an area image.
According to the calculation formula (5), a three-dimensional coordinate X value corresponding to the image element is calculated based on the three-dimensional coordinate Z value and OO' corresponding to the image element.
X_c = NB / OO' * (x - cx) (5)
where X_c refers to the three-dimensional coordinate X value corresponding to the image element, OO' refers to the optical axis, x refers to the x-axis pixel value of the imaging coordinate of point B, and cx refers to the principal point coordinate in the x-direction.
To sum up, three-dimensional coordinates of the image elements in the region image can be obtained, wherein the three-dimensional coordinates are the position information of the image elements in the region image.
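The calculation processes (1)-(5) above can be collected into a single sketch; treating the optical-axis length OO' in step (5) as the focal length f is an assumption, not stated explicitly in the text:

```python
import math

def image_element_to_3d(x, y, f, cx, cy, foe_y, Hc, theta):
    """Position conversion per formulas (1)-(5): pixel coordinates (x, y)
    of an image element to three-dimensional coordinates, given focal
    length f, principal point (cx, cy), first extended focus foe_y,
    device height Hc, and shooting angle theta (radians)."""
    delta_y = y - foe_y                          # imaged y minus foe_y
    O_prime_A = Hc / math.cos(theta)             # (1)  O'A
    AB = Hc * math.sqrt(f**2 + (foe_y - cy)**2) / math.cos(theta) / delta_y  # (2)
    Y = O_prime_A - AB * math.sin(theta)         # (3)  O'N
    Z = AB * math.cos(theta)                     # (4)  NB
    X = Z / f * (x - cx)                         # (5)  X_c, taking OO' = f
    return X, Y, Z
```

For a level camera (θ = 0) the Y value reduces to the device height and Z reduces to Hc * f / delta_y, the familiar flat-ground depth estimate.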
Step 204, generating map elements corresponding to the image elements in the regional map of the target region according to the position information of the image elements.
To ensure the accuracy of the map elements corresponding to the image elements in the regional map, the mapping position in the regional map corresponding to each map element's position information needs to be determined in advance, so that the image element is mapped to the corresponding mapping position and the map element corresponding to the image element is generated in the regional map of the target region.
Further, when there are multiple map elements, the mapping position in the regional map corresponding to each map element's position information is determined, and each image element is mapped to its corresponding mapping position, ensuring that a map element corresponding to each image element is generated in the regional map of the target region.
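A sketch of this mapping step, where `to_map_position` stands in for an assumed caller-supplied transform from image-derived position information to regional-map coordinates (both names are illustrative):

```python
def generate_map_elements(elements, to_map_position):
    """Generate map elements in the regional map (step 204).

    elements maps an element ID to its position information;
    to_map_position converts that position information into the
    corresponding mapping position in the regional map.
    """
    area_map = {}
    for element_id, position in elements.items():
        # predetermine the mapping position, then generate the element there
        area_map[element_id] = to_map_position(position)
    return area_map
```

Each element is placed independently, so the loop extends naturally to any number of map elements.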
In one embodiment of the present application, when a user wants to obtain a city map of city a, and city a contains 4 target areas, map elements corresponding to image elements can be generated in the area maps of the 4 target areas through the above steps, and the city map of city a can be obtained by combining the area maps of the four target areas according to the city a layout.
According to the map element generation method, the regional image and the positioning parameters corresponding to the regional image are obtained, angle analysis is performed on the regional image according to the positioning parameters, and the shooting angle corresponding to the regional image is determined; further, the position information of each image element in the regional image is determined based on the shooting angle corresponding to the regional image, and a map element corresponding to the image element is generated in the regional map of the target region. Therefore, in the process of constructing and updating the regional map of the target region, the regional image of the target region does not need to be processed manually, and the map elements corresponding to the image elements in the target region can be generated automatically from the regional image and the positioning parameters, so that map drawing can be completed at low labor cost and with high efficiency.

Because the collected related data must otherwise be processed manually to construct and update the map, the drawing process is time-consuming and labor-intensive, and map drawing efficiency is low. To address this problem, the computer device of the present embodiment may perform angle analysis on the area image according to the positioning parameters in the manner shown in fig. 4 to determine the shooting angle of the area image, which specifically includes the following steps:
Step 401, determining a first extended focus of the area image according to the positioning parameters.
When the first extended focal point of the area image needs to be determined, the first extended focal point of the area image may be determined according to the longitude and latitude values and the pose orientation of the sampling device of the area image, and specifically may include the following steps: predicting the pose of the region image according to the positioning parameters, and determining the longitude and latitude values and the pose orientation of sampling equipment of the region image; and determining a first extension focus of the regional image according to the longitude and latitude values and the pose orientation.
Specifically, an ESKF (Error-State Kalman Filter) method may be used to process the area image of the target area and the positioning parameters corresponding to the area image, so as to predict and determine the longitude and latitude values and the pose orientation of the sampling device of the area image.
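The patent names ESKF for this prediction step. A full error-state filter over position, velocity, and attitude is beyond a short sketch, but as an illustration only, the scalar Kalman predict/update cycle at the core of such a filter can be written as follows (all names and noise values are hypothetical, not from the patent):

```python
def kalman_update(x, P, z, Q, R):
    """One predict + update cycle of a scalar Kalman filter.

    x, P: prior state estimate (e.g. a pitch angle) and its variance
    z:    new measurement derived from the positioning parameters
    Q, R: process-noise and measurement-noise variances (assumed known)
    """
    # Predict: the state is carried forward, uncertainty grows by Q
    P = P + Q
    # Update: blend prediction and measurement via the Kalman gain
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1.0 - K) * P
    return x, P
```

With equal prior and measurement variances the gain is 0.5, so the posterior lands midway between the prior and the measurement.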
Further, when the first extended focus of the area image needs to be determined according to the longitude and latitude values and the pose orientation, the following may be specifically included: coordinate conversion is carried out on the longitude and latitude values, and coordinate values of sampling equipment for collecting the regional images are obtained; determining a first pitching angle change value of the sampling equipment according to the coordinate value; determining a second pitching angle change value of the sampling equipment according to the pose orientation; and determining a first extended focus of the area image according to the first pitching angle change value and the second pitching angle change value.
In one embodiment of the present application, determining the first extended focus of the area image according to the longitude and latitude values and the pose orientation may be achieved by the following calculation process:
by substituting the coordinate values of the sampling device that collects the area image into the calculation formula (6), the first pitch angle change value of the sampling device is determined, and the calculation formula (6) is as follows:
wherein delta_theta_v is the first pitch angle change value of the sampling device, [X_i, Y_i, Z_i] are the coordinate values of the sampling device, and i is the frame index of the area image.
According to the pose orientation, the pitch angle yaw_i corresponding to the area image in the coordinate system is determined; furthermore, the pitch angle difference between two adjacent frames of area images can be calculated, so that the second pitch angle change value of the sampling device can be determined.
According to the above, the pitch angle corresponding to each area image in the coordinate system is substituted into the calculation formula (7), the second pitch angle change value of the sampling device is determined, and the calculation formula (7) is as follows:
delta_theta_c = yaw_i - yaw_{i-1} (7)
wherein delta_theta_c is the second pitch angle change value of the sampling device, yaw_i is the pitch angle corresponding to the current-frame area image in the coordinate system, and i is the frame index of the area image.
In summary, the first pitch angle change value and the second pitch angle change value are determined according to the above, and then the first extended focus of the area image is determined according to the first pitch angle change value and the second pitch angle change value.
When the first extended focal point of the area image is determined according to the first pitch angle variation value and the second pitch angle variation value, the following may be specifically included: determining a first extended focus variation value of the area image according to the first pitching angle variation value and the second pitching angle variation value; and determining the first extended focus of the area image according to the first extended focus change value of the area image.
In one embodiment of the present application, the first extended focus change value may be determined by calculation formula (8); then, the preset initial extended focus of the first area image and the first extended focus change value are substituted into calculation formula (9), so as to determine the first extended focus of each subsequent area image.
Specifically, the calculation formula (8) is shown as follows:
delta_foe=f*sin(delta_theta_v+delta_theta_c) (8)
wherein delta_foe is the first extended focus change value, f is the focal length of the acquisition device when acquiring the area image, delta_theta_v is the first pitch angle change value of the sampling device, and delta_theta_c is the second pitch angle change value of the sampling device.
Further, the calculation formula (9) is shown as follows:
foe_predict = foe_last + delta_foe (9)
wherein foe_predict is the first extended focus of the area image, foe_last is the preset initial extended focus of the first area image, and delta_foe is the first extended focus change value.
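Formulas (7) through (9) can be collected into one small function; this is a minimal sketch with illustrative names, assuming delta_theta_v has already been obtained from formula (6):

```python
import math

def predict_extended_focus(foe_last, f, delta_theta_v, yaw_curr, yaw_prev):
    """Predict the first extended focus (foe_predict) of the current frame.

    foe_last:      extended focus of the previous frame (or the preset
                   initial extended focus for the first frame)
    f:             focal length of the acquisition device
    delta_theta_v: first pitch angle change value, from formula (6)
    yaw_curr/prev: pitch angles of the current and previous frames
    """
    delta_theta_c = yaw_curr - yaw_prev                      # formula (7)
    delta_foe = f * math.sin(delta_theta_v + delta_theta_c)  # formula (8)
    return foe_last + delta_foe                              # formula (9)
```

With no pitch change the extended focus stays where it was; a positive pitch change shifts it by f times the sine of the total angle change.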
Step 402, extracting respective pixel distributions of the image elements in the area image, and determining a second extended focus of the area image based on the pixel distributions.
When the pixel distribution of each image element in the region image needs to be extracted, the semantic segmentation model can be used for identifying the region image, so that the pixel distribution of each image element can be determined.
For different image elements, semantic segmentation models corresponding to those elements may be used to perform the pixel distribution extraction. Moreover, the semantic segmentation models corresponding to different image elements need to be trained on different training samples, so that each semantic segmentation model can extract the pixel distribution of its respective image element.
For example, the semantic segmentation model corresponding to the lane line needs to be trained by using a training sample of the lane line, so that the semantic segmentation model corresponding to the lane line can perform pixel distribution extraction operation on the lane line.
After obtaining the pixel distribution of the image element, the coordinate representation corresponding to the image element may be determined, and the center point coordinate of the image element may be determined as the second extended focus of the area image.
Further, when the image element is a lane line, the lane lines usually come as a pair of two lane lines. Each lane line may be characterized by a two-dimensional dot string with equal intervals, and the left lane line and the right lane line take negative and positive offsets relative to the center line of the two lane lines. After the coordinate representation corresponding to the lane lines is determined, the two lane lines are extended, and the focus at which the two extended lane lines meet is used as the second extended focus of the area image.
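The extend-and-intersect construction above can be sketched as follows. This is an illustrative least-squares version, assuming each lane line arrives as a dot string of (x, y) pixel coordinates and that the pair is not parallel in the image; none of the names come from the patent:

```python
def fit_line(points):
    """Least-squares fit of x = a*y + b (lane lines are near-vertical in images)."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    a = (n * sxy - sx * sy) / (n * syy - sy * sy)
    b = (sx - a * sy) / n
    return a, b

def second_extended_focus(left_pts, right_pts):
    """Intersect the two extended lane lines; the meeting point is the focus."""
    a1, b1 = fit_line(left_pts)
    a2, b2 = fit_line(right_pts)
    y = (b2 - b1) / (a1 - a2)   # solve a1*y + b1 == a2*y + b2
    x = a1 * y + b1
    return x, y
```

Two lane lines converging toward the top of the image intersect, once extended, at a single vanishing-point-like focus.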
Step 403, performing angle analysis on the area image based on the first extended focus and the second extended focus, and determining a shooting angle of the area image.
It should be noted that, when the angle analysis is performed on the area image, the following may be specifically included: determining a focus difference of the first extended focus and the second extended focus; and according to the size relation between the focus difference value and the difference value threshold value, carrying out angle analysis on the area image, and determining the shooting angle of the area image.
Specifically, the difference value between the first extended focus and the second extended focus can be obtained by calculating the formula (10), wherein the formula (10) is as follows:
diff_foe = abs(foe_predict – foe_observe) (10)
wherein diff_foe is the focus difference between the first extended focus and the second extended focus, abs is the absolute value operation, foe_predict is the first extended focus of the area image, and foe_observe is the second extended focus of the area image.
Further, when the angle analysis is performed on the area image according to the magnitude relation between the focus difference and the difference threshold: if the focus difference is greater than the difference threshold, diff_foe = 0 is set; if the focus difference is less than or equal to the difference threshold, diff_foe = abs(foe_predict - foe_observe) is retained.
The difference threshold may be set based on historical experience and the first extended focus and the second extended focus of the region image, and the range of the difference threshold is not limited.
Therefore, after the diff_foe value is determined according to the magnitude relation between the focus difference and the difference threshold, the target focus of the area image is determined according to calculation formula (11); then, the target focus of the area image and the focal length of the acquisition device when acquiring the area image are substituted into calculation formula (12) to obtain the shooting angle of the area image.
Specifically, the calculation formula (11) is as follows:
foe_estimate = (diff_threshold - diff_foe) / diff_threshold * foe_predict + diff_foe / diff_threshold * foe_observe (11)
wherein foe_estimate is the target focus of the area image, diff_threshold is the difference threshold, diff_foe is the focus difference, foe_predict is the first extended focus of the area image, and foe_observe is the second extended focus of the area image.
Further, the calculation formula (12) is as follows:
theta_estimate = foe_estimate / f (12)
wherein theta_estimate is the shooting angle of the area image, foe_estimate is the target focus of the area image, and f is the focal length of the acquisition device when acquiring the area image.
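Formulas (10) through (12), together with the thresholding rule above, can be sketched as one function (illustrative names; the clamp resets diff_foe to zero when the observation strays too far from the prediction, so formula (11) then returns the predicted focus unchanged):

```python
def estimate_shooting_angle(foe_predict, foe_observe, diff_threshold, f):
    """Blend the predicted and observed extended focus, then divide by f."""
    diff_foe = abs(foe_predict - foe_observe)            # formula (10)
    if diff_foe > diff_threshold:
        diff_foe = 0.0                                   # distrust the observation
    w = diff_foe / diff_threshold
    foe_estimate = (1.0 - w) * foe_predict + w * foe_observe  # formula (11)
    return foe_estimate / f                              # formula (12)
```

When the two focuses agree closely, the estimate leans on the prediction; as the (bounded) difference grows toward the threshold, more weight shifts to the observation.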
According to the map element generation method, the first extended focus and the second extended focus are determined, so that angle analysis is carried out on the area image, the shooting angle of the area image is determined, the accuracy of shooting angle determination is guaranteed, a judgment basis is provided for subsequently determining the position information of the image element in the area image, the map element corresponding to the image element can be generated, and the map drawing with low labor cost and high efficiency is achieved.
In one embodiment, generating a map element corresponding to the image element in the area map of the target area according to the position information of the image element may specifically include the following steps, as shown in fig. 5:
step 501, determining a mapping position of the image element in the area map of the target area based on the position information of the image element.
When it is necessary to determine the mapping position of the image element in the area map of the target area, the corresponding position of the position information of the image element in the area map of the target area may be determined as the mapping position.
Further, when the mapping position of the image element in the area map of the target area needs to be determined, positions with the same environmental information may be screened in the area map of the target area based on the environmental information around the image element, so that the position in the area map that has the same environmental information can be used as the mapping position of the image element in the area map of the target area.
Step 502, mapping the image element to the regional map of the target region based on the mapping position, and generating a map element corresponding to the image element.
The image elements are mapped to the area map of the target area, that is, the image elements are added to the area map, thereby obtaining the map elements corresponding to the image elements. When a plurality of image elements are included, each image element is mapped to the area map of the target area, so that a map element is obtained for each image element.
Further, since lane lines are continuous, in order to ensure the authenticity and validity of the map elements corresponding to the lane lines, maturity verification needs to be performed on the lane lines, and the map elements of the lane lines are determined according to the verification result. For example, as shown in fig. 6, when there are a plurality of area images and the image element is a lane line, it may first be determined whether an area image is the first-frame area image; if so, the first-frame area image is used as a reference and fused with the subsequent area images; if not, the area images are matched with each other so that the lane lines in the area images are connected, and after matching, the area images are fused in the matching order. After the fusion of all area images is completed, maturity verification is performed on the fused lane lines; if the verification passes, the map elements of the lane lines are obtained; if it does not pass, the plurality of area images are matched again.
The maturity verification may include, but is not limited to: (1) whether the lane line is a smooth connection line: if so, this check passes; if not, it fails; (2) whether the length of the lane line exceeds a threshold: if so, this check passes; if not, it fails.
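A minimal sketch of the two checks, assuming the fused lane line is a polyline of (x, y) points; the smoothness test used here (bounding the heading change between consecutive segments) is one plausible reading of "smooth connection line", not a definition taken from the patent:

```python
import math

def maturity_check(polyline, min_length, max_turn_rad):
    """Return True only if the fused lane line passes both checks."""
    # Check (2): total polyline length must exceed the threshold
    length = sum(math.dist(polyline[i], polyline[i + 1])
                 for i in range(len(polyline) - 1))
    if length < min_length:
        return False
    # Check (1): heading change between consecutive segments stays small
    for i in range(len(polyline) - 2):
        h1 = math.atan2(polyline[i + 1][1] - polyline[i][1],
                        polyline[i + 1][0] - polyline[i][0])
        h2 = math.atan2(polyline[i + 2][1] - polyline[i + 1][1],
                        polyline[i + 2][0] - polyline[i + 1][0])
        d = abs(h2 - h1)
        if min(d, 2.0 * math.pi - d) > max_turn_rad:
            return False
    return True
```

A straight, sufficiently long polyline passes; a right-angle kink or a too-short line fails, triggering re-matching of the area images.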
According to the map element generation method, the mapping operation on the image elements is realized by determining the mapping positions of the image elements in the regional map of the target region, so that the map elements corresponding to the image elements are obtained, and the map drawing is completed with low labor cost and high efficiency.
In this embodiment, as shown in fig. 7, fig. 7 is a flowchart of another map element generation method provided in the embodiment of the present application, and when a map element corresponding to an image element needs to be drawn, the method specifically may include the following:
step 701, obtaining an area image of a target area and positioning parameters corresponding to the area image.
Wherein the region image includes at least one image element.
Step 702, predicting the pose of the region image according to the positioning parameters, and determining the longitude and latitude values and the pose orientation of the sampling equipment of the region image.
In step 703, coordinate conversion is performed on the longitude and latitude values, so as to obtain coordinate values of the sampling device for collecting the regional image.
Step 704, determining a first pitch angle change value of the sampling device according to the coordinate values.
Step 705, determining a second pitch angle variation value of the sampling device according to the pose orientation.
Step 706, determining a first extended focus variation value of the area image according to the first pitch angle variation value and the second pitch angle variation value.
Step 707, determining a first extended focus of the area image according to the first extended focus variation value of the area image.
Step 708 extracts the pixel distribution of each of the image elements in the region image and determines a second extended focus of the region image based on the pixel distribution.
In step 709, a focus difference of the first extended focus and the second extended focus is determined.
And step 710, performing angle analysis on the area image according to the size relation between the focus difference value and the difference value threshold value, and determining the shooting angle of the area image.
In step 711, for each image element, position conversion processing is performed on the image element based on the imaging angle, and position information of the image element in the area image is determined.
Step 712, determining a mapping position of the image element in the region map of the target region based on the position information of the image element.
In step 713, the image elements are mapped to the region map of the target region based on the mapping positions, and map elements corresponding to the image elements are generated.
In an embodiment of the present application, taking the image element being a lane line as an example, obtaining the map element corresponding to the image element may include the following: as shown in fig. 8, a sensor acquires an area image of the target area and positioning parameters corresponding to the area image; the pixel distribution of the lane lines is determined by a semantic segmentation model, and the second extended focus of the area image is determined based on the pixel distribution; the first extended focus of the area image is determined from the positioning parameters. The shooting angle of the area image is determined from the first extended focus and the second extended focus, and the position information of the lane lines in the area image is determined based on the shooting angle; then, the lane lines in the area images are matched, the matched lane lines are fused according to the matching result, and maturity verification is performed on the fused lane lines. If the verification passes, the map elements of the lane lines are obtained; if it does not pass, the plurality of area images are matched again.
According to the map element generation method, the area image and the positioning parameters corresponding to the area image are obtained; the area image is subjected to angle analysis according to the positioning parameters to determine the shooting angle corresponding to the area image; the position information of each image element in the area image is then determined based on that shooting angle, and a map element corresponding to each image element is generated in the area map of the target area. Thus, in the process of constructing and updating the area map of the target area, the area image of the target area does not need to be processed manually, and the map elements corresponding to the image elements in the target area can be generated automatically from the area image and the positioning parameters of the target area, so that map drawing is completed with low labor cost and high efficiency.

It should be understood that, although the steps in the flowcharts related to the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; their order of execution is not necessarily sequential, and they may be performed in turn or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a map element generating device for realizing the map element generating method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation in the embodiments of the map element generating device or devices provided below may refer to the limitation of the map element generating method hereinabove, and will not be described herein.
In one embodiment, as shown in fig. 9, there is provided a map element generation apparatus including: an acquisition module 10, a first determination module 20, a second determination module 30, and a generation module 40, wherein:
the acquisition module 10 is used for acquiring an area image of the target area and positioning parameters corresponding to the area image; the region image includes at least one image element.
The first determining module 20 is configured to perform angle analysis on the area image according to the positioning parameter, and determine a shooting angle of the area image.
The second determining module 30 is configured to determine, for each image element, positional information of the image element in the area image by performing a positional conversion process on the image element based on the shooting angle.
The generating module 40 is configured to generate map elements corresponding to the image elements in the area map of the target area according to the position information of the image elements.
In one embodiment, as shown in fig. 10, there is provided a map element generating apparatus, in which a first determining module 20 includes: a first determination unit 21, an extraction unit 22, and a second determination unit 23, wherein:
a first determining unit 21 for determining a first extended focus of the area image based on the positioning parameters.
An extraction unit 22 is configured to extract respective pixel distributions of the image elements in the area image, and determine a second extended focus of the area image based on the pixel distributions.
A second determination unit 23 for performing angle analysis on the area image based on the first extended focus and the second extended focus, and determining a photographing angle of the area image.
In one embodiment, as shown in fig. 11, there is provided a map element generation apparatus in which the second determination unit 23 includes: a first determination subunit 231 and a second determination subunit 232, wherein:
a first determining subunit 231 configured to determine a focus difference between the first extended focus and the second extended focus.
The second determining subunit 232 is configured to perform angle analysis on the area image according to the magnitude relation between the focus difference and the difference threshold, and determine a shooting angle of the area image.
In one embodiment, as shown in fig. 12, there is provided a map element generation apparatus in which a first determination unit 21 includes: a third determination subunit 211 and a fourth determination subunit 212, wherein:
the third determining subunit 211 is configured to predict the pose of the area image according to the positioning parameter, and determine the longitude and latitude value and the pose orientation of the sampling device of the area image.
The fourth determining subunit 212 is configured to determine the first extended focal point of the area image according to the longitude and latitude values and the pose orientation.
The fourth determining subunit is specifically configured to: coordinate conversion is carried out on the longitude and latitude values, and coordinate values of sampling equipment for collecting the regional images are obtained; determining a first pitching angle change value of the sampling equipment according to the coordinate value; determining a second pitching angle change value of the sampling equipment according to the pose orientation; and determining a first extended focus of the area image according to the first pitching angle change value and the second pitching angle change value.
The fourth determining subunit may also be configured to: determine a first extended focus change value of the area image according to the first pitch angle change value and the second pitch angle change value; and determine the first extended focus of the area image according to the first extended focus change value of the area image.
In one embodiment, as shown in fig. 13, there is provided a map element generation apparatus in which a generation module 40 includes: a third determination unit 41 and a mapping unit 42, wherein:
the third determining unit 41 is configured to determine a mapping position of the image element in the area map of the target area based on the position information of the image element.
The mapping unit 42 is configured to map the image element to the region map of the target region based on the mapping position, and generate a map element corresponding to the image element.
Each of the modules in the map element generation apparatus described above may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be an acquisition device, the internal structure of which may be as shown in fig. 14. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input means. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the display unit and the input device are connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless mode can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a map element generation method. The display unit of the computer device is used for forming a visual picture, and can be a display screen, a projection device or a virtual reality imaging device. The display screen can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be a key, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 14 is merely a block diagram of a portion of the structure associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements are applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
acquiring an area image of a target area and positioning parameters corresponding to the area image; the regional image comprises at least one image element;
performing angle analysis on the area image according to the positioning parameters, and determining the shooting angle of the area image;
for each image element, performing position conversion processing on the image element based on the shooting angle, and determining the position information of the image element in the area image;
map elements corresponding to the image elements are generated in the area map of the target area according to the position information of the image elements.
In one embodiment, the processor when executing the computer program further performs the steps of:
Determining a first extended focus of the area image according to the positioning parameters;
extracting respective pixel distribution of the image elements in the region image, and determining a second extended focus of the region image based on the pixel distribution;
and performing angle analysis on the region image based on the first extended focus and the second extended focus, and determining the shooting angle of the region image.
In one embodiment, the processor when executing the computer program further performs the steps of:
determining a focus difference of the first extended focus and the second extended focus;
and according to the size relation between the focus difference value and the difference value threshold value, carrying out angle analysis on the area image, and determining the shooting angle of the area image.
In one embodiment, the processor when executing the computer program further performs the steps of:
predicting the pose of the region image according to the positioning parameters, and determining the longitude and latitude values and the pose orientation of sampling equipment of the region image;
and determining a first extension focus of the regional image according to the longitude and latitude values and the pose orientation.
In one embodiment, the processor when executing the computer program further performs the steps of:
coordinate conversion is carried out on the longitude and latitude values, and coordinate values of sampling equipment for collecting the regional images are obtained;
determining a first pitching angle change value of the sampling equipment according to the coordinate value;
Determining a second pitching angle change value of the sampling equipment according to the pose orientation;
and determining a first extended focus of the area image according to the first pitching angle change value and the second pitching angle change value.
In one embodiment, the processor when executing the computer program further performs the steps of:
determining a first extended focus variation value of the area image according to the first pitching angle variation value and the second pitching angle variation value;
and determining the first extended focus of the area image according to the first extended focus change value of the area image.
In one embodiment, the processor when executing the computer program further performs the steps of:
determining a mapping position of the image element in a regional map of the target region based on the position information of the image element;
and mapping the image elements to the regional map of the target region based on the mapping positions, and generating map elements corresponding to the image elements.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring an area image of a target area and positioning parameters corresponding to the area image; the regional image comprises at least one image element;
Performing angle analysis on the area image according to the positioning parameters, and determining the shooting angle of the area image;
for each image element, performing position conversion processing on the image element based on the shooting angle, and determining the position information of the image element in the area image;
map elements corresponding to the image elements are generated in the area map of the target area according to the position information of the image elements.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining a first extended focus of the area image according to the positioning parameters;
extracting respective pixel distribution of the image elements in the region image, and determining a second extended focus of the region image based on the pixel distribution;
and performing angle analysis on the region image based on the first extended focus and the second extended focus, and determining the shooting angle of the region image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining a focus difference between the first extended focus and the second extended focus;
and performing angle analysis on the area image according to the magnitude relation between the focus difference and a difference threshold, and determining the shooting angle of the area image.
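A hedged sketch of the threshold comparison: if the two extended focuses differ vertically by more than a pixel threshold, the offset is attributed to camera pitch. The pinhole relation used to recover the angle is an illustrative model, not the patent's formula.

```python
import math

def shooting_angle_from_focuses(first_focus, second_focus,
                                focal_length_px, threshold_px=5.0):
    """Compare the vertical components of the two extended focuses; below the
    threshold the camera is treated as level, otherwise the pitch is recovered
    from the pinhole relation dy = f * tan(pitch)."""
    dy = second_focus[1] - first_focus[1]
    if abs(dy) <= threshold_px:
        return 0.0  # focuses agree: no pitch correction needed
    return math.degrees(math.atan2(dy, focal_length_px))
```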
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing pose prediction on the area image according to the positioning parameters, and determining the longitude and latitude values and the pose orientation of the sampling device that captured the area image;
and determining the first extended focus of the area image according to the longitude and latitude values and the pose orientation.
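The longitude and latitude values above must eventually be converted into planar coordinates. A minimal equirectangular sketch; the projection choice and the WGS-84 radius are assumptions, as the patent does not name a datum or projection.

```python
import math

EARTH_RADIUS_M = 6_378_137.0  # WGS-84 equatorial radius (assumed datum)

def latlon_to_local(lat_deg, lon_deg, origin_lat_deg, origin_lon_deg):
    """Convert latitude/longitude to local east/north metres relative to an
    origin using an equirectangular approximation (adequate over the span of
    a single area image)."""
    d_lat = math.radians(lat_deg - origin_lat_deg)
    d_lon = math.radians(lon_deg - origin_lon_deg)
    east = EARTH_RADIUS_M * d_lon * math.cos(math.radians(origin_lat_deg))
    north = EARTH_RADIUS_M * d_lat
    return east, north
```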
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing coordinate conversion on the longitude and latitude values to obtain coordinate values of the sampling device that collected the area image;
determining a first pitch angle change value of the sampling device according to the coordinate values;
determining a second pitch angle change value of the sampling device according to the pose orientation;
and determining the first extended focus of the area image according to the first pitch angle change value and the second pitch angle change value.
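A sketch of how the two pitch-angle change values might combine into a shift of the first extended focus. Summing the two angles and using the pinhole relation dy = f * tan(Δpitch) are both illustrative assumptions.

```python
import math

def extended_focus_shift(pitch_change_1_deg, pitch_change_2_deg,
                         focal_length_px):
    """Combine two pitch-angle change values into a vertical shift (in pixels)
    of the extended focus under a pinhole camera model."""
    total = math.radians(pitch_change_1_deg + pitch_change_2_deg)
    return focal_length_px * math.tan(total)
```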
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining a first extended focus change value of the area image according to the first pitch angle change value and the second pitch angle change value;
and determining the first extended focus of the area image according to the first extended focus change value of the area image.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining a mapping position of the image element in the area map of the target area based on the position information of the image element;
and mapping the image element to the area map of the target area based on the mapping position, and generating a map element corresponding to the image element.
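The mapping step can be sketched as a linear interpolation from image pixel coordinates into the map's bounding box. The bounding-box parameterisation is an assumption standing in for the patent's unspecified mapping.

```python
def image_to_map(px, py, image_size, map_bounds):
    """Map a pixel position in the area image to a coordinate in the area map
    by linear interpolation over the map's bounding box."""
    width, height = image_size
    (min_x, min_y), (max_x, max_y) = map_bounds
    map_x = min_x + (px / width) * (max_x - min_x)
    map_y = min_y + (py / height) * (max_y - min_y)
    return map_x, map_y
```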
It should be noted that the user information (including but not limited to user equipment information and user personal information) and data (including but not limited to data for analysis, stored data and displayed data) involved in the present application are information and data authorized by the user or fully authorized by all parties concerned, and the collection, use and processing of the related data must comply with the relevant laws, regulations and standards of the countries and regions concerned.
Those skilled in the art will appreciate that all or part of the processes of the methods of the above embodiments may be implemented by a computer program stored on a non-transitory computer-readable storage medium; when executed, the program may include the processes of the embodiments of the methods described above. Any reference to memory, database or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, or data processing logic units based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this description.
The foregoing examples illustrate only a few embodiments of the application; their description is relatively specific and detailed, but is not to be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those of ordinary skill in the art without departing from the concept of the application, all of which fall within the scope of protection of the application. Accordingly, the scope of protection of the application shall be subject to the appended claims.

Claims (10)

1. A map element generation method, characterized in that the method comprises:
acquiring an area image of a target area and positioning parameters corresponding to the area image; the area image comprises at least one image element;
performing angle analysis on the area image according to the positioning parameters, and determining the shooting angle of the area image;
for each image element, performing position conversion processing on the image element based on the shooting angle, and determining position information of the image element in the area image;
and generating map elements corresponding to the image elements in the area map of the target area according to the position information of the image elements.
2. The method of claim 1, wherein the performing angle analysis on the area image according to the positioning parameters and determining the shooting angle of the area image comprises:
determining a first extended focus of the area image according to the positioning parameters;
extracting respective pixel distribution of the image elements in the area image, and determining a second extended focus of the area image based on the pixel distribution;
and performing angle analysis on the area image based on the first extended focus and the second extended focus, and determining the shooting angle of the area image.
3. The method of claim 2, wherein the performing angle analysis on the area image based on the first extended focus and the second extended focus and determining the shooting angle of the area image comprises:
determining a focus difference between the first extended focus and the second extended focus;
and performing angle analysis on the area image according to the magnitude relation between the focus difference and a difference threshold, and determining the shooting angle of the area image.
4. The method of claim 2, wherein the determining a first extended focus of the area image according to the positioning parameters comprises:
performing pose prediction on the area image according to the positioning parameters, and determining the longitude and latitude values and the pose orientation of the sampling device that captured the area image;
and determining the first extended focus of the area image according to the longitude and latitude values and the pose orientation.
5. The method of claim 4, wherein the determining the first extended focus of the area image according to the longitude and latitude values and the pose orientation comprises:
performing coordinate conversion on the longitude and latitude values to obtain coordinate values of the sampling device that collected the area image;
determining a first pitch angle change value of the sampling device according to the coordinate values;
determining a second pitch angle change value of the sampling device according to the pose orientation;
and determining the first extended focus of the area image according to the first pitch angle change value and the second pitch angle change value.
6. The method of claim 5, wherein the determining the first extended focus of the area image according to the first pitch angle change value and the second pitch angle change value comprises:
determining a first extended focus change value of the area image according to the first pitch angle change value and the second pitch angle change value;
and determining the first extended focus of the area image according to the first extended focus change value of the area image.
7. The method according to any one of claims 1 to 6, wherein the generating map elements corresponding to the image elements in the area map of the target area according to the position information of the image elements comprises:
determining a mapping position of the image element in the area map of the target area based on the position information of the image element;
and mapping the image element to the area map of the target area based on the mapping position, and generating a map element corresponding to the image element.
8. A map element generation apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring an area image of a target area and positioning parameters corresponding to the area image; the area image comprises at least one image element;
the first determining module is used for performing angle analysis on the area image according to the positioning parameters and determining the shooting angle of the area image;
the second determining module is used for performing, for each image element, position conversion processing on the image element based on the shooting angle and determining position information of the image element in the area image;
and the generating module is used for generating map elements corresponding to the image elements in the area map of the target area according to the position information of the image elements.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202311140093.3A 2023-09-06 2023-09-06 Map element generation method, map element generation device, computer device and storage medium Pending CN117173354A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311140093.3A CN117173354A (en) 2023-09-06 2023-09-06 Map element generation method, map element generation device, computer device and storage medium


Publications (1)

Publication Number Publication Date
CN117173354A true CN117173354A (en) 2023-12-05

Family

ID=88944486

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311140093.3A Pending CN117173354A (en) 2023-09-06 2023-09-06 Map element generation method, map element generation device, computer device and storage medium

Country Status (1)

Country Link
CN (1) CN117173354A (en)

Similar Documents

Publication Publication Date Title
US10223829B2 (en) Method and apparatus for generating a cleaned object model for an object in a mapping database
CN109520500B (en) Accurate positioning and street view library acquisition method based on terminal shooting image matching
CN111046125A (en) Visual positioning method, system and computer readable storage medium
Brejcha et al. State-of-the-art in visual geo-localization
US11255678B2 (en) Classifying entities in digital maps using discrete non-trace positioning data
US20170039450A1 (en) Identifying Entities to be Investigated Using Storefront Recognition
CN114842365A (en) Unmanned aerial vehicle aerial photography target detection and identification method and system
CN112232311B (en) Face tracking method and device and electronic equipment
KR20200136723A (en) Method and apparatus for generating learning data for object recognition using virtual city model
CN110703805A (en) Method, device and equipment for planning three-dimensional object surveying and mapping route, unmanned aerial vehicle and medium
JP2022507716A (en) Surveying sampling point planning method, equipment, control terminal and storage medium
Guan et al. Detecting visually salient scene areas and deriving their relative spatial relations from continuous street-view panoramas
CN113012215A (en) Method, system and equipment for space positioning
CN110636248B (en) Target tracking method and device
CN114332648A (en) Position identification method and electronic equipment
CN114359231A (en) Parking space detection method, device, equipment and storage medium
CN111651547B (en) Method and device for acquiring high-precision map data and readable storage medium
US20230401837A1 (en) Method for training neural network model and method for generating image
CN114743395B (en) Signal lamp detection method, device, equipment and medium
CN115830073A (en) Map element reconstruction method, map element reconstruction device, computer equipment and storage medium
CN114238541A (en) Sensitive target information acquisition method and device and computer equipment
CN115797438A (en) Object positioning method, device, computer equipment, storage medium and program product
Moseva et al. Development of a Platform for Road Infrastructure Digital Certification
CN117173354A (en) Map element generation method, map element generation device, computer device and storage medium
CN110196638B (en) Mobile terminal augmented reality method and system based on target detection and space projection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination