CN116188587A - Positioning method and device and vehicle

Positioning method and device and vehicle

Info

Publication number
CN116188587A
Authority
CN
China
Prior art keywords
image
target
map
environment
environmental
Prior art date
Legal status
Pending
Application number
CN202211096804.7A
Other languages
Chinese (zh)
Inventor
陶圣
李春里
赵宏峰
林海
李雪健
Current Assignee
Shanghai Lichi Semiconductor Co ltd
Original Assignee
Shanghai Lichi Semiconductor Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Lichi Semiconductor Co., Ltd.
Priority to CN202211096804.7A
Publication of CN116188587A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Navigation (AREA)

Abstract

The disclosure provides a positioning method, a positioning device and a vehicle. The positioning method includes: acquiring a target image; determining at least one map from a plurality of maps as a target map according to the environment scene, the environment scenes of the different maps being different from one another; in the case that the environment scene of the target image is inconsistent with the environment scene of the target map, performing image enhancement processing on the target image and/or a map image of the target map so that the environment scene of the target image becomes consistent with that of the map image; comparing the target image with the map image whose environment scene is consistent with it; and determining the position corresponding to the target image according to the comparison result.

Description

Positioning method and device and vehicle
Technical Field
The present disclosure relates to the field of computer vision, and in particular, to a positioning method, a positioning device, and a vehicle.
Background
The position of a device receiving satellite positioning signals or other positioning signals can be determined from those signals, but when the signals are affected, for example by interference or shielding, positioning cannot be achieved. A positioning scheme based on a visual map can achieve positioning even in environments where satellite positioning signals, Bluetooth signals and the like are unavailable, and visual positioning is widely applicable to the positioning and navigation of robots, vehicles and so on. However, owing to environmental factors such as illumination and climate, images acquired in different environment scenes differ greatly, which can cause inaccurate positioning or positioning failure.
Disclosure of Invention
The present disclosure provides a positioning method, a positioning device and a vehicle, so as to at least solve the above technical problems in the prior art.
According to a first aspect of the present disclosure, there is provided a positioning method, the method comprising:
acquiring a target image;
determining at least one map from a plurality of maps as a target map according to the environment scene, wherein the environment scenes of different maps are different;
under the condition that the environment scene of the target image is inconsistent with the environment scene of the target map, performing image enhancement processing on the target image and/or the map image of the target map so as to enable the environment scene of the target image to be consistent with the environment scene of the map image;
comparing the target image with the map image whose environment scene is consistent with it;
and determining the position corresponding to the target image according to the comparison result.
In an embodiment, performing image enhancement processing on the target image and/or the map image of the target map includes:
acquiring environmental parameters of the target map, and performing image enhancement processing on the target image so that the environmental parameters of the target image are consistent with the environmental parameters of the target map; or
acquiring environmental parameters of the target image, and performing image enhancement processing on the map image of the target map so that the environmental parameters of the map image are consistent with the environmental parameters of the target image.
In an embodiment, the target map includes a first target map and a second target map; performing image enhancement processing on the target image and/or the map image of the target map, including:
acquiring a first environmental parameter of the first target map and a second environmental parameter of the second target map; performing first image enhancement processing on the target image according to the first environmental parameter to eliminate a first environmental factor of the environment scene of the target image; and performing second image enhancement processing on the target image after the first image enhancement processing, so that the environmental parameter of a second environmental factor of the environment scene of the target image is consistent with the second environmental parameter; or
acquiring a first environmental parameter of the first target map and a second environmental parameter of the second target map; performing third image enhancement processing on the target image so that the environmental parameter of a first environmental factor of the environment scene of the target image is consistent with the first environmental parameter; and performing fourth image enhancement processing on the target image after the third image enhancement processing, so that the environmental parameter of a second environmental factor of the environment scene of the target image is consistent with the second environmental parameter.
In an embodiment, determining, according to the comparison result, a position corresponding to the target image includes:
determining a map image that matches the target image;
determining the relative azimuth and distance between the target image and the matched map image;
and determining the position corresponding to the target image according to the position corresponding to the map image, the relative azimuth and the distance.
In an embodiment, comparing the target image with the map image whose environment scene is consistent with it includes:
extracting image features of the target image;
comparing the image features of the target image with the image features of the map image;
determining that the target image matches the map image when the similarity between the image features of the target image and the image features of the map image reaches a threshold;
and determining the position corresponding to the target image according to the matched position corresponding to the map image.
In an embodiment, the image features of the map image include a first image feature and a second image feature, the second image feature describing the environment scene of the map image and the first image feature being used for the positional comparison between the target image and the map image.
In one embodiment, before comparing the target image with the map image, the method includes:
dividing the target image into a plurality of sub-images;
deleting the sub-images in which dynamic obstacles are located;
acquiring the image features of the remaining sub-images;
and merging the image features of the sub-images to obtain the image features of the target image.
In one embodiment, the plurality of maps are obtained based on image enhancement processing of an image.
In an embodiment, obtaining the plurality of maps based on image enhancement processing of an image includes:
acquiring an original image of a target area;
and performing image enhancement processing on the original image of the target area to obtain a plurality of target map images with different environmental scenes.
In an embodiment, the method further comprises:
acquiring original image characteristics, wherein the original image characteristics are the image characteristics of the original image;
acquiring each enhanced image feature, wherein the enhanced image feature is the image feature of the target map image of each environment scene;
extracting a common part of the original image feature and each enhanced image feature corresponding to the same position as a first image feature of the original image and each target map image;
and extracting the difference parts of the original image feature and each enhanced image feature corresponding to the same position as second image features of the corresponding original image and each corresponding target map image.
In one embodiment, acquiring an original image of a target area includes:
collecting video stream data of a target area, and synchronously collecting geographic position data;
and matching the video stream data with the synchronously acquired geographic position data respectively to obtain the original image of the target area corresponding to the geographic position data.
In an embodiment, performing image enhancement processing on the original image of the target area to obtain the target map image includes:
inputting the original image into a trained model, and performing image enhancement processing on the original image by the trained model to obtain the target map image.
In one embodiment, training the model includes:
acquiring a plurality of images at the same position, wherein the environment scene of each image is different;
and training the model by taking a first image in the images as input and a second image as output, wherein the first image and the second image are different in environmental scene.
According to a second aspect of the present disclosure there is provided a positioning device, the device comprising:
the acquisition module is used for acquiring the target image;
the determining module is used for determining at least one map from the plurality of maps as a target map according to the environment scene, wherein the environment scenes of different maps are different;
the enhancement module is used for carrying out image enhancement processing on the target image and/or the map image of the target map under the condition that the environment scene of the target image is inconsistent with the environment scene of the target map so as to enable the environment scene of the target image to be consistent with the environment scene of the map image of the target map;
and the positioning module is used for comparing the target image with the map image whose environment scene is consistent with it, and determining the position corresponding to the target image according to the comparison result.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods described in the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a vehicle comprising an electronic device as described in the present disclosure.
According to a fifth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of the present disclosure.
In the positioning method of the disclosure, at least one map is determined from a plurality of maps, according to the environment scene, as a target map for comparison with the acquired target image, the environment scenes of the different maps being different from one another; in the case that the environment scene of the target image is inconsistent with the environment scene of the target map, image enhancement processing is performed on the target image and/or the map image of the target map so that the environment scene of the target image becomes consistent with that of the map image; the target image is compared with the map image whose environment scene is consistent with it; and the position corresponding to the target image is determined according to the comparison result. By comparing the target image and the map image of the target map under consistent environment scenes through image enhancement, the embodiments of the disclosure improve positioning accuracy.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
FIG. 1 shows a first flowchart of an implementation of a positioning method according to an embodiment of the disclosure;
FIGS. 2a-2d show comparisons of images before and after the image enhancement processing of the positioning method according to an embodiment of the disclosure;
FIG. 3 shows a second flowchart of an implementation of a positioning method according to an embodiment of the disclosure;
FIG. 4 is a schematic view showing the constitution of a positioning device according to an embodiment of the present disclosure;
fig. 5 shows a schematic diagram of a composition structure of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, features and advantages of the present disclosure more comprehensible, the technical solutions in the embodiments of the present disclosure will be described clearly in conjunction with the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person skilled in the art based on the embodiments of this disclosure without inventive effort fall within the scope of protection of this disclosure.
Referring to fig. 1, an embodiment of the present disclosure provides a positioning method, which includes:
acquiring a target image;
determining at least one map from a plurality of maps as a target map according to the environment scene, wherein the environment scenes of different maps are different;
under the condition that the environment scene of the target image is inconsistent with the environment scene of the target map, performing image enhancement processing on the target image and/or the map image of the target map so as to make the environment scene of the target image consistent with the environment scene of the map image;
comparing the target image with the map image whose environment scene is consistent with it;
and determining the position corresponding to the target image according to the comparison result.
In the positioning method of the disclosure, at least one map is determined from a plurality of maps, according to the environment scene, as a target map for comparison with the acquired target image, the environment scenes of the different maps being different from one another; in the case that the environment scene of the target image is inconsistent with the environment scene of the target map, image enhancement processing is performed on the target image and/or the map image of the target map so that the environment scene of the target image becomes consistent with that of the map image; the target image is compared with the map image whose environment scene is consistent with it; and the position corresponding to the target image is determined according to the comparison result. By comparing the target image and the map image of the target map under consistent environment scenes through image enhancement, the embodiments of the disclosure improve positioning accuracy.
The positioning method of the embodiments of the disclosure can be used for positioning and navigation of various robots, vehicles and the like, and also in application scenarios such as store positioning. The target image may be acquired in real time, or an image obtained in advance may serve as the target image; the acquisition mode can be chosen according to the specific application scenario. For example, when the method is applied to positioning or navigation of a vehicle or robot, images of the surrounding environment can be obtained in real time through a camera on the vehicle or robot as target images, so that the vehicle or robot is positioned and navigated in real time. Alternatively, when the method is applied to locating a store from an existing image, the target image can be a pre-obtained image from which the position of the store is determined.
In the embodiments of the disclosure, environment scenes may be divided according to one or more environmental factors such as viewing angle, illumination and climate. For example, according to illumination, scenes may be classified into morning, daytime, evening, night and so on, and the night scene may be further classified into several levels according to the brightness of the moon. According to climate, scenes may be divided into sunny, overcast, rainy, snowy, foggy and the like, and each such scene may be further graded; for example, according to the amount of rainfall, the rainy scene may be divided into levels 1 to 10, a higher level meaning heavier rainfall. According to viewing angle, scenes may be classified into head-up view, upward view, overhead view, oblique view and so on, and these may likewise be refined; for example, the oblique view may be divided into left-oblique and right-oblique scenes, and the overhead and upward views may be divided into several levels according to pitch angle. Environmental factors may also be combined to divide environment scenes; for example, combining climate and illumination with the rainfall level yields scenes such as first-level daytime rainfall and third-level daytime rainfall.
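As a concrete illustration of such a scene taxonomy, an environment scene can be represented as a combination of graded environmental factors. The sketch below is a minimal Python representation; the class names and the particular grades are illustrative assumptions, not definitions from this disclosure.

```python
from dataclasses import dataclass
from enum import Enum

class Illumination(Enum):
    MORNING = "morning"
    DAYTIME = "daytime"
    EVENING = "evening"
    NIGHT = "night"

class Climate(Enum):
    SUNNY = "sunny"
    OVERCAST = "overcast"
    RAINY = "rainy"
    SNOWY = "snowy"
    FOGGY = "foggy"

@dataclass(frozen=True)
class EnvironmentScene:
    illumination: Illumination
    climate: Climate
    rain_level: int = 0  # 1..10 when climate is RAINY; higher means heavier rain

# A "third-level daytime rainfall" scene as described in the text above:
scene = EnvironmentScene(Illumination.DAYTIME, Climate.RAINY, rain_level=3)
```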
In the method of the embodiments of the disclosure, at least one map may be determined from the plurality of maps as the target map, which is compared with the target image to realize positioning. When determining the target map, the environment scene of the target image can be determined first; each of the plurality of maps corresponds to its own environment scene. According to the matching between the environment scene of the target image and the environment scenes of the maps, at least one map is determined as the target map. If the environment scenes match, the target image is consistent with the environment scene of the map and can be compared directly with the map image of the target map for positioning; if they differ, the target image is made consistent with the environment scene of the target map through image enhancement.
In the embodiments of the disclosure, determining the corresponding target map according to the environment scene of the target image may mean matching the environment scene of the target image against the environment scenes of the maps. According to the matching result, the single map with the highest environment-scene similarity may be selected as the target map, or the first n maps ordered by environment-scene similarity may be selected as target maps, where n is a positive integer; in a specific implementation, n may range from 1 to 5. The specific value of n may be determined based on at least one of historical experience, consumption of computing resources, positioning accuracy and the like. It may also be determined as a percentage of the total number of maps, for example 15%: when the total number of maps is 20, n = 20 × 15% = 3; when the total number of maps is 18, n = 18 × 15% = 2.7. When the calculated result includes a fractional part, it may be rounded, truncated to its integer part (giving n = 2 in this example), or incremented by one above the integer part.
In other exemplary embodiments, when at least one map is determined as the target map according to the matching result, a map with the similarity of the environmental scene reaching a preset first threshold may be selected as the target map. For example, if the first threshold is 80%, a map with a similarity of 80% between the environmental scenes is used as the target map.
Of course, at least one map may be determined as the target map according to the matching result based on a single condition, or jointly based on two or more conditions. Take as an example jointly determining the target map by the top-n environment-scene similarity ordering and the first threshold, with n = 2 and the first threshold 75%: if 5 maps reach 75% similarity, then according to this strategy the 2 maps ranked highest by environment-scene similarity are finally determined as the target maps to be compared with the target image.
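The selection strategy just described can be sketched as follows. This is a minimal sketch: the `scene_similarities` input, the helper name and the truncation-to-integer rounding rule are assumptions for illustration (the text permits other rounding choices).

```python
def select_target_maps(scene_similarities, total_maps=None, ratio=0.15,
                       first_threshold=0.75, max_n=5):
    """Pick target maps: keep maps whose environment-scene similarity
    reaches the first threshold, then take the top n by similarity.
    scene_similarities maps a map id to a similarity in [0, 1]."""
    total = total_maps if total_maps is not None else len(scene_similarities)
    # n as a percentage of the total number of maps, integer part, capped:
    # 20 maps * 15% = 3; 18 maps * 15% = 2.7 -> 2
    n = max(1, min(max_n, int(total * ratio)))
    candidates = [(m, s) for m, s in scene_similarities.items()
                  if s >= first_threshold]
    candidates.sort(key=lambda item: item[1], reverse=True)
    return [m for m, _ in candidates[:n]]

# 5 maps reach the 75% threshold; with 18 maps in total, n = 2,
# so the two most similar maps are kept as target maps.
sims = {"map_a": 0.92, "map_b": 0.81, "map_c": 0.78,
        "map_d": 0.77, "map_e": 0.76, "map_f": 0.40}
print(select_target_maps(sims, total_maps=18))  # ['map_a', 'map_b']
```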
In a specific implementation, if the environment scene of the target image is evening rain, and the maps include an evening environment scene and a rainy environment scene, then according to the environment-scene comparison both the map of the evening scene and the map of the rainy scene are determined as target maps.
The corresponding target map may be determined according to the environment scene of the target image, or in response to a selection operation of the user. For example, the user selects a target map through an input/output device, a touch operation or the like, and the selected map is used as the target map for comparison with the target image to realize positioning. Each map may include environment scene information, from which the user can determine the environment scene to which each map corresponds. For example, in the evening the user may select a map whose environment scene is evening as the target map, and when it is raining, a map whose environment scene is rainy. The user can select through a menu, or a map display window showing the selectable maps may be presented on a display interface, and the user selects the corresponding map as the target map through clicking, swiping, circling and other operations.
Each map comprises a plurality of map images of the target area, each map image corresponds to position information, and the position corresponding to the target image can be determined by comparing according to the position information corresponding to the map image successfully matched with the target image. The location information may be, for example, latitude and longitude coordinates.
The higher the consistency of the environment scenes of the target image and the target map is, the more favorable the comparison between the target image and the target map is, and the accuracy of the positioning result can be improved.
In the embodiments of the disclosure, image enhancement processing is performed on the target image and/or the map image of the target map so that the two are compared under consistent environment scenes, which improves positioning accuracy and reduces the probability of inaccurate positioning or positioning failure. Collecting images under different environment scenes and building a map for each scene can in principle provide maps of many environment scenes for positioning and navigation, but repeated collection consumes large amounts of manpower and material resources, and collection time cannot be extended indefinitely, so it is difficult to collect enough maps of different environment scenes; situations will still arise during positioning or navigation in which no map's environment scene is consistent with that of the target image. By making the environment scenes of the target image and the map image consistent through image enhancement before comparison, the embodiments of the disclosure solve the problems of inaccurate positioning or positioning failure caused by the environment scenes of the available maps being inconsistent with that of the target image.
In the embodiments of the disclosure, the image enhancement processing may be performed on the target image, on the map image of the target map, or on both. Image enhancement may include spatial enhancement, pixel enhancement and the like; further exemplary descriptions are provided in the embodiments referred to below.
In an embodiment, performing image enhancement processing on the target image and/or the map image of the target map includes: acquiring environmental parameters of the target map, and performing image enhancement processing on the target image so that the environmental parameters of the target image are consistent with those of the target map. In the embodiments of the disclosure, the environmental parameters may include brightness, contrast, pixels, color levels and the like; the environmental parameters of the target image are adjusted to be consistent with those of the target map through the image enhancement processing. The target map may have associated environmental parameters: the corresponding parameters may be associated with the map when the map is established and read directly once the target map is determined. Where a map has no associated environmental parameters, the environmental parameters of the target map may be obtained by calculation. For example, if the environment scene of the target image is daytime and that of the target map is an unlit night, the environmental parameters associated with the target map are read, and the environmental parameters of the target image are adjusted to be consistent with them through image enhancement processing, so that the target image can be compared with the target map under consistent environment scenes.
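A minimal sketch of this adjustment follows, assuming the environmental parameters stored with the target map are mean brightness and contrast (standard deviation of gray levels); a real system could adjust further parameters such as color levels.

```python
import cv2
import numpy as np

def match_environment_parameters(target_img, map_brightness, map_contrast):
    """Linearly adjust the target image so that its mean brightness and
    contrast match the environmental parameters stored with the target map.
    Brightness/contrast stand in for the 'environmental parameters' of the
    disclosure; this is an illustrative simplification."""
    gray = cv2.cvtColor(target_img, cv2.COLOR_BGR2GRAY)
    cur_brightness, cur_contrast = float(gray.mean()), float(gray.std())
    gain = map_contrast / max(cur_contrast, 1e-6)
    bias = map_brightness - gain * cur_brightness
    adjusted = np.clip(target_img.astype(np.float32) * gain + bias, 0, 255)
    return adjusted.astype(np.uint8)
```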
In an embodiment, performing image enhancement processing on the target image and/or the map image of the target map includes: acquiring environmental parameters of the target image, and performing image enhancement processing on the map image of the target map so that the environmental parameters of the map image are consistent with those of the target image. In combination with the above embodiment, the positioning method of the disclosure may also enhance the map image of the target map: the specific values of the environmental parameters of the target image may be obtained through calculation, and the environmental parameters of the map image are then adjusted, through image enhancement processing, to be consistent with those of the target image. When the map images of the target map are enhanced to adjust the environment scene, they may be enhanced uniformly to generate a new map, which facilitates comparison between map image and target image and also enriches the maps of different environment scenes. For example, if the environment scene of the target image is daytime and that of the target map is rainy, the environmental parameters of the target image are calculated, the environmental parameters of the map images of the target map are adjusted to be consistent with them through image enhancement processing, and the environment scene of the map images is thereby adjusted to daytime, so that the target image and the target map are compared under consistent environment scenes.
In an embodiment of the disclosure, the number of target maps may be two or more; for example, the target maps include a first target map and a second target map. Performing image enhancement processing on the target image and/or the map image of the target map then includes: acquiring a first environmental parameter of the first target map and a second environmental parameter of the second target map; performing first image enhancement processing on the target image according to the first environmental parameter to eliminate a first environmental factor of the environment scene of the target image; and performing second image enhancement processing on the target image after the first image enhancement processing, so that the environmental parameter of a second environmental factor of the environment scene of the target image is consistent with the second environmental parameter. In the embodiments of the disclosure, two or more maps may be determined as target maps for comparison with the target image; specifically, the two maps closest to the environment scene of the target image may be selected according to the environment scene, referred to for convenience as the first target map and the second target map. For example, the environment scene of the target image is rain at night, the first environmental factor is the illumination factor and the second environmental factor is the climate factor; the environment scene of the first target map is night, with the first environmental parameter, and the environment scene of the second target map is rainy, with the second environmental parameter. Through the first image enhancement processing, the night factor of the target image is reversely eliminated according to the first environmental parameter while the rain factor is retained; the second image enhancement processing is then performed on the target image with the rain factor retained, so that the rainfall of the target image is the same as that of the second target map, and the target image and the second target map can be compared under consistent environment scenes. Of course, after the first environmental factor of the target image has been reversely eliminated, image enhancement processing may instead be performed on the second target map so that the environmental parameters of the target image and the second target map coincide.
In one embodiment, the target map includes a first target map and a second target map, and performing image enhancement processing on the target image and/or the map image of the target map includes: acquiring a first environmental parameter of the first target map and a second environmental parameter of the second target map; performing third image enhancement processing on the target image so that the environmental parameter of a first environmental factor of the environment scene of the target image is consistent with the first environmental parameter; and performing fourth image enhancement processing on the target image after the third image enhancement processing, so that the environmental parameter of a second environmental factor of the environment scene of the target image is consistent with the second environmental parameter. Referring to the above embodiment, the environment scene of the target image is rain at night, the environment scene of the first target map is night and that of the second target map is rainy; through the third image enhancement processing the night illumination of the target image is made consistent with the illumination of the first target map, and through the fourth image enhancement processing the rainfall of the target image is made consistent with that of the second target map. In the embodiments of the disclosure, the first target map and the second target map may also be combined into a third target map whose environmental parameters are determined from the first and second environmental parameters and whose environment scene is rain at night; comparing the target image with the map images of the third target map then takes place under consistent environment scenes.
In one embodiment, the target map includes a first target map and a second target map, and performing image enhancement processing on the target image and/or the map image of the target map includes: acquiring a first environmental parameter of the first target map and a second environmental parameter of the second target map; performing first image enhancement processing on the target image so that its environmental parameters are consistent with those of the first target map; and performing second image enhancement processing on the target image so that its environmental parameters are consistent with those of the second target map. During comparison, after the first image enhancement processing the target image is compared with the first target map under consistent environment scenes, and after the second image enhancement processing it is compared with the second target map under consistent environment scenes; when map images at the same position in the first and second target maps are both successfully matched with the target image, the position corresponding to the target image is determined from that position. If either match fails, the image enhancement processing is performed again or the target map is re-determined. For example, the environment scene of the target image is rainy, that of the first target map is second-level rainfall and that of the second target map is third-level rainfall; the rainfall of the target image lies between the two, and it can be adjusted through image enhancement processing first to match the rainfall of the first target map and then to match the rainfall of the second target map.
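The staged processing of these embodiments might be chained as in the sketch below. Here `illumination_model` and `rain_model` are assumed pre-trained image-to-image enhancement models (for example GAN generators); their interfaces are illustrative assumptions, not APIs defined by this disclosure.

```python
def enhance_in_two_stages(target_img, first_params, second_params,
                          illumination_model, rain_model):
    """Two-stage enhancement: first bring the illumination factor of the
    target image in line with the first target map, then bring the
    rainfall factor in line with the second target map."""
    stage1 = illumination_model(target_img, first_params)  # e.g. third image enhancement
    stage2 = rain_model(stage1, second_params)             # e.g. fourth image enhancement
    return stage2
```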
In the embodiments of the disclosure, the environment scene of the target image being consistent with that of the target map image may mean that the similarity between the two environment scenes reaches a second threshold. The second threshold may be greater than the first threshold, for example 95%, 96%, 98% or 99%.
In the embodiments of the disclosure, when selecting a map according to the environment scene, a map whose similarity reaches the second threshold may be sought first, and the image enhancement processing is performed only if no such map exists. If a map whose similarity reaches the second threshold exists, it is compared directly; when more than one map reaches the second threshold, the map with the greatest similarity may be selected for comparison with the target image.
In one embodiment, the target map includes a first target map and a second target map, and performing image enhancement processing on the target image and/or the map image of the target map includes: acquiring environmental parameters of the target image; performing first image enhancement processing on the first target map so that the environmental parameters of the target image are consistent with those of the first target map; and performing second image enhancement processing on the second target map so that the environmental parameters of the target image are consistent with those of the second target map. During comparison, each enhanced target map is likewise compared with the target image separately; see the above embodiments for details.
In an embodiment, determining the position corresponding to the target image according to the comparison result includes: determining a map image matching the target image; determining the relative azimuth and distance between the target image and the matched map image; and determining the position corresponding to the target image according to the position corresponding to the map image together with the relative azimuth and distance. Through comparison it can be determined whether the target image and a map image match, and the position corresponding to the target image can then be determined from the position corresponding to the matched map image. For example, from the viewing angle of the target image and information such as the size of objects in it, it can be determined whether the position corresponding to the target image is nearer to or farther from the photographer than the position corresponding to the map image; in vehicle and robot positioning and navigation scenarios, the photographer is the vehicle or robot. Assuming the relative orientation of the target image with respect to the map image is toward the photographer and the distance between them is 5 m, the position corresponding to the target image can be determined from the position corresponding to the matched map image.
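A minimal sketch of the final position computation, assuming the map image position is stored as latitude/longitude and that the comparison yields a relative bearing (clockwise from north) and a distance in meters; at such short distances an equirectangular approximation suffices.

```python
import math

EARTH_RADIUS_M = 6371000.0

def position_from_match(map_lat, map_lon, bearing_deg, distance_m):
    """Offset the matched map image's latitude/longitude by the relative
    azimuth and distance estimated from the comparison."""
    bearing = math.radians(bearing_deg)
    dlat = (distance_m * math.cos(bearing)) / EARTH_RADIUS_M
    dlon = (distance_m * math.sin(bearing)) / (
        EARTH_RADIUS_M * math.cos(math.radians(map_lat)))
    return map_lat + math.degrees(dlat), map_lon + math.degrees(dlon)

# Map image position known; target image estimated 5 m behind it (bearing 180 deg)
print(position_from_match(31.2304, 121.4737, 180.0, 5.0))
```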
In one embodiment, comparing the target image with the map image whose environment scene is consistent with it includes: extracting image features of the target image; comparing the image features of the target image with the image features of the map image; determining that the target image matches the map image when the similarity between their image features reaches a threshold; and determining the position corresponding to the target image according to the position corresponding to the matched map image. In the embodiments of the disclosure, whether the target image and the map image match can be determined from their respective image features; the image features of a map image may be associated with it when the map is built. The threshold may be determined empirically and may also be adjusted in real time based on positioning results; it may be, for example, 90%, 95% or 98%.
The image features of the target image and of the map image may both be extracted by a model. Models for extracting image features include, but are not limited to: SIFT-based models (Scale-Invariant Feature Transform), SURF-based or ORB-based bag-of-words description models, or hash-based image fingerprint models.
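As one concrete possibility, the feature comparison can be realized with OpenCV's ORB features and a ratio test. The similarity definition below (fraction of confidently matched keypoints) is an illustrative assumption, not the metric specified by this disclosure.

```python
import cv2

def orb_similarity(img_a, img_b, ratio=0.75):
    """Estimate image similarity with ORB features and a Lowe ratio test:
    the fraction of keypoints in img_a confidently matched in img_b."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des_a, des_b, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) / max(len(kp_a), 1)

# The target image matches a map image when this similarity reaches the
# threshold (e.g. 0.90) used by the comparison step above.
```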
In an embodiment, the image features of the map image include a first image feature and a second image feature, the second image feature describing the environment scene of the map image and the first image feature being usable for the positional comparison between the target image and the map image. When determining the target map according to the environment scene, the image features of the target image can be matched against the second image features, and the target map determined according to the degree of match; see the related description of the embodiments on determining the target map. When the image features of the target image are compared with the image features of the map image to determine whether the two match, the second image feature may be compared with the image features of the target image separately, or the first and second image features may be compared with the image features of the target image as a whole.
In an embodiment, before comparing the target image and the map image whose environment scenes are consistent, the positioning method of the embodiments of the disclosure further includes preprocessing the target image, so that its image features can be extracted accurately. Preprocessing may include, but is not limited to, image enhancement, image filtering, image segmentation, image stretching, edge detection, dynamic obstacle removal and the like. In an exemplary embodiment, preprocessing the target image includes: dividing the target image into a plurality of sub-images; deleting the sub-images in which dynamic obstacles are located; acquiring the image features of the remaining sub-images; and merging the image features of the sub-images to obtain the image features of the target image. Whether to delete a sub-image may be decided according to the percentage of its area occupied by the dynamic obstacle: when that percentage reaches a threshold, the sub-image is deleted. The threshold may be set, for example, at 5%, 10%, 20%, 30%, 50%, 70% or 80%.
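A sketch of this preprocessing under simple assumptions: the image is split on a regular grid, a binary dynamic-obstacle mask is available from a detector, and `feature_fn` stands in for the per-sub-image feature extractor; all three are illustrative choices.

```python
import numpy as np

def preprocess_target_image(img, obstacle_mask, feature_fn,
                            grid=(4, 4), area_threshold=0.30):
    """Split the target image into grid sub-images, drop any sub-image in
    which the dynamic-obstacle area ratio reaches the threshold, extract
    features from the remaining sub-images and merge them."""
    h, w = img.shape[:2]
    gh, gw = h // grid[0], w // grid[1]
    kept_features = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            ys, xs = slice(r * gh, (r + 1) * gh), slice(c * gw, (c + 1) * gw)
            if obstacle_mask[ys, xs].mean() >= area_threshold:
                continue  # obstacle covers too much: delete this sub-image
            kept_features.append(feature_fn(img[ys, xs]))
    return np.concatenate(kept_features) if kept_features else np.empty(0)
```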
In one embodiment, the plurality of maps are obtained based on image enhancement processing of images. Obtaining images of different environment scenes through image enhancement processing and thereby establishing maps of different scenes avoids the manpower, material resources and time consumed by collecting images of every environment scene, as well as the problem of incomplete environment-scene coverage.
In one embodiment, obtaining the plurality of maps based on image enhancement processing of images includes: acquiring an original image of a target area, and performing image enhancement processing on the original image to obtain a plurality of target map images of different environment scenes. The image before enhancement may be called the original image, and the image obtained by enhancing it the target map image. The original image may be a collected image; for example, when building a map of the target area, images of the area collected by an image acquisition device may serve as the original images for enhancement, yielding map images of different environment scenes. The original image may also itself be an image obtained by image enhancement processing. For example, a map generated from collected images of the target area may correspond to a first environment scene; taking those images as original images, a first image enhancement processing produces first map images corresponding to a second environment scene, from which a first map is established. When further expanding the maps of the target area to other environment scenes, the collected images may continue to serve as original images, or images obtained by enhancement, such as the first map images, may be used. Referring to figs. 2a to 2d, fig. 2a is a collected image whose environment scene is daytime, and figs. 2b, 2c and 2d are images obtained by image enhancement processing for a rainy day, an unlit night and a lit night, respectively.
In an implementation, referring to fig. 3, the positioning method of the embodiments of the disclosure further includes: acquiring the original image features, i.e. the image features of the original image; acquiring each enhanced image feature, i.e. the image features of the target map image of each environment scene; extracting the common part of the original image features and the enhanced image features corresponding to the same position as the first image feature of the original image and of each target map image; and extracting the difference parts as the second image features of the corresponding images. The original image and the target map images obtained from it by enhancement share the same position; their image features are combined, the common part is extracted as the first image feature, and the part of each image's features that differs from the others is extracted as its second image feature, which describes the environment scene. When extracting the common and difference parts, thresholds may be set for each: features whose commonality reaches the corresponding threshold form the common part, and features whose difference reaches the corresponding threshold form the difference part. In a specific implementation, the features can be clustered and the common and difference parts extracted by setting a distance. For example, referring to fig. 3, the position with longitude and latitude coordinates (x, y) corresponds to i images, comprising the original image and the images obtained by enhancement, numbered image 1, image 2, …, image i. The image features of images 1 to i are combined; the h image features of image 1 are feature 1-1, feature 1-2, …, feature 1-h, from which a features are extracted as the common part and b features as the difference part, with a + b ≤ h.
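The common/difference partition can be sketched as below, treating image features as descriptor vectors and using a plain distance threshold in place of the clustering mentioned above; the threshold value and the Euclidean metric are assumptions.

```python
import numpy as np

def split_common_difference(feature_sets, dist_threshold=0.3):
    """Partition per-image feature vectors at one position into a common
    part (first image feature: present, within dist_threshold, in every
    image) and per-image difference parts (second image features).
    feature_sets[i] is an (n_i, d) array of features for image i."""
    def has_close(feat, feats):
        return bool(len(feats)) and \
            np.min(np.linalg.norm(feats - feat, axis=1)) <= dist_threshold

    common = [feat for feat in feature_sets[0]
              if all(has_close(feat, fs) for fs in feature_sets[1:])]
    differences = []
    for i, fs in enumerate(feature_sets):
        others = [o for j, o in enumerate(feature_sets) if j != i]
        differences.append([feat for feat in fs
                            if not any(has_close(feat, o) for o in others)])
    return common, differences
```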
In one embodiment, acquiring an original image of the target area includes: collecting video stream data of the target area while synchronously collecting geographic position data, and matching the video stream data with the synchronously collected geographic position data to obtain original images of the target area corresponding to the geographic position data. The geographic position data may be obtained through GPS, the BeiDou satellite positioning system and the like. Because the video stream data and the geographic position data are collected synchronously, their correspondence can be established by matching on time: for the same time point, the original image and the corresponding geographic position are extracted, so that the map images forming the map correspond one-to-one with geographic positions. For example, for synchronously collected video stream data and geographic position data, at a given time point the geographic position data corresponds to longitude and latitude (x1, y1) and the original image extracted from the video stream is IM1, giving the correspondence (x1, y1)-IM1. Image enhancement processing according to IM1 yields target map images of a plurality of environment scenes, named IM2, IM3, …, IMn, all of which correspond to the longitude and latitude (x1, y1). Image features are extracted from IM1 to IMn respectively and combined, and the common part and difference parts are extracted to form the image features corresponding to each of IM1 to IMn.
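Time-based matching of frames to positions might look like the following sketch; the timestamp representation (seconds) and the allowed clock skew are assumptions for illustration.

```python
import bisect

def match_frames_to_positions(frames, gps_samples, max_skew_s=0.5):
    """Pair each video frame with the synchronously collected GPS sample
    closest in time. frames is [(timestamp, image), ...] and gps_samples
    is [(timestamp, (lat, lon)), ...], both sorted by timestamp."""
    gps_times = [t for t, _ in gps_samples]
    paired = []
    for t, image in frames:
        i = bisect.bisect_left(gps_times, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(gps_samples)]
        j = min(candidates, key=lambda k: abs(gps_times[k] - t))
        if abs(gps_times[j] - t) <= max_skew_s:
            paired.append((gps_samples[j][1], image))  # ((lat, lon), original image)
    return paired
```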
In one embodiment, performing image enhancement processing on the original image of the target area to obtain the target map image includes: inputting the original image into a trained model, which performs image enhancement processing on it to obtain the target map image. Using a trained model for the image enhancement processing improves the map-building effect by covering more environment scenes. In a specific implementation, the model for the image enhancement processing may be an AugGAN model, a SurfelGAN model or the like.
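Applying such a trained model could look like the sketch below. The generator interface, the [-1, 1] value range and the preprocessing follow common GAN-generator conventions and are assumptions, not the specific AugGAN or SurfelGAN APIs.

```python
import torch

def enhance_with_model(generator, original_bgr):
    """Run an original map image (H x W x 3 uint8 array) through a trained
    image-to-image generator to obtain a target map image of another
    environment scene."""
    x = torch.from_numpy(original_bgr).float().permute(2, 0, 1) / 127.5 - 1.0
    with torch.no_grad():
        y = generator(x.unsqueeze(0)).squeeze(0)  # same H x W, enhanced scene
    y = ((y.clamp(-1, 1) + 1.0) * 127.5).byte()
    return y.permute(1, 2, 0).numpy()
```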
In one embodiment, training the model includes: acquiring a plurality of images at the same position, each with a different environment scene; and training the model by taking a first image among them as input and a second image as output, the first and second images differing in environment scene. Training with images of different environment scenes at the same position as the model's input and output yields a model for producing the different environment scenes; adversarial training may be employed. For example, a first image of a daytime environment scene is taken as input and second images of rainfall-level 1 to 10 environment scenes as output; the trained model then enhances an input original image according to the environmental parameters corresponding to a given rainfall-level scene, producing an image of the environment scene of that rainfall level. The trained model can be used for building maps, and also for the image enhancement processing that makes the environment scenes of the target image and the target map image consistent when the two are compared.
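A minimal paired-training sketch follows. The disclosure employs adversarial training; for brevity a plain L1 reconstruction loss between the model output and the second image is shown here, which is an intentional simplification.

```python
import torch
import torch.nn.functional as F

def train_enhancement_model(model, pairs, epochs=10, lr=1e-4):
    """Paired training: first images (one environment scene) as input,
    second images (another scene at the same position) as target.
    pairs yields (first_img, second_img) tensors of shape (B, C, H, W)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for first_img, second_img in pairs:
            opt.zero_grad()
            loss = F.l1_loss(model(first_img), second_img)
            loss.backward()
            opt.step()
    return model
```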
Referring to fig. 4, an embodiment of the present disclosure provides a positioning device comprising an acquisition module, a determining module, an enhancement module and a positioning module. The acquisition module is used for acquiring the target image; the determining module is used for determining at least one map from a plurality of maps as a target map according to the environment scene, the environment scenes of the different maps being different; the enhancement module is used for performing image enhancement processing on the target image and/or the map image of the target map when the environment scene of the target image is inconsistent with that of the target map, so that the environment scene of the target image becomes consistent with that of the map image; and the positioning module is used for comparing the target image with the map image whose environment scene is consistent with it, and determining the position corresponding to the target image according to the comparison result.
In an embodiment, the enhancement module performing image enhancement processing on the target image and/or the map image of the target map includes: acquiring environmental parameters of the target map; and performing image enhancement processing on the target image so that the environmental parameters of the target image are consistent with the environmental parameters of the target map.
In an embodiment, the enhancement module performing image enhancement processing on the target image and/or the map image of the target map includes: acquiring environmental parameters of the target image; and performing image enhancement processing on the map image of the target map so that the environmental parameters of the map image are consistent with the environmental parameters of the target image.
In one embodiment, the target map includes a first target map and a second target map, and the enhancement module performing image enhancement processing on the target image and/or the map image of the target map includes: acquiring a first environmental parameter of the first target map and a second environmental parameter of the second target map; performing first image enhancement processing on the target image according to the first environmental parameter to eliminate a first environmental factor of the environmental scene of the target image; and performing second image enhancement processing on the target image after the first image enhancement processing, so that the environmental parameter of a second environmental factor of the environmental scene of the target image is consistent with the second environmental parameter.
In one embodiment, the target map includes a first target map and a second target map, and the enhancement module performing image enhancement processing on the target image and/or the map image of the target map includes: acquiring a first environmental parameter of the first target map and a second environmental parameter of the second target map; performing third image enhancement processing on the target image so that the environmental parameter of a first environmental factor of the environmental scene of the target image is consistent with the first environmental parameter; and performing fourth image enhancement processing on the target image after the third image enhancement processing, so that the environmental parameter of a second environmental factor of the environmental scene of the target image is consistent with the second environmental parameter.
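Both two-map variants reduce to chaining two per-factor transforms; the following sketch assumes a dictionary of trained per-factor enhancers, which is an illustrative interface rather than anything specified above.

```python
# A sketch of the sequential two-map enhancement; the `enhancers` dict of
# trained per-factor transforms is an assumed interface. Stage one handles
# the first factor (e.g. rain), stage two the second (e.g. illumination).
def two_stage_enhance(target_image, first_param, second_param, enhancers):
    """enhancers maps (factor, parameter) -> a callable image transform."""
    stage1 = enhancers[("rain", first_param)]    # from the first target map
    stage2 = enhancers[("light", second_param)]  # from the second target map
    return stage2(stage1(target_image))

# Toy usage with identity stand-ins for the trained models:
enhancers = {("rain", 0): lambda im: im, ("light", "dusk"): lambda im: im}
aligned = two_stage_enhance("IM_target", 0, "dusk", enhancers)
```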
In an embodiment, the positioning module determining the position corresponding to the target image according to the comparison result includes: determining a map image matching the target image; determining the relative azimuth and distance between the target image and the matched map image; and determining the position corresponding to the target image according to the position corresponding to the map image together with the relative azimuth and distance.
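As a worked illustration of the last step, the sketch below offsets the matched map image's known coordinates by the estimated distance along the relative azimuth; the flat-earth approximation is an assumption of this sketch and is only reasonable over short ranges.

```python
# Offset a known (lat, lon) by a distance along an azimuth, using a
# flat-earth approximation (an assumption valid over short ranges).
import math

EARTH_RADIUS_M = 6_371_000.0

def offset_position(lat_deg, lon_deg, azimuth_deg, distance_m):
    lat = math.radians(lat_deg)
    dlat = (distance_m * math.cos(math.radians(azimuth_deg))) / EARTH_RADIUS_M
    dlon = (distance_m * math.sin(math.radians(azimuth_deg))) / (
        EARTH_RADIUS_M * math.cos(lat))
    return lat_deg + math.degrees(dlat), lon_deg + math.degrees(dlon)

# 15 m due east of the matched map image's position:
print(offset_position(31.23, 121.47, 90.0, 15.0))
```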
In one embodiment, the positioning module comparing the target image and the map image having consistent environmental scenes includes: extracting image features of the target image; comparing the image features of the target image with the image features of the map image; determining that the target image matches the map image when the similarity between their image features reaches a threshold; and determining the position corresponding to the target image according to the position corresponding to the matched map image.
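The threshold test might be realized as follows; the use of cosine similarity over NumPy feature vectors and the 0.8 threshold are illustrative choices, not values from the disclosure.

```python
# A sketch of the similarity-threshold matching over feature vectors.
import numpy as np

def best_match(target_feat, map_feats, threshold=0.8):
    """Return the index of the best-matching map image, or None if no
    similarity reaches the threshold."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = [cos(target_feat, f) for f in map_feats]
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None

# The matched index maps back to a map image and hence to its position.
feats = [np.array([1.0, 0.0]), np.array([0.6, 0.8])]
print(best_match(np.array([0.7, 0.7]), feats))  # 1
```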
In an embodiment, the image features of the map image include a first image feature used for comparing the target image with the map image and a second image feature used for describing the environmental scene of the map image.
In an embodiment, the apparatus further comprises a preprocessing module for: dividing the target image into a plurality of sub-images; deleting the sub-images containing dynamic obstacles; acquiring the image features of the remaining sub-images; and merging the image features of the sub-images to obtain the image features of the target image.
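One way this preprocessing could be realized is sketched below; extract_feat and is_dynamic stand in for a real feature extractor and obstacle detector, which are not specified above.

```python
# A hedged sketch of the preprocessing module: tile the target image,
# drop tiles covered by dynamic obstacles, and merge the remaining
# per-tile features.
import numpy as np

def preprocess(image, extract_feat, is_dynamic, tile=64):
    h, w = image.shape[:2]
    feats = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            sub = image[y:y + tile, x:x + tile]
            if is_dynamic(sub):       # e.g. a vehicle or pedestrian detected
                continue
            feats.append(extract_feat(sub))
    return np.concatenate(feats) if feats else np.empty(0)

# Toy usage with stand-in detector and extractor:
img = np.zeros((128, 128, 3), dtype=np.uint8)
merged = preprocess(img, lambda s: s.mean(axis=(0, 1)), lambda s: False)
```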
In one embodiment, the plurality of maps are obtained based on image enhancement processing of images.
In one embodiment, obtaining the plurality of maps based on image enhancement processing of images includes: acquiring an original image of a target area; and performing image enhancement processing on the original image of the target area to obtain a plurality of target map images of different environmental scenes.
In an embodiment, the apparatus further comprises an extraction module for: acquiring original image features, the original image features being the image features of the original image; acquiring each enhanced image feature, the enhanced image features being the image features of the target map image of each environmental scene; extracting the common part of the original image features and the enhanced image features corresponding to the same position as the first image feature of the original image and of each target map image; and extracting the difference parts of the original image features and the enhanced image features corresponding to the same position as the second image features of the corresponding original image and target map images.
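One plausible realization of this common/difference split over per-image feature vectors is sketched below; the elementwise-agreement interpretation and the tolerance are assumptions, not the disclosure's prescribed computation.

```python
# Dimensions that stay stable across all environmental scenes form the
# common (first) feature; each per-scene residual forms a difference
# (second) feature. This is one illustrative interpretation.
import numpy as np

def split_features(orig_feat, enhanced_feats, tol=1e-3):
    stack = np.stack([orig_feat] + list(enhanced_feats))
    stable = np.ptp(stack, axis=0) < tol      # per-dimension agreement
    common = orig_feat * stable               # first image feature
    diffs = [(f - orig_feat) * ~stable for f in enhanced_feats]
    return common, diffs                      # second image features

feat1 = np.array([1.0, 2.0, 3.0])             # features of IM1
feat2 = np.array([1.0, 2.5, 3.0])             # features of an enhanced IMk
common, diffs = split_features(feat1, [feat2])
print(common, diffs)  # [1. 0. 3.] [array([0. , 0.5, 0. ])]
```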
In one embodiment, the extracting module obtains an original image of the target area, including: collecting video stream data of a target area, and synchronously collecting geographic position data; and matching the video stream data with the synchronously acquired geographic position data respectively to obtain an original image of the target area corresponding to the geographic position data.
In an embodiment, the enhancement module performing image enhancement processing on an original image of the target area to obtain a target map image includes: inputting the original image into a trained model, which performs image enhancement processing on the original image to obtain the target map image.
In one embodiment, training the model comprises: acquiring a plurality of images at the same position, wherein the environmental scene of each image is different; and training the model by taking a first image of the plurality of images as input and a second image as output, the first image and the second image having different environmental scenes.
The positioning device of the embodiments of the present disclosure can implement the methods of the embodiments described above, and the descriptions of the method embodiments apply equally to understanding and explaining the device of the embodiments of the present disclosure. For brevity, details are not repeated here.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device and a readable storage medium. The electronic device of the embodiment of the disclosure can execute the method disclosed by the disclosure.
According to an embodiment of the present disclosure, there is also provided a vehicle including the electronic device of the above embodiment. The electronic device includes a vehicle-mounted terminal and a readily movable standalone electronic device.
Fig. 5 illustrates a schematic block diagram of an example electronic device 500 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the device 500 includes a computing unit 501 that can perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 502 or a computer program loaded from a storage unit 508 into a Random Access Memory (RAM) 503. The RAM 503 can also store various programs and data required for the operation of the device 500. The computing unit 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Various components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, etc.; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508 such as a magnetic disk, an optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 501 performs the various methods and processes described above, such as positioning methods. For example, in some embodiments, the positioning method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into RAM 503 and executed by the computing unit 501, one or more steps of the positioning method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the positioning method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. Such program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
The foregoing is merely specific embodiments of the disclosure, but the protection scope of the disclosure is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the disclosure, and these should be covered by the protection scope of the disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (16)

1. A method of positioning, the method comprising:
acquiring a target image;
determining at least one map from a plurality of maps as a target map according to the environment scene, wherein the environment scenes of different maps are different;
under the condition that the environment scene of the target image is inconsistent with the environment scene of the target map, performing image enhancement processing on the target image and/or the map image of the target map so as to enable the environment scene of the target image to be consistent with the environment scene of the map image;
comparing the target image and the map image having consistent environmental scenes;
and determining the position corresponding to the target image according to the comparison result.
2. The method according to claim 1, wherein performing image enhancement processing on the target image and/or a map image of the target map comprises:
acquiring environment parameters of the target map; performing image enhancement processing on the target image to enable the environmental parameters of the target image to be consistent with the environmental parameters of the target map; or alternatively
acquiring environmental parameters of the target image; and performing image enhancement processing on the map image of the target map to make the environmental parameters of the map image consistent with the environmental parameters of the target image.
3. The method of claim 1, wherein the target map comprises a first target map and a second target map; performing image enhancement processing on the target image and/or the map image of the target map, including:
acquiring a first environment parameter of the first target map and a second environment parameter of the second target map; performing first image enhancement processing on the target image according to the first environment parameters, and eliminating first environment factors of the environment scene of the target image; performing second image enhancement processing on the target image after the first image enhancement processing to enable the environmental parameters of the second environmental factors of the environmental scene of the target image to be consistent with the second environmental parameters; or alternatively
Acquiring a first environment parameter of the first target map and a second environment parameter of the second target map; performing third image enhancement processing on the target image to enable the environmental parameters of the first environmental factors of the environmental scene of the target image to be consistent with the first environmental parameters; and performing fourth image enhancement processing on the target image after the third image enhancement processing, so that the environmental parameters of the second environmental factors of the environmental scene of the target image are consistent with the second environmental parameters.
4. The method of claim 1, wherein determining the location corresponding to the target image based on the comparison result comprises:
determining a map image that matches the target image;
determining the relative azimuth and distance between the target image and the matched map image;
and determining the position corresponding to the target image according to the position corresponding to the map image, the relative azimuth and the distance.
5. The method of claim 1, wherein comparing the target image and the map image having consistent environmental scenes comprises:
extracting image features of the target image;
comparing the image features of the target image with the image features of the map image;
determining that the target image matches the map image when the similarity between the image features of the target image and the image features of the map image reaches a threshold;
and determining the position corresponding to the target image according to the matched position corresponding to the map image.
6. The method of claim 1, wherein the image features of the map image comprise a first image feature and a second image feature, the second image feature being used to describe the environmental scene of the map image and the first image feature being used for comparing the target image with the map image.
7. The method of claim 1, wherein before comparing the target image and the map image having consistent environmental scenes, the method further comprises:
dividing the target image into a plurality of sub-images;
deleting the sub-images containing dynamic obstacles;
acquiring the image features of the remaining sub-images;
and merging the image features of the sub-images to obtain the image features of the target image.
8. The method of claim 1, wherein the plurality of maps are obtained based on image enhancement processing of images.
9. The method of claim 8, wherein obtaining the plurality of maps based on image enhancement processing of images comprises:
acquiring an original image of a target area;
and performing image enhancement processing on the original image of the target area to obtain a plurality of target map images with different environmental scenes.
10. The method according to claim 9, wherein the method further comprises:
acquiring original image features, wherein the original image features are the image features of the original image;
acquiring each enhanced image feature, wherein the enhanced image features are the image features of the target map image of each environmental scene;
extracting a common part of the original image features and each enhanced image feature corresponding to the same position as a first image feature of the original image and each target map image;
and extracting difference parts of the original image features and each enhanced image feature corresponding to the same position as second image features of the corresponding original image and target map images.
11. The method of claim 9, wherein acquiring the original image of the target area comprises:
collecting video stream data of a target area, and synchronously collecting geographic position data;
and matching the video stream data with the synchronously acquired geographic position data respectively to obtain the original image of the target area corresponding to the geographic position data.
12. The method of claim 9, wherein performing image enhancement processing on the original image of the target area to obtain a target map image comprises:
inputting the original image into a trained model, and performing image enhancement processing on the original image by the trained model to obtain the target map image.
13. The method of claim 12, wherein training the model comprises:
acquiring a plurality of images at the same position, wherein the environmental scene of each image is different;
and training the model by taking a first image of the plurality of images as input and a second image as output, wherein the first image and the second image have different environmental scenes.
14. A positioning device, the device comprising:
the acquisition module is used for acquiring a target image;
the determining module is used for determining at least one map from the plurality of maps as a target map according to the environment scene, wherein the environment scenes of different maps are different;
the enhancement module is used for carrying out image enhancement processing on the target image and/or the map image of the target map under the condition that the environment scene of the target image is inconsistent with the environment scene of the target map so as to enable the environment scene of the target image to be consistent with the environment scene of the map image of the target map;
and the positioning module is used for comparing the target image and the map image of the target map having consistent environmental scenes, and determining the position corresponding to the target image according to the comparison result.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-13.
16. A vehicle comprising the electronic device of claim 15.
CN202211096804.7A 2022-09-08 2022-09-08 Positioning method and device and vehicle Pending CN116188587A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211096804.7A CN116188587A (en) 2022-09-08 2022-09-08 Positioning method and device and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211096804.7A CN116188587A (en) 2022-09-08 2022-09-08 Positioning method and device and vehicle

Publications (1)

Publication Number Publication Date
CN116188587A true CN116188587A (en) 2023-05-30

Family

ID=86442954

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211096804.7A Pending CN116188587A (en) 2022-09-08 2022-09-08 Positioning method and device and vehicle

Country Status (1)

Country Link
CN (1) CN116188587A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117078985A (en) * 2023-10-17 2023-11-17 之江实验室 Scene matching method and device, storage medium and electronic equipment
CN117078985B (en) * 2023-10-17 2024-01-30 之江实验室 Scene matching method and device, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN113989450B (en) Image processing method, device, electronic equipment and medium
JP2022507077A (en) Compartment line attribute detection methods, devices, electronic devices and readable storage media
CN114332977A (en) Key point detection method and device, electronic equipment and storage medium
CN114140592A (en) High-precision map generation method, device, equipment, medium and automatic driving vehicle
JP2021179839A (en) Classification system of features, classification method and program thereof
CN116188587A (en) Positioning method and device and vehicle
CN113569911A (en) Vehicle identification method and device, electronic equipment and storage medium
CN113932796A (en) High-precision map lane line generation method and device and electronic equipment
CN112815958A (en) Navigation object display method, device, equipment and storage medium
CN113673288A (en) Idle parking space detection method and device, computer equipment and storage medium
CN116524143A (en) GIS map construction method
CN113920273B (en) Image processing method, device, electronic equipment and storage medium
CN116052097A (en) Map element detection method and device, electronic equipment and storage medium
CN113569912A (en) Vehicle identification method and device, electronic equipment and storage medium
CN110910379B (en) Incomplete detection method and device
CN116245730A (en) Image stitching method, device, equipment and storage medium
CN113901903A (en) Road identification method and device
CN112651351A (en) Data processing method and device
CN113569600A (en) Method and device for identifying weight of object, electronic equipment and storage medium
CN115049895B (en) Image attribute identification method, attribute identification model training method and device
CN111383337A (en) Method and device for identifying objects
CN115265544A (en) Positioning method and device based on visual map
CN114612544B (en) Image processing method, device, equipment and storage medium
CN116229209B (en) Training method of target model, target detection method and device
CN117315406B (en) Sample image processing method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination