CN114942031A - Visual positioning method, visual positioning and mapping method, device, equipment and medium - Google Patents


Publication number
CN114942031A
Authority
CN
China
Prior art keywords
positioning
visual
image quality
vehicle
image
Legal status
Pending
Application number
CN202210593340.4A
Other languages
Chinese (zh)
Inventor
何潇
王亚慧
朱昊
张丹
Current Assignee
Uisee Technologies Beijing Co Ltd
Original Assignee
Uisee Technologies Beijing Co Ltd
Application filed by Uisee Technologies Beijing Co Ltd filed Critical Uisee Technologies Beijing Co Ltd
Priority to CN202210593340.4A
Publication of CN114942031A


Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 - Map- or contour-matching
    • G01C21/32 - Structuring or formatting of map data

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present disclosure relate to a visual positioning method, a visual positioning and mapping method, a device, equipment, and a medium. The visual positioning method comprises the following steps: obtaining positioning information of a vehicle and corresponding initial weights, and acquiring an image, wherein the positioning information comprises a first vehicle positioning determined by a visual positioning mode and a second vehicle positioning determined by at least one other positioning mode, and each vehicle positioning has a corresponding initial weight; obtaining an image quality score of the acquired image, wherein the image quality score represents the proportion of the invalid area; redistributing the initial weight of the first vehicle positioning according to the image quality score to obtain a redistributed weight; and performing positioning fusion according to the redistributed weight of the first vehicle positioning and the initial weight of the second vehicle positioning to obtain a fused positioning result of the vehicle. The embodiments of the present disclosure greatly reduce positioning errors caused by low image quality and thereby improve the accuracy and robustness of visual positioning and mapping.

Description

Visual positioning method, visual positioning and mapping method, device, equipment and medium
Technical Field
The present disclosure relates to the field of positioning technologies, and in particular to a visual positioning method, a visual positioning and mapping method, and a corresponding device, equipment, and medium.
Background
Positioning systems are among the most important components of vehicles, especially intelligently driven vehicles. A positioning system may employ a variety of positioning technologies; among them, visual positioning and mapping (Visual SLAM, or VSLAM) technology benefits from high precision and low cost, and can serve as a key means of solving the vehicle positioning problem.
Visual SLAM generally relies on rich texture information and therefore places high demands on image quality. When an image contains few visual features or is of low quality, the positioning method is prone to failure, resulting in large positioning errors, and redundant low-quality features are also unintentionally added to the map.
Disclosure of Invention
In order to solve the technical problems or at least partially solve the technical problems, the present disclosure provides a visual positioning method, a visual positioning and mapping method, an apparatus, a device and a medium.
The embodiment of the present disclosure provides a visual positioning method, which includes:
acquiring positioning information of a vehicle and corresponding initial weight, and acquiring an image, wherein the positioning information comprises a first vehicle positioning determined by a visual positioning mode and a second vehicle positioning determined by at least one other positioning mode, and each vehicle positioning has corresponding initial weight;
acquiring an image quality score of the acquired image, wherein the image quality score represents the proportion of an invalid area;
redistributing the initial weight of the first vehicle positioning according to the image quality score to obtain a redistributed weight;
and positioning and fusing according to the redistributed weight of the first vehicle positioning and the initial weight of the second vehicle positioning to obtain a fusion positioning result of the vehicle.
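The four steps above can be sketched end to end as follows; the pose representation (a flat coordinate list), the weight normalization in the fusion step, and all function names are illustrative assumptions, not part of the disclosure:

```python
def fuse_locations(locations, weights):
    """Normalized weighted fusion of coordinate-list locations (illustrative)."""
    total = sum(weights)
    return [sum(w * loc[k] for w, loc in zip(weights, locations)) / total
            for k in range(len(locations[0]))]

def visual_positioning(first_loc, first_weight, second_locs, second_weights,
                       quality_score):
    """Steps 1-4: reassign the visual weight from the image quality score,
    then fuse the first (visual) and second (other-mode) vehicle positionings."""
    # Step 3: a higher score means a larger invalid-area proportion,
    # so the visual positioning's weight is scaled down by (1 - s).
    redistributed = first_weight * (1.0 - quality_score)
    # Step 4: positioning fusion with the redistributed + initial weights.
    return fuse_locations([first_loc] + second_locs,
                          [redistributed] + second_weights)
```

For example, with a visual location (0, 0) at initial weight 0.5, one other-mode location (1, 1) at weight 0.5, and quality score 0.8, the visual weight drops to about 0.1 and the fused result leans heavily toward the other-mode location.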
The embodiment of the present disclosure further provides a visual positioning and mapping method, where the method includes:
acquiring a collected image of a vehicle;
acquiring an image quality score of the acquired image, wherein the image quality score represents the proportion of an invalid area;
extracting a plurality of real-time visual features from the acquired image, acquiring a plurality of currently existing map visual features, matching the real-time visual features in the map visual features, and determining a second real-time visual feature which fails to match;
setting a confidence level of the second real-time visual feature according to the image quality score, wherein the higher the image quality score is, the lower the confidence level is;
and inputting the fusion positioning result of the vehicle, the second real-time visual feature, and the confidence of the second real-time visual feature into a visual mapping module to create a visual map.
The disclosed embodiments also provide a visual positioning apparatus, the apparatus comprising:
the acquisition module is used for obtaining positioning information of a vehicle and corresponding initial weights and for acquiring an image, wherein the positioning information comprises a first vehicle positioning determined by a visual positioning mode and a second vehicle positioning determined by at least one other positioning mode, and each vehicle positioning has a corresponding initial weight;
the image quality module is used for acquiring an image quality score of the acquired image, and the image quality score represents the proportion of the invalid area;
the weight distribution module is used for redistributing the initial weight of the first vehicle positioning according to the image quality score to obtain a redistributed weight;
and the positioning module is used for positioning and fusing according to the redistributed weight of the first vehicle positioning and the initial weight of the second vehicle positioning to obtain a fused positioning result of the vehicle.
The embodiment of the present disclosure further provides a visual positioning and mapping apparatus, the apparatus includes:
the image module is used for acquiring a collected image of the vehicle;
the quality module is used for acquiring an image quality score of the acquired image, and the image quality score represents the proportion of the invalid area;
the matching failure module is used for extracting a plurality of real-time visual features from the acquired image, acquiring a plurality of currently existing map visual features, matching the real-time visual features in the map visual features, and determining a second real-time visual feature which fails to match;
a confidence module for setting a confidence of the second real-time visual feature according to the image quality score, wherein the higher the image quality score is, the lower the confidence is;
and the map module is used for inputting the fusion positioning result of the vehicle, the second real-time visual feature, and the confidence of the second real-time visual feature into a visual mapping module to create a visual map.
An embodiment of the present disclosure further provides an electronic device, which includes: a processor; a memory for storing the processor-executable instructions; the processor is used for reading the executable instructions from the memory and executing the instructions to realize the visual positioning method or the visual positioning and mapping method provided by the embodiment of the disclosure.
The embodiment of the present disclosure further provides a computer-readable storage medium, where the storage medium stores a computer program, and the computer program is used to execute the visual positioning method or the visual positioning and mapping method provided by the embodiment of the present disclosure.
Compared with the prior art, the technical solutions provided by the embodiments of the present disclosure have the following advantages. According to the visual positioning scheme of the embodiments, positioning information of a vehicle and corresponding initial weights are obtained and an image is acquired, wherein the positioning information comprises a first vehicle positioning determined by a visual positioning mode and a second vehicle positioning determined by at least one other positioning mode, and each vehicle positioning has a corresponding initial weight; an image quality score of the acquired image is obtained, wherein the image quality score represents the proportion of the invalid area; the initial weight of the first vehicle positioning is redistributed according to the image quality score to obtain a redistributed weight; and positioning fusion is performed according to the redistributed weight of the first vehicle positioning and the initial weight of the second vehicle positioning to obtain a fused positioning result of the vehicle. With this technical solution, the acquired image is evaluated for quality in real time, the weight of the vehicle positioning obtained by the visual positioning mode is redistributed according to that quality, and the adjusted weights and the positionings are then fused to obtain the final positioning result, which reduces positioning errors caused by low image quality.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
To illustrate the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below; it will be apparent to those skilled in the art that other drawings can be derived from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of a visual positioning method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating training of an image quality determination model according to an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart diagram illustrating a method for visual positioning and mapping according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a visual positioning and mapping system according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a visual positioning apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a visual positioning and mapping apparatus according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, aspects of the present disclosure will be further described below. It should be noted that the embodiments and features of the embodiments of the present disclosure may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced in other ways than those described herein; it is to be understood that the embodiments disclosed in the specification are only a few embodiments of the present disclosure, and not all embodiments.
Visual SLAM typically relies on rich texture information and therefore has high image quality requirements. When lighting conditions are not ideal (e.g., too strong or too dark) or low-texture regions (e.g., walls) occupy too much of the image, visual features are few and image quality is low, so the positioning method is prone to failure, resulting in large positioning errors, while redundant low-quality features are undesirably added to the map. To solve this problem, embodiments of the present disclosure provide a visual positioning method and a visual positioning and mapping method, described below with reference to specific embodiments.
Fig. 1 is a flowchart of a visual positioning method provided by an embodiment of the present disclosure, which may be executed by a visual positioning apparatus, where the apparatus may be implemented by software and/or hardware, and may be generally integrated in an electronic device. As shown in fig. 1, the method includes:
Step 101, obtaining positioning information of a vehicle and corresponding initial weights, and acquiring an image, wherein the positioning information comprises a first vehicle positioning determined by a visual positioning mode and a second vehicle positioning determined by at least one other positioning mode, and each vehicle positioning has a corresponding initial weight.
The vehicle to which the embodiments of the present disclosure are directed may be various types of vehicles, for example, may be an unmanned vehicle or an intelligent driving vehicle, and the like, and is not limited in particular. The positioning information may be current position information of the vehicle, and specifically may include vehicle positioning determined by various positioning means. The positioning information in the embodiment of the present disclosure may include a first vehicle positioning determined by a visual positioning manner and a second vehicle positioning determined by at least one other positioning manner, where the visual positioning manner may be understood as a vehicle positioning implemented based on visual features, and the other positioning manners are not limited specifically, and may include, for example, positioning by a GPS, an Inertial Measurement Unit (IMU), positioning by a vehicle odometer, or the like, and when the other positioning manners are multiple, the number of the second vehicle positioning is multiple. The initial weight may be a weight determined internally by the positioning module for different positioning modes.
Specifically, the visual positioning device may obtain the positioning information and the corresponding initial weight of the vehicle from the visual positioning module and the other positioning modules, and obtain the collected image collected by the image collecting device in real time for subsequent use.
Step 102, acquiring an image quality score of the acquired image, wherein the image quality score represents the proportion of the invalid area.
The image quality score may be a parameter output by an image quality determination model to characterize image quality; specifically, it represents image quality by the size of the invalid region, where an invalid region may be understood as a region with little pixel-value variation that contributes nothing useful to vision. In the embodiments of the present disclosure, the invalid region includes at least one of an overexposed region and a low-texture region.
In some embodiments, obtaining an image quality score for an acquired image may include: and inputting the collected image into a pre-trained image quality determination model to obtain an image quality score. The image quality determination model may be a deep learning model added in the embodiment of the present disclosure to evaluate the quality of the acquired image in real time.
In the embodiment of the disclosure, after obtaining the acquired image, the visual positioning device may input it into the pre-trained image quality determination model, which calculates and outputs the image quality score; the larger the image quality score, the higher the proportion of the invalid region, i.e., the lower the image quality.
Step 103, redistributing the initial weight of the first vehicle positioning according to the image quality score to obtain the redistributed weight.
In some embodiments, redistributing the initial weight of the first vehicle positioning according to the image quality score to obtain a redistributed weight comprises: increasing or decreasing the initial weight of the first vehicle positioning according to the image quality score to obtain the redistributed weight, wherein the redistributed weight of the first vehicle positioning is inversely related to the image quality score.
After determining the image quality score, the visual positioning device may redistribute the initial weight of the first vehicle positioning according to the image quality score, i.e., adjust the initial weight of the first vehicle positioning so that the adjusted redistributed weight is smaller when the image quality score is larger, and larger when the image quality score is smaller. The specific adjustment may be a decrease or an increase: for example, an initial weight associated with a larger image quality score is decreased by a larger value, while one associated with a smaller image quality score is decreased by a smaller value; alternatively, an initial weight associated with a smaller image quality score is increased by a larger value, while one associated with a larger image quality score is increased by a smaller value. Either way achieves the result described above.
Optionally, the redistribution weight may be equal to a product of the initial weight and a target score, where the target score is a difference between 1 and the image quality score, and a value of the image quality score is between 0 and 1.
The redistributed weight may be determined by the formula w' = w(1 - s), where w represents the initial weight of the first vehicle positioning, s represents the image quality score, w' is the redistributed weight of the visual positioning, and (1 - s) is the target score described above. Both the image quality score and the target score take values between 0 and 1. For example, when the initial weight is 0.5 and the image quality score is 0.8, the redistributed weight is 0.5 × (1 - 0.8) = 0.1.
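The rule w' = w(1 - s) can be written as a small helper (a sketch; the function name and the range check are added for illustration and are not stated in the disclosure):

```python
def redistribute_weight(initial_weight: float, quality_score: float) -> float:
    """w' = w * (1 - s): a larger invalid-area score s lowers the visual weight."""
    if not 0.0 <= quality_score <= 1.0:
        raise ValueError("image quality score must lie in [0, 1]")
    return initial_weight * (1.0 - quality_score)
```

The worked example above: redistribute_weight(0.5, 0.8) gives 0.1 (up to floating-point rounding).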
Step 104, performing positioning fusion according to the redistributed weight of the first vehicle positioning and the initial weight of the second vehicle positioning to obtain a fused positioning result of the vehicle.
The visual positioning device may redistribute the initial weight of the first vehicle positioning according to the image quality score to obtain the redistributed weight, and may then fuse the first vehicle positioning and the second vehicle positioning according to their corresponding weights to obtain the final fusion positioning result of the vehicle. This can be calculated by the formula

T' = Σ w_i T_i (i = 1, 2, …, n),

where T' represents the fusion positioning result of the vehicle, T_i denotes the i-th vehicle positioning among the first vehicle positioning and the at least one second vehicle positioning, n denotes the total number of vehicle positionings including the first vehicle positioning and the at least one second vehicle positioning, and w_i represents the weight corresponding to the i-th vehicle positioning: for example, when i = 1, w_1 is the redistributed weight of the first vehicle positioning, and when i = 2, w_2 is the initial weight of a second vehicle positioning.
According to the scheme, after the weight is redistributed to the vehicle positioning determined by the visual positioning mode, the weight of the low-quality acquired image can be reduced, the positioning weight is strongly related to the image quality, the positioning error caused by low image quality is greatly reduced, and the accuracy of vehicle positioning is improved.
According to the visual positioning scheme provided by the embodiment of the present disclosure, positioning information of a vehicle and corresponding initial weights are obtained and an image is acquired, wherein the positioning information comprises a first vehicle positioning determined by a visual positioning mode and a second vehicle positioning determined by at least one other positioning mode, and each vehicle positioning has a corresponding initial weight; an image quality score of the acquired image is obtained, wherein the image quality score represents the proportion of the invalid area; the initial weight of the first vehicle positioning is redistributed according to the image quality score to obtain a redistributed weight; and positioning fusion is performed according to the redistributed weight of the first vehicle positioning and the initial weight of the second vehicle positioning to obtain a fused positioning result of the vehicle. With this technical scheme, the acquired image is evaluated for quality in real time, the weight of the vehicle positioning obtained by the visual positioning mode is redistributed according to that quality, and the adjusted weights and the positionings are then fused to obtain the final positioning result.
In some embodiments, the image quality determination model is obtained by training an initial model based on a neural network based on proportion labeling of the sample image and an invalid region corresponding to the sample image.
In the embodiment of the disclosure, when the image quality determination model is trained, a large number of sample images may first be obtained, and each sample image is labeled with the proportion of its invalid region, where the label value lies between 0 and 1 and is determined by the ratio of invalid-region pixels to all pixels of the sample image. The sample images are then used as input and the invalid-region proportion labels as supervision to train a neural-network-based initial model, and the trained model with finalized parameters is taken as the image quality determination model.
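The proportion label for one sample image can be computed as below; how each pixel is flagged as invalid (overexposure or low-texture detection) is a separate step assumed to have already produced the binary mask:

```python
def invalid_area_label(invalid_mask):
    """Training label in [0, 1]: invalid pixels over all pixels.

    invalid_mask is a 2-D list of 0/1 flags, 1 marking a pixel inside an
    invalid (e.g. overexposed or low-texture) region.
    """
    total = sum(len(row) for row in invalid_mask)
    invalid = sum(sum(row) for row in invalid_mask)
    return invalid / total
```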
In some embodiments, the image quality determination model includes a feature extraction module for extracting image features, an attention module for performing weight enhancement on an invalid region of an image, and an output module for outputting an image quality score determined according to a proportion of the invalid region.
Exemplarily, fig. 2 is a schematic diagram of training an image quality determination model provided by an embodiment of the present disclosure. As shown in fig. 2, the backbone network (Backbone) module in the figure is the feature extraction module, used for extracting features from the acquired image; it may use, but is not limited to, ResNet, MobileNet, and the like. The attention module may comprise a max pooling layer (MaxPool), a convolution layer (Convolution), and a Sigmoid function network layer; it performs weight enhancement on the invalid region of the acquired image, outputs a weight map with values between 0 and 1, and multiplies the weight map with the output features of the backbone network to obtain the final features. The output module comprises an average pooling layer (AvgPool) and a Sigmoid function network layer, and outputs the invalid-area proportion between 0 and 1, i.e., the image quality score. Network parameters are updated by back propagation from the error between the output score and the labeled supervision, thereby training the model; after training is completed, the image quality determination model is used for real-time image quality evaluation to obtain the image quality score of the acquired image.
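Structurally, the attention gating and the output head described above might look like the following NumPy sketch; the backbone and the maxpool/convolution branch that produce the inputs are omitted, and all shapes and names are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def quality_head(backbone_features, attention_logits):
    """Gate backbone features with a sigmoid weight map, then pool to a score.

    backbone_features: (C, H, W) feature map from the backbone (assumed shape).
    attention_logits:  (1, H, W) pre-sigmoid attention map (assumed to come
                       from the maxpool + convolution branch, omitted here).
    Returns a scalar in (0, 1) standing in for the invalid-area score.
    """
    weight_map = sigmoid(attention_logits)           # weight map in (0, 1)
    final_features = backbone_features * weight_map  # elementwise gating
    pooled = final_features.mean()                   # global average pooling
    return float(sigmoid(pooled))                    # image quality score
```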
According to the scheme, the representation of the image quality is carried out by adopting the occupation ratio of the invalid region in the image, and a relatively accurate image quality determination model is obtained through deep learning model training, so that the image quality can be quantitatively evaluated, the evaluation accuracy is high, and further, the subsequent distribution of the visual positioning weight based on the image quality is facilitated.
In some embodiments, determining the first vehicle positioning by the visual positioning mode may comprise: extracting a plurality of real-time visual features from the acquired image and acquiring a plurality of currently existing map visual features, wherein the map visual features have corresponding confidences; matching the plurality of real-time visual features in the plurality of map visual features, and determining the first real-time visual features which are successfully matched; and determining the vehicle positioning with the minimum error as the first vehicle positioning based on the first real-time visual features, the first map visual features matched with them, the confidences of the first map visual features, and an error calculation function.
The map visual features may be visual features included in a visual map that has been created in the visual mapping module, each map visual feature having a confidence assigned before being input into the visual mapping module, and in the embodiment of the present disclosure, the confidence may be strongly correlated with the image quality score, and the confidence may be lower as the image quality score is larger.
In the embodiment of the disclosure, the visual positioning device can extract a plurality of real-time visual features from the acquired image after acquiring the acquired image, and acquire a plurality of current map visual features and corresponding confidence levels thereof from the visual mapping module; each real-time visual feature can be matched in a plurality of map visual features, the specific matching mode is not limited, and for example, the matching result can be determined based on feature similarity; determining that the matching result in the multiple real-time visual features is a first real-time visual feature which is successfully matched, wherein the number of the first real-time visual features can be multiple; the vehicle location with the smallest error may then be determined to be the first vehicle location based on the first real-time visual feature, the first map visual feature that matches the first real-time visual feature, the confidence level of the first map visual feature, and the error calculation function.
Optionally, the formula of the visual positioning manner is expressed as:
T* = argmin_T Σ c · e(T, f, f_map),

where argmin denotes taking the vehicle positioning that minimizes Σ c · e(T, f, f_map), i.e., T* represents the first vehicle positioning; e denotes an error calculation function, where the error includes but is not limited to a reprojection error; T denotes a candidate vehicle positioning and is the free variable of the right-hand side; f denotes a first real-time visual feature; f_map denotes the first map visual feature matched with that first real-time visual feature; and c denotes the confidence of that first map visual feature.
Each first real-time visual feature, the first map visual feature matched with it, and the confidence of that first map visual feature are substituted into the above formula, and the vehicle positioning that minimizes Σ c · e(T, f, f_map), i.e., the vehicle positioning with the minimum weighted error, is calculated as the first vehicle positioning. The above formula of the visual positioning mode is merely an example and is not limiting.
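The confidence-weighted minimization can be illustrated over a finite set of candidate positionings; the disclosure does not specify an optimizer (a real system would typically use iterative nonlinear least squares), so the exhaustive search and all names below are purely a sketch:

```python
def best_positioning(candidates, matches, error_fn):
    """T* = argmin_T sum c * e(T, f, f_map) over a candidate set.

    matches:  list of (f, f_map, c) triples, one per successfully matched
              first real-time visual feature.
    error_fn: error_fn(T, f, f_map) -> scalar, e.g. a reprojection error.
    """
    def weighted_error(T):
        return sum(c * error_fn(T, f, f_map) for f, f_map, c in matches)
    return min(candidates, key=weighted_error)
```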
In the scheme, when the corresponding vehicle positioning is determined in the visual positioning mode, the vehicle positioning can be determined based on the matching result of the real-time visual features and the existing map visual features and the error calculation function, and then the vehicle positioning and the vehicle positioning in other modes can be positioned and fused, so that a more accurate fusion positioning result of the vehicle is obtained.
Fig. 3 is a flowchart of a visual positioning and mapping method according to an embodiment of the present disclosure, where the method may be performed by a visual positioning and mapping apparatus, where the apparatus may be implemented by software and/or hardware, and may be generally integrated in an electronic device. As shown in fig. 3, the method includes:
Step 301, acquiring a collected image of the vehicle.
Specifically, the visual positioning and mapping device can acquire the acquired image acquired by the image acquisition device in real time for subsequent use.
Step 302, acquiring an image quality score of the acquired image, wherein the image quality score represents the proportion of the invalid area.
The image quality score may be a parameter output by the image quality determination model to characterize image quality; specifically, it represents image quality by the size of the invalid region, where an invalid region may be understood as a region with little pixel-value variation that contributes nothing useful to vision. In the embodiment of the present disclosure, the invalid region includes at least one of an overexposed region and a low-texture region.
In the embodiment of the disclosure, the image quality score is obtained by inputting the acquired image into a pre-trained image quality determination model. For the process of determining the specific image quality score, reference is made to the above embodiments, which are not described herein again.
Step 303, extracting a plurality of real-time visual features from the acquired image and obtaining a plurality of currently existing map visual features, matching the plurality of real-time visual features among the plurality of map visual features, and determining a second real-time visual feature that fails to match.
The real-time visual features can be understood as visual features obtained by monitoring the acquired images in real time. The map visual features may be visual features included in a visual map that has already been created in the visual mapping module, each map visual feature having a confidence assigned before it was input into the visual mapping module. In the disclosed embodiment, this confidence may be strongly correlated with the image quality score described above: the higher the image quality score, the lower the confidence.
The visual positioning and mapping apparatus may match each real-time visual feature among the plurality of map visual features; the specific matching manner is not limited, and for example, the matching result may be determined based on feature similarity. Among the plurality of real-time visual features, those whose matching fails are determined as second real-time visual features; there may be more than one second real-time visual feature.
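Since the text leaves the matching manner open and names feature similarity only as one example, a minimal sketch using cosine similarity of descriptors (the threshold and descriptor representation are assumptions) could look like:

```python
import numpy as np

def match_features(rt_descs, map_descs, min_similarity=0.8):
    """Sketch of matching each real-time feature against the map features
    by cosine similarity — one possible instance of the unspecified
    'feature similarity' matching. Returns matched (rt, map) index pairs
    and the indices of real-time features whose best similarity falls
    below the assumed threshold, i.e. the features that failed to match."""
    rt = np.asarray(rt_descs, float)
    mp = np.asarray(map_descs, float)
    rt_n = rt / np.linalg.norm(rt, axis=1, keepdims=True)
    mp_n = mp / np.linalg.norm(mp, axis=1, keepdims=True)
    sim = rt_n @ mp_n.T                      # cosine similarity matrix
    best = sim.argmax(axis=1)                # nearest map feature per row
    matched = [(i, int(best[i])) for i in range(len(rt))
               if sim[i, best[i]] >= min_similarity]
    failed = [i for i in range(len(rt)) if sim[i, best[i]] < min_similarity]
    return matched, failed
```

A production matcher would typically add a ratio test or mutual-nearest-neighbor check; this sketch only separates "successfully matched" from "failed" features as the method requires.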
Step 304, setting the confidence of the second real-time visual feature according to the image quality score, wherein the higher the image quality score is, the lower the confidence is.
The confidence level can be understood as a parameter that characterizes the degree of plausibility or reliability.
The visual positioning and mapping apparatus can set the confidence of each second real-time visual feature that failed to match according to the image quality score, where the higher the image quality score, the lower the confidence; that is, the lower the image quality, the lower the confidence.
Optionally, the confidence is equal to a difference between 1 and the image quality score, and a value of the confidence is between 0 and 1.
When the confidence of the second real-time visual feature is set according to the image quality score, the formula c = 1 − s may be adopted, where s represents the image quality score and c represents the confidence of the second real-time visual feature, the confidence taking a value between 0 and 1. The above is merely an example; the confidence may also be determined by other formulas, as long as a higher image quality score yields a lower confidence.
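A direct transcription of the example formula c = 1 − s, with defensive clamping of the score added as an assumption:

```python
def feature_confidence(image_quality_score):
    """The disclosure's example assignment c = 1 - s: a score s in [0, 1]
    (higher s = larger invalid-area proportion = worse image) maps to a
    confidence c in [0, 1] that decreases as the score increases.
    The clamp is a defensive addition, not part of the stated formula."""
    s = min(max(image_quality_score, 0.0), 1.0)
    return 1.0 - s
```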
Step 305, inputting the fused positioning result of the vehicle, the second real-time visual feature, and the confidence of the second real-time visual feature into a visual mapping module to create a visual map.
After the visual positioning and mapping apparatus sets the confidence of the second real-time visual feature, the fused positioning result of the vehicle, the second real-time visual feature, and its set confidence can be input into the visual mapping module to create the visual map. The corresponding map visual features are thereby obtained, so that the corresponding vehicle positioning can be determined in the visual positioning manner at the next moment.
The fused positioning result of the vehicle is obtained by fusing a third vehicle positioning determined in the visual positioning manner with a fourth vehicle positioning determined in at least one other positioning manner. Determining the third vehicle positioning in the visual positioning manner may include: acquiring a third real-time visual feature that is successfully matched; and determining the third vehicle positioning in the visual positioning manner based on the third real-time visual feature.
Optionally, determining the third vehicle positioning in the visual positioning manner based on the third real-time visual feature includes: determining the vehicle positioning with the minimum error as the third vehicle positioning based on the third real-time visual feature, the second map visual feature matched with the third real-time visual feature, the confidence of the second map visual feature, and an error calculation function, where the confidence of the second map visual feature is set based on a historical image quality score.
The fused positioning result of the vehicle is determined in the same way as in the above embodiment. When the third vehicle positioning is determined in the visual positioning manner, the third real-time visual feature, the second map visual feature matched with it, and the confidence of the second map visual feature may be input into the above formula of the visual positioning manner, and the vehicle positioning with the smallest error is calculated as the third vehicle positioning. For the specific process, refer to the above embodiments, which are not repeated here.
In this scheme, for the real-time visual features successfully matched with the map visual features, the corresponding vehicle positioning can be determined in the visual positioning manner; for the real-time visual features that fail to match the map visual features, a confidence can be assigned, which facilitates subsequent visual mapping.
In some embodiments, the visual positioning and mapping method may further include: after the creation of the visual map is completed, a fused positioning result of the vehicle is determined based on the created visual map. At this time, the visual map is not updated after the fusion positioning result of the vehicle is determined.
According to the visual positioning and mapping scheme provided by the embodiment of the disclosure, the collected image of the vehicle is obtained; an image quality score of the acquired image is obtained, where the image quality score represents the proportion of the invalid area; a plurality of real-time visual features are extracted from the acquired image and matched among a plurality of currently existing map visual features, and a second real-time visual feature that fails to match is determined; the confidence of the second real-time visual feature is set according to the image quality score, where the higher the image quality score, the lower the confidence; and the fused positioning result of the vehicle, the second real-time visual feature, and the confidence of the second real-time visual feature are input into the visual mapping module to create a visual map. With this technical scheme, the real-time visual features of the acquired image are matched with the existing map visual features, real-time quality evaluation is performed on the acquired image, the confidence of the real-time visual features that fail to match is set according to the quality of the acquired image, and the map is then built from the real-time visual features, their confidences, and the fused positioning result of the vehicle, so that the negative influence of low-quality features on the created map is reduced.
The above-described visual positioning and mapping process is further described below by way of a specific example. Fig. 4 is a schematic diagram of a visual positioning and mapping system provided in an embodiment of the present disclosure. As shown in fig. 4, the visual positioning and mapping system may include a positioning module and a visual mapping module, and the positioning module may include a visual positioning module, other positioning source modules, an image quality evaluation module, and a fusion module. The visual positioning module may obtain the existing map visual features from the visual mapping module, extract real-time visual features from the acquired images, match the real-time visual features among the map visual features, and determine the successfully matched and unsuccessfully matched real-time visual features. Based on the successfully matched real-time visual features, it determines the first vehicle positioning or the third vehicle positioning and then outputs that positioning, its initial weight, and the unsuccessfully matched real-time visual features. The fusion module then redistributes the initial weight of the first or third vehicle positioning based on the image quality score of the collected image obtained by the image quality evaluation module, performs positioning fusion with the positionings and weights of the other positioning source modules to obtain the final fused positioning result of the vehicle, and inputs that result to the visual mapping module; the unsuccessfully matched real-time visual features are also input to the visual mapping module after confidence assignment. After the visual mapping module receives the fused positioning result of the vehicle, the real-time visual features that failed to match, and their confidences, mapping can be carried out until the visual map is created; thereafter, the fused positioning result of the vehicle can continue to be determined based on the created visual map.
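Under strong simplifying assumptions (2-D positions, fusion by plain weighted averaging rather than a filter), the fusion step sketched in Fig. 4 might look like:

```python
import numpy as np

def fuse_positions(visual_pos, visual_init_w, other_pos, other_w, quality_score):
    """Sketch of the fusion step under simplifying assumptions: each
    positioning is a 2-D position and fusion is a weighted average.
    The visual weight is first reassigned as initial_weight * (1 - s),
    following the redistribution rule stated elsewhere in the text.
    A production fusion module would more likely use a Kalman or
    factor-graph filter rather than an average."""
    w_vis = visual_init_w * (1.0 - quality_score)   # redistributed weight
    pos = np.asarray([visual_pos, other_pos], float)
    w = np.asarray([w_vis, other_w], float)
    return (w[:, None] * pos).sum(axis=0) / w.sum()
```

With a perfect image (s = 0) both sources contribute fully; as s approaches 1 the visual positioning is effectively ignored, which matches the stated goal of reducing the risk of large positioning errors from low-quality images.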
The visual positioning scheme and the visual positioning and mapping scheme described above overcome the poor handling of low-quality images in the related art. Real-time quality evaluation is performed on the collected images on the basis of a deep learning algorithm, and by reducing both the positioning weight assigned to low-quality images and the confidence of the visual features corresponding to those images, the risk of large positioning errors and the negative influence of low-quality features in the map are reduced. This further improves the accuracy and robustness of the visual positioning and mapping system and enhances the overall performance of the intelligent driving vehicle. The schemes can be applied to various scenarios that use a visual mapping and positioning system, such as parking.
Fig. 5 is a schematic structural diagram of a visual positioning apparatus according to an embodiment of the present disclosure; the apparatus may be implemented by software and/or hardware, and may be generally integrated in an electronic device. As shown in fig. 5, the apparatus includes:
an obtaining module 501, configured to obtain positioning information of a vehicle and corresponding initial weights, and acquire an image, where the positioning information includes a first vehicle positioning determined in a visual positioning manner and a second vehicle positioning determined in at least one other positioning manner, and each vehicle positioning has a corresponding initial weight;
an image quality module 502, configured to obtain an image quality score of the acquired image, where the image quality score represents a ratio of an invalid region;
the weight distribution module 503 is configured to redistribute the initial weight of the first vehicle positioning according to the image quality score to obtain a redistributed weight;
and a positioning module 504, configured to perform positioning fusion according to the redistributed weight of the first vehicle positioning and the initial weight of the second vehicle positioning, so as to obtain a fusion positioning result of the vehicle.
Optionally, the image quality module 502 is configured to:
and inputting the acquired image into a pre-trained image quality determination model to obtain an image quality score, where the image quality determination model is obtained by training a neural-network-based initial model on sample images and annotations of the invalid-region proportion corresponding to the sample images.
Optionally, the image quality determination model includes a feature extraction module, an attention module, and an output module, where the feature extraction module is configured to extract image features, the attention module is configured to perform weight enhancement on an invalid region of an image, and the output module is configured to output an image quality score determined according to a ratio of the invalid region.
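The following toy stand-in (pure NumPy, an assumption rather than the disclosure's trained model) mirrors the stated three-part structure — feature extraction, attention that up-weights likely-invalid regions, and an output head producing a score in [0, 1] — without any learned parameters:

```python
import numpy as np

class QualityModelSketch:
    """Toy stand-in for the three-part image quality model. Every formula
    here is an illustrative assumption: the real model's modules are
    learned, whereas this sketch hand-codes per-patch statistics."""

    def __init__(self, patch=8):
        self.patch = patch

    def extract(self, gray):
        # "Feature extraction": per-patch brightness mean and contrast (std)
        p = self.patch
        h, w = (gray.shape[0] // p) * p, (gray.shape[1] // p) * p
        g = gray[:h, :w].reshape(h // p, p, w // p, p)
        return g.mean(axis=(1, 3)), g.std(axis=(1, 3))

    def forward(self, gray):
        mean, std = self.extract(np.asarray(gray, float))
        # "Attention": soft weights toward overexposed / low-texture patches
        attn = 1 / (1 + np.exp(-(mean - 250))) + 1 / (1 + np.exp(std - 2))
        # "Output": map the attended mass to an invalid-area-ratio score
        return float(np.clip(attn, 0, 1).mean())
```

The point of the sketch is only the data flow: patch features in, region-wise attention weights in the middle, a single proportion-style score out.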
Optionally, the invalid region represents a region in which the pixel value change is small, and the invalid region includes at least one of an overexposed region and a low-texture region.
Optionally, the weight assignment module 503 is configured to:
and increasing or decreasing the initial weight of the first vehicle positioning according to the image quality score to obtain the redistributed weight, where the redistributed weight of the first vehicle positioning is inversely proportional to the image quality score, and a larger image quality score represents a higher proportion of invalid area.
Optionally, the redistribution weight is equal to a product of the initial weight and a target score, the target score is a difference between 1 and the image quality score, and a value of the image quality score is between 0 and 1.
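The optional rule above transcribes directly; the function name is illustrative:

```python
def redistribute_weight(initial_weight, image_quality_score):
    """The optional rule stated above: the redistributed weight equals the
    initial weight times the target score (1 - s), with s in [0, 1], so a
    larger invalid-area proportion yields a smaller visual-positioning
    weight. Function name is illustrative, not from the disclosure."""
    return initial_weight * (1.0 - image_quality_score)
```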
Optionally, the apparatus further includes a first positioning module, configured to:
extracting a plurality of real-time visual features and a plurality of map visual features which exist currently from the acquired image, wherein the map visual features have corresponding confidence degrees;
matching the plurality of real-time visual features in the plurality of map visual features, and determining a first real-time visual feature which is successfully matched;
and determining the vehicle positioning with the minimum error as the first vehicle positioning based on the first real-time visual feature, the first map visual feature matched with the first real-time visual feature, the confidence coefficient of the first map visual feature and an error calculation function.
The visual positioning device provided by the embodiment of the disclosure can execute the visual positioning method provided by any embodiment of the disclosure, and has corresponding functional modules and beneficial effects of the execution method.
Fig. 6 is a schematic structural diagram of a visual positioning and mapping apparatus provided in an embodiment of the present disclosure, where the apparatus may be implemented by software and/or hardware, and may be generally integrated in an electronic device. As shown in fig. 6, the apparatus includes:
the image module 601 is used for acquiring a collected image of a vehicle;
a quality module 602, configured to obtain an image quality score of the acquired image, where the image quality score represents a proportion of an invalid area;
a failed matching module 603, configured to extract a plurality of real-time visual features and a plurality of map visual features that are currently available from the acquired image, match the plurality of real-time visual features among the plurality of map visual features, and determine a second real-time visual feature that fails to be matched;
a confidence module 604, configured to set a confidence of the second real-time visual feature according to the image quality score, where the higher the image quality score is, the lower the confidence is;
and a map module 605, configured to input the visual mapping module to create a visual map based on the fusion positioning result of the vehicle, the second real-time visual feature, and the confidence level of the second real-time visual feature.
Optionally, the confidence is equal to a difference between 1 and the image quality score, and a value of the confidence is between 0 and 1.
Optionally, the image quality score is obtained by inputting the acquired image into a pre-trained image quality determination model.
Optionally, the fusion positioning result of the vehicle is obtained by performing positioning fusion according to a third vehicle positioning determined by using a visual positioning manner and a fourth vehicle positioning determined by using at least one other positioning manner;
the apparatus also includes a third positioning module to:
acquiring a third real-time visual feature which is successfully matched;
and determining the third vehicle positioning in a visual positioning manner based on the third real-time visual feature.
Optionally, the third positioning module is configured to:
and determining the vehicle positioning with the minimum error as the third vehicle positioning based on the third real-time visual feature, the second map visual feature matched with the third real-time visual feature, the confidence coefficient of the second map visual feature and an error calculation function, wherein the confidence coefficient of the second map visual feature is set based on the historical image quality score.
Optionally, the apparatus further includes a continuous positioning module, configured to:
after the visual map is created, determining a fused positioning result of the vehicle based on the created visual map.
The visual positioning and mapping device provided by the embodiment of the disclosure can execute the visual positioning and mapping method provided by any embodiment of the disclosure, and has corresponding functional modules and beneficial effects of the execution method.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 7, the electronic apparatus 700 includes a Central Processing Unit (CPU) 701, which can execute various processes in the foregoing embodiments according to a program stored in a Read-Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. The RAM 703 also stores various programs and data necessary for the operation of the electronic apparatus 700. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including components such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, according to embodiments of the present disclosure, the methods described above may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the aforementioned visual positioning method and/or visual positioning and mapping method. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present disclosure may be implemented by software or hardware. The units or modules described may also be provided in a processor, and the names of the units or modules do not in some cases constitute a limitation of the units or modules themselves.
In addition, the present disclosure also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the apparatus in the above embodiment; or it may be a separate computer readable storage medium not incorporated into the device. The computer readable storage medium stores one or more programs for use by one or more processors in performing the visual positioning methods and/or visual positioning and mapping methods described in the present disclosure.
In addition to the above methods and apparatus, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform a visual positioning method and/or a visual positioning and mapping method provided by embodiments of the present disclosure.
The computer program product may write program code for carrying out operations of embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Scheme 1, a visual positioning method, comprising:
acquiring positioning information of a vehicle and corresponding initial weight, and acquiring an image, wherein the positioning information comprises a first vehicle positioning determined by a visual positioning mode and a second vehicle positioning determined by at least one other positioning mode, and each vehicle positioning has corresponding initial weight;
acquiring an image quality score of the acquired image, wherein the image quality score represents the proportion of an invalid area;
redistributing the initial weight of the first vehicle positioning according to the image quality score to obtain a redistributed weight;
and positioning and fusing according to the redistributed weight of the first vehicle positioning and the initial weight of the second vehicle positioning to obtain a fused positioning result of the vehicle.
Scheme 2, according to the method of scheme 1, obtaining the image quality score of the acquired image includes:
and inputting the acquired image into a pre-trained image quality determination model to obtain an image quality score, wherein the image quality determination model is obtained by training a neural-network-based initial model on sample images and annotations of the invalid-region proportion corresponding to the sample images.
Scheme 3, the method according to scheme 2, wherein the image quality determination model includes a feature extraction module, an attention module and an output module, the feature extraction module is used for extracting image features, the attention module is used for performing weight enhancement on invalid regions of the image, and the output module is used for outputting image quality scores determined according to the occupation ratios of the invalid regions.
Scheme 4, the method according to any one of schemes 1 to 3, wherein the invalid region represents a region in which the pixel value change is small, and the invalid region includes at least one of an overexposed region and a low-texture region.
Scheme 5, according to the method of scheme 1, reallocating the initial weight of the first vehicle positioning according to the image quality score, obtaining a reallocated weight, comprising:
and increasing or decreasing the initial weight of the first vehicle positioning according to the image quality score to obtain a redistributed weight, wherein the redistributed weight of the first vehicle positioning is inversely proportional to the image quality score, and a larger image quality score represents a higher proportion of invalid area.
Scheme 6, according to the method of scheme 5, the redistribution weight is equal to the product of the initial weight and a target score, the target score is the difference between 1 and the image quality score, and the value of the image quality score is between 0 and 1.
Scheme 7, the method of scheme 1, the first vehicle location determined by visual location, comprising:
extracting a plurality of real-time visual features and a plurality of map visual features which exist currently from the acquired image, wherein the map visual features have corresponding confidence degrees;
matching the plurality of real-time visual features in the plurality of map visual features, and determining a first real-time visual feature which is successfully matched;
and determining the vehicle positioning with the minimum error as the first vehicle positioning based on the first real-time visual feature, the first map visual feature matched with the first real-time visual feature, the confidence coefficient of the first map visual feature and an error calculation function.
Scheme 8, a visual positioning and mapping method, comprising:
acquiring a collected image of a vehicle;
acquiring an image quality score of the acquired image, wherein the image quality score represents the proportion of an invalid area;
extracting a plurality of real-time visual features and a plurality of map visual features which exist currently from the acquired image, matching the real-time visual features in the map visual features, and determining a second real-time visual feature which fails to be matched;
setting a confidence coefficient of the second real-time visual feature according to the image quality score, wherein the higher the image quality score is, the lower the confidence coefficient is;
and inputting a visual mapping module to create a visual map based on the fusion positioning result of the vehicle, the second real-time visual feature and the confidence coefficient of the second real-time visual feature.
Scheme 9, according to the method of scheme 8, wherein the confidence is equal to the difference between 1 and the image quality score, and the value of the confidence is between 0 and 1.
Scheme 10, according to the method of scheme 8, wherein the image quality score is obtained by inputting the acquired image into a pre-trained image quality determination model.
Scheme 11, according to the method of scheme 8, the fusion positioning result of the vehicle is obtained by performing positioning fusion according to the third vehicle positioning determined by adopting the visual positioning mode and the fourth vehicle positioning determined by at least one other positioning mode;
determining a third vehicle position using a visual positioning method, comprising:
acquiring a third real-time visual feature which is successfully matched;
and determining the third vehicle positioning by adopting a visual positioning mode based on the third real-time visual characteristic.
Scheme 12, the method of scheme 11, determining the third vehicle position using a visual positioning method based on the third real-time visual feature, comprising:
and determining the vehicle positioning with the minimum error as the third vehicle positioning based on the third real-time visual feature, the second map visual feature matched with the third real-time visual feature, the confidence coefficient of the second map visual feature and an error calculation function, wherein the confidence coefficient of the second map visual feature is set based on the historical image quality score.
Scheme 13, the method of scheme 8, further comprising:
after the visual map is created, determining a fused positioning result of the vehicle based on the created visual map.
Scheme 14, a visual positioning apparatus, comprising:
the system comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring positioning information of a vehicle and corresponding initial weight and acquiring an image, the positioning information comprises a first vehicle positioning determined by a visual positioning mode and a second vehicle positioning determined by at least one other positioning mode, and each vehicle positioning has corresponding initial weight;
the image quality module is used for acquiring an image quality score of the acquired image, and the image quality score represents the proportion of the invalid area;
the weight distribution module is used for redistributing the initial weight of the first vehicle positioning according to the image quality score to obtain a redistributed weight;
and the positioning module is used for positioning and fusing according to the redistributed weight of the first vehicle positioning and the initial weight of the second vehicle positioning to obtain a fused positioning result of the vehicle.
Scheme 15, a visual positioning and mapping device, includes:
the image module is used for acquiring a collected image of the vehicle;
the quality module is used for acquiring an image quality score of the acquired image, and the image quality score represents the proportion of the invalid area;
the matching failure module is used for extracting a plurality of real-time visual features and a plurality of map visual features which exist currently from the acquired image, matching the real-time visual features in the map visual features and determining a second real-time visual feature which fails to be matched;
a confidence module for setting a confidence of the second real-time visual feature according to the image quality score, wherein the higher the image quality score is, the lower the confidence is;
and the map module is used for inputting a visual map building module to build a visual map based on the fusion positioning result of the vehicle, the second real-time visual feature and the confidence coefficient of the second real-time visual feature.
Scheme 16, an electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory, and execute the instructions to implement the visual positioning method according to any one of the foregoing schemes 1 to 7, or the visual positioning and mapping method according to any one of the foregoing schemes 8 to 13.
Scheme 17, a computer-readable storage medium storing a computer program for executing the visual positioning method described in any of the above schemes 1 to 7, or the visual positioning and mapping method described in any of the above schemes 8 to 13.
It is noted that, in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description is only for the purpose of describing particular embodiments of the present disclosure, so as to enable those skilled in the art to understand or implement the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A visual positioning method, comprising:
acquiring positioning information of a vehicle and corresponding initial weights, and acquiring an image, wherein the positioning information comprises a first vehicle positioning determined by a visual positioning mode and a second vehicle positioning determined by at least one other positioning mode, and each vehicle positioning has a corresponding initial weight;
acquiring an image quality score of the acquired image, wherein the image quality score represents the proportion of an invalid area;
redistributing the initial weight of the first vehicle positioning according to the image quality score to obtain a redistributed weight;
and performing positioning fusion according to the redistributed weight of the first vehicle positioning and the initial weight of the second vehicle positioning to obtain a fused positioning result of the vehicle.
2. The method of claim 1, wherein obtaining an image quality score for the captured image comprises:
and inputting the acquired image into a pre-trained image quality determination model to obtain the image quality score, wherein the image quality determination model is obtained by training a neural-network-based initial model on sample images and labels marking the proportion of the invalid region corresponding to each sample image.
3. The method according to claim 2, wherein the image quality determination model comprises a feature extraction module for extracting image features, an attention module for enhancing the weight of invalid regions of an image, and an output module for outputting the image quality score determined according to the proportion of the invalid regions.
4. The method according to any one of claims 1 to 3, wherein the invalid region represents a region in which pixel values change little, and the invalid region comprises at least one of an overexposed region and a low-texture region.
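To make the invalid-region notion of claims 3 and 4 concrete, the following is a minimal heuristic sketch, not the patent's trained neural network: it treats a patch with near-zero pixel variance (overexposed or low-texture) as invalid and reports the invalid proportion as the image quality score. The function name, patch size, and variance threshold are all assumptions.

```python
from statistics import pvariance

# Illustrative stand-in for the learned model of claims 3 and 4: a patch
# with near-zero pixel variance (small pixel-value change, e.g. overexposed
# or low-texture) counts as invalid, and the score is the invalid fraction.

def image_quality_score(image, patch=8, var_threshold=4.0):
    """image: 2-D list of grayscale values. Returns the fraction of
    patch x patch tiles whose pixel variance is below var_threshold;
    a higher score means a larger invalid-area proportion."""
    rows, cols = len(image) // patch, len(image[0]) // patch
    invalid = 0
    for i in range(rows):
        for j in range(cols):
            tile = [image[i * patch + r][j * patch + c]
                    for r in range(patch) for c in range(patch)]
            if pvariance(tile) < var_threshold:
                invalid += 1
    return invalid / (rows * cols)

overexposed = [[255.0] * 16 for _ in range(16)]   # every tile is flat
print(image_quality_score(overexposed))           # -> 1.0
```

The patent instead learns this mapping from labeled sample images, with an attention module emphasizing the invalid regions; the heuristic above only illustrates what the score measures.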
5. The method of claim 1, wherein redistributing the initial weight of the first vehicle positioning according to the image quality score to obtain the redistributed weight comprises:
and increasing or decreasing the initial weight of the first vehicle positioning according to the image quality score to obtain the redistributed weight, wherein the redistributed weight of the first vehicle positioning is inversely proportional to the image quality score, and a larger image quality score represents a higher proportion of the invalid area.
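The reallocation and fusion steps of claims 1 and 5 can be sketched as follows; the patent fixes only that the redistributed weight is inversely proportional to the image quality score, so the specific formula `w / (1 + score)` and the function names here are illustrative assumptions.

```python
# Hypothetical sketch of claims 1 and 5. reassign_weight and fuse_positions
# are illustrative names; the inverse-proportional formula is one possible
# choice consistent with "larger score -> lower visual weight".

def reassign_weight(initial_weight: float, image_quality_score: float) -> float:
    """Shrink the visual-positioning weight as the image quality score
    (the invalid-area proportion) grows."""
    return initial_weight / (1.0 + image_quality_score)

def fuse_positions(visual_pos, visual_weight, other_pos, other_weight):
    """Per-coordinate weighted average of the visual fix and the fix from
    another positioning mode (e.g. GNSS or wheel odometry)."""
    total = visual_weight + other_weight
    return tuple((visual_weight * v + other_weight * o) / total
                 for v, o in zip(visual_pos, other_pos))

# With a fully invalid image (score 1.0) the visual weight is halved,
# so the fused result leans toward the second positioning mode.
w = reassign_weight(0.6, 1.0)                          # 0.3
fused = fuse_positions((10.0, 20.0), w, (12.0, 22.0), 0.6)
```

In practice the fusion step could equally be a Kalman-style update with the weights entering as inverse covariances; the weighted average above is only the simplest reading of "positioning fusion".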
6. A visual positioning and mapping method is characterized by comprising the following steps:
acquiring a collected image of a vehicle;
acquiring an image quality score of the acquired image, wherein the image quality score represents the proportion of an invalid area;
extracting a plurality of real-time visual features from the acquired image and acquiring a plurality of currently existing map visual features, matching the real-time visual features among the map visual features, and determining a second real-time visual feature that fails to be matched;
setting a confidence coefficient of the second real-time visual feature according to the image quality score, wherein the higher the image quality score is, the lower the confidence coefficient is;
and inputting the fused positioning result of the vehicle, the second real-time visual feature, and the confidence of the second real-time visual feature into a visual mapping module to create a visual map.
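The confidence assignment of this claim might be sketched as below; the linear `1 - score` mapping, the filtering threshold, and all function names are illustrative assumptions rather than the patent's formula.

```python
# Illustrative sketch of claim 6's confidence rule. The 1 - score mapping,
# the threshold, and the function names are assumptions, not the patent's.

def set_confidence(image_quality_score: float) -> float:
    """Higher image quality score (larger invalid-area proportion) yields
    a lower confidence that a match-failed feature is a genuine landmark."""
    score = min(max(image_quality_score, 0.0), 1.0)
    return 1.0 - score

def select_map_candidates(unmatched_features, image_quality_score,
                          threshold=0.5):
    """Attach the confidence to each match-failed real-time feature so the
    mapping module can down-weight or drop low-confidence features."""
    conf = set_confidence(image_quality_score)
    return [(feature, conf) for feature in unmatched_features
            if conf >= threshold]
```

A mapping back end would then feed the surviving (feature, confidence) pairs, together with the fused positioning result, into the visual map, so that features extracted from largely invalid images contribute less to the map.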
7. A visual positioning device, comprising:
the acquisition module is used for acquiring positioning information of a vehicle and corresponding initial weights and for acquiring an image, wherein the positioning information comprises a first vehicle positioning determined by a visual positioning mode and a second vehicle positioning determined by at least one other positioning mode, and each vehicle positioning has a corresponding initial weight;
the image quality module is used for acquiring an image quality score of the acquired image, and the image quality score represents the proportion of the invalid area;
the weight distribution module is used for redistributing the initial weight of the first vehicle positioning according to the image quality score to obtain a redistributed weight;
and the positioning module is used for performing positioning fusion according to the redistributed weight of the first vehicle positioning and the initial weight of the second vehicle positioning to obtain a fused positioning result of the vehicle.
8. A visual positioning and mapping device, comprising:
the image module is used for acquiring a collected image of the vehicle;
the quality module is used for acquiring an image quality score of the acquired image, and the image quality score represents the proportion of the invalid area;
the matching failure module is used for extracting a plurality of real-time visual features from the acquired image and acquiring a plurality of currently existing map visual features, matching the real-time visual features among the map visual features, and determining a second real-time visual feature that fails to be matched;
a confidence module, configured to set a confidence of the second real-time visual feature according to the image quality score, where the higher the image quality score is, the lower the confidence is;
and the map module is used for inputting the fused positioning result of the vehicle, the second real-time visual feature, and the confidence of the second real-time visual feature into a visual mapping module to create a visual map.
9. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the visual positioning method of any one of the preceding claims 1 to 5 or the visual positioning and mapping method of the preceding claim 6.
10. A computer-readable storage medium, characterized in that the storage medium stores a computer program for executing the visual positioning method of any one of the preceding claims 1-5, or the visual positioning and mapping method of the preceding claim 6.
CN202210593340.4A 2022-05-27 2022-05-27 Visual positioning method, visual positioning and mapping method, device, equipment and medium Pending CN114942031A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210593340.4A CN114942031A (en) 2022-05-27 2022-05-27 Visual positioning method, visual positioning and mapping method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210593340.4A CN114942031A (en) 2022-05-27 2022-05-27 Visual positioning method, visual positioning and mapping method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN114942031A true CN114942031A (en) 2022-08-26

Family

ID=82908254

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210593340.4A Pending CN114942031A (en) 2022-05-27 2022-05-27 Visual positioning method, visual positioning and mapping method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN114942031A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116342449A (en) * 2023-03-29 2023-06-27 银河航天(北京)网络技术有限公司 Image enhancement method, device and storage medium
CN116342449B (en) * 2023-03-29 2024-01-16 银河航天(北京)网络技术有限公司 Image enhancement method, device and storage medium

Similar Documents

Publication Publication Date Title
CN108038474B (en) Face detection method, convolutional neural network parameter training method, device and medium
CN108280477B (en) Method and apparatus for clustering images
US11501162B2 (en) Device for classifying data
WO2012132418A1 (en) Characteristic estimation device
US9330336B2 (en) Systems, methods, and media for on-line boosting of a classifier
CN112232241A (en) Pedestrian re-identification method and device, electronic equipment and readable storage medium
JP6892606B2 (en) Positioning device, position identification method and computer program
CN105608687A (en) Medical image processing method and medical image processing device
TWI803243B (en) Method for expanding images, computer device and storage medium
CN112949519A (en) Target detection method, device, equipment and storage medium
CN114942031A (en) Visual positioning method, visual positioning and mapping method, device, equipment and medium
CN112668608A (en) Image identification method and device, electronic equipment and storage medium
CN114255381B (en) Training method of image recognition model, image recognition method, device and medium
CN115409896A (en) Pose prediction method, pose prediction device, electronic device and medium
CN112926496B (en) Neural network for predicting image definition, training method and prediction method
CN112967293A (en) Image semantic segmentation method and device and storage medium
CN113139612A (en) Image classification method, training method of classification network and related products
CN116957024A (en) Method and device for reasoning by using neural network model
CN116612382A (en) Urban remote sensing image target detection method and device
CN113420824B (en) Pre-training data screening and training method and system for industrial vision application
CN111881833B (en) Vehicle detection method, device, equipment and storage medium
CN117173075A (en) Medical image detection method and related equipment
CN114445649A (en) Method for detecting RGB-D single image shadow by multi-scale super-pixel fusion
CN114359700A (en) Data processing method and device, electronic equipment and storage medium
CN113792733B (en) Vehicle part detection method, system, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination