CN111583343B - Visual positioning method, related device, equipment and storage medium - Google Patents
- Publication number: CN111583343B (application CN202010556359.2A)
- Authority: CN (China)
- Prior art keywords: image, processed, information, preset, visual positioning
- Legal status: Active (an assumption by Google Patents, not a legal conclusion; no legal analysis has been performed)
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
      - G06T7/00—Image analysis
        - G06T7/70—Determining position or orientation of objects or cameras
          - G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
      - H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
        - H04N23/70—Circuitry for compensating brightness variation in the scene
          - H04N23/71—Circuitry for evaluating the brightness variation
Abstract
The application discloses a visual positioning method and a related apparatus, device, and storage medium. The visual positioning method comprises: acquiring the illumination intensity of the shooting environment in which the imaging device is located; if the illumination intensity meets a preset illumination condition, acquiring an image to be processed that is shot by the imaging device in the shooting environment; and obtaining position information of the imaging device based on the image to be processed. By means of this scheme, the success rate and accuracy of visual positioning can be improved.
Description
Technical Field
The present application relates to the field of computer vision, and in particular, to a visual positioning method, and related apparatus, device, and storage medium.
Background
With the development of information technology and electronic technology, people increasingly use electronic devices such as mobile phones and tablet computers to capture images and obtain visual positioning services on that basis. Taking AR (Augmented Reality) navigation as an example of a visual positioning service: AR navigation guides the user based on the real picture captured by the imaging device, which can effectively reduce the probability of driving errors and thereby improve traffic efficiency and safety, so it is attracting attention in the industry.
However, in practical applications, the quality of the images shot by the imaging device is often uneven, so the success rate and accuracy of visual positioning cannot be effectively ensured, which affects the user experience. In view of this, how to improve the success rate and accuracy of visual positioning is a problem to be solved.
Disclosure of Invention
The application provides a visual positioning method, a related device, equipment and a storage medium.
A first aspect of the present application provides a visual positioning method, comprising: acquiring the illumination intensity of the shooting environment in which the imaging device is located; if the illumination intensity meets a preset illumination condition, acquiring an image to be processed that is shot by the imaging device in the shooting environment; and obtaining position information of the imaging device based on the image to be processed.
Therefore, the illumination intensity of the shooting environment in which the imaging device is located is acquired, and only when the illumination intensity meets the preset illumination condition is the image to be processed acquired for obtaining the position information of the imaging device. This ensures that the image to be processed is shot in a shooting environment that meets the illumination condition, which effectively improves the image quality of the image used for visual positioning and thus the success rate and accuracy of visual positioning.
The preset illumination condition is that the illumination intensity is greater than or equal to a preset intensity threshold.
Therefore, the image to be processed for visual positioning is guaranteed to be shot in a shooting environment whose illumination intensity is greater than or equal to the intensity threshold, which effectively ensures its image quality and further improves the success rate and accuracy of visual positioning.
Before the illumination intensity of the shooting environment in which the imaging device is located is acquired, the method further comprises: acquiring the included angle between the optical axis of the imaging device and the horizontal plane; and if the included angle is smaller than a preset angle threshold, executing the step of acquiring the illumination intensity of the shooting environment and the subsequent steps.
Therefore, before the illumination intensity is acquired, the included angle between the optical axis of the imaging device and the horizontal plane is acquired, and the illumination-intensity step is executed only when the included angle is smaller than the preset angle threshold. This effectively prevents the image to be processed from being mostly a picture of the ground, ensures the image quality of the image used for visual positioning, and improves the success rate and accuracy of visual positioning.
If the illumination intensity does not meet the preset illumination condition, first prompt information is output, where the first prompt information is used to prompt the user to change the shooting environment in which the imaging device is located; the step of acquiring the included angle between the optical axis of the imaging device and the horizontal plane is then re-executed.
Therefore, when the illumination intensity does not meet the preset illumination condition, the first prompt information prompts the user to move the imaging device to a different shooting environment, and the step of acquiring the included angle between the optical axis and the horizontal plane is re-executed, which improves both the user experience and the robustness of visual positioning.
If the first prompt information has been output continuously for a first preset duration, second prompt information is output, where the second prompt information is used to indicate that positioning has failed.
Therefore, when the first prompt information has been output continuously for the first preset duration, the second prompt information is output to indicate that positioning has failed, which improves the robustness of visual positioning.
When the included angle is greater than or equal to the preset angle threshold, third prompt information is output to prompt the user to reduce the included angle between the optical axis of the imaging device and the horizontal plane; the step of acquiring the included angle is then re-executed.
Therefore, when the included angle is greater than or equal to the preset angle threshold, the third prompt information prompts the user to reduce the included angle, and the angle-acquisition step is re-executed. This effectively prevents the image to be processed from being mostly a picture of the ground, ensures the image quality of the image used for visual positioning, and improves the success rate, accuracy, and robustness of visual positioning.
After the image to be processed is acquired and before the position information is obtained, the method further comprises: acquiring feature information of the image to be processed; scoring the feature information in a preset scoring mode to obtain a feature score that represents the feature richness of the image to be processed; executing the step of obtaining the position information of the imaging device when the feature score is greater than or equal to a preset score threshold; and, when the feature score is smaller than the preset score threshold, outputting fourth prompt information to prompt the user to adjust the shooting picture of the imaging device and re-executing the step of acquiring the included angle between the optical axis of the imaging device and the horizontal plane.
Therefore, scoring the feature information of the image to be processed yields a feature score that represents its feature richness. When the score is greater than or equal to the preset score threshold, positioning proceeds based on the image; otherwise, the fourth prompt information asks the user to adjust the shooting picture and the angle-acquisition step is re-executed. This ensures the feature richness of the image to be processed, effectively improves its image quality for visual positioning, and improves the success rate, accuracy, and robustness of visual positioning.
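The feature-scoring step can be sketched as follows. This is a minimal stand-in: the patent does not specify the "preset scoring mode", so the fraction-of-strong-gradients heuristic, the threshold values, and the function names here are all illustrative assumptions (a real system might count detected keypoints or use a learned scorer instead).

```python
def feature_score(gradient_magnitudes, strong_threshold=30.0):
    # Score feature richness as the fraction of pixels whose gradient
    # magnitude exceeds a threshold; an illustrative stand-in for the
    # unspecified "preset scoring mode".
    if not gradient_magnitudes:
        return 0.0
    strong = sum(1 for g in gradient_magnitudes if g >= strong_threshold)
    return strong / len(gradient_magnitudes)

# A textured image (many strong gradients) scores above a blank wall.
textured = feature_score([50.0] * 80 + [5.0] * 20)
blank = feature_score([5.0] * 100)
score_threshold = 0.3  # the "preset score threshold" (illustrative value)
proceed_to_localization = textured >= score_threshold
```

A score below `score_threshold` would instead trigger the fourth prompt information and return to the angle-acquisition step.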
If the fourth prompt information has been output continuously for a second preset duration, fifth prompt information is output to indicate that positioning has failed.
Therefore, when the fourth prompt information has been output continuously for the second preset duration, the fifth prompt information is output to indicate that positioning has failed, which improves the robustness of visual positioning.
Obtaining the position information of the imaging device based on the image to be processed includes: acquiring image information of the image to be processed; sending the image information to a server for positioning processing; and receiving the position information of the imaging device that the server obtains by processing the image information.
Therefore, the image information of the image to be processed is sent to the server for positioning processing, and the position information obtained by the server is received, which reduces the processing load of the front-end electronic device.
The image information includes: the image data of the image to be processed, the width and height information of the image to be processed, and the focal length information adopted by the imaging device when shooting the image to be processed.
Therefore, packaging the image data, the width and height information, and the focal length information as the image information provides the server with what it needs for visual positioning, ensuring the accuracy of the server's positioning processing and thereby the accuracy of visual positioning.
A second aspect of the present application provides a visual positioning apparatus comprising a first acquisition module, a second acquisition module, and a third acquisition module. The first acquisition module is configured to acquire the illumination intensity of the shooting environment in which the imaging device is located; the second acquisition module is configured to acquire the image to be processed that is shot by the imaging device in the shooting environment when the illumination intensity meets the preset illumination condition; and the third acquisition module is configured to obtain the position information of the imaging device based on the image to be processed.
A third aspect of the present application provides an electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the visual positioning method of the first aspect.
A fourth aspect of the present application provides a computer readable storage medium having stored thereon program instructions which, when executed by a processor, implement the visual positioning method of the first aspect described above.
According to the above scheme, the illumination intensity of the shooting environment in which the imaging device is located is acquired, and only when the illumination intensity meets the preset illumination condition is the image to be processed acquired for obtaining the position information of the imaging device. This ensures that the image to be processed is shot in a shooting environment that meets the illumination condition, which effectively improves the image quality of the image used for visual positioning and thus the success rate and accuracy of visual positioning.
Drawings
FIG. 1 is a flow chart of an embodiment of a visual positioning method according to the present application;
FIG. 2 is a flow chart of another embodiment of the visual positioning method of the present application;
FIG. 3 is a flow chart of a visual positioning method according to another embodiment of the present application;
FIG. 4 is a schematic diagram of a frame of an embodiment of a visual positioning apparatus of the present application;
FIG. 5 is a schematic diagram of a frame of an embodiment of an electronic device of the present application;
FIG. 6 is a schematic diagram of a frame of one embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes embodiments of the present application in detail with reference to the drawings.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, interfaces, techniques, etc., in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may represent: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects before and after it. Further, "a plurality" herein means two or more.
Referring to fig. 1, fig. 1 is a flow chart illustrating an embodiment of a visual positioning method according to the present application. Specifically, the method may include the steps of:
Step S11: the illumination intensity of the shooting environment in which the image pickup device is located is obtained.
In the embodiments of the present disclosure, the imaging device may be a mobile terminal with an integrated camera, such as a mobile phone or a tablet computer, or may be a vehicle navigation device to which a camera is connected; this is not limited herein.
The shooting environment may include, but is not limited to: shopping malls, pedestrian streets, highways, and urban roads. In one implementation scenario, the imaging device may be integrated with a photosensitive sensor that senses the illumination intensity of the shooting environment, so the illumination intensity can be obtained quickly and accurately. In another implementation scenario, the image to be processed that is shot by the imaging device in the current shooting environment may be obtained in advance and converted into a grayscale image; the average gray value of at least some pixels (such as background pixels) in the grayscale image is computed, and the illumination intensity is determined through a mapping relationship between gray value and illumination intensity. In this way, the illumination intensity can be obtained without additional hardware, which helps reduce cost. Neither approach is limiting here.
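The gray-value route can be sketched as follows. The linear gray-to-lux mapping and the `lux_per_gray_level` constant are placeholder assumptions: the text only requires *some* mapping between gray value and illumination intensity, which a real system would calibrate per camera sensor.

```python
def estimate_illumination(gray_pixels, lux_per_gray_level=2.0):
    # Estimate scene illumination from the mean gray value of (part of)
    # a grayscale image. The linear mapping is an illustrative
    # assumption, not a calibrated model.
    if not gray_pixels:
        raise ValueError("no pixels to average")
    mean_gray = sum(gray_pixels) / len(gray_pixels)
    return mean_gray * lux_per_gray_level

# A bright frame yields a higher illumination estimate than a dark one.
bright_lux = estimate_illumination([200] * 100)  # mean gray 200
dark_lux = estimate_illumination([30] * 100)     # mean gray 30
```

The returned estimate would then be compared against the preset intensity threshold in step S12.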
In yet another implementation scenario, the acquisition of the illumination intensity and the subsequent steps may be restarted when user-triggered visual positioning is detected. Specifically, the user may trigger visual positioning by pressing a physical or virtual key of an electronic device such as a mobile terminal or vehicle navigation device, or by a voice command; this is not limited herein.
Step S12: and judging whether the illumination intensity meets the preset illumination condition, if so, executing the step S13.
In one implementation scenario, in order to further ensure the image quality of the image to be processed and improve the success rate and accuracy of visual positioning, the preset illumination condition may be set such that the illumination intensity is greater than or equal to a preset intensity threshold. When the illumination intensity is greater than or equal to the threshold, the image to be processed shot in the current environment can be considered a high-quality image; otherwise, it can be considered a low-quality image.
Step S13: and acquiring an image to be processed, which is shot by the imaging device in a shooting environment.
When the illumination intensity meets the preset illumination condition, the image to be processed shot in the current environment is, with high probability, a high-quality image, so the image shot by the imaging device in the shooting environment can be acquired for subsequent visual positioning.
Step S14: positional information of the image pickup device is obtained based on the image to be processed.
In one implementation scenario, three-dimensional reconstruction may be performed in advance from a time series of two-dimensional images using a preset three-dimensional reconstruction method to obtain a three-dimensional model, and the position information of the imaging device is then determined based on the image to be processed and the three-dimensional model. The preset three-dimensional reconstruction method may be the SFM (Structure From Motion) algorithm: SFM extracts features from the two-dimensional images and registers them, estimates the camera parameters through global optimization, and finally fuses the data to construct the three-dimensional model.
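Localizing against a prebuilt model can be illustrated with a toy example: given map points from a prior reconstruction and their observed pixel positions, the camera pose is the one that minimizes reprojection error. The pinhole model and the one-parameter brute-force search below (over camera x-translation only) are deliberate simplifications of real PnP solvers; all names and numbers are illustrative.

```python
def project(pt, focal=500.0, cx=320.0, cy=240.0):
    # Pinhole projection of a camera-frame 3D point to pixel coordinates.
    x, y, z = pt
    return (focal * x / z + cx, focal * y / z + cy)

# 3D map points from a prior SFM-style reconstruction (world frame).
map_pts = [(0.0, 0.0, 4.0), (1.0, 0.0, 5.0), (-1.0, 0.5, 6.0)]
true_tx = 0.4  # the unknown camera x-translation to recover
observed = [project((x - true_tx, y, z)) for x, y, z in map_pts]

def reprojection_error(tx):
    # Sum of squared pixel errors for a candidate camera translation.
    err = 0.0
    for (x, y, z), (u, v) in zip(map_pts, observed):
        pu, pv = project((x - tx, y, z))
        err += (pu - u) ** 2 + (pv - v) ** 2
    return err

# Brute-force search over candidates: a toy stand-in for a PnP solver.
best_tx = min((i / 100.0 for i in range(-100, 101)), key=reprojection_error)
```

A production system would solve for the full six-degree-of-freedom pose from 2D–3D feature matches rather than a single translation component.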
In one implementation scenario, in order to reduce the processing load of the front-end electronic device, the image information of the image to be processed may be acquired and sent to a server for positioning processing, and the position information of the imaging device obtained by the server is then received. In a specific implementation scenario, in order to ensure the accuracy of the server's positioning processing and thereby the accuracy of visual positioning, the image information may include, but is not limited to: the image data of the image to be processed, its width and height information, and the focal length information adopted by the imaging device when shooting it.
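The image-information package described above might be serialized as follows. The JSON encoding and field names are illustrative assumptions: the patent names the three pieces of information but does not define a wire format or server API.

```python
import base64
import json

def build_image_payload(jpeg_bytes, width, height, focal_length_px):
    # Package the three pieces of image information named in the text:
    # the image data, the width/height, and the capture focal length.
    # Field names and JSON encoding are illustrative, not a documented API.
    return json.dumps({
        "image": base64.b64encode(jpeg_bytes).decode("ascii"),
        "width": width,
        "height": height,
        "focal_length": focal_length_px,
    })

# Tiny stand-in for real JPEG bytes; real values would come from the camera.
payload = build_image_payload(b"\xff\xd8\xff", 1920, 1080, 1450.0)
```

The server would decode this payload, run positioning, and return the position information to the front end.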
In one implementation scenario, to assist positioning indoors, Bluetooth beacons may be installed on different floors. Each beacon transmits a Bluetooth signal, so the floor on which the imaging device is located can be determined from the signal strength of the received Bluetooth signals and a signal identifier that indicates which beacon sent the signal; the signal identifier may be a UUID (Universally Unique Identifier). For example, suppose Bluetooth beacon A is installed on floor 1 and Bluetooth beacon B on floor 2, with the same preset output power. If only the signal from beacon A is received, the imaging device can be determined to be on floor 1; if only the signal from beacon B is received, it can be determined to be on floor 2. If signals from both beacons are received, their signal strengths can be compared: if the signal from beacon A is stronger than that from beacon B, the imaging device is on floor 1, and otherwise on floor 2. In addition, to further improve the accuracy of Bluetooth-assisted positioning, several beacons may be installed on the same floor, and the floor is determined by combining the signals received from them; the examples here are not exhaustive.
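The strongest-beacon rule above can be sketched as follows. Function and identifier names are illustrative; RSSI values are in dBm, where values closer to zero are stronger.

```python
def floor_from_beacons(readings, beacon_floor):
    # readings: list of (beacon_id, rssi_dbm); beacon_floor maps a
    # beacon's identifier (e.g. its UUID) to the floor it is on.
    # Returns the floor of the strongest received beacon, or None
    # when no beacon signal was received.
    if not readings:
        return None
    strongest_id, _ = max(readings, key=lambda r: r[1])
    return beacon_floor.get(strongest_id)

beacon_floor = {"uuid-A": 1, "uuid-B": 2}
# Beacon A received at -60 dBm, beacon B at -75 dBm: A is stronger.
floor = floor_from_beacons([("uuid-A", -60), ("uuid-B", -75)], beacon_floor)
```

With several beacons per floor, the same rule could be applied after aggregating the readings per floor (e.g. taking each floor's maximum RSSI).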
In one implementation scenario, the embodiments of the present disclosure may be executed by a mobile terminal with an integrated camera, such as a mobile phone or tablet computer, or by a vehicle navigation device connected to a camera. When executed by such an electronic device, a route may be planned based on the obtained position information and the destination information input by the user, and navigation prompt information may be output. For example, the image to be processed shot by the imaging device is displayed on the screen of the electronic device, and a navigation mark (such as a straight-ahead, left-turn, right-turn, or U-turn mark) is displayed superimposed on it; the navigation mark may be represented by an arrow, and is not limited herein.
In one implementation scenario, the illumination intensity of the shooting environment may fail to meet the preset illumination condition: for example, the imaging device is on the stairway of a shopping mall, on a rural road at night, or in a poorly lit underground parking lot. Because the illumination intensity is weak, the image shot in such an environment is, with high probability, a low-quality image that cannot support visual positioning. To improve the user experience and the robustness of visual positioning, the following steps S15 and S16 may then be executed.
Step S15: and outputting first prompt information to prompt the shooting environment in which the image pickup device is replaced.
In one embodiment, to enhance the user experience, the first prompt information may be an animation, for example one that switches from a night scene to a daytime scene, prompting the user to move the imaging device to a shooting environment with good lighting. The first prompt information may also be a voice message, for example broadcasting "please move the imaging device to a place with sufficient light", or a text message, for example a text box displaying the same sentence; this is not limited herein. In a specific implementation scenario, if the first prompt information has been output continuously for a first preset duration (for example 15, 20, or 25 seconds), it can be considered that the illumination intensity still does not meet the preset illumination condition after the shooting environment has been adjusted repeatedly, or that the user has not responded to the first prompt information within that duration; to improve the robustness of visual positioning, second prompt information may then be output to indicate that positioning has failed. In addition, when the second prompt information is output, visual positioning may exit directly; when the user is detected to start visual positioning again, step S11 is executed again, so that visual positioning restarts from step S11.
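The first-prompt timeout can be modeled as a small state machine. The 15-second default matches one of the example durations in the text; the class name and the injectable clock are illustrative choices that exist only to make the sketch testable.

```python
import time

class PromptTimeout:
    # Tracks how long the "change environment" prompt has been showing
    # and reports positioning failure once a preset duration is exceeded.
    def __init__(self, limit_s=15.0, clock=time.monotonic):
        self.limit_s = limit_s
        self.clock = clock
        self.first_prompt_at = None

    def on_prompt_shown(self):
        # Start timing when the first prompt information begins to show.
        if self.first_prompt_at is None:
            self.first_prompt_at = self.clock()

    def on_condition_met(self):
        # Illumination condition satisfied: the prompt is dismissed.
        self.first_prompt_at = None

    def positioning_failed(self):
        return (self.first_prompt_at is not None
                and self.clock() - self.first_prompt_at >= self.limit_s)
```

When `positioning_failed()` returns True, the caller would output the second prompt information and exit visual positioning, as described above.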
Step S16: step S11 is re-executed.
After the first prompt information is output to prompt the user to change the shooting environment, step S11 may be re-executed, so that visual positioning restarts from step S11. In a specific implementation scenario, to ensure that the shooting environment obtained on the next attempt is the environment after the change made in response to the first prompt information, a preset waiting period (for example 2 or 4 seconds) may elapse after the first prompt information is output before step S11 is executed again.
According to the above scheme, the illumination intensity of the shooting environment in which the imaging device is located is acquired, and only when the illumination intensity meets the preset illumination condition is the image to be processed acquired for obtaining the position information of the imaging device. This ensures that the image to be processed is shot in a shooting environment that meets the illumination condition, which effectively improves the image quality of the image used for visual positioning and thus the success rate and accuracy of visual positioning.
Referring to fig. 2, fig. 2 is a flow chart of another embodiment of the visual positioning method of the present application. To further ensure the image quality of the image to be processed, a preprocessing procedure may be performed first. In the embodiments of the present disclosure, this preprocessing may include detecting the included angle between the optical axis of the imaging device and the horizontal plane, so that the imaging device shoots as little of the ground as possible, improving the image quality and thereby the success rate and accuracy of visual positioning. Specifically, the embodiments of the present disclosure may include the following steps:
step S201: an included angle between an optical axis of the image pickup device and a horizontal plane is obtained.
The optical axis of the imaging device is the center line of the light beam passing through the center of the lens; rotating the device around the optical axis does not change its optical characteristics. Taking a mobile phone as an example: when the phone screen is parallel to the horizontal plane, the optical axis of the camera is perpendicular to the horizontal plane, i.e. the included angle between the optical axis and the horizontal plane is 90 degrees; when the screen is perpendicular to the horizontal plane, the optical axis is parallel to the horizontal plane, i.e. the included angle is 0 degrees. Therefore, to ensure that the image to be processed contains as little of the ground as possible and so has good image quality, the included angle between the optical axis and the horizontal plane should be as close to 0 degrees as possible.
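The included angle can be derived from the device's gravity vector. The assumption that the rear camera's optical axis points along the device -z axis follows a common mobile sensor convention and is stated here as an assumption, not something the text specifies.

```python
import math

def optical_axis_angle_deg(gravity):
    # Angle (degrees) between the camera optical axis and the horizontal
    # plane, assuming the optical axis is the device -z axis and
    # `gravity` is the gravity vector in device coordinates.
    gx, gy, gz = gravity
    # The tilt is the angle between gravity's component along the
    # optical axis and its in-plane component; atan2 avoids domain
    # issues near +/-90 degrees.
    return math.degrees(math.atan2(abs(gz), math.hypot(gx, gy)))

# Phone held upright (screen vertical): gravity along -y, angle 0 degrees,
# so the angle-threshold check of step S202 would pass.
upright = optical_axis_angle_deg((0.0, -9.81, 0.0))
# Phone lying flat on a table: gravity along -z, angle 90 degrees.
flat = optical_axis_angle_deg((0.0, 0.0, -9.81))
```

The result would then be compared against the preset angle threshold (e.g. 30, 20, or 10 degrees) before proceeding to the illumination check.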
In one disclosed implementation scenario, the step of acquiring the included angle between the optical axis of the camera device and the horizontal plane may be started when it is detected that the user triggers visual positioning, so that visual positioning can be performed. Specifically, the user may trigger visual positioning by pressing a physical key or a virtual key of an electronic device such as a mobile terminal or a vehicle navigation device, or by a voice command, which is not limited herein.
Step S202: and judging whether the included angle is smaller than a preset angle threshold, if so, executing step S203, otherwise, executing step S207.
The preset angle threshold may be set according to the practical situation; for example, it may be 30 degrees, 20 degrees, 10 degrees, and the like, which is not limited herein. When the included angle is smaller than the preset angle threshold, the imaging device can be considered to capture little or none of the ground, so the image to be processed is likely to be of high quality; otherwise, the imaging device can be considered to capture a larger portion of the ground, so the image to be processed is likely to be of low quality.
Step S203: the illumination intensity of the shooting environment in which the image pickup device is located is obtained.
Reference may be made in particular to the relevant steps of the previous embodiments.
Step S204: and judging whether the illumination intensity meets the preset illumination condition, if so, executing the step S205, otherwise, executing the step S209.
Reference may be made in particular to the relevant steps of the previous embodiments.
Step S205: and acquiring an image to be processed, which is shot by the imaging device in a shooting environment.
Reference may be made in particular to the relevant steps of the previous embodiments.
Step S206: positional information of the image pickup device is obtained based on the image to be processed.
Reference may be made in particular to the relevant steps of the previous embodiments.
Step S207: and outputting first prompt information to prompt reduction of an included angle between the optical axis of the image pickup device and the horizontal plane.
In one disclosed implementation scenario, in order to enhance the user experience, the first prompt information may be an animation for visually prompting the user to reduce the included angle between the optical axis of the image capturing device and the horizontal plane. Taking a mobile phone as an example, the animation may show the mobile phone gradually rotating from a state where its screen is parallel to the ground to a state where its screen is perpendicular to the ground. In addition, the first prompt information may be a voice message, for example, broadcasting "please do not aim the camera at the ground"; or it may be a text message, for example, a text box displaying "please do not aim the camera at the ground", which is not limited herein. In a specific disclosed implementation scenario, if the first prompt information has been continuously output for a preset duration (for example, 15 seconds, 20 seconds, 25 seconds, etc.), it may be considered that the imaging device has been adjusted multiple times within the preset duration and still has not reached a state in which the included angle between the optical axis and the horizontal plane is smaller than the preset angle threshold, or that the user has not responded to the first prompt information by adjusting the included angle within the preset duration; in order to improve the robustness of visual positioning, second prompt information may then be output to prompt that positioning has failed. In addition, when the second prompt information is output, the visual positioning may be exited directly, and when it is detected that the user starts visual positioning again, the above step S201 is executed again, so that visual positioning resumes from step S201.
Step S208: step S201 is re-executed.
After outputting the first prompt information to prompt reduction of the included angle between the optical axis of the image pickup device and the horizontal plane, the above step S201 may be re-executed, so that visual positioning is performed again from step S201. In a specific disclosed implementation scenario, in order to ensure that the newly acquired included angle is the angle after adjustment in response to the first prompt information, a preset waiting period (for example, 2 seconds, 4 seconds, etc.) may elapse after the first prompt information is output before step S201 is executed again.
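The "prompt, wait, retry, give up" behaviour of steps S207 and S208 can be sketched as below. The function names, prompt strings, and threshold values are illustrative assumptions; the injectable `now`/`sleep` parameters are only there to make the sketch testable.

```python
import time

ANGLE_THRESHOLD_DEG = 30.0   # preset angle threshold (example value)
WAIT_SECONDS = 2.0           # preset waiting period after a prompt
TIMEOUT_SECONDS = 20.0       # preset duration before reporting failure

def wait_for_valid_angle(read_angle, show_prompt,
                         now=time.monotonic, sleep=time.sleep):
    """Re-check the optical-axis angle until it passes, times out, or fails."""
    deadline = now() + TIMEOUT_SECONDS
    while now() < deadline:
        if read_angle() < ANGLE_THRESHOLD_DEG:
            return True                        # proceed to the illumination check
        # First prompt information: ask the user to raise the camera.
        show_prompt("please do not aim the camera at the ground")
        sleep(WAIT_SECONDS)                    # give the user time to adjust
    show_prompt("positioning failed")          # second prompt information
    return False
```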
Step S209: and outputting third prompt information to prompt the shooting environment in which the imaging device is replaced.
In one disclosed implementation scenario, in order to enhance the user experience, the third prompt information may be an animation for visually prompting replacement of the shooting environment in which the image capturing device is located; for example, the animation may show a switch from a night scene to a daytime scene, prompting the user to move the image capturing device to a shooting environment with better illumination conditions. Alternatively, the third prompt information may be a text message, for example, a text box displaying "please move the camera to a place with sufficient light", which is not limited herein. In a specific disclosed implementation scenario, if the third prompt information has been continuously output for a first preset duration (for example, 15 seconds, 20 seconds, 25 seconds, etc.), it may be considered that the illumination intensity still does not meet the preset illumination condition after the shooting environment has been adjusted multiple times, or that the user has not responded to the third prompt information by replacing the shooting environment within the first preset duration; in order to improve the robustness of visual positioning, fourth prompt information may then be output to prompt that positioning has failed. In addition, when the fourth prompt information is output, the visual positioning may be exited directly, and when it is detected that the user starts visual positioning again, the above step S201 is executed again, so that visual positioning resumes from step S201.
Step S210: step S201 is re-executed.
After outputting the third prompt information to prompt replacement of the shooting environment in which the image pickup device is located, the above step S201 may be re-executed, so that visual positioning is performed again from step S201. In a specific disclosed implementation scenario, in order to ensure that the newly acquired illumination intensity is that of the adjusted shooting environment, a preset waiting period (for example, 2 seconds, 4 seconds, etc.) may elapse after the third prompt information is output before step S201 is executed again.
Different from the above disclosed embodiments, by acquiring the included angle between the optical axis of the image capturing device and the horizontal plane before acquiring the illumination intensity of the shooting environment in which the image capturing device is located, and executing the step of acquiring the illumination intensity only when the included angle is smaller than the preset angle threshold, the image to be processed captured by the image capturing device can be effectively prevented from containing the ground, so that the image quality of the image to be processed for visual positioning can be effectively ensured, and the success rate and accuracy of visual positioning can be improved.
Referring to fig. 3, fig. 3 is a flowchart illustrating a visual positioning method according to another embodiment of the application. Specifically, to further ensure the image quality of the image to be processed for visual positioning, a related preprocessing procedure may also be performed. In the embodiments of the present disclosure, this preprocessing may include detecting the included angle between the optical axis of the image capturing device and the horizontal plane, as well as detecting the feature richness of the captured image to be processed, so that the image capturing device captures as little of the ground as possible and the feature richness of the image to be processed is as high as possible; this improves the image quality, thereby improving the success rate and accuracy of visual positioning. Specifically, the embodiments of the present disclosure may include the following steps:
Step S301: an included angle between an optical axis of the image pickup device and a horizontal plane is obtained.
In one disclosed implementation scenario, the step of acquiring the included angle between the optical axis of the camera device and the horizontal plane may be started when it is detected that the user triggers visual positioning, so that visual positioning can be performed. Reference may be made in particular to the relevant steps of the previous embodiments.
Step S302: and judging whether the included angle is smaller than a preset angle threshold, if so, executing the step S303, otherwise, executing the step S310.
Reference may be made in particular to the relevant steps of the previous embodiments.
Step S303: the illumination intensity of the shooting environment in which the image pickup device is located is obtained.
Reference may be made in particular to the relevant steps of the previous embodiments.
Step S304: and judging whether the illumination intensity meets the preset illumination condition, if so, executing the step S305, otherwise, executing the step S312.
Reference may be made in particular to the relevant steps of the previous embodiments.
Step S305: and acquiring an image to be processed, which is shot by the imaging device in a shooting environment.
Reference may be made in particular to the relevant steps of the previous embodiments.
Step S306: and acquiring characteristic information of the image to be processed.
In one disclosed implementation scenario, the feature information of the image to be processed may include the information entropy of the image to be processed. Specifically, the image to be processed may be converted into a gray-scale image, the proportion of pixels taking each pixel value (for example, 0-255) relative to the total number of pixels may be counted, and the proportions may be processed with a preset function to obtain the information entropy of the image to be processed.
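A minimal sketch of this step, assuming the preset function is the Shannon entropy formula (the disclosure does not name the function, so this choice is an illustrative assumption):

```python
import numpy as np

def image_entropy(gray):
    """Shannon entropy (bits) of an 8-bit gray-scale image: count the
    proportion of each pixel value 0-255, then apply -sum(p * log2(p))."""
    hist = np.bincount(np.asarray(gray, dtype=np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()          # proportion of each pixel value
    p = p[p > 0]                   # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())
```

A constant image carries no information and scores 0 bits, while an image using all 256 gray levels equally often scores the maximum of 8 bits, matching the intuition that higher entropy means richer features.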
In another disclosed implementation scenario, the feature information of the image to be processed may also include the number of categories of target objects (for example, guideboards, trees, buildings, lamp posts, billboards, etc.) contained in the image to be processed. Specifically, a trained neural network model may be used to detect the image to be processed, so as to obtain the number of categories of target objects it contains.
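Once a detector has produced per-object results, counting the categories reduces to counting distinct labels. The sketch below assumes a hypothetical detector output format of `(label, confidence)` pairs and an illustrative confidence cutoff; neither is specified by the disclosure.

```python
# Example target-object categories from the paragraph above.
TARGET_CLASSES = {"guideboard", "tree", "building", "lamp post", "billboard"}

def category_count(detections, min_confidence=0.5):
    """Number of distinct target-object categories among the detections.

    `detections` is assumed to be a list of (label, confidence) pairs
    returned by a trained detection model for one image.
    """
    return len({label for label, conf in detections
                if label in TARGET_CLASSES and conf >= min_confidence})
```

For instance, an image with two trees, one billboard, and one out-of-scope detection contributes two categories, regardless of how many individual objects were found.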
In still another disclosed implementation scenario, the feature information of the image to be processed may include both the information entropy of the image to be processed and the number of categories of target objects in it. Specifically, a weight value may be set for each of the information entropy and the category number, the two may be weighted with their respective weight values, and the weighted result may be used as the feature information of the image to be processed.
Step S307: and scoring the characteristic information of the image to be processed by adopting a preset scoring mode to obtain the characteristic score of the image to be processed.
In the embodiments of the disclosure, the feature score is used to represent the feature richness of the image to be processed. For example, when the feature information includes the information entropy of the image to be processed, a mapping relationship between the information entropy and the feature score may be preset, where the larger the information entropy, the larger the feature score; when the feature information includes the number of categories of target objects in the image to be processed, a mapping relationship between the category number and the feature score may be preset, where the larger the category number, the larger the feature score; and when the feature information includes the weighted result of the information entropy and the category number, a mapping relationship between the weighted result and the feature score may be preset, where the larger the weighted result, the larger the feature score, which is not limited herein.
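Steps S306 and S307 can be combined into one monotone scoring function. The weight values, normalisation constants, and 0-100 scale below are all illustrative assumptions; the disclosure only requires that the mapping be monotone in each input.

```python
ENTROPY_WEIGHT = 0.7      # example weight value for the information entropy
CATEGORY_WEIGHT = 0.3     # example weight value for the category count
MAX_ENTROPY = 8.0         # upper bound for an 8-bit gray-scale image
MAX_CATEGORIES = 5.0      # example normalisation for the category count

def feature_score(entropy_bits, num_categories):
    """Map the weighted feature information to a 0-100 feature score.

    Monotone by construction: a larger entropy or a larger category
    count never decreases the score.
    """
    weighted = (ENTROPY_WEIGHT * min(entropy_bits, MAX_ENTROPY) / MAX_ENTROPY
                + CATEGORY_WEIGHT * min(num_categories, MAX_CATEGORIES) / MAX_CATEGORIES)
    return 100.0 * weighted
```

The resulting score is what step S308 compares against the preset score threshold.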
Step S308: and judging whether the feature score is greater than or equal to a preset score threshold, if so, executing step S309, otherwise, executing step S314.
The preset score threshold may be set according to the practical situation and is not limited herein. When the feature score of the image to be processed is greater than or equal to the preset score threshold, the feature richness of the image to be processed can be considered high, so positioning processing can be performed based on the image to be processed to obtain the position information of the image pickup device. Conversely, when the feature score is smaller than the preset score threshold, the feature richness can be considered low; for example, the image may have been captured with the image pickup device aimed at a white wall, a green belt, the sky, and the like, so that the features in the image are too uniform and insufficient for subsequent visual positioning. In this case, in order to improve the user experience and the robustness of visual positioning, the user may be prompted to adjust the shooting picture.
Step S309: positional information of the image pickup device is obtained based on the image to be processed.
Reference may be made in particular to the relevant steps of the previous embodiments.
Step S310: and outputting first prompt information to prompt reduction of an included angle between the optical axis of the image pickup device and the horizontal plane.
Reference may be made in particular to the relevant steps of the previous embodiments.
Step S311: step S301 is re-executed.
After outputting the first prompt information, visual positioning may be resumed from step S301; for details, reference may be made to the relevant steps in the foregoing embodiments.
Step S312: and outputting third prompt information to prompt the shooting environment in which the imaging device is replaced.
Reference may be made in particular to the relevant steps of the previous embodiments.
Step S313: step S301 is re-executed.
After outputting the third prompt information, visual positioning may be resumed from step S301; for details, reference may be made to the relevant steps in the foregoing embodiments.
Step S314: and outputting fifth prompting information to prompt and adjust the shooting picture of the shooting device.
In one disclosed implementation scenario, in order to enhance the user experience, the fifth prompt information may be an animation for visually prompting adjustment of the shooting picture of the image capturing device; for example, the animation may show a white wall, a street, and a right arrow (→) between them, prompting the user to point the image capturing device at a feature-rich shooting picture. In addition, the fifth prompt information may be a voice message, for example, broadcasting "please point the camera at a place with rich features"; or it may be a text message, for example, a text box displaying "please point the camera at a place with rich features", which is not limited herein. In a specific disclosed implementation scenario, if the fifth prompt information has been continuously output for a second preset duration, it may be considered that the feature richness is still insufficient after the shooting picture has been adjusted multiple times within the second preset duration, or that the shooting picture has not been adjusted within the second preset duration; in order to improve the robustness of visual positioning, sixth prompt information may then be output to prompt that positioning has failed. The second preset duration may be set according to the practical situation, for example, 25 seconds, 20 seconds, 15 seconds, and the like, which is not limited herein. In addition, when the sixth prompt information is output, the visual positioning may be exited directly, and when it is detected that the user starts visual positioning again, the above step S301 is executed again, so that visual positioning resumes from step S301.
Step S315: step S301 is re-executed.
After outputting the fifth prompt information to prompt adjustment of the shooting picture of the image pickup device, the above step S301 may be re-executed, so that visual positioning is performed again from step S301. In a specific disclosed implementation scenario, in order to ensure that the newly acquired image to be processed is captured after the shooting picture has been adjusted, a preset waiting period (for example, 2 seconds, 4 seconds, etc.) may elapse after the fifth prompt information is output before step S301 is executed again.
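Taken together, steps S301 through S315 form a retry loop around the three checks. The sketch below makes that loop explicit; the callables, dictionary keys, attempt limit, and threshold values are illustrative assumptions, not part of the disclosed embodiments.

```python
def visual_positioning(sensors, thresholds, prompt, locate):
    """One pass through the fig. 3 flow: gate on angle, illumination, and
    feature score, prompting and retrying on each failure."""
    for _ in range(thresholds["max_attempts"]):
        if sensors["angle"]() >= thresholds["angle"]:          # S302 fails
            prompt("reduce the angle to the horizontal plane") # S310
            continue                                           # back to S301
        if sensors["illumination"]() < thresholds["lux"]:      # S304 fails
            prompt("move to a better-lit environment")         # S312
            continue
        image = sensors["capture"]()                           # S305
        if sensors["score"](image) < thresholds["score"]:      # S308 fails
            prompt("adjust the shooting picture")              # S314
            continue
        return locate(image)                                   # S309
    prompt("positioning failed")                               # give up
    return None
```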
Different from the above disclosed embodiments, the feature information of the image to be processed is scored in a preset scoring manner to obtain the feature score of the image to be processed, which represents its feature richness. When the feature score is greater than or equal to the preset score threshold, the step of obtaining the position information of the image pickup device based on the image to be processed is executed; when the feature score is smaller than the preset score threshold, the fifth prompt information is output to prompt adjustment of the shooting picture of the image pickup device, and the step of obtaining the included angle between the optical axis of the image pickup device and the horizontal plane is executed again. In this way, the feature richness of the image to be processed can be ensured, the image quality of the image to be processed for visual positioning can be effectively improved, and the success rate, accuracy, and robustness of visual positioning can be improved.
Referring to fig. 4, fig. 4 is a schematic frame diagram of a visual positioning device 40 according to an embodiment of the application. The visual positioning device 40 comprises a first acquisition module 41, a second acquisition module 42 and a third acquisition module 43, wherein the first acquisition module 41 is used for acquiring the illumination intensity of a shooting environment where the image pickup device is located; the second obtaining module 42 is configured to obtain an image to be processed captured by the imaging device in the capturing environment when the illumination intensity satisfies a preset illumination condition; the third acquisition module 43 is configured to acquire positional information of the image pickup device based on the image to be processed.
According to the scheme, the illumination intensity of the shooting environment where the image pickup device is located is obtained, and when the illumination intensity meets the preset illumination condition, the image to be processed of the image pickup device in the shooting environment is obtained, so that the position information of the image pickup device is obtained based on the image to be processed, the image to be processed can be ensured to be shot in the shooting environment meeting the illumination condition, and further, the image quality of the image to be processed for visual positioning can be effectively improved, and the success rate and accuracy of visual positioning can be improved.
In some disclosed embodiments, the preset lighting condition is that the lighting intensity is greater than or equal to a preset intensity threshold.
Different from the above disclosed embodiments, it can be ensured that an image to be processed for visual positioning is captured in a capturing environment in which the illumination intensity is greater than or equal to the intensity threshold, so that the image quality of the image to be processed for visual positioning can be effectively ensured, and further the success rate and accuracy of visual positioning are improved.
In some disclosed embodiments, the visual positioning apparatus 40 further includes a fourth acquisition module for acquiring an angle between an optical axis of the image capturing device and a horizontal plane; the first obtaining module 41 is specifically configured to perform a step of obtaining the illumination intensity of the shooting environment where the imaging device is located when the included angle is smaller than the preset angle threshold by combining the second obtaining module 42 and the third obtaining module 43.
Different from the above disclosed embodiments, by acquiring the included angle between the optical axis of the image capturing device and the horizontal plane before acquiring the illumination intensity of the shooting environment in which the image capturing device is located, and executing the step of acquiring the illumination intensity only when the included angle is smaller than the preset angle threshold, the image to be processed captured by the image capturing device can be effectively prevented from containing the ground, so that the image quality of the image to be processed for visual positioning can be effectively ensured, and the success rate and accuracy of visual positioning can be improved.
In some disclosed embodiments, the visual positioning apparatus 40 further includes an information output module, configured to output a first prompt message when the illumination intensity does not meet the preset illumination condition, where the first prompt message is used to prompt to replace the shooting environment where the image capturing device is located, and in combination with the fourth obtaining module, the first obtaining module 41, the second obtaining module 42, and the third obtaining module 43, the step of obtaining the included angle between the optical axis of the image capturing device and the horizontal plane is performed again.
Different from the above disclosed embodiments, when the illumination intensity does not meet the preset illumination condition, the first prompt information is output, so that the shooting environment where the camera device is replaced can be prompted, and the step of acquiring the included angle between the optical axis of the camera device and the horizontal plane is re-executed, so that the user experience can be improved, and the robustness of visual positioning is improved.
In some disclosed embodiments, the information output module is further configured to output a second prompt message when the first prompt message is continuously output within the first preset duration, where the second prompt message is used to prompt failure in positioning.
Unlike the previously disclosed embodiments, when the first prompt message is continuously output within the first preset time period, the second prompt message is output to prompt failure of positioning, so that the robustness of visual positioning can be improved.
In some disclosed embodiments, the information output module is further configured to output a third prompting message when the included angle is greater than or equal to the preset angle threshold, where the third prompting message is used to prompt to reduce the included angle between the optical axis of the image capturing device and the horizontal plane, and the fourth acquiring module, the first acquiring module 41, the second acquiring module 42, and the third acquiring module 43 are combined to re-execute the step of acquiring the included angle between the optical axis of the image capturing device and the horizontal plane.
Different from the above disclosed embodiments, when the included angle is greater than or equal to the preset angle threshold, the third prompt information is output to prompt reduction of the included angle between the optical axis of the image pickup device and the horizontal plane, and the step of obtaining the included angle is re-executed. In this way, the image to be processed captured by the image pickup device can be effectively prevented from containing the ground, the image quality of the image to be processed for visual positioning can be effectively ensured, and the success rate, accuracy, and robustness of visual positioning can be improved.
In some disclosed embodiments, the visual positioning device 40 further includes a fifth obtaining module, configured to obtain feature information of the image to be processed, and a feature scoring module, configured to score the feature information of the image to be processed in a preset scoring manner to obtain a feature score of the image to be processed, where the feature score is used to represent the feature richness of the image to be processed. The third obtaining module 43 is specifically configured to execute the step of obtaining the position information of the image capturing device based on the image to be processed when the feature score is greater than or equal to a preset score threshold. The information output module is configured to output fourth prompt information to prompt adjustment of the shooting picture of the image capturing device when the feature score is smaller than the preset score threshold, and, in combination with the fourth obtaining module, the first obtaining module 41, the second obtaining module 42, the third obtaining module 43, the fifth obtaining module, and the feature scoring module, to re-execute the step of obtaining the included angle between the optical axis of the image capturing device and the horizontal plane.
Different from the above disclosed embodiments, the feature information of the image to be processed is scored in a preset scoring manner to obtain the feature score of the image to be processed, which represents its feature richness. When the feature score is greater than or equal to the preset score threshold, the step of obtaining the position information of the image pickup device based on the image to be processed is executed; when the feature score is smaller than the preset score threshold, the fourth prompt information is output to prompt adjustment of the shooting picture of the image pickup device, and the step of obtaining the included angle between the optical axis of the image pickup device and the horizontal plane is executed again. In this way, the feature richness of the image to be processed can be ensured, the image quality of the image to be processed for visual positioning can be effectively improved, and the success rate, accuracy, and robustness of visual positioning can be improved.
In some disclosed embodiments, the information output module is further configured to output a fifth prompting message when the fourth prompting message is continuously output within the second preset duration, where the fifth prompting message is used to prompt failure in positioning.
Different from the embodiment disclosed above, when the fourth prompt information is continuously output within the second preset duration, the fifth prompt information is output to prompt failure of positioning, so that the robustness of visual positioning can be improved.
In some disclosed embodiments, the third obtaining module 43 includes an image information obtaining sub-module for obtaining image information of an image to be processed, the third obtaining module 43 further includes an information sending sub-module for sending the image information to a server for positioning processing, and the third obtaining module 43 further includes an information receiving sub-module for receiving position information of an image capturing device obtained by the server based on the image information processing.
Different from the above disclosed embodiments, the image information of the image to be processed is sent to a server for positioning processing, and the position information of the image pickup device obtained by the server from processing the image information is received, so that the processing load of the front-end electronic device can be reduced.
In some disclosed embodiments, the image information includes: image data of the image to be processed, width and height information of the image to be processed, and focal length information adopted by the image pickup device for shooting the image to be processed.
Different from the above disclosed embodiments, the image data of the image to be processed, the width and height information of the image to be processed, and the focal length information adopted by the image pickup device to capture the image to be processed are packaged as the image information, so that the information needed for visual positioning can be provided to the server, ensuring the accuracy of the server's positioning processing and thereby the accuracy of visual positioning.
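A hedged illustration of how this image information might be packaged for the server: the field names and the base64/JSON encoding below are assumptions made for the example; the disclosure only specifies which pieces of information travel together.

```python
import base64
import json

def build_image_info(image_bytes, width, height, focal_length_px):
    """Package the image data, width/height information, and focal-length
    information into one JSON payload for the positioning server.

    `focal_length_px` is assumed here to be the focal length in pixels;
    the disclosure does not fix a unit.
    """
    return json.dumps({
        "image_data": base64.b64encode(image_bytes).decode("ascii"),
        "width": width,
        "height": height,
        "focal_length": focal_length_px,
    })
```

The resulting string can then be sent to the server, which answers with the computed position information of the image pickup device.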
Referring to fig. 5, fig. 5 is a schematic diagram of a frame of an electronic device 50 according to an embodiment of the application. The electronic device 50 includes a memory 51 and a processor 52 coupled to each other, the processor 52 being adapted to execute program instructions stored in the memory 51 to implement the steps of any of the above visual positioning method embodiments. In a specific disclosed implementation scenario, the electronic device 50 may include, but is not limited to, a mobile terminal with an integrated camera device, such as a mobile phone or a tablet personal computer, or a vehicle navigation device connected to a camera device, which is not limited herein.
In particular, the processor 52 is configured to control itself and the memory 51 to implement the steps of any of the visual positioning method embodiments described above. The processor 52 may also be referred to as a CPU (Central Processing Unit). The processor 52 may be an integrated circuit chip having signal processing capabilities. The processor 52 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or it may be any conventional processor. In addition, the processor 52 may be jointly implemented by a plurality of integrated circuit chips.
According to the above scheme, the image to be processed is shot in a shooting environment that meets the preset illumination condition, so that the image quality of the image to be processed used for visual positioning can be effectively improved, thereby improving the success rate and accuracy of visual positioning.
Referring to fig. 6, fig. 6 is a schematic block diagram of an embodiment of a computer-readable storage medium 60 of the present application. The computer-readable storage medium 60 stores program instructions 601 executable by a processor, the program instructions 601 being used to implement the steps of any of the visual positioning method embodiments described above.
According to the above scheme, the image to be processed is shot in a shooting environment that meets the preset illumination condition, so that the image quality of the image to be processed used for visual positioning can be effectively improved, thereby improving the success rate and accuracy of visual positioning.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of modules or units is merely a logical functional division, and there may be other divisions in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection via some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Claims (11)
1. A visual positioning method, comprising:
acquiring an included angle between an optical axis of the image pickup device and a horizontal plane;
if the included angle is smaller than a preset angle threshold value, acquiring the illumination intensity of the shooting environment where the image pickup device is located;
if the illumination intensity meets the preset illumination condition, acquiring an image to be processed, which is shot by the imaging device in the shooting environment;
acquiring characteristic information of the image to be processed, wherein the characteristic information of the image to be processed comprises at least one of the following: the information entropy of the image to be processed and the type number of the target objects included in the image to be processed;
scoring the feature information of the image to be processed by adopting a preset scoring mode to obtain a feature score of the image to be processed, wherein the feature score is used for representing the feature richness of the image to be processed;
acquiring position information of the image pickup device based on the image to be processed under the condition that the feature score is greater than or equal to a preset score threshold;
and outputting fourth prompt information in a case where the feature score is smaller than the preset score threshold, wherein the fourth prompt information is used for prompting adjustment of the shooting picture of the imaging device, and re-executing the step of acquiring the included angle between the optical axis of the imaging device and the horizontal plane.
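Purely as an illustration (and not as part of the claims), the gating logic of claim 1 can be sketched as follows; all thresholds, the entropy normalization, and the scoring weights are assumptions chosen for the example, not values taken from the disclosure:

```python
import math
from collections import Counter

# Illustrative thresholds; the disclosure does not fix concrete values.
ANGLE_THRESHOLD_DEG = 30.0       # preset angle threshold (assumed)
INTENSITY_THRESHOLD_LUX = 50.0   # preset illumination intensity threshold (assumed)
SCORE_THRESHOLD = 0.5            # preset feature-score threshold (assumed)


def image_entropy(gray_pixels):
    """Shannon entropy of an 8-bit grayscale image: one measure of feature richness."""
    counts = Counter(gray_pixels)
    total = len(gray_pixels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def feature_score(gray_pixels, num_object_types, w_entropy=0.5, w_types=0.5):
    """One assumed 'preset scoring mode': a weighted sum of normalized entropy
    (at most 8 bits for 256 gray levels) and a capped object-type count."""
    return (w_entropy * image_entropy(gray_pixels) / 8.0
            + w_types * min(num_object_types, 10) / 10.0)


def visual_positioning_step(angle_deg, illumination_lux, gray_pixels, num_object_types):
    """Return the action the method takes: a prompt to retry, or 'locate'."""
    if angle_deg >= ANGLE_THRESHOLD_DEG:
        return "third_prompt: reduce optical-axis angle"
    if illumination_lux < INTENSITY_THRESHOLD_LUX:
        return "first_prompt: change shooting environment"
    if feature_score(gray_pixels, num_object_types) < SCORE_THRESHOLD:
        return "fourth_prompt: adjust shooting picture"
    return "locate"  # proceed to acquire position info based on the image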
2. The visual positioning method of claim 1, wherein the predetermined illumination condition is that the illumination intensity is greater than or equal to a predetermined intensity threshold.
3. The visual positioning method of claim 1, further comprising:
if the illumination intensity does not meet the preset illumination condition, outputting first prompt information, wherein the first prompt information is used for prompting the replacement of the shooting environment where the shooting device is located;
and re-executing the step of acquiring the included angle between the optical axis of the image pickup device and the horizontal plane.
4. A visual positioning method as set forth in claim 3, further comprising:
and under the condition that the first prompt information is continuously output within the first preset time length, outputting second prompt information, wherein the second prompt information is used for prompting failure in positioning.
5. The visual positioning method of claim 1, further comprising:
outputting third prompt information when the included angle is larger than or equal to a preset angle threshold, wherein the third prompt information is used for prompting to reduce the included angle between the optical axis of the camera device and the horizontal plane;
And re-executing the step of acquiring the included angle between the optical axis of the image pickup device and the horizontal plane.
6. The visual positioning method of claim 1, further comprising:
and outputting fifth prompt information under the condition that the fourth prompt information is continuously output within the second preset time period, wherein the fifth prompt information is used for prompting failure in positioning.
7. The visual positioning method according to any one of claims 1 to 6, wherein the obtaining positional information of the image pickup device based on the image to be processed includes:
acquiring image information of the image to be processed;
the image information is sent to a server to be positioned;
and receiving the position information of the image pickup device, which is obtained by the server based on the image information processing.
8. The visual positioning method of claim 7, wherein the image information comprises at least one of: the image data of the image to be processed, the width and height information of the image to be processed and the focal length information adopted by the image pickup device for shooting the image to be processed.
9. A visual positioning device, comprising:
A fourth acquisition module for acquiring an included angle between an optical axis of the imaging device and a horizontal plane;
the first acquisition module is used for acquiring the illumination intensity of the shooting environment where the imaging device is located when the included angle is smaller than a preset angle threshold value;
the second acquisition module is used for acquiring an image to be processed, which is shot by the imaging device in the shooting environment, when the illumination intensity meets the preset illumination condition;
a fifth obtaining module, configured to obtain feature information of the image to be processed, where the feature information of the image to be processed includes at least one of: the information entropy of the image to be processed and the type number of the target objects included in the image to be processed;
the feature scoring module is used for scoring the feature information of the image to be processed in a preset scoring mode to obtain feature scores of the image to be processed, wherein the feature scores are used for representing feature richness of the image to be processed;
a third obtaining module, configured to obtain, based on the image to be processed, location information of the image capturing device, if the feature score is greater than or equal to a preset score threshold;
the information output module is used for outputting fourth prompt information under the condition that the feature score is smaller than a preset score threshold value, wherein the fourth prompt information is used for prompting and adjusting the shooting picture of the image pickup device, and the step of acquiring the included angle between the optical axis of the image pickup device and the horizontal plane is re-executed by combining the fourth acquisition module.
10. An electronic device comprising a memory and a processor coupled to each other, the processor configured to execute program instructions stored in the memory to implement the visual localization method of any one of claims 1 to 8.
11. A computer readable storage medium having stored thereon program instructions, which when executed by a processor, implement the visual positioning method of any of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010556359.2A CN111583343B (en) | 2020-06-17 | 2020-06-17 | Visual positioning method, related device, equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010556359.2A CN111583343B (en) | 2020-06-17 | 2020-06-17 | Visual positioning method, related device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111583343A CN111583343A (en) | 2020-08-25 |
CN111583343B true CN111583343B (en) | 2023-11-07 |
Family
ID=72111294
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010556359.2A Active CN111583343B (en) | 2020-06-17 | 2020-06-17 | Visual positioning method, related device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111583343B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112348888B (en) * | 2020-09-09 | 2023-06-20 | 北京市商汤科技开发有限公司 | Positioning method and device of display device, display device and computer storage medium |
CN112950713A (en) * | 2021-02-25 | 2021-06-11 | 深圳市慧鲤科技有限公司 | Positioning method and device, electronic equipment and storage medium |
CN112950714A (en) * | 2021-02-25 | 2021-06-11 | 深圳市慧鲤科技有限公司 | Positioning method and device, electronic equipment and storage medium |
CN113587917A (en) * | 2021-07-28 | 2021-11-02 | 北京百度网讯科技有限公司 | Indoor positioning method, device, equipment, storage medium and computer program product |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018120074A1 (en) * | 2016-12-30 | 2018-07-05 | 天彩电子(深圳)有限公司 | Night-vision switching method for monitoring photographing apparatus, and system thereof |
CN109229400A (en) * | 2017-07-10 | 2019-01-18 | 深圳市道通智能航空技术有限公司 | The control method and device of aircraft, aircraft |
CN109639973A (en) * | 2018-12-21 | 2019-04-16 | 中国科学院自动化研究所南京人工智能芯片创新研究院 | Shoot image methods of marking, scoring apparatus, electronic equipment and storage medium |
CN110132274A (en) * | 2019-04-26 | 2019-08-16 | 中国铁道科学研究院集团有限公司电子计算技术研究所 | A kind of indoor orientation method, device, computer equipment and storage medium |
CN110853185A (en) * | 2019-11-29 | 2020-02-28 | 长城汽车股份有限公司 | Vehicle panoramic all-round looking recording system and method |
-
2020
- 2020-06-17 CN CN202010556359.2A patent/CN111583343B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018120074A1 (en) * | 2016-12-30 | 2018-07-05 | 天彩电子(深圳)有限公司 | Night-vision switching method for monitoring photographing apparatus, and system thereof |
CN109229400A (en) * | 2017-07-10 | 2019-01-18 | 深圳市道通智能航空技术有限公司 | The control method and device of aircraft, aircraft |
CN109639973A (en) * | 2018-12-21 | 2019-04-16 | 中国科学院自动化研究所南京人工智能芯片创新研究院 | Shoot image methods of marking, scoring apparatus, electronic equipment and storage medium |
CN110132274A (en) * | 2019-04-26 | 2019-08-16 | 中国铁道科学研究院集团有限公司电子计算技术研究所 | A kind of indoor orientation method, device, computer equipment and storage medium |
CN110853185A (en) * | 2019-11-29 | 2020-02-28 | 长城汽车股份有限公司 | Vehicle panoramic all-round looking recording system and method |
Also Published As
Publication number | Publication date |
---|---|
CN111583343A (en) | 2020-08-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111583343B (en) | Visual positioning method, related device, equipment and storage medium | |
CN109661812B (en) | Multi-viewpoint camera system, three-dimensional space reconstruction system and three-dimensional space identification system | |
US9928710B2 (en) | Danger alerting method and device, portable electronic apparatus | |
CN107534789B (en) | Image synchronization device and image synchronization method | |
JP6123120B2 (en) | Method and terminal for discovering augmented reality objects | |
US11205284B2 (en) | Vehicle-mounted camera pose estimation method, apparatus, and system, and electronic device | |
JPWO2019225681A1 (en) | Calibration equipment and calibration method | |
JP5747549B2 (en) | Signal detector and program | |
US20160178728A1 (en) | Indoor Positioning Terminal, Network, System and Method | |
US8965040B1 (en) | User correction of pose for street-level images | |
AU2020309094B2 (en) | Image processing method and apparatus, electronic device, and storage medium | |
CN111724437B (en) | Visual positioning method and related device, equipment and storage medium | |
US20210168279A1 (en) | Document image correction method and apparatus | |
WO2021057244A1 (en) | Light intensity adjustment method and apparatus, electronic device and storage medium | |
US20210407052A1 (en) | Method for processing image, related device and storage medium | |
CN109883433B (en) | Vehicle positioning method in structured environment based on 360-degree panoramic view | |
WO2021005659A1 (en) | Information processing system, sensor system, information processing method, and program | |
JP6019114B2 (en) | Pedestrian gait recognition method and device for portable terminal | |
CN114095910B (en) | Anti-piracy method, device and medium for intelligent AR glasses | |
US20130308829A1 (en) | Still image extraction apparatus | |
WO2019189768A1 (en) | Communication method, communication device, transmitter, and program | |
CN106203279B (en) | Recognition methods, device and the mobile terminal of target object in a kind of augmented reality | |
CN107230373B (en) | Information recommendation method and device, computer readable storage medium and mobile terminal | |
US10789830B2 (en) | Method and apparatus for gathering visual data using an augmented-reality application | |
US20190373224A1 (en) | Collection system, program for terminal, and collection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |