CN111583343A - Visual positioning method and related device, equipment and storage medium

Publication number: CN111583343A
Authority: CN (China)
Prior art keywords: image, processed, visual positioning, camera device, preset
Legal status: Granted
Application number: CN202010556359.2A
Other languages: Chinese (zh)
Other versions: CN111583343B
Inventor: 韦豪
Current Assignee: Shenzhen Sensetime Technology Co Ltd
Original Assignee: Shenzhen Sensetime Technology Co Ltd
Application filed by Shenzhen Sensetime Technology Co Ltd
Priority to CN202010556359.2A
Publication of CN111583343A; application granted; publication of CN111583343B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70: Circuitry for compensating brightness variation in the scene
    • H04N 23/71: Circuitry for evaluating the brightness variation


Abstract

The application discloses a visual positioning method and a related apparatus, device, and storage medium. The visual positioning method includes: acquiring the illumination intensity of the shooting environment in which the camera device is located; if the illumination intensity satisfies a preset illumination condition, acquiring an image to be processed captured by the camera device in that shooting environment; and obtaining position information of the camera device based on the image to be processed. This scheme can improve the success rate and accuracy of visual positioning.

Description

Visual positioning method and related device, equipment and storage medium
Technical Field
The present application relates to the field of computer vision technologies, and in particular to a visual positioning method and a related apparatus, device, and storage medium.
Background
With the development of information technology and electronic technology, people increasingly use electronic devices such as mobile phones and tablet computers to take photographs and, on that basis, to access visual positioning services. Taking AR (Augmented Reality) navigation as an example of such a service: AR navigation guides the user on top of the real images captured by a camera device, which can effectively reduce the probability of taking a wrong route and helps improve traffic efficiency and traffic safety, so it has gradually attracted industry attention.
However, in practical applications the quality of the images captured by the camera device is often uneven, so the success rate and accuracy of visual positioning cannot be effectively guaranteed, which degrades the user experience. In view of this, how to improve the success rate and accuracy of visual positioning has become an urgent problem to be solved.
Disclosure of Invention
The application provides a visual positioning method, a related device, equipment and a storage medium.
A first aspect of the present application provides a visual positioning method, including: acquiring the illumination intensity of the shooting environment in which the camera device is located; if the illumination intensity satisfies a preset illumination condition, acquiring an image to be processed captured by the camera device in that shooting environment; and obtaining position information of the camera device based on the image to be processed.
Therefore, by acquiring the illumination intensity of the shooting environment in which the camera device is located, and acquiring the image to be processed only when that illumination intensity satisfies the preset illumination condition, the position information of the camera device is obtained from an image guaranteed to have been captured in a shooting environment that meets the illumination condition. This effectively improves the image quality of the image to be processed used for visual positioning, and thus the success rate and accuracy of visual positioning.
The preset illumination condition is that the illumination intensity is greater than or equal to a preset intensity threshold value.
Therefore, the image to be processed for visual positioning is guaranteed to be captured in a shooting environment whose illumination intensity is greater than or equal to the intensity threshold, which effectively ensures its image quality and further improves the success rate and accuracy of visual positioning.
Before acquiring the illumination intensity of the shooting environment in which the camera device is located, the method further includes: acquiring the included angle between the optical axis of the camera device and the horizontal plane; and if the included angle is smaller than a preset angle threshold, executing the step of acquiring the illumination intensity and the subsequent steps.
Therefore, by acquiring the included angle between the optical axis and the horizontal plane before acquiring the illumination intensity, and proceeding only when the included angle is smaller than the preset angle threshold, the camera device is effectively prevented from capturing an image that consists mostly of the ground. This ensures the image quality of the image to be processed used for visual positioning and improves the success rate and accuracy of visual positioning.
If the illumination intensity does not satisfy the preset illumination condition, first prompt information is output, prompting the user to change the shooting environment of the camera device, and the step of acquiring the included angle between the optical axis of the camera device and the horizontal plane is executed again.
Therefore, when the illumination intensity does not satisfy the preset illumination condition, outputting the first prompt information prompts the user to change the shooting environment, and re-executing the step of acquiring the included angle improves both the user experience and the robustness of visual positioning.
If the first prompt information has been output continuously for a first preset duration, second prompt information is output, prompting that positioning has failed.
Therefore, outputting the second prompt information when the first prompt information has persisted for the first preset duration improves the robustness of visual positioning.
If the included angle is greater than or equal to the preset angle threshold, third prompt information is output, prompting the user to reduce the included angle between the optical axis of the camera device and the horizontal plane, and the step of acquiring the included angle is executed again.
Therefore, when the included angle is greater than or equal to the preset angle threshold, outputting the third prompt information prompts the user to reduce the included angle, and re-executing the acquisition step effectively prevents the camera device from capturing an image that consists mostly of the ground. This ensures the image quality of the image to be processed used for visual positioning, improves the success rate and accuracy of visual positioning, and improves its robustness.
After acquiring the image to be processed captured by the camera device in the shooting environment, and before obtaining the position information of the camera device based on it, the method further includes: acquiring feature information of the image to be processed; and scoring the feature information in a preset scoring manner to obtain a feature score, which represents the feature richness of the image to be processed. If the feature score is greater than or equal to a preset score threshold, the step of obtaining the position information of the camera device based on the image to be processed is executed; if the feature score is smaller than the preset score threshold, fourth prompt information is output, prompting the user to adjust the shooting picture of the camera device, and the step of acquiring the included angle between the optical axis of the camera device and the horizontal plane is executed again.
Therefore, scoring the feature information of the image to be processed in a preset scoring manner yields a feature score that represents its feature richness. Proceeding with positioning only when the feature score reaches the preset score threshold, and otherwise prompting the user to adjust the shooting picture and restarting from the included-angle check, ensures the feature richness of the image to be processed, effectively improves its image quality for visual positioning, and thereby improves the success rate, accuracy, and robustness of visual positioning.
If the fourth prompt information has been output continuously for a second preset duration, fifth prompt information is output, prompting that positioning has failed.
Therefore, outputting the fifth prompt information when the fourth prompt information has persisted for the second preset duration improves the robustness of visual positioning.
Obtaining the position information of the camera device based on the image to be processed includes: acquiring image information of the image to be processed; sending the image information to a server for positioning processing; and receiving the position information of the camera device that the server derives from the image information.
Therefore, sending the image information to a server for positioning processing and receiving the resulting position information reduces the processing load on the front-end electronic device.
The image information includes: the image data of the image to be processed, the width and height information of the image to be processed, and the focal length used by the camera device to capture it.
Therefore, packaging the image data, the width and height information, and the focal length into the image information provides the server with the information needed for visual positioning, which ensures the accuracy of the server's positioning processing and thus improves the accuracy of visual positioning.
A second aspect of the present application provides a visual positioning apparatus, which includes a first acquisition module, a second acquisition module, and a third acquisition module. The first acquisition module is configured to acquire the illumination intensity of the shooting environment in which the camera device is located; the second acquisition module is configured to acquire the image to be processed captured by the camera device in that shooting environment when the illumination intensity satisfies the preset illumination condition; and the third acquisition module is configured to obtain the position information of the camera device based on the image to be processed.
A third aspect of the present application provides an electronic device, which includes a memory and a processor coupled to each other, wherein the processor is configured to execute program instructions stored in the memory to implement the visual positioning method in the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon program instructions, which when executed by a processor, implement the visual positioning method of the first aspect.
According to the scheme, the illumination intensity of the shooting environment in which the camera device is located is acquired, and the image to be processed is acquired only when the illumination intensity satisfies the preset illumination condition, so that the position information of the camera device is obtained from an image guaranteed to have been captured in a shooting environment that meets the illumination condition. This effectively improves the image quality of the image to be processed used for visual positioning, and thus the success rate and accuracy of visual positioning.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a visual positioning method of the present application;
FIG. 2 is a schematic flow chart diagram illustrating another embodiment of a visual positioning method of the present application;
FIG. 3 is a schematic flow chart diagram illustrating a visual positioning method according to another embodiment of the present application;
FIG. 4 is a schematic diagram of a frame of an embodiment of the visual positioning apparatus of the present application;
FIG. 5 is a block diagram of an embodiment of an electronic device of the present application;
FIG. 6 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details such as particular system structures, interfaces, and techniques are set forth in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association between related objects and covers three relationships; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects. Further, the term "plurality" herein means two or more.
Referring to fig. 1, fig. 1 is a schematic flowchart of an embodiment of the visual positioning method of the present application. Specifically, the method may include the following steps:
Step S11: acquire the illumination intensity of the shooting environment in which the camera device is located.
In the embodiments of the present disclosure, the camera device may be a mobile terminal with an integrated camera, such as a mobile phone or a tablet computer. It may also be an in-vehicle navigation device connected to a camera, which is not limited herein.
The shooting environment may include, but is not limited to: shopping malls, pedestrian streets, highways, and urban roads. In one disclosed implementation scenario, the camera device may integrate a light sensor for sensing the illumination intensity of its shooting environment, so that the illumination intensity can be acquired quickly and accurately. In another disclosed implementation scenario, an image captured by the camera device in the current shooting environment may first be converted into a grayscale image; the average gray value of at least some of its pixels (such as background pixels) is computed, and the illumination intensity is then determined from a mapping between gray values and illumination intensity. In this way the illumination intensity can be obtained without additional hardware, which helps reduce cost; neither approach is limiting here.
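As an illustration of the grayscale-based estimate, the following minimal Python sketch converts a frame to grayscale and maps its mean gray value to an illuminance figure. The linear gray-to-lux mapping and the intensity threshold are illustrative assumptions; the patent does not specify the mapping function.

    import cv2
    import numpy as np

    INTENSITY_THRESHOLD_LUX = 50.0             # assumed preset intensity threshold

    def estimate_illumination(image_bgr):
        """Estimate scene illumination from the mean gray value of a frame."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        mean_gray = float(np.mean(gray))       # 0 (dark) .. 255 (bright)
        return mean_gray / 255.0 * 1000.0      # hypothetical linear map to lux

    def meets_illumination_condition(image_bgr):
        return estimate_illumination(image_bgr) >= INTENSITY_THRESHOLD_LUX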
In yet another disclosed implementation scenario, the acquisition of the illumination intensity and the subsequent related steps may begin when a user-triggered visual positioning is detected. Specifically, the user may trigger visual positioning by pressing a physical or virtual key on an electronic device such as a mobile terminal or an in-vehicle navigation device, or by a voice instruction, which is not limited herein.
Step S12: judge whether the illumination intensity satisfies the preset illumination condition; if so, execute step S13.
In one disclosed implementation scenario, to further ensure the image quality of the image to be processed for visual positioning and thereby improve the success rate and accuracy of visual positioning, the preset illumination condition may be that the illumination intensity is greater than or equal to a preset intensity threshold. When the illumination intensity is greater than or equal to the threshold, the image captured in the current shooting environment can be considered, with high probability, a high-quality image; conversely, when the illumination intensity is below the threshold, the captured image is likely to be of low quality.
Step S13: acquire the image to be processed captured by the camera device in the shooting environment.
When the illumination intensity satisfies the preset illumination condition, the image captured in the current shooting environment is likely to be of high quality, so the image to be processed captured by the camera device in this environment can be acquired for subsequent visual positioning.
Step S14: obtain the position information of the camera device based on the image to be processed.
In one disclosed implementation scenario, a three-dimensional model may be reconstructed in advance from a time-ordered sequence of two-dimensional images using a preset three-dimensional reconstruction method, and the position information of the camera device is then determined from the image to be processed and the three-dimensional model. The preset three-dimensional reconstruction method may be an SfM (Structure from Motion) algorithm: SfM extracts features from the two-dimensional images, registers them, estimates the camera parameters through global optimization, and finally fuses the data to construct the three-dimensional model.
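To illustrate how a position might be determined from the image to be processed and such a model, here is a minimal Python sketch that estimates the camera pose from 2D-3D correspondences using OpenCV. The model format (a descriptor array aligned with 3D points) and the ORB/brute-force matching choices are assumptions for this sketch, not details taken from the patent.

    import cv2
    import numpy as np

    def localize(query_bgr, model_descriptors, model_points_3d, camera_matrix):
        """Return the camera rotation and translation, or None on failure."""
        orb = cv2.ORB_create()
        gray = cv2.cvtColor(query_bgr, cv2.COLOR_BGR2GRAY)
        keypoints, descriptors = orb.detectAndCompute(gray, None)
        if descriptors is None:
            return None                        # no features detected at all
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(descriptors, model_descriptors)
        if len(matches) < 6:
            return None                        # too few 2D-3D correspondences
        pts_2d = np.float32([keypoints[m.queryIdx].pt for m in matches])
        pts_3d = np.float32([model_points_3d[m.trainIdx] for m in matches])
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            pts_3d, pts_2d, camera_matrix, None)
        if not ok:
            return None
        rotation, _ = cv2.Rodrigues(rvec)      # pose of the model in camera frame
        return rotation, tvec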
In one disclosed implementation scenario, to reduce the processing load on the front-end electronic device, the image information of the image to be processed may be acquired and sent to a server for positioning processing, and the position information of the camera device derived by the server from that image information is then received. In one specific disclosed implementation scenario, to ensure the accuracy of the server's positioning processing and thus the accuracy of visual positioning, the image information may include, but is not limited to: the image data of the image to be processed, its width and height information, and the focal length used by the camera device to capture it.
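A hedged Python sketch of packaging this image information and submitting it to a positioning server follows. The endpoint URL and the JSON field names are hypothetical; the patent only specifies what the payload contains (image data, width/height, focal length).

    import base64
    import requests

    POSITIONING_ENDPOINT = "https://example.com/api/locate"   # hypothetical URL

    def request_position(jpeg_bytes, width, height, focal_length_px):
        payload = {
            "image": base64.b64encode(jpeg_bytes).decode("ascii"),
            "width": width,                    # width of the image to be processed
            "height": height,                  # height of the image to be processed
            "focal_length": focal_length_px,   # focal length used for the shot
        }
        response = requests.post(POSITIONING_ENDPOINT, json=payload, timeout=10)
        response.raise_for_status()
        return response.json()                 # position info computed by the server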
In one disclosed implementation scenario, to assist indoor positioning, Bluetooth beacons may be installed on different floors. Each beacon transmits a Bluetooth signal, so the floor on which the camera device is located can be determined from the signal strength and the signal identifier of the received signals, where the signal identifier indicates which beacon sent the signal and may be a Universally Unique Identifier (UUID). For example, suppose Bluetooth beacon A is installed on floor 1, Bluetooth beacon B on floor 2, and the two beacons are configured in advance with the same output power. If only the signal from beacon A is received, the camera device can be determined to be on floor 1; if only the signal from beacon B is received, it can be determined to be on floor 2. If signals from both beacons are received, their signal strengths can be compared: if the received signal from beacon A is stronger than that from beacon B, the camera device can be determined to be on floor 1, and otherwise on floor 2. In addition, to further improve the accuracy of Bluetooth-assisted positioning, several beacons may be installed on the same floor, and the floor is determined by combining the signals received from all of them; this is not elaborated here.
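The strongest-signal rule from this example can be sketched in a few lines of Python. The beacon UUIDs and the floor table are hypothetical values for illustration.

    # Maps each beacon's UUID to the floor it is installed on (hypothetical).
    FLOOR_BY_BEACON = {
        "uuid-beacon-a": 1,    # Bluetooth beacon A on floor 1
        "uuid-beacon-b": 2,    # Bluetooth beacon B on floor 2
    }

    def infer_floor(received_rssi):
        """received_rssi maps beacon UUID -> RSSI in dBm (higher = stronger)."""
        visible = {uid: rssi for uid, rssi in received_rssi.items()
                   if uid in FLOOR_BY_BEACON}
        if not visible:
            return None                        # no known beacon in range
        strongest = max(visible, key=visible.get)
        return FLOOR_BY_BEACON[strongest]

    # Example: beacon A is received more strongly, so the device is on floor 1.
    print(infer_floor({"uuid-beacon-a": -60.0, "uuid-beacon-b": -75.0}))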
In one disclosed implementation scenario, the embodiments of the present disclosure may be executed by a mobile terminal with an integrated camera, such as a mobile phone or tablet computer, or by an in-vehicle navigation device connected to a camera. When executed by such an electronic device, a route may be planned from the acquired position information and a destination entered by the user, and navigation prompts may be output: for example, the image to be processed captured by the camera device is shown on the display screen, and a navigation mark (such as a go-straight, turn-left, turn-right, or U-turn mark) is overlaid on it. The navigation mark may be rendered as an arrow, which is not limited herein.
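A minimal sketch of such an overlay in Python with OpenCV follows. The arrow geometry is a placeholder; a real AR-navigation system would project the planned route into image coordinates before drawing.

    import cv2

    def draw_navigation_arrow(frame_bgr, direction):
        """Overlay a simple directional arrow on the captured frame."""
        h, w = frame_bgr.shape[:2]
        base = (w // 2, int(h * 0.8))          # arrow tail near the bottom center
        offsets = {"straight": (0, -150), "left": (-150, -50), "right": (150, -50)}
        dx, dy = offsets.get(direction, (0, -150))
        tip = (base[0] + dx, base[1] + dy)
        cv2.arrowedLine(frame_bgr, base, tip, color=(0, 255, 0),
                        thickness=8, tipLength=0.3)
        return frame_bgr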
In one disclosed implementation scenario, the illumination intensity of the shooting environment may fail the preset illumination condition: for example, the camera device is on a stairway inside a shopping mall, on a rural road at night, or in a poorly lit underground parking lot. Because the illumination is weak, an image captured in such an environment is likely to be of low quality and unfit for visual positioning. To improve the user experience and the robustness of visual positioning, the following steps S15 and S16 may then be performed.
Step S15: output first prompt information to prompt the user to change the shooting environment of the camera device.
In one disclosed implementation scenario, to enhance the user experience, the first prompt information may be an animation that visually prompts the user to change the shooting environment, for example by showing a scene switching from night to day to suggest moving the camera device to better lighting. It may also be a voice message, for example announcing "please move the camera device to a place with sufficient light", or a text message, for example a text box displaying the same sentence; none of these is limiting. In one specific disclosed implementation scenario, if the first prompt information has been output continuously for a first preset duration (e.g., 15, 20, or 25 seconds), it can be assumed either that the illumination intensity still fails the preset illumination condition after several adjustments of the shooting environment, or that the user has not responded to the prompt at all within that duration. To improve the robustness of visual positioning, second prompt information may then be output to indicate that positioning has failed. Further, when the second prompt information is output, visual positioning may exit directly; when the user is detected to start visual positioning again, step S11 is executed again, so that visual positioning resumes from step S11.
Step S16: re-execute step S11.
After the first prompt information is output to prompt the user to change the shooting environment, step S11 may be executed again, so that visual positioning resumes from step S11. In one specific disclosed implementation scenario, to ensure that the newly sensed shooting environment is the one adopted in response to the first prompt information, step S11 may be executed only after a preset waiting period (e.g., 2 or 4 seconds) following the prompt, so that visual positioning resumes from step S11.
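The interplay of the first prompt, the waiting period, and the first preset duration can be sketched as a simple loop in Python. The durations and the check_illumination callback below are assumptions for illustration.

    import time

    FIRST_PRESET_DURATION = 20.0   # give up after this many seconds of prompting
    WAIT_BEFORE_RETRY = 2.0        # preset waiting period after each prompt

    def wait_for_good_lighting(check_illumination):
        """Loop of steps S15/S16: prompt, wait, re-check; False means positioning failed."""
        start = time.monotonic()
        while not check_illumination():                       # step S12 re-check
            print("Please move the camera device to a place with sufficient light.")
            if time.monotonic() - start > FIRST_PRESET_DURATION:
                print("Positioning failed.")                  # second prompt information
                return False
            time.sleep(WAIT_BEFORE_RETRY)                     # preset waiting period
        return True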
According to the scheme, the illumination intensity of the shooting environment in which the camera device is located is acquired, and the image to be processed is acquired only when the illumination intensity satisfies the preset illumination condition, so that the position information of the camera device is obtained from an image guaranteed to have been captured in a shooting environment that meets the illumination condition. This effectively improves the image quality of the image to be processed used for visual positioning, and thus the success rate and accuracy of visual positioning.
Referring to fig. 2, fig. 2 is a schematic flowchart of another embodiment of the visual positioning method of the present application. Specifically, to further ensure the image quality of the image to be processed for visual positioning, a preprocessing step may be performed first. In the embodiments of the present disclosure, this preprocessing may consist of detecting the included angle between the optical axis of the camera device and the horizontal plane, so that the camera device captures as little of the ground as possible, improving image quality and thus the success rate and accuracy of visual positioning. Specifically, the embodiment may include the following steps:
Step S201: acquire the included angle between the optical axis of the camera device and the horizontal plane.
The optical axis of the camera device is the center line of the light beam passing through the center of the lens; rotating the beam about the optical axis produces no change in optical characteristics. Taking a mobile phone as an example: when the phone's screen is parallel to the horizontal plane, the optical axis is perpendicular to it, i.e., the included angle is 90 degrees; when the screen is perpendicular to the horizontal plane, the optical axis is parallel to it, i.e., the included angle is 0 degrees. Therefore, to ensure that the image to be processed contains as little of the ground as possible and thereby improve its quality, the included angle between the optical axis and the horizontal plane should be kept as close to 0 degrees as possible.
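As an illustration, the included angle can be estimated from the device's gravity vector (e.g., an accelerometer reading), as in the Python sketch below. The axis convention (optical axis along the device z axis) is an assumption that varies by platform, and the 30-degree threshold is one of the example values given in the next step.

    import math

    ANGLE_THRESHOLD_DEG = 30.0     # one of the example preset angle thresholds

    def optical_axis_angle_deg(gravity_xyz):
        """Angle between the optical axis (assumed along device z) and the horizontal plane."""
        gx, gy, gz = gravity_xyz
        norm = math.sqrt(gx * gx + gy * gy + gz * gz)
        angle_to_vertical = math.degrees(math.acos(gz / norm))
        return abs(90.0 - angle_to_vertical)   # 0 when the axis is horizontal

    def angle_ok(gravity_xyz):
        return optical_axis_angle_deg(gravity_xyz) < ANGLE_THRESHOLD_DEG

    # Phone held upright (gravity along the device -y axis): the axis is horizontal.
    print(optical_axis_angle_deg((0.0, -9.81, 0.0)))          # ~0 degrees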
In one disclosed implementation scenario, when the user is detected to trigger visual positioning, the step of acquiring the included angle between the optical axis of the camera device and the horizontal plane and the subsequent steps may be started again, so that visual positioning is performed again. Specifically, the user may trigger visual positioning by pressing a physical or virtual key on an electronic device such as a mobile terminal or an in-vehicle navigation device, or by a voice instruction, which is not limited herein.
Step S202: judge whether the included angle is smaller than the preset angle threshold; if so, execute step S203; otherwise, execute step S207.
The preset angle threshold may be set according to the actual situation; for example, it may be 30, 20, or 10 degrees, which is not limited herein. When the included angle is smaller than the preset angle threshold, the camera device can be considered to capture as little of the ground as possible, so the captured image is likely to be of high quality; otherwise, the camera device would capture a larger portion of the ground, and the captured image is likely to be of low quality.
Step S203: acquire the illumination intensity of the shooting environment in which the camera device is located.
Reference may be made in particular to the relevant steps in the preceding embodiments.
Step S204: judge whether the illumination intensity satisfies the preset illumination condition; if so, execute step S205; otherwise, execute step S209.
Reference may be made in particular to the relevant steps in the preceding embodiments.
Step S205: acquire the image to be processed captured by the camera device in the shooting environment.
Reference may be made in particular to the relevant steps in the preceding embodiments.
Step S206: obtain the position information of the camera device based on the image to be processed.
Reference may be made in particular to the relevant steps in the preceding embodiments.
Step S207: output first prompt information to prompt the user to reduce the included angle between the optical axis of the camera device and the horizontal plane.
In one disclosed implementation scenario, to enhance the user experience, the first prompt information may be an animation that visually prompts the user to reduce the included angle between the optical axis of the camera device and the horizontal plane. Taking a mobile phone as an example, the animation may show the phone gradually rotating from a state in which its screen is parallel to the ground to one in which the screen is perpendicular to the ground. The first prompt information may also be a voice message, for example announcing "please do not aim the camera device at the ground", or a text message, for example a text box displaying "please do not aim the camera device at the ground"; none of these is limiting. In one specific disclosed implementation scenario, if the first prompt information has been output continuously for a preset duration (e.g., 15, 20, or 25 seconds), it can be assumed either that after several adjustments the included angle is still not smaller than the preset angle threshold, or that the user has not adjusted the angle in response to the prompt within that duration. To improve the robustness of visual positioning, second prompt information may then be output to indicate that positioning has failed. Further, when the second prompt information is output, visual positioning may exit directly; when the user is detected to start visual positioning again, step S201 is executed again, so that visual positioning resumes from step S201.
Step S208: re-execute step S201.
After the first prompt information is output to prompt the user to reduce the included angle between the optical axis of the camera device and the horizontal plane, step S201 may be executed again, so that visual positioning resumes from step S201. In one specific disclosed implementation scenario, to ensure that the newly acquired included angle is the one adjusted in response to the first prompt information, step S201 may be executed only after a preset waiting period (e.g., 2 or 4 seconds) following the prompt, so that visual positioning resumes from step S201.
Step S209: output third prompt information to prompt the user to change the shooting environment of the camera device.
In one disclosed implementation scenario, to enhance the user experience, the third prompt information may be an animation that visually prompts the user to change the shooting environment, for example by showing a scene switching from night to day to suggest moving the camera device to better lighting. It may also be a text message, for example a text box displaying "please move the camera device to a place with sufficient light", which is not limited herein. In one specific disclosed implementation scenario, if the third prompt information has been output continuously for a first preset duration (e.g., 15, 20, or 25 seconds), it can be assumed either that the illumination intensity still fails the preset illumination condition after several adjustments of the shooting environment, or that the user has not responded to the prompt within that duration. To improve the robustness of visual positioning, fourth prompt information may then be output to indicate that positioning has failed. Further, when the fourth prompt information is output, visual positioning may exit directly; when the user is detected to start visual positioning again, step S201 is executed again, so that visual positioning resumes from step S201.
Step S210: re-execute step S201.
After the third prompt information is output to prompt the user to change the shooting environment, step S201 may be executed again, so that visual positioning resumes from step S201. In one specific disclosed implementation scenario, to ensure that the newly acquired illumination intensity is that of the adjusted shooting environment, step S201 may be executed only after a preset waiting period (e.g., 2 or 4 seconds) following the prompt, so that visual positioning resumes from step S201.
Different from the foregoing embodiment, the included angle between the optical axis of the camera device and the horizontal plane is acquired before the illumination intensity of the shooting environment, and the illumination-intensity step is executed only when the included angle is smaller than the preset angle threshold. This effectively prevents the camera device from capturing an image that consists mostly of the ground, ensures the image quality of the image to be processed for visual positioning, and improves the success rate and accuracy of visual positioning.
Referring to fig. 3, fig. 3 is a schematic flowchart of yet another embodiment of the visual positioning method of the present application. Specifically, to further ensure the image quality of the image to be processed for visual positioning, a preprocessing step may again be performed. In the embodiments of the present disclosure, this preprocessing may include detecting the included angle between the optical axis of the camera device and the horizontal plane, and additionally assessing the feature richness of the captured image, so that the camera device captures as little of the ground as possible and the captured image is as feature-rich as possible. This improves image quality and thus the success rate and accuracy of visual positioning. Specifically, the embodiment may include the following steps:
Step S301: acquire the included angle between the optical axis of the camera device and the horizontal plane.
In one disclosed implementation scenario, when the user is detected to trigger visual positioning, the step of acquiring the included angle between the optical axis of the camera device and the horizontal plane may be started, so that visual positioning begins. Reference may be made in particular to the relevant steps in the preceding embodiments.
Step S302: judge whether the included angle is smaller than the preset angle threshold; if so, execute step S303; otherwise, execute step S310.
Reference may be made in particular to the relevant steps in the preceding embodiments.
Step S303: acquire the illumination intensity of the shooting environment in which the camera device is located.
Reference may be made in particular to the relevant steps in the preceding embodiments.
Step S304: judge whether the illumination intensity satisfies the preset illumination condition; if so, execute step S305; otherwise, execute step S312.
Reference may be made in particular to the relevant steps in the preceding embodiments.
Step S305: acquire the image to be processed captured by the camera device in the shooting environment.
Reference may be made in particular to the relevant steps in the preceding embodiments.
Step S306: acquire the feature information of the image to be processed.
In one disclosed implementation scenario, the feature information of the image to be processed may include its information entropy. Specifically, the image may be converted into a grayscale image; for each pixel value (e.g., 0 to 255), the ratio of the number of pixels with that value to the total number of pixels is computed, and these ratios are processed with a preset function to obtain the information entropy of the image.
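A minimal Python sketch of this computation follows, using the Shannon entropy of the grayscale histogram as the preset function; the patent does not fix the exact function, so this choice is an assumption.

    import cv2
    import numpy as np

    def image_entropy(image_bgr):
        """Shannon entropy (in bits) of the grayscale histogram of an image."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        counts = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
        p = counts / counts.sum()              # ratio of each pixel value
        p = p[p > 0]                           # drop empty bins before the log
        return float(-(p * np.log2(p)).sum())  # 0 (flat image) .. 8 bits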
In another disclosed implementation scenario, the feature information may instead include the number of types of target objects (e.g., guideboards, trees, buildings, lamp posts, billboards) contained in the image to be processed. Specifically, a trained neural network model may be used to detect the image to be processed and obtain the number of target-object types it contains.
In yet another disclosed implementation scenario, the feature information may include both the information entropy of the image to be processed and the number of target-object types in it. Specifically, a weight may be assigned to each; the information entropy and the type count are then weighted with their respective weights, and the weighted result is used as the feature information of the image to be processed.
Step S307: score the feature information of the image to be processed in a preset scoring manner to obtain the feature score of the image to be processed.
In the embodiments of the present disclosure, the feature score represents the feature richness of the image to be processed. For example, when the feature information includes the information entropy, a mapping between entropy and feature score may be preset such that a larger entropy yields a larger score; when it includes the number of target-object types, a mapping between type count and feature score may be preset such that more types yield a larger score; and when it includes the weighted combination of entropy and type count, a mapping between the weighted result and the feature score may be preset such that a larger weighted value yields a larger score. None of these mappings is limiting.
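A hedged sketch of such a preset scoring scheme follows: normalize the entropy and the type count, combine them with preset weights, and compare against a score threshold. All weights, normalizers, and the threshold are illustrative assumptions.

    ENTROPY_WEIGHT = 0.6           # assumed preset weight for information entropy
    CLASS_COUNT_WEIGHT = 0.4       # assumed preset weight for object-type count
    MAX_ENTROPY_BITS = 8.0         # upper bound for an 8-bit grayscale image
    MAX_CLASS_COUNT = 10           # assumed cap on distinct target-object types
    SCORE_THRESHOLD = 0.5          # assumed preset score threshold

    def feature_score(entropy_bits, class_count):
        """Weighted, normalized combination of the two feature measures."""
        e = min(entropy_bits / MAX_ENTROPY_BITS, 1.0)
        c = min(class_count / MAX_CLASS_COUNT, 1.0)
        return ENTROPY_WEIGHT * e + CLASS_COUNT_WEIGHT * c

    def rich_enough(entropy_bits, class_count):
        return feature_score(entropy_bits, class_count) >= SCORE_THRESHOLD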
Step S308: judge whether the feature score is greater than or equal to the preset score threshold; if so, execute step S309; otherwise, execute step S314.
The preset score threshold may be set according to the actual situation, which is not limited herein. When the feature score of the image to be processed is greater than or equal to the preset score threshold, the image can be considered feature-rich, so positioning processing can be performed on it to obtain the position information of the camera device. Conversely, when the feature score is below the threshold, the image can be considered feature-poor: for example, it may have been captured with the camera aimed at a white wall, a green belt, or the sky, so that its features are too sparse to support subsequent visual positioning. In that case, to improve the user experience and the robustness of visual positioning, the user can be prompted to adjust the shooting picture.
Step S309: obtain the position information of the camera device based on the image to be processed.
Reference may be made in particular to the relevant steps in the preceding embodiments.
Step S310: output first prompt information to prompt the user to reduce the included angle between the optical axis of the camera device and the horizontal plane.
Reference may be made in particular to the relevant steps in the preceding embodiments.
Step S311: re-execute step S301.
After the first prompt information is output, visual positioning may be performed again from step S301; for details, refer to the related steps in the foregoing embodiments.
Step S312: output third prompt information to prompt the user to change the shooting environment of the camera device.
Reference may be made in particular to the relevant steps in the preceding embodiments.
Step S313: re-execute step S301.
After the third prompt information is output, visual positioning may be performed again from step S301; for details, refer to the related steps in the foregoing embodiments.
Step S314: output fifth prompt information to prompt the user to adjust the shooting picture of the camera device.
In one disclosed implementation scenario, to enhance the user experience, the fifth prompt information may be an animation that visually prompts the user to adjust the shooting picture of the camera device, for example displaying a white wall, a street, and a right arrow (→) between them to suggest turning the camera toward a feature-rich scene. It may also be a voice message, for example announcing "please move the camera device to a location with rich features", or a text message displaying the same sentence; none of these is limiting. In one specific disclosed implementation scenario, if the fifth prompt information has been output continuously for a second preset duration, it can be assumed either that the captured picture still lacks feature richness after several adjustments, or that the picture was not adjusted at all within that duration. To improve the robustness of visual positioning, sixth prompt information may then be output to indicate that positioning has failed. The second preset duration may be set according to the actual situation, for example 25, 20, or 15 seconds, which is not limited herein. Further, when the sixth prompt information is output, visual positioning may exit directly; when the user is detected to start visual positioning again, step S301 is executed again, so that visual positioning resumes from step S301.
Step S315: step S301 is re-executed.
After the fifth prompt information is output to prompt the user to adjust the shooting picture of the camera device, step S301 may be executed again, so that visual positioning resumes from step S301. In one specific disclosed implementation scenario, to ensure that the newly acquired image was captured after the shooting picture was adjusted, step S301 may be executed only after a preset waiting period (e.g., 2 or 4 seconds) following the prompt, so that visual positioning resumes from step S301.
Different from the foregoing embodiments, scoring the feature information of the image to be processed in a preset scoring manner yields a feature score that represents its feature richness. When the feature score is greater than or equal to the preset score threshold, the step of obtaining the position information of the camera device based on the image to be processed is executed; when it is below the threshold, prompt information is output to prompt the user to adjust the shooting picture, and the step of acquiring the included angle between the optical axis and the horizontal plane is executed again. This ensures the feature richness of the image to be processed, effectively improves its image quality for visual positioning, and thereby improves the success rate, accuracy, and robustness of visual positioning.
Referring to fig. 4, fig. 4 is a schematic framework diagram of an embodiment of a visual positioning apparatus 40 of the present application. The visual positioning apparatus 40 includes a first acquisition module 41, a second acquisition module 42, and a third acquisition module 43. The first acquisition module 41 is configured to acquire the illumination intensity of the shooting environment in which the camera device is located; the second acquisition module 42 is configured to acquire the image to be processed captured by the camera device in that shooting environment when the illumination intensity satisfies the preset illumination condition; and the third acquisition module 43 is configured to obtain the position information of the camera device based on the image to be processed.
According to the scheme, the illumination intensity of the shooting environment in which the camera device is located is acquired, and the image to be processed is acquired only when the illumination intensity satisfies the preset illumination condition, so that the position information of the camera device is obtained from an image guaranteed to have been captured in a shooting environment that meets the illumination condition. This effectively improves the image quality of the image to be processed used for visual positioning, and thus the success rate and accuracy of visual positioning.
In some disclosed embodiments, the preset illumination condition is that the illumination intensity is greater than or equal to a preset intensity threshold.
Different from the foregoing embodiments, this ensures that the image to be processed for visual positioning is captured in a shooting environment whose illumination intensity is greater than or equal to the intensity threshold, which effectively ensures its image quality and further improves the success rate and accuracy of visual positioning.
In some disclosed embodiments, the visual positioning apparatus 40 further includes a fourth acquisition module configured to acquire the included angle between the optical axis of the camera device and the horizontal plane; the first acquisition module 41 is specifically configured to execute, together with the second acquisition module 42 and the third acquisition module 43, the step of acquiring the illumination intensity of the shooting environment when the included angle is smaller than the preset angle threshold.
Different from the foregoing embodiments, the included angle between the optical axis of the camera device and the horizontal plane is acquired before the illumination intensity of the shooting environment, and the illumination-intensity step is executed only when the included angle is smaller than the preset angle threshold. This effectively prevents the camera device from capturing an image that consists mostly of the ground, ensures the image quality of the image to be processed for visual positioning, and improves the success rate and accuracy of visual positioning.
In some disclosed embodiments, the visual positioning apparatus 40 further includes an information output module configured to output first prompt information when the illumination intensity does not satisfy the preset illumination condition, the first prompt information prompting the user to change the shooting environment of the camera device; the step of acquiring the included angle between the optical axis of the camera device and the horizontal plane is then executed again together with the fourth acquisition module, the first acquisition module 41, the second acquisition module 42, and the third acquisition module 43.
Different from the foregoing embodiments, when the illumination intensity does not satisfy the preset illumination condition, outputting the first prompt information prompts the user to change the shooting environment of the camera device, and re-executing the step of acquiring the included angle improves both the user experience and the robustness of visual positioning.
In some disclosed embodiments, the information output module is further configured to output second prompt information when the first prompt information has been output continuously for the first preset duration, the second prompt information prompting that positioning has failed.
Different from the foregoing embodiments, outputting the second prompt information when the first prompt information has persisted for the first preset duration improves the robustness of visual positioning.
In some disclosed embodiments, the information output module is further configured to output a third prompt message when the included angle is greater than or equal to the preset angle threshold, where the third prompt message is used to prompt to reduce the included angle between the optical axis of the image pickup device and the horizontal plane, and the step of acquiring the included angle between the optical axis of the image pickup device and the horizontal plane is executed again in combination with the fourth acquisition module, the first acquisition module 41, the second acquisition module 42, and the third acquisition module 43.
Different from the aforementioned disclosed embodiment, when the included angle is greater than or equal to the preset angle threshold, the third prompt information is output to prompt the user to reduce the included angle between the optical axis of the camera device and the horizontal plane, and the step of acquiring that included angle is executed again. This effectively avoids capturing a to-be-processed image that shows only the ground, ensures the image quality of the to-be-processed image used for visual positioning, and thereby improves the success rate, accuracy, and robustness of visual positioning.
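Combining the angle estimate sketched earlier with the same prompt-and-retry pattern gives a loop such as the following; the timeout is an added assumption, since the disclosure specifies a preset duration only for the first and fourth prompts.

```python
import time

ANGLE_THRESHOLD_DEG = 30.0  # hypothetical preset angle threshold


def wait_for_level_camera(read_angle_deg, timeout_s: float = 10.0) -> bool:
    """Prompt the user to bring the optical axis closer to the horizontal
    until the included angle drops below the preset threshold."""
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        if abs(read_angle_deg()) < ANGLE_THRESHOLD_DEG:
            return True
        print("Please point the camera closer to the horizon")  # third prompt
        time.sleep(0.5)
    return False
```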
In some disclosed embodiments, the visual positioning apparatus 40 further includes a fifth obtaining module configured to obtain feature information of the image to be processed, and a feature scoring module configured to score the feature information of the image to be processed in a preset scoring manner to obtain a feature score of the image to be processed, where the feature score indicates the feature richness of the image to be processed. The third obtaining module 43 is specifically configured to execute the step of obtaining the position information of the image pickup device based on the image to be processed if the feature score is greater than or equal to a preset scoring threshold. If the feature score is smaller than the preset scoring threshold, the information output module outputs fourth prompt information, which is used to prompt the user to adjust the shooting picture of the image pickup device; the step of acquiring the included angle between the optical axis of the image pickup device and the horizontal plane is then executed again in combination with the fourth obtaining module, the first obtaining module 41, the second obtaining module 42, the third obtaining module 43, the fifth obtaining module, and the feature scoring module.
Different from the foregoing disclosed embodiment, scoring the feature information of the image to be processed in a preset scoring manner yields a feature score that represents the feature richness of the image. When the feature score is greater than or equal to the preset scoring threshold, the step of obtaining the position information of the camera device based on the image to be processed is executed; when it is smaller than the threshold, the fourth prompt information is output to prompt the user to adjust the shooting picture, and the step of acquiring the included angle between the optical axis of the camera device and the horizontal plane is executed again. This guarantees the feature richness of the image to be processed, effectively improves the image quality available for visual positioning, and thereby improves the success rate, accuracy, and robustness of visual positioning.
In some disclosed embodiments, the information output module is further configured to output fifth prompt information if the fourth prompt information has been output continuously for a second preset duration, where the fifth prompt information is used to prompt that positioning has failed.
Different from the foregoing disclosed embodiment, outputting the fifth prompt information to report a positioning failure once the fourth prompt information has been output continuously for the second preset duration improves the robustness of visual positioning.
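The disclosure leaves the preset scoring manner open. One plausible instantiation, sketched below with OpenCV, scores feature richness as the fraction of an ORB detector's feature budget actually found in the image; the detector choice, the feature budget, and the 0.5 threshold are illustrative assumptions.

```python
import cv2

SCORE_THRESHOLD = 0.5  # hypothetical preset scoring threshold
MAX_FEATURES = 500     # hypothetical feature budget


def feature_score(image_bgr) -> float:
    """Score feature richness as the fraction of the detector's feature
    budget actually detected in the image (one possible preset scoring
    manner; the disclosure does not fix a particular one)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=MAX_FEATURES)
    keypoints = orb.detect(gray, None)
    return min(1.0, len(keypoints) / MAX_FEATURES)


def image_usable_for_positioning(image_bgr) -> bool:
    if feature_score(image_bgr) >= SCORE_THRESHOLD:
        return True
    print("Please aim the camera at a more textured scene")  # fourth prompt
    return False
```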
In some disclosed embodiments, the third obtaining module 43 includes an image information obtaining sub-module configured to obtain image information of the image to be processed, an information sending sub-module configured to send the image information to a server for positioning processing, and an information receiving sub-module configured to receive the position information of the image pickup device that the server computes from the image information.
Different from the foregoing disclosed embodiment, sending the image information of the image to be processed to a server for positioning processing and receiving the position information of the image pickup device computed by the server offloads the positioning computation, reducing the processing load on the front-end electronic device.
In some disclosed embodiments, the image information includes: the image data of the image to be processed, the width and height information of the image to be processed, and the focal length information used by the camera device when capturing the image to be processed.
Different from the foregoing disclosed embodiment, packaging the image data of the image to be processed, its width and height information, and the focal length information used to capture it as the image information provides the server with everything it needs for visual positioning, which ensures the accuracy of the server-side positioning processing and thereby improves the accuracy of visual positioning.
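These items map naturally onto a small request payload. The sketch below sends them to a positioning server over HTTP; the endpoint URL, the JSON field names, the base64 encoding, and the shape of the response are all illustrative assumptions rather than an interface defined by the disclosure.

```python
import base64
import json
import urllib.request


def request_position(jpeg_bytes: bytes, width: int, height: int,
                     focal_length_px: float,
                     server_url: str = "http://example.com/locate") -> dict:
    """Send the image information to the server for positioning processing
    and return the position information it computes."""
    payload = {
        "image_data": base64.b64encode(jpeg_bytes).decode("ascii"),
        "width": width,
        "height": height,
        "focal_length": focal_length_px,
    }
    request = urllib.request.Request(
        server_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        # e.g. {"position": [x, y, z], "orientation": [qw, qx, qy, qz]}
        return json.loads(response.read())
```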
Referring to fig. 5, fig. 5 is a schematic block diagram of an embodiment of an electronic device 50 according to the present application. The electronic device 50 includes a memory 51 and a processor 52 coupled to each other, and the processor 52 is configured to execute program instructions stored in the memory 51 to implement the steps of any of the embodiments of the visual positioning method described above. In one specific implementation scenario, the electronic device 50 may include, but is not limited to: a mobile terminal with an integrated camera, such as a mobile phone or a tablet computer, or a vehicle-mounted navigation system connected to a camera; no limitation is made here.
Specifically, the processor 52 is configured to control itself and the memory 51 to implement the steps of any of the embodiments of the visual positioning method described above. The processor 52 may also be referred to as a CPU (Central Processing Unit) and may be an integrated circuit chip with signal processing capability. The processor 52 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components; a general-purpose processor may be a microprocessor or any conventional processor. In addition, the processor 52 may be implemented jointly by multiple integrated circuit chips.
By means of the above scheme, the image to be processed is captured in a shooting environment that satisfies the illumination condition, which effectively improves the image quality of the image used for visual positioning and thereby improves the success rate and accuracy of visual positioning.
Referring to fig. 6, fig. 6 is a block diagram of an embodiment of a computer-readable storage medium 60 according to the present application. The computer-readable storage medium 60 stores program instructions 601 executable by a processor, the program instructions 601 being used to implement the steps of any of the embodiments of the visual positioning method described above.
By means of the above scheme, the image to be processed is captured in a shooting environment that satisfies the illumination condition, which effectively improves the image quality of the image used for visual positioning and thereby improves the success rate and accuracy of visual positioning.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is only one kind of logical division, and other divisions are possible in actual implementation; units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or of another form.
Units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part that contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Claims (13)

1. A visual positioning method, comprising:
acquiring the illumination intensity of the shooting environment where the camera device is located;
if the illumination intensity meets a preset illumination condition, acquiring an image to be processed shot by the camera device in the shooting environment;
and obtaining the position information of the camera device based on the image to be processed.
2. The visual positioning method of claim 1, wherein the preset lighting condition is that the lighting intensity is greater than or equal to a preset intensity threshold.
3. The visual positioning method according to any one of claims 1 to 2, wherein before the obtaining of the illumination intensity of the shooting environment in which the image pickup device is located, the method further comprises:
acquiring an included angle between an optical axis of the camera device and a horizontal plane;
and if the included angle is smaller than a preset angle threshold value, executing the step of obtaining the illumination intensity of the shooting environment where the camera device is located and the subsequent steps.
4. The visual positioning method of claim 3, further comprising:
if the illumination intensity does not meet the preset illumination condition, outputting first prompt information, wherein the first prompt information is used for prompting a change of the shooting environment in which the camera device is located;
and re-executing the step of acquiring the included angle between the optical axis of the camera device and the horizontal plane.
5. The visual positioning method of claim 4, further comprising:
and outputting second prompt information if the first prompt information is continuously output within a first preset duration, wherein the second prompt information is used for prompting a positioning failure.
6. The visual positioning method of claim 3, further comprising:
outputting third prompt information when the included angle is greater than or equal to the preset angle threshold, wherein the third prompt information is used for prompting a reduction of the included angle between the optical axis of the camera device and the horizontal plane;
and re-executing the step of acquiring the included angle between the optical axis of the camera device and the horizontal plane.
7. The visual positioning method according to claim 3, wherein after the acquiring of the to-be-processed image captured by the imaging device in the capturing environment and before the obtaining of the position information of the imaging device based on the to-be-processed image, the method further comprises:
acquiring characteristic information of the image to be processed;
scoring the feature information of the image to be processed in a preset scoring manner to obtain a feature score of the image to be processed, wherein the feature score is used for representing the feature richness of the image to be processed;
executing the step of obtaining the position information of the camera device based on the image to be processed when the feature score is greater than or equal to a preset scoring threshold;
and outputting fourth prompt information when the feature score is smaller than the preset scoring threshold, wherein the fourth prompt information is used for prompting adjustment of a shooting picture of the camera device, and the step of acquiring the included angle between the optical axis of the camera device and the horizontal plane is executed again.
8. The visual positioning method of claim 7, further comprising:
and outputting fifth prompt information if the fourth prompt information is continuously output within a second preset duration, wherein the fifth prompt information is used for prompting a positioning failure.
9. The visual positioning method according to any one of claims 1 to 8, wherein the obtaining position information of the image pickup device based on the image to be processed includes:
acquiring image information of the image to be processed;
sending the image information to a server for positioning processing;
and receiving the position information of the camera device obtained by the server based on the image information processing.
10. The visual positioning method of claim 9, wherein the image information comprises: image data of the image to be processed, width and height information of the image to be processed, and focal length information used by the image pickup device to capture the image to be processed.
11. A visual positioning device, comprising:
the first acquisition module is used for acquiring the illumination intensity of the shooting environment where the camera device is located;
the second acquisition module is used for acquiring the to-be-processed image shot by the camera device in the shooting environment when the illumination intensity meets a preset illumination condition;
and the third acquisition module is used for acquiring the position information of the camera device based on the image to be processed.
12. An electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the visual positioning method of any one of claims 1 to 10.
13. A computer readable storage medium having stored thereon program instructions which, when executed by a processor, implement the visual positioning method of any of claims 1 to 10.
CN202010556359.2A 2020-06-17 2020-06-17 Visual positioning method, related device, equipment and storage medium Active CN111583343B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010556359.2A CN111583343B (en) 2020-06-17 2020-06-17 Visual positioning method, related device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010556359.2A CN111583343B (en) 2020-06-17 2020-06-17 Visual positioning method, related device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111583343A true CN111583343A (en) 2020-08-25
CN111583343B CN111583343B (en) 2023-11-07

Family

ID=72111294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010556359.2A Active CN111583343B (en) 2020-06-17 2020-06-17 Visual positioning method, related device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111583343B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348888A (en) * 2020-09-09 2021-02-09 北京市商汤科技开发有限公司 Display equipment positioning method and device, display equipment and computer storage medium
CN112950713A (en) * 2021-02-25 2021-06-11 深圳市慧鲤科技有限公司 Positioning method and device, electronic equipment and storage medium
CN112950714A (en) * 2021-02-25 2021-06-11 深圳市慧鲤科技有限公司 Positioning method and device, electronic equipment and storage medium
CN113587917A (en) * 2021-07-28 2021-11-02 北京百度网讯科技有限公司 Indoor positioning method, device, equipment, storage medium and computer program product

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018120074A1 (en) * 2016-12-30 2018-07-05 天彩电子(深圳)有限公司 Night-vision switching method for monitoring photographing apparatus, and system thereof
CN109229400A (en) * 2017-07-10 2019-01-18 深圳市道通智能航空技术有限公司 The control method and device of aircraft, aircraft
CN109639973A (en) * 2018-12-21 2019-04-16 中国科学院自动化研究所南京人工智能芯片创新研究院 Shoot image methods of marking, scoring apparatus, electronic equipment and storage medium
CN110132274A (en) * 2019-04-26 2019-08-16 中国铁道科学研究院集团有限公司电子计算技术研究所 A kind of indoor orientation method, device, computer equipment and storage medium
CN110853185A (en) * 2019-11-29 2020-02-28 长城汽车股份有限公司 Vehicle panoramic all-round looking recording system and method

Also Published As

Publication number Publication date
CN111583343B (en) 2023-11-07

Similar Documents

Publication Publication Date Title
CN111583343B (en) Visual positioning method, related device, equipment and storage medium
US20210312214A1 (en) Image recognition method, apparatus and non-transitory computer readable storage medium
CN107534789B (en) Image synchronization device and image synchronization method
CN109661812B (en) Multi-viewpoint camera system, three-dimensional space reconstruction system and three-dimensional space identification system
JP2022103160A (en) Three-dimensional information processing method and three-dimensional information processing device
JP6622308B2 (en) Interactive binocular video display
EP3457380A1 (en) Traffic accident pre-warning method and traffic accident pre-warning device
US9928710B2 (en) Danger alerting method and device, portable electronic apparatus
JPWO2019225681A1 (en) Calibration equipment and calibration method
JP6123120B2 (en) Method and terminal for discovering augmented reality objects
US20160178728A1 (en) Indoor Positioning Terminal, Network, System and Method
US8965040B1 (en) User correction of pose for street-level images
CN111724437B (en) Visual positioning method and related device, equipment and storage medium
CN111667089A (en) Intelligent disaster prevention system and intelligent disaster prevention method
US9438340B2 (en) Communication method
CN111222408A (en) Method and apparatus for improved location decision based on ambient environment
WO2021057244A1 (en) Light intensity adjustment method and apparatus, electronic device and storage medium
US20210168279A1 (en) Document image correction method and apparatus
US20210407052A1 (en) Method for processing image, related device and storage medium
WO2021005659A1 (en) Information processing system, sensor system, information processing method, and program
CN104426957A (en) Method for sending configuration parameter, and method and device for receiving configuration parameter
CN110796580A (en) Intelligent traffic system management method and related products
CN110864913A (en) Vehicle testing method and device, computer equipment and storage medium
CN111337049A (en) Navigation method and electronic equipment
JP6019114B2 (en) Pedestrian gait recognition method and device for portable terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant