CN116664684A - Positioning method, electronic device and computer readable storage medium


Info

Publication number
CN116664684A
Authority
CN
China
Prior art keywords: image, determining, electronic device, matched, images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211603109.5A
Other languages
Chinese (zh)
Other versions
CN116664684B (en)
Inventor
赵渊
曹鹏蕊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202211603109.5A priority Critical patent/CN116664684B/en
Publication of CN116664684A publication Critical patent/CN116664684A/en
Application granted granted Critical
Publication of CN116664684B publication Critical patent/CN116664684B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/53 Querying
    • G06F16/535 Filtering based on additional data, e.g. user or group profiles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image

Abstract

The application provides a positioning method, an electronic device, and a computer readable storage medium. The positioning method includes the following steps: acquiring a first image captured by an electronic device; identifying a first identification image in the first image; determining a position area corresponding to the first image according to position data corresponding to the first identification image and pose change information of the electronic device within a preset period; searching a database for images whose corresponding positions are located in the position area, and determining images to be matched of the first image; and determining a target pose of the electronic device according to the first image and map information corresponding to the images to be matched. Determining the position area corresponding to the first image narrows the image search range, which in turn shortens the time needed to determine the target pose of the electronic device.

Description

Positioning method, electronic device and computer readable storage medium
Technical Field
The present application relates to the field of image processing, and in particular, to a positioning method, an electronic device, and a computer readable storage medium.
Background
In the positioning field, common positioning technologies include ultra-wideband positioning, Bluetooth positioning, WIFI positioning, visual positioning (Visual Positioning System, VPS), and the like. Visual positioning is widely applied because of its low cost and high positioning precision.
Visual positioning generally requires retrieving and matching an image captured by an electronic device against a large number of images in a database to determine the pose of the electronic device when the image was captured. This image retrieval and comparison process takes a long time, so positioning on the electronic device is slow.
Disclosure of Invention
The application provides a positioning method, an electronic device, and a computer readable storage medium, which address the problem in the prior art that the positioning process of an electronic device takes a long time.
In order to achieve the above purpose, the application adopts the following technical scheme:
in a first aspect, a positioning method is provided, applied to an electronic device, and includes:
acquiring a first image captured by the electronic device; identifying a first identification image in the first image; determining a position area corresponding to the first image according to position data corresponding to the first identification image and pose change information of the electronic device within a preset period; searching a database for images whose corresponding positions are located in the position area, and determining images to be matched of the first image; and determining a target pose of the electronic device according to the first image and map information corresponding to the images to be matched.
In the above embodiment, after the first image captured by the electronic device is obtained, the first identification image in the first image is identified; the position area corresponding to the first image is determined according to the position data corresponding to the first identification image and the pose change information of the electronic device within the preset period; and the database is then searched for images located in the position area to obtain the images to be matched of the first image, which narrows the image search range. Finally, the target pose of the electronic device is determined according to the first image and the map information corresponding to the images to be matched, thereby shortening the time needed to determine the target pose of the electronic device.
In an embodiment, the determining the target pose of the electronic device according to the first image and the map information corresponding to the images to be matched includes: determining a target matching image from the images to be matched, wherein the target matching image includes the first identification image; and determining the target pose of the electronic device according to the first image and the map information corresponding to the target matching image.
By determining the target matching images, the matching precision can be improved, the image matching quantity is reduced, and the calculation speed and the calculation precision of the target pose are further improved.
In an embodiment, the determining the target matching image from the images to be matched includes: and determining a target matching image from the images to be matched according to the identification image information corresponding to each image to be matched stored in the database. By storing the identification image information of the images in the database in advance, the speed of image searching is improved, and the calculation efficiency is further improved.
In an embodiment, the determining, according to the map information corresponding to the first image and the target matching image, the target pose of the electronic device includes: and carrying out feature point matching on the first image and the target matching image, and determining the target pose of the electronic equipment according to the feature point matching result and map information corresponding to the target matching image. And through feature point matching, the accuracy of the determined target pose of the electronic equipment is improved.
In an embodiment, before searching the database for images whose corresponding positions are located in the position area, the method further includes: acquiring first positioning information of the electronic device; and determining a first database identifier corresponding to the first positioning information. Correspondingly, the searching the database for images whose corresponding positions are located in the position area includes: searching, in the database corresponding to the first database identifier, for the images located in the position area. Determining the database in which the images are located improves the efficiency of subsequent image searching.
In an embodiment, the determining the target pose of the electronic device according to the first image and the map information corresponding to the images to be matched includes: grouping the images to be matched according to the map information corresponding to the images to be matched; determining, according to the first image and the map information corresponding to each group of images to be matched, a candidate pose corresponding to each group; and determining the target pose of the electronic device from the candidate poses, thereby improving the accuracy of the determined target pose.
In an embodiment, the preset period is the period between a first time and a second time, the first time being the time when the target pose of the electronic device was last determined, and the second time being the time when the first image captured by the electronic device is acquired. The determining the position area corresponding to the first image according to the position data corresponding to the first identification image and the pose change information of the electronic device within the preset period includes: determining the position area corresponding to the first image according to the position data corresponding to the first identification image, the last target pose of the electronic device, and the pose change information of the electronic device within the preset period. By recording the last target pose and combining it with the pose change information of the electronic device and the position data corresponding to the first identification image, the position area is narrowed, which reduces the number of image searches and improves calculation efficiency.
In an embodiment, before the capturing the first image captured by the electronic device, the method further includes: and responding to the positioning instruction input by the user, and outputting prompt information for shooting the identification image, so that the user can be prompted to shoot the image with the identification image, and the subsequent positioning efficiency is improved.
In a second aspect, a positioning device is provided, applied to an electronic device, and includes:
the storage module is used for acquiring a first image shot by the electronic equipment;
a processing module, configured to identify a first identification image in the first image; determine a position area corresponding to the first image according to position data corresponding to the first identification image and pose change information of the electronic device within a preset period; and search a database for images whose corresponding positions are located in the position area and determine images to be matched of the first image;
and the output module is used for determining the target pose of the electronic equipment according to the first image and the map information corresponding to the image to be matched.
In one embodiment, the output module is specifically configured to:
determining a target matching image from the images to be matched, wherein the target matching image comprises the first identification image;
And determining the target pose of the electronic equipment according to the map information corresponding to the first image and the target matching image.
In one embodiment, the output module is specifically configured to:
and determining a target matching image from the images to be matched according to the identification image information corresponding to each image to be matched stored in the database.
In one embodiment, the output module is specifically configured to:
and carrying out feature point matching on the first image and the target matching image, and determining the target pose of the electronic equipment according to the feature point matching result and map information corresponding to the target matching image.
In one embodiment, the processing module is specifically configured to:
acquiring first positioning information of the electronic equipment;
determining a first database identifier corresponding to the first positioning information;
correspondingly, the searching the database for images whose corresponding positions are located in the position area includes:
searching, in the database corresponding to the first database identifier, for the images located in the position area.
In one embodiment, the output module is specifically configured to:
grouping the images to be matched according to the map information corresponding to the images to be matched;
According to the first image and map information corresponding to each group of images to be matched, determining candidate poses corresponding to each group of images to be matched;
and determining the target pose of the electronic equipment from the candidate poses.
In an embodiment, the preset period is the period between a first time and a second time, the first time being the time when the target pose of the electronic device was last determined, and the second time being the time when the first image captured by the electronic device is acquired; the processing module is specifically configured to:
and determining a position area corresponding to the first image according to the position data corresponding to the first identification image, the last target pose of the electronic equipment and pose change information of the electronic equipment in a preset period.
In one embodiment, the storage module is specifically configured to:
and outputting prompt information of the shooting identification image in response to a positioning instruction input by a user.
In a third aspect, there is provided an electronic device comprising a processor for executing a computer program stored in a memory for implementing a positioning method as described in the first aspect above.
In a fourth aspect, a computer readable storage medium is provided, the computer readable storage medium storing a computer program, which when executed by a processor implements the positioning method according to the first aspect.
In a fifth aspect, there is provided a chip comprising a processor coupled to a memory, the processor executing a computer program or instructions stored in the memory to implement the positioning method as described in the first aspect above.
In a sixth aspect, a computer program product is provided which, when run on an electronic device, causes the electronic device to perform the positioning method according to the first aspect described above.
It will be appreciated that the advantages of the second to sixth aspects may be found in the relevant description of the first aspect, and are not described here again.
Drawings
FIG. 1 is a flow chart of a visual positioning method according to an embodiment of the present application;
FIG. 2 is a scene diagram of a positioning method according to an embodiment of the present application;
FIG. 3 is another scene diagram of a positioning method according to an embodiment of the present application;
fig. 4 is a flow chart of a positioning method according to an embodiment of the present application;
FIG. 5 is a flowchart of an algorithm for identifying a recognition model according to an embodiment of the present application;
FIG. 6 is a flowchart for determining a target pose according to an embodiment of the present application;
fig. 7 is a software architecture diagram of an electronic device according to an embodiment of the present application;
Fig. 8 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise.
Visual positioning generally requires comparing an image captured by an electronic device with a large number of images in a database to determine the pose of the electronic device when the image was captured. As shown in fig. 1, the visual positioning process generally includes acquiring a captured image, retrieving images to be matched from a database according to the captured image, extracting feature points from the captured image and the images to be matched, matching the feature points, calculating the pose of the electronic device according to the feature point matching result and the map information corresponding to the images to be matched, and finally outputting the pose of the electronic device.
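For orientation, the flow in fig. 1 can be summarized as the following minimal sketch (every function name and signature here is an illustrative assumption; the figure does not prescribe an implementation):

```python
# Minimal sketch of the generic visual-positioning flow of fig. 1.
from typing import Any, List, Tuple

def retrieve_candidates(query_image: Any, database: Any) -> List[Any]:
    """Retrieve images to be matched from the database."""
    raise NotImplementedError  # e.g., global-descriptor similarity search

def match_features(query_image: Any, candidate: Any) -> List[Tuple[int, int]]:
    """Extract feature points from both images and match them."""
    raise NotImplementedError  # e.g., ORB/SIFT + nearest-neighbour matching

def solve_pose(matches: List[Tuple[int, int]], map_info: Any) -> Any:
    """Compute the device pose from the matches and map information."""
    raise NotImplementedError  # e.g., PnP with RANSAC

def visual_positioning(query_image: Any, database: Any) -> Any:
    candidates = retrieve_candidates(query_image, database)
    scored = [(c, match_features(query_image, c)) for c in candidates]
    best, matches = max(scored, key=lambda cm: len(cm[1]))  # most matches wins
    return solve_pose(matches, best.map_info)  # output the device pose
```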
Since a database generally contains a large number of images, a large number of image searches are required during pose calculation, so determining the pose of the electronic device takes a long time.
Therefore, the application provides a positioning method: a first identification image in a first image captured by the electronic device is identified; a position area corresponding to the first image is determined according to position data corresponding to the first identification image and pose change information of the electronic device within a preset period; and the database is then searched for images located in the position area to obtain the images to be matched of the first image, which narrows the image search range. The target pose of the electronic device is then determined according to the first image and the map information corresponding to the images to be matched, thereby shortening the time needed to determine the pose of the electronic device.
The positioning method provided by the embodiment of the application is exemplified below.
The positioning method provided by the embodiment of the application can be executed in the electronic equipment or a server in communication with the electronic equipment. The following describes a positioning method provided by an embodiment of the present application, taking an electronic device as an example.
The electronic device in the embodiments of the present application may be a mobile phone, a tablet computer, a handheld computer, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, a media player, a wearable device, or another device that can be held/operated with one hand; the embodiments of the present application do not particularly limit the specific form/type of the electronic device. The electronic device includes, but is not limited to, a device running HarmonyOS (鸿蒙) or another operating system.
First, an application scenario of the positioning method provided by the embodiment of the present application is described.
Illustratively, in one application scenario, a user in a mall opens "Smart Life" on the mobile phone (which can be understood as a service presentation module) and inputs a visual positioning instruction on the "Smart Life" page. In response to the visual positioning instruction input by the user, the mobile phone acquires its first positioning information, which may be real-time positioning information or positioning information obtained at a preset time (for example, when the mobile phone is determined to have entered a preset area according to preset electronic fence information, or when the signal strength of the mobile phone is greater than a preset value). For example, according to the positioning information, the mobile phone displays on its interface that the current position is mall ×; the mobile phone may also, in response to a user instruction to switch the position information, display position information input by the user. Then, after detecting the user's instruction confirming the position information, the mobile phone opens its camera. As shown in fig. 2, when the mobile phone detects the user's shooting instruction, it captures a first image containing a first identification image (e.g., an "HONOR" image). As shown in fig. 3, when the mobile phone detects the user clicking the confirm button, it determines its current position according to the first identification image in the first image and outputs the current position. For example, the mobile phone outputs the current location: mall ×, floor ×, No. ×. The mobile phone may also output navigation options, such as "restaurant", "elevator", and "toilet", display the corresponding locations according to the option selected by the user, plan a corresponding path according to the location selected by the user, and display it on the mobile phone.
In another scenario, when a user is at a scenic spot, the mobile phone acquires its first positioning information in response to a visual positioning instruction input by the user. For example, the mobile phone displays on its interface that the current location is scenic spot ××. Then, following a user instruction, the mobile phone captures a first image containing a scenic-spot sign (for example, an image of pavilion ×), determines its current position according to the sign in the first image, and outputs the current position. For example, the mobile phone outputs the current location: the visitor service area at the south gate of scenic spot ×.
In another application scenario, when a robot determines that its current position is on road × and the signal strength is below a preset value, it instructs its camera to capture a first image containing a signboard or road sign, and determines its current position according to the signboard or road sign in the first image. For example, the robot determines that the current position is No. ××, road ××. The robot may then perform path planning or carry out a corresponding operation according to the current position.
The positioning method provided by the embodiment of the application is described in detail below.
As shown in fig. 4, a positioning method provided in an embodiment of the present application includes:
s401: and acquiring a first image shot by the electronic equipment.
In an embodiment, in response to a positioning instruction input by a user (either an operation on the display interface of the electronic device or a voice instruction), the electronic device starts its camera and acquires a first image captured by the camera.
After the camera is started, the electronic device may output a prompt to capture an image containing a sign. For example, if the current position is a mall, the device may give a voice prompt or display "please capture an image with a trademark" on the shooting interface; if the current position is a road, it may give a voice prompt or display "please capture an image with a landmark". Prompting the user to capture a first image containing a sign improves subsequent visual positioning efficiency.
In another embodiment, the user may input a positioning instruction after capturing the first image, and the electronic device determines the first image for positioning according to the positioning instruction input by the user.
S402: identifying a first identification image in the first image.
In an embodiment, the electronic device inputs the first image into an identification recognition model to obtain the label of the first identification image output by the model. The label of the first identification image represents the content of the first identification image and may be its name or number. The identification recognition model is obtained by training a preset classification model with a plurality of identification images as training samples. An identification image may be a brand image (e.g., an "HONOR" or "荣耀" (Glory) image), an image of a road sign, an image of a building, etc. The classification model may be based on artificial intelligence algorithms such as neural networks, deep learning, or clustering.
For example, the algorithm flow of the identification recognition model is shown in FIG. 5. The identification recognition model comprises a global feature extraction network, a region candidate network, a region pooling network, a region feature extraction network, a classification network, and a bounding box correction network. The captured original image is input into the identification recognition model, and the global feature extraction network extracts image features of the original image. The region candidate network determines a plurality of candidate boxes from the original image according to these image features; the position of each candidate box represents a candidate position of the identification image. The region pooling network segments the image within each candidate box from the original image according to the image features and the candidate boxes. The region feature extraction network extracts image features of the image within each candidate box. The classification network determines, from these per-box features, the image in the candidate box with the highest confidence and its category. The bounding box correction network determines the exact location of that highest-confidence candidate box, and the model finally outputs the location of the identification image (e.g., the "HONOR" image) in the original image and the category of the identification image.
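The network in FIG. 5 has the structure of a two-stage detector of the Faster R-CNN family (region proposals, per-region pooling and features, classification, box correction). As an illustration only, a pretrained two-stage detector from torchvision can be driven the same way; the patent's actual model would be trained on identification images, not the generic COCO classes this stand-in knows:

```python
# Stand-in for the identification recognition model of FIG. 5: a pretrained
# two-stage detector (Faster R-CNN). Output mirrors the figure: the location
# of the best candidate box plus its category and confidence.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)  # placeholder for the captured original image
with torch.no_grad():
    pred = model([image])[0]     # dict with "boxes", "labels", "scores"

if len(pred["scores"]) > 0:
    best = pred["scores"].argmax()       # highest-confidence candidate box
    print(pred["boxes"][best].tolist(),  # corrected box location in the image
          pred["labels"][best].item(),   # category of the detected object
          pred["scores"][best].item())   # confidence
```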
In an embodiment, if it is determined that a partial area of the identification image exists in the first image according to the identification result, the electronic device extracts an image feature of the partial area, compares the image feature with image features of all the identification images stored in the database, and determines a first identification image corresponding to the partial area.
In an embodiment, if it is determined from the recognition result that no first identification image exists in the first image, a prompt to re-shoot is output, prompting the user to capture an image containing an identification, which improves subsequent image matching efficiency.
In another embodiment, if it is determined that the first identification image does not exist in the first image according to the identification result, the electronic device may also directly search the database for the image to be matched of the first image in a subsequent calculation process, so as to determine the target pose of the electronic device.
In one embodiment, if the first image includes one first identification image, the label of that first identification image is output. If the first image includes a plurality of identification images, the labels of the plurality of identification images are determined at the same time.
In another embodiment, if the first image includes a plurality of identification images, the larger identification image may be selected as the first identification image, or the identification image with the fewest corresponding position data entries may be selected as the first identification image. For example, if the identification images in the first image include an elevator sign and an "HONOR" image, the number of position data entries corresponding to the "HONOR" image is 2, and the number corresponding to the elevator sign is 10, then the first identification image is determined to be the "HONOR" image.
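A minimal sketch of this selection rule, using the counts from the elevator/"HONOR" example above (the helper name is made up):

```python
# Prefer the recognized sign with the fewest stored position entries:
# it narrows the subsequent position area the most.
def choose_first_identification(position_counts: dict) -> str:
    return min(position_counts, key=position_counts.get)

print(choose_first_identification({"elevator": 10, "HONOR": 2}))  # -> "HONOR"
```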
S403: determining a position area corresponding to the first image according to the position data corresponding to the first identification image and pose change information of the electronic device within a preset period.
Specifically, the position data corresponding to each identification image is stored in the database in advance, and the electronic device queries the database to determine the position data corresponding to the first identification image. The position data corresponding to the first identification image represents the position of the first identification image. For example, a mall includes a plurality of shops, each of which corresponds to a location, with a trademark image on the doorway or a wall of the shop. The database stores the correspondence between trademark images and locations, where a location may be coordinates (e.g., longitude and latitude) or a shop number. For example, the correspondence between identification images and position data stored in the database for mall A is shown in Table 1.
| No. | Identification image | Address | Floor | Longitude | Latitude | Category |
|-----|----------------------|---------|-------|-----------|----------|----------|
| 1 | Staircase 1 | X road | Floor 1 | ×× | ×× | Traffic facility |
| 2 | HONOR | X road | Floor 1 | ×× | ×× | Shopping service |
| 3 | X food & beverage | X road | Floor 1 | ×× | ×× | Shopping service |
| 4 | XX beverage | X road | Floor 1 | ×× | ×× | Shopping service |
| 5 | Staircase 2 | X road | Floor 2 | ×× | ×× | Traffic facility |
| 6 | X jewelry | X road | Floor 2 | ×× | ×× | Shopping service |
| 7 | X glasses | X road | Floor 2 | ×× | ×× | Shopping service |

Table 1
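A sketch of how a Table 1 lookup might be backed in code (the schema and values are illustrative placeholders mirroring the ×× entries; the patent only requires that each identification image map to position data):

```python
# Position data keyed by identification image, mirroring Table 1.
SIGN_TABLE = [
    {"no": 1, "sign": "Staircase 1", "floor": 1, "lon": None, "lat": None,
     "category": "Traffic facility"},
    {"no": 2, "sign": "HONOR", "floor": 1, "lon": None, "lat": None,
     "category": "Shopping service"},
    {"no": 5, "sign": "Staircase 2", "floor": 2, "lon": None, "lat": None,
     "category": "Traffic facility"},
]

def position_data_for(sign: str) -> list:
    """All stored locations of a recognized sign; the same sign may occur
    at several positions (e.g., one brand on several floors)."""
    return [row for row in SIGN_TABLE if row["sign"] == sign]

print(position_data_for("HONOR"))
```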
The position area corresponding to the first image represents the position area of the electronic equipment obtained by coarsely positioning the electronic equipment.
The first image captured by the electronic device contains the first identification image, which indicates that the electronic device is located near the first identification image; therefore, the position data corresponding to the first identification image can reflect the position area corresponding to the first image. Using this position data to determine the position area corresponding to the first image reduces the probability of matching errors caused by similar scenes, and improves the precision of subsequent image retrieval and feature matching.
The pose change information of the electronic device within the preset period reflects its motion trajectory; from the pose of the electronic device before the preset period and the pose change accumulated within the preset period, the position area where the electronic device is located can be derived. The pose of the electronic device can be obtained from components such as an inertial measurement unit (IMU), a gyroscope sensor, or a six-axis sensor on the electronic device.
In an embodiment, a first location area is determined according to location data corresponding to the first identification image, and a second location area is determined according to pose change information of the electronic device within a preset period. The electronic equipment combines the first position area and the second position area to determine the position area corresponding to the first image.
In an embodiment, the area around the location of the first identification image is taken as the first location area. Illustratively, the position data corresponding to the first identification image is its coordinates (coordinates in the space map, or longitude and latitude), and the area consisting of coordinates within a preset range (for example, 20 meters) of those coordinates is taken as the first position area. Alternatively, the position data corresponding to the first identification image is the floor of the shop where the first identification image is located, and the area of that floor is taken as the first position area.
It may be understood that, if there are a plurality of first identification images, there may be a plurality of sets of corresponding position data, and the first position areas determined from these sets of position data may be one or several.
In an embodiment, the preset period is the period between a first time and a second time, the second time being the time when the first image captured by the electronic device is acquired. The electronic device records its pose at the first time and then, according to the pose change information within the preset period, determines the first pose of the electronic device at the time the first image is acquired (the second time); the second position area can be determined according to this first pose. For example, the height of the electronic device is determined according to the first pose; the floor where the electronic device is located can be determined from this height, and the entire area of that floor is used as the second location area. For another example, the horizontal coordinates of the electronic device are determined according to the first pose, and the region of coordinates within a preset range of those coordinates is used as the second position region. For another example, the three-dimensional coordinates of the electronic device are determined according to the first pose, and the region of spatial coordinates within a preset range of those three-dimensional coordinates is used as the second position region.
The first time may be the time when the target pose of the electronic device was last determined. For example, the electronic device records the target pose each time it determines one; the second position area is then determined according to the pose change information of the electronic device within the preset period.
The first time may also be the time when the electronic device acquired the first positioning information. For example, the electronic device first obtains the first positioning information, determines from it that the device is currently in mall A, and then determines the target pose of the electronic device (i.e., its specific position in mall A) according to a first image captured in mall A. When the electronic device determines from the first positioning information that it is near mall A, it records its current pose; this pose may be determined comprehensively from WIFI information, Bluetooth information, base station information, or information acquired by an air pressure sensor of the electronic device, or it may be a preset value, for example, the first floor of mall A. The electronic device then determines the second position area according to its pose change information within the preset period.
The electronic device may use the overlap of the first position area and the second position area as the position area corresponding to the first image, or, after determining the first position area, may screen the position area corresponding to the first image from the first position area according to the second position area. For example, if the first location area is the area of shops S1 to S5 in mall A and the second location area is the area of shops S2 to S6 in mall A, the location area corresponding to the first image is the area of shops S2 to S5. For another example, if the first location area consists of the area of shop S3 on floor 2 of mall A and the area of shop S8 on floor 5 of mall A, and the second location area is floor 5 of mall A, then the location area corresponding to the first image is determined to be the area of shop S8 on floor 5 of mall A.
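A minimal sketch of this combination step, assuming each area is represented as a set of discrete region identifiers (shop/floor cells); the patent equally allows radius-based areas around coordinates:

```python
# Combine the sign-based first area with the dead-reckoning second area.
def combine_areas(first_area: set, second_area: set) -> set:
    overlap = first_area & second_area
    # Assumption: fall back to the sign-based area if there is no overlap.
    return overlap if overlap else first_area

# Shops S1..S5 vs. shops S2..S6 in mall A -> S2..S5, as in the example above.
first = {f"A-S{i}" for i in range(1, 6)}
second = {f"A-S{i}" for i in range(2, 7)}
print(sorted(combine_areas(first, second)))
```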
By determining the position area corresponding to the first image, only images located in that area need to be searched subsequently, which narrows the image search range and improves search performance.
S404: searching the database for images whose corresponding positions are located in the position area, and determining the images to be matched of the first image.
Specifically, all images captured in a preset area (for example, the whole of mall A) are stored in the database, and each image corresponds to a position: the camera pose when the image was captured, i.e., the pose of the camera in the space map. The space map is a pre-constructed map, and the position area corresponding to the first image is a coordinate range in the space map. From the position corresponding to each image, the images located within the location area can be determined.
Feature extraction is performed on the first image and on each image whose corresponding position is within the position area, and the similarity between each such image and the first image is determined from the extracted features; images whose similarity is greater than a preset similarity are used as the images to be matched of the first image. The images with similarity greater than the preset similarity may be those whose similarity exceeds a set value, or the top N (e.g., 30) most similar images among those whose corresponding positions are located in the position area. The electronic device extracts the image features of each image within the position area and of the first image, and determines an image descriptor for each from these features. An image descriptor is a vector that reflects the global characteristics of an image. The electronic device computes the similarity between the image descriptor of each image in the position area and that of the first image; this is the similarity between that image and the first image.
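A sketch of this retrieval step using cosine similarity over global descriptors (the descriptor extractor itself, e.g., a CNN, is assumed given and is outside the sketch; N=30 follows the example above):

```python
import numpy as np

def images_to_match(query_desc: np.ndarray, area_descs: np.ndarray, n: int = 30):
    """Rank the database images inside the position area by descriptor
    similarity to the first image and keep the top N."""
    q = query_desc / np.linalg.norm(query_desc)
    d = area_descs / np.linalg.norm(area_descs, axis=1, keepdims=True)
    sims = d @ q                      # cosine similarity per area image
    order = np.argsort(-sims)[:n]     # indices, most similar first
    return order, sims[order]
```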
In an embodiment, before searching the database for images located in the preset area, the electronic device also obtains its first positioning information, determines a first database identifier according to the first positioning information, uses the database corresponding to the first database identifier as the database to search, and retrieves the images located in the location area from that database. For example, the server stores databases corresponding to a plurality of places (e.g., a mall, an office building, a high-speed rail station). The electronic device first acquires the first positioning information and, upon determining from it that the device is currently in mall A, downloads the database corresponding to mall A from the server, or uses the server-side database corresponding to mall A for subsequent image searching and matching. Later, when the signal strength is below the preset value and the user needs positioning, the electronic device captures a first image, determines the position area corresponding to the first image, and searches the database corresponding to mall A for images whose positions lie in that area; accurate positioning can thus still be achieved under poor signal conditions.
The first positioning information may be determined according to any one or more of positioning information of a global positioning system (Global Positioning System, GPS) of the electronic device, WIFI information connected to the electronic device, bluetooth information connected to the electronic device, communication information between the electronic device and a base station, and the like.
In another embodiment, the electronic device may determine the database for image searching based on the user's input. For example, when the user inputs a positioning instruction, the electronic device outputs the building identifiers within a preset range (for example, mall B, office building C, and subway station D) for the user to select, and the database corresponding to the selected building identifier is used as the database to search. The preset range can be determined from the last positioning information of the electronic device, so that accurate pose information can still be obtained through visual positioning when the electronic device cannot be accurately positioned via the network, base stations, or similar means.
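A minimal sketch of this database-selection step (all identifiers are illustrative):

```python
# Map coarse positioning info, or a building chosen by the user, to the
# identifier of the database used for subsequent image search.
BUILDING_DBS = {
    "mall B": "db_mall_b",
    "office building C": "db_office_c",
    "subway station D": "db_subway_d",
}

def database_for(building: str) -> str:
    return BUILDING_DBS[building]
```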
S405: determining the target pose of the electronic device according to the first image and the map information corresponding to the images to be matched.
In an embodiment, the map information corresponding to an image to be matched is the spatial position corresponding to some or all of the feature points in that image. Image features are extracted from the first image to obtain its feature points, and from each image to be matched to obtain its feature points. The electronic device matches the feature points in the first image with those in the image to be matched; if the description information of two feature points is consistent, the two feature points match, and the spatial position of the feature point in the image to be matched is taken as the spatial position of the corresponding feature point in the first image. The spatial positions of all or some of the feature points in the first image can thus be determined by feature matching. There may be a plurality of images to be matched; a feature point in the first image may match feature points in several of them, and its spatial position is then determined from the matching results of those images. For example, for a first feature point in the first image, the matching results of the several images may differ, with each matched feature point corresponding to a different spatial position; the several positions may be averaged, or one of them selected according to their distribution, as the spatial position of the first feature point. Then, according to the spatial positions corresponding to the feature points in the first image and the correspondence between the camera coordinate system of the electronic device and the map coordinate system, the pose of the electronic device can be determined.
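A sketch of this step with OpenCV: ORB feature matching against one matched database image, then PnP on the resulting 2D-3D correspondences. The intrinsics K and the per-feature 3D map points are assumed given; the patent does not name a specific feature type or solver:

```python
import cv2
import numpy as np

def solve_pose(query_gray, db_gray, db_points3d, K):
    """query_gray/db_gray: grayscale images; db_points3d[j]: the 3D map
    point stored for the j-th keypoint of the database image (assumption)."""
    orb = cv2.ORB_create(nfeatures=2000)
    kq, dq = orb.detectAndCompute(query_gray, None)
    kd, dd = orb.detectAndCompute(db_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(dq, dd)

    # Pair 2D points in the first image with the 3D points of their matches.
    pts2d = np.float32([kq[m.queryIdx].pt for m in matches])
    pts3d = np.float32([db_points3d[m.trainIdx] for m in matches])

    ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d, pts2d, K, None)
    return (rvec, tvec) if ok else None  # pose in map coordinates
```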
In an embodiment, after determining the image to be matched, determining a target matching image including the first identification image from the image to be matched, that is, filtering the image to be matched without the first identification image or without the identification image, and determining the pose of the electronic device according to the first image and map information corresponding to the target matching image, thereby improving the matching efficiency of the subsequent images.
Because the electronic device is located near the first identification image, an image captured near that location is likely to contain the first identification image. If an image to be matched does not contain the first identification image, the difference between its shooting position and that of the first image is large, meaning the overlap between its feature points and those of the first image is small, so matching its feature points contributes little to positioning. Therefore, images to be matched that contain neither the first identification image nor any identification image are filtered out, and feature matching is performed only between the first image and the images to be matched that contain the first identification image. This reduces the number of images to match and the number of matching operations, improves matching precision, shortens the time needed to determine the pose of the electronic device, and improves the precision of the subsequently calculated target pose.
In one embodiment, for all images captured in the preset area and stored in the database, the identification image information of each image is determined in advance; this information records whether an identification exists in the image and the name of the identification. After the images to be matched are determined, the target matching images can be determined from them according to the identification image information stored in the database for each image, which improves screening efficiency. For example, if the first identification image in the first image is an "HONOR" image, the images to be matched that contain the "HONOR" image are selected as the target matching images.
All images shot in a preset area stored in a database can be input into an identification recognition model to obtain identification image information corresponding to each image output by the identification recognition model.
In another embodiment, after the images to be matched are determined, the target matching images determined from them are those containing the first identification image or a portion of the first identification image. Images to be matched that contain neither the first identification image nor any identification image are filtered out, which reduces the number of matches while improving subsequent matching precision.
In an embodiment, if, according to the position data corresponding to the first identification image and the pose change information of the electronic device within the preset period, the position area corresponding to the first image is determined to be a single area, the first image can be matched directly against the corresponding target matching images. For example, if the location area corresponding to the first image is the area around shop S5 on floor 3 of mall A, the first image is directly matched against the images to be matched determined from that area.
In another embodiment, if, according to the position data corresponding to the first identification image and the pose change information of the electronic device within the preset period, the position area corresponding to the first image is determined to be several areas, the target matching images can be determined from all of these areas, and the first image is matched against all of them. For example, if the location areas corresponding to the first image are near the doorways of room 1001 on floor 10, room 1205 on floor 12, and room 2001 on floor 20 of office building A, the first image is matched against all target matching images in these areas.
In another embodiment, if, according to the position data corresponding to the first identification image and the pose change information of the electronic device within the preset period, the position area corresponding to the first image is determined to be several areas, the images to be matched can be divided into groups according to their corresponding map information, and target matching images determined from each group. The map information corresponding to an image to be matched represents its position in space. For example, the location areas corresponding to the first image are near the doorways of room 1001 on floor 10, room 1205 on floor 12, and room 2001 on floor 20 of office building A. The images to be matched found among the database images located near the doorway of room 1001 on floor 10 form one group, those near the doorway of room 1205 on floor 12 form a second group, and those near the doorway of room 2001 on floor 20 form a third group. Target matching images are then determined from each group, and a candidate pose is computed for each group according to the first image and the map information corresponding to that group's target matching images. For example, the feature points in the first image are matched with those in each group of target matching images, and the spatial positions of the matched feature points determine the spatial positions of the corresponding feature points in the first image, yielding the candidate pose of the electronic device for each group. A target pose is then selected from the candidate poses. For each candidate pose, the corresponding feature points are re-projected onto the plane of the first image to obtain re-projected feature point coordinates, and the difference between the re-projected coordinates and the observed coordinates in the first image is determined. Each candidate pose thus corresponds to a re-projection difference value, and the candidate pose with the smallest difference value is taken as the target pose. Alternatively, a confidence level may be determined for each candidate pose, and the candidate pose with the highest confidence used as the target pose. For example, among the candidate poses determined from the several groups of target matching images, if the re-projection difference value of the group near the doorway of room 1205 on floor 12 is the smallest, the candidate pose determined from that group is taken as the target pose.
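A sketch of this selection rule using OpenCV's projection routine; the per-group matches and the intrinsics K are assumed given:

```python
import cv2
import numpy as np

def pick_target_pose(candidates, K):
    """candidates: one (rvec, tvec, pts3d, pts2d) tuple per group, where
    pts3d are matched 3D map points and pts2d their observed positions in
    the first image. Returns the pose with the smallest mean re-projection
    difference."""
    best, best_err = None, np.inf
    for rvec, tvec, pts3d, pts2d in candidates:
        proj, _ = cv2.projectPoints(pts3d, rvec, tvec, K, None)
        err = float(np.mean(np.linalg.norm(proj.reshape(-1, 2) - pts2d, axis=1)))
        if err < best_err:
            best, best_err = (rvec, tvec), err
    return best, best_err
```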
The target pose includes three-dimensional coordinates. After determining the target pose, the electronic device can determine and output the position information corresponding to it according to pre-stored layout information of the building or road. For example, according to the height component of the target pose and the height of each floor of mall A, the current position is determined to be floor 3; then, according to the horizontal component of the target pose and the layout information of floor 3 (i.e., the areas of each shop, elevator, and exit), the current position is determined to be the floor-3 safety exit, and the electronic device outputs: the current location is the floor-3 safety exit.
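A minimal sketch of mapping the target pose to readable position text, assuming per-floor base heights are stored with the building layout (all values illustrative):

```python
# Floor lookup from the height component of the target pose.
FLOOR_BASE_HEIGHTS = [(1, 0.0), (2, 4.5), (3, 9.0)]  # (floor, base height, m)

def floor_of(height_m: float) -> int:
    floor = FLOOR_BASE_HEIGHTS[0][0]
    for f, base in FLOOR_BASE_HEIGHTS:
        if height_m >= base:
            floor = f
    return floor

print(floor_of(9.7))  # -> 3; combined with the horizontal component and the
                      # floor layout, this yields "floor-3 safety exit".
```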
In an embodiment, after determining the target pose, the electronic device may further output a prompt message for inputting the destination, and after receiving the destination input by the user through voice or text, plan a path for the user according to the target pose and the destination, output path information or display the planned path on a map, so as to guide the user to go to the destination.
In an embodiment, if two or more candidate poses share the minimum difference value, or two or more share the highest confidence, the electronic device outputs a prompt that positioning was unsuccessful; alternatively, it outputs the position information corresponding to these candidate poses for the user to select, and the candidate pose selected by the user is used as the target pose.
In another embodiment, after determining the target matching image, an image with the highest similarity to the first image in the target matching image may be determined, and map information corresponding to the image with the highest similarity is used as the map information of the first image.
In the above embodiment, after the first image captured by the electronic device is obtained, the first identification image in the first image is identified; the position area corresponding to the first image is determined according to the position data corresponding to the first identification image and the pose change information of the electronic device within the preset period; and the database is searched for images located in the position area to obtain the images to be matched of the first image, which narrows the image search range. The target pose of the electronic device is then determined according to the first image and the map information corresponding to the images to be matched, thereby shortening the time needed to determine the target pose of the electronic device.
In one embodiment, a flow for determining a target pose of an electronic device is shown in FIG. 6.
First, first positioning information of the electronic device is acquired, and the database corresponding to the first positioning information is determined. Then, a first image captured by the electronic device is acquired, and the first identification image in the first image is identified. The electronic device is coarsely positioned according to the position data corresponding to the first identification image and the pose change information of the electronic device in the preset period, yielding the position area corresponding to the first image. The database is then searched for images within that position area to determine the images to be matched of the first image, and a target matching image comprising the first identification image is determined from the images to be matched according to the identification image information corresponding to each image to be matched. Next, feature extraction is performed on the first image to determine its feature points, these are matched against the feature points in the target matching image, and the target pose of the electronic device is calculated and output according to the feature matching result and the map information corresponding to the target matching image. If the target matching images are distributed across a plurality of areas, they are divided into groups by area, the candidate pose corresponding to each group of target matching images is calculated respectively, and the candidate pose with the smallest re-projection difference value is taken as the target pose.
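As an illustrative sketch of this FIG. 6 flow (under stated assumptions, not the reference implementation): ORB features and OpenCV's RANSAC PnP solver stand in for the unspecified feature extractor and pose solver, and the `db` object with its `search_by_area` method, as well as the per-image `contains_identification_image`, `descriptors` and `points_3d` attributes, are hypothetical.

    import cv2
    import numpy as np

    def localize(first_image, db, K, location_area):
        """Return the candidate pose with the smallest re-projection error
        among the target matching images found in the location area."""
        orb = cv2.ORB_create()
        kp_q, des_q = orb.detectAndCompute(first_image, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

        best_pose, best_err = None, float("inf")
        for ref in db.search_by_area(location_area):    # images to be matched
            if not ref.contains_identification_image:   # keep target matching images
                continue
            matches = matcher.match(des_q, ref.descriptors)
            if len(matches) < 6:
                continue
            # Map information: 3D position of each matched reference feature point.
            pts_3d = np.float32([ref.points_3d[m.trainIdx] for m in matches])
            pts_2d = np.float32([kp_q[m.queryIdx].pt for m in matches])
            ok, rvec, tvec, _ = cv2.solvePnPRansac(pts_3d, pts_2d, K, None)
            if not ok:
                continue
            proj, _ = cv2.projectPoints(pts_3d, rvec, tvec, K, None)
            err = np.linalg.norm(proj.reshape(-1, 2) - pts_2d, axis=1).mean()
            if err < best_err:                          # smallest difference value
                best_pose, best_err = (rvec, tvec), err
        return best_pose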
For example, when the signal strength is greater than a preset value, the first positioning information of the electronic device is determined according to GPS positioning information, the Wi-Fi information the electronic device is connected to, the Bluetooth information the electronic device is connected to, or the communication information between the electronic device and a base station, and the current position is determined to be building A according to the first positioning information. Then, the electronic device detects a positioning instruction input by the user, acquires a first image, and recognizes that the first identification image in the first image is an "HONOR" image. Because this is the first positioning after the first positioning information was acquired, there is no previous target pose, so the position area corresponding to the first image is determined directly from the position data corresponding to the first identification image. The electronic device uses the database corresponding to building A stored in the server as the database for image searching, and determines that the position data corresponding to the "HONOR" image, namely the position areas of floors 2, 15, 16 and 17, is the position area corresponding to the first image. The electronic device then searches the database for the images of floors 2, 15, 16 and 17, and determines that the images to be matched of the first image are the images of floors 15, 16 and 17. Thereafter, target matching images containing the "HONOR" image are determined from the images of floors 15, 16 and 17. Taking the target matching images corresponding to floor 15, floor 16 and floor 17 as three groups, the candidate pose corresponding to each group is determined, along with the re-projection difference value corresponding to each candidate pose. If the difference value of the candidate pose corresponding to the target matching images of floor 15 is the smallest, the candidate pose corresponding to floor 15 is taken as the target pose. The electronic device records the target pose and acquires its pose change information in real time.
When a positioning instruction input by the user is detected again, a first image is acquired, and the first identification image in the first image is recognized as an "Escalator 2" image. By querying the database, the position data corresponding to "Escalator 2" is determined to be floors 1, 2, 3 and 4; according to the last recorded target pose and the real-time pose change information of the electronic device, the electronic device is determined to be located on floor 2 or floor 3, so the position area corresponding to the first image is determined to be floors 2 and 3. The database is then searched for the images whose corresponding positions are on floors 2 and 3, and the images to be matched of the first image are determined to be the images of floor 2. Then, a target matching image containing the "Escalator 2" image is determined from the images to be matched. Feature matching is performed between the first image and the target matching image, the target pose of the electronic device is determined, the electronic device is determined to be at the floor-2 safety exit according to the target pose, and the electronic device displays that the current location is the safety exit on floor 2.
In this method, the first positioning information is acquired first and the database for visual positioning is determined; coarse positioning is then performed according to the first identification image in the first image and the pose change information of the electronic device to obtain the position area corresponding to the first image; the database is searched for images within that position area to determine the images to be matched; a target matching image comprising the first identification image is determined from the images to be matched; and finally feature matching is performed between the target matching image and the first image. This reduces the number of images used for image searching and image matching, and improves the efficiency of calculating the pose of the electronic device.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
The software system of the electronic device may employ a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the present application, an Android system with a layered architecture is taken as an example to illustrate the software structure of the electronic device 100.
Fig. 7 is a software configuration block diagram of an electronic device according to an embodiment of the present application.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in fig. 7, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 7, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
The window manager is used to manage window programs. The window manager can acquire the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The phone manager is used to provide the communication functions of the electronic device 100, for example, the management of call states (including connected, hung up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages that disappear automatically after a short stay without user interaction, for example to notify that a download is complete or to give a message reminder. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or in the form of a dialog window on the screen. For example, text information is prompted in the status bar, a prompt tone is emitted, the electronic device vibrates, or an indicator light blinks.
The Android runtime includes a core library and a virtual machine, and is responsible for the scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files, and performs functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules, for example: a surface manager, media libraries, three-dimensional graphics processing libraries (e.g., OpenGL ES), and 2D graphics engines (e.g., SGL).
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
The media libraries support playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media libraries may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
By way of example, fig. 8 shows a schematic diagram of an electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may generate an operation control signal according to an instruction operation code and a timing signal, to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and thereby improves the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and is not meant to limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also employ different interfacing manners in the above embodiments, or a combination of multiple interfacing manners.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted through the lens to the photosensitive element of the camera, the optical signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP for processing, where it is converted into an image visible to the naked eye. The ISP can also optimize the noise, brightness and skin tone of the image, as well as parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application program required for at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data created during use of the electronic device 100 (such as audio data and a phonebook), and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). The processor 110 performs various functional applications and data processing of the electronic device 100 by executing the instructions stored in the internal memory 121 and/or the instructions stored in the memory provided in the processor.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 about three axes (i.e., the x, y and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance the lens module needs to compensate according to the angle, and lets the lens counteract the shake of the electronic device 100 through reverse motion, thereby realizing anti-shake. The gyro sensor 180B may also be used in navigation and somatosensory gaming scenarios.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude from barometric pressure values measured by barometric pressure sensor 180C, aiding in positioning and navigation.
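For illustration, the standard international barometric formula that this description alludes to can be written as follows; the 1013.25 hPa sea-level reference below is an assumption, and real devices typically calibrate against a local reference pressure instead:

    def altitude_from_pressure(p_hpa: float, p0_hpa: float = 1013.25) -> float:
        """International barometric formula: approximate altitude in metres
        for a measured pressure p_hpa, relative to the reference p0_hpa."""
        return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))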
The acceleration sensor 180E may detect the magnitude of the acceleration of the electronic device 100 in various directions (typically along three axes), and may detect the magnitude and direction of gravity when the electronic device 100 is stationary. It may also be used to recognize the attitude of the electronic device, and is applied in scenarios such as landscape/portrait switching and pedometers.
The distance sensor 180F is used to measure distance. The electronic device 100 may measure distance by infrared or laser. In some embodiments, the electronic device 100 may use the distance sensor 180F to measure distance to achieve quick focusing.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touchscreen, also called a "touch screen". The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a position different from that of the display screen 194.
The keys 190 include a power key, volume keys, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.
It should be noted that, because the information interaction and execution processes between the above devices/units are based on the same concept as the method embodiments of the present application, their specific functions and technical effects may be found in the method embodiment section and are not described again here.
In the foregoing embodiments, the description of each embodiment has its own emphasis. For parts that are not described or illustrated in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by a processor, the computer program implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to a camera device/electronic apparatus, a recording medium, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
Finally, it should be noted that: the foregoing is merely illustrative of specific embodiments of the present application, and the scope of the present application is not limited thereto, but any changes or substitutions within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A positioning method applied to an electronic device, comprising:
acquiring a first image captured by the electronic device;
identifying a first identification image in the first image;
determining a position area corresponding to the first image according to position data corresponding to the first identification image and pose change information of the electronic device in a preset period;
searching the database for images whose corresponding positions are within the position area, and determining images to be matched of the first image;
and determining the target pose of the electronic device according to the first image and the map information corresponding to the image to be matched.
2. The method according to claim 1, wherein the determining the target pose of the electronic device according to the first image and the map information corresponding to the image to be matched comprises:
determining a target matching image from the images to be matched, wherein the target matching image comprises the first identification image;
and determining the target pose of the electronic device according to the first image and the map information corresponding to the target matching image.
3. The method of claim 2, wherein the determining a target matching image from the images to be matched comprises:
and determining a target matching image from the images to be matched according to the identification image information corresponding to each image to be matched stored in the database.
4. The method of claim 2, wherein the determining the target pose of the electronic device according to the first image and the map information corresponding to the target matching image comprises:
performing feature point matching on the first image and the target matching image, and determining the target pose of the electronic device according to the feature point matching result and the map information corresponding to the target matching image.
5. The method of any one of claims 1 to 4, wherein before the searching the database for images whose corresponding positions are within the position area, the method further comprises:
acquiring first positioning information of the electronic device;
and determining a first database identifier corresponding to the first positioning information;
correspondingly, the searching the database for images whose corresponding positions are within the position area comprises:
searching, in the database corresponding to the first database identifier, for images whose corresponding positions are within the position area.
6. The method according to any one of claims 1 to 4, wherein the determining the target pose of the electronic device according to the first image and the map information corresponding to the image to be matched comprises:
grouping the images to be matched according to the map information corresponding to the images to be matched;
determining a candidate pose corresponding to each group of images to be matched according to the first image and the map information corresponding to each group of images to be matched;
and determining the target pose of the electronic device from the candidate poses.
7. The method according to any one of claims 1 to 4, wherein the preset period is a period between a first moment and a second moment, the first moment being the moment when the target pose of the electronic device was last determined, and the second moment being the moment when the first image captured by the electronic device is acquired; and the determining the position area corresponding to the first image according to the position data corresponding to the first identification image and the pose change information of the electronic device in the preset period comprises:
determining the position area corresponding to the first image according to the position data corresponding to the first identification image, the last target pose of the electronic device, and the pose change information of the electronic device in the preset period.
8. The method of any one of claims 1 to 4, wherein before the acquiring the first image captured by the electronic device, the method further comprises:
outputting, in response to a positioning instruction input by a user, prompt information prompting the capture of an identification image.
9. An electronic device, comprising a processor, wherein the processor is configured to execute a computer program stored in a memory to implement the positioning method according to any one of claims 1 to 8.
10. A computer readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the positioning method according to any of claims 1 to 8.
CN202211603109.5A 2022-12-13 2022-12-13 Positioning method, electronic device and computer readable storage medium Active CN116664684B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211603109.5A CN116664684B (en) 2022-12-13 2022-12-13 Positioning method, electronic device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211603109.5A CN116664684B (en) 2022-12-13 2022-12-13 Positioning method, electronic device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN116664684A true CN116664684A (en) 2023-08-29
CN116664684B CN116664684B (en) 2024-04-05

Family

ID=87722984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211603109.5A Active CN116664684B (en) 2022-12-13 2022-12-13 Positioning method, electronic device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN116664684B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784174A (en) * 2019-11-08 2021-05-11 华为技术有限公司 Method, device and system for determining pose
WO2021125578A1 (en) * 2019-12-16 2021-06-24 네이버랩스 주식회사 Position recognition method and system based on visual information processing
WO2022017261A1 (en) * 2020-07-24 2022-01-27 华为技术有限公司 Image synthesis method and electronic device
CN114466128A (en) * 2020-11-09 2022-05-10 华为技术有限公司 Target user focus-following shooting method, electronic device and storage medium
CN114812381A (en) * 2021-01-28 2022-07-29 华为技术有限公司 Electronic equipment positioning method and electronic equipment
WO2022161386A1 (en) * 2021-01-30 2022-08-04 华为技术有限公司 Pose determination method and related device
CN113112478A (en) * 2021-04-15 2021-07-13 深圳市优必选科技股份有限公司 Pose recognition method and terminal equipment
CN114545426A (en) * 2022-01-20 2022-05-27 北京旷视机器人技术有限公司 Positioning method, positioning device, mobile robot and computer readable medium
CN114119758A (en) * 2022-01-27 2022-03-01 荣耀终端有限公司 Method for acquiring vehicle pose, electronic device and computer-readable storage medium
CN115205383A (en) * 2022-06-17 2022-10-18 深圳市优必选科技股份有限公司 Camera pose determination method and device, electronic equipment and storage medium
CN114998629A (en) * 2022-06-24 2022-09-02 四川腾盾科技有限公司 Satellite map and aerial image template matching method and unmanned aerial vehicle positioning method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ZHAO, DI et al.: "Multi-spacecraft collaborative attitude determination of space tumbling target with experimental verification", Acta Astronautica, pages 1-13
SUN, Yankui; MIAO, Jinghua: "Real-time image tracking algorithm with layered and regional management", Journal of Computer-Aided Design & Computer Graphics, no. 04, pages 65-71
MIAO, Jinghua; SUN, Yankui: "Real-time camera pose tracking by locating the scale and region of image matching", Journal of Image and Graphics, no. 07, pages 99-110
JIANG, Meng; WANG, Yaoyao; CHEN, Bai: "Research on object recognition and localization based on binocular vision", Journal of Mechanical & Electrical Engineering, no. 04, pages 86-91

Also Published As

Publication number Publication date
CN116664684B (en) 2024-04-05

Similar Documents

Publication Publication Date Title
WO2020253657A1 (en) Video clip positioning method and apparatus, computer device, and storage medium
CN111476306B (en) Object detection method, device, equipment and storage medium based on artificial intelligence
CN109189879B (en) Electronic book display method and device
US11776256B2 (en) Shared augmented reality system
CN110807361A (en) Human body recognition method and device, computer equipment and storage medium
EP2672401A1 (en) Method and apparatus for storing image data
US11854231B2 (en) Localizing an augmented reality device
CN112749613B (en) Video data processing method, device, computer equipment and storage medium
US11915400B2 (en) Location mapping for large scale augmented-reality
US20220262035A1 (en) Method, apparatus, and system for determining pose
CN111339976B (en) Indoor positioning method, device, terminal and storage medium
US11335060B2 (en) Location based augmented-reality system
CN113395542A (en) Video generation method and device based on artificial intelligence, computer equipment and medium
WO2022073417A1 (en) Fusion scene perception machine translation method, storage medium, and electronic device
US20230368417A1 (en) Pose determining method and related device
CN110991491A (en) Image labeling method, device, equipment and storage medium
CN110490186B (en) License plate recognition method and device and storage medium
CN116664684B (en) Positioning method, electronic device and computer readable storage medium
CN113538321A (en) Vision-based volume measurement method and terminal equipment
CN113313966A (en) Pose determination method and related equipment
CN113192072B (en) Image segmentation method, device, equipment and storage medium
CN115937722A (en) Equipment positioning method, equipment and system
CN112163062A (en) Data processing method and device, computer equipment and storage medium
CN111581481B (en) Search term recommendation method and device, electronic equipment and storage medium
CN111068333B (en) Video-based carrier abnormal state detection method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant