CN110986916A — Indoor positioning method and device, electronic equipment and storage medium

Info

Publication number: CN110986916A
Application number: CN201911150111.XA
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 李双涛
Original and current assignee: Rajax Network Technology Co Ltd
Legal status: Pending
History: application CN201911150111.XA filed by Rajax Network Technology Co Ltd; priority to CN201911150111.XA; published as CN110986916A.

Classifications

    • G01C — Measuring distances, levels or bearings; surveying; navigation; gyroscopic instruments; photogrammetry or videogrammetry (within G — Physics; G01 — Measuring; testing)
    • G01C 21/005 — Navigation; navigational instruments not provided for in groups G01C 1/00–G01C 19/00, with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C 21/165 — Navigation by using measurements of speed or acceleration executed aboard the object being navigated; dead reckoning by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments
    • G01C 21/206 — Instruments for performing navigational calculations, specially adapted for indoor navigation

Abstract

Embodiments of the invention disclose an indoor positioning method and apparatus, an electronic device, and a storage medium. Image or video information input by the user is matched against reference images in a database, and the position information corresponding to the matched reference image is determined as the user position information. Indoor positioning can thus be achieved more accurately.

Description

Indoor positioning method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of positioning, and in particular to an indoor positioning method and apparatus, an electronic device, and a storage medium.
Background
Indoor positioning refers to determining a position within an indoor environment. In the prior art, positioning is mainly achieved through wireless technologies, for example Wi-Fi, Bluetooth, infrared, ultra-wideband (UWB), RFID (radio-frequency identification), ZigBee, and ultrasound, with the indoor position inferred by analyzing the strength of the wireless signal.
However, indoor positioning based on wireless signals often suffers from appreciable error and is easily disturbed by interference from other signals.
Disclosure of Invention
In view of this, embodiments of the present invention provide an indoor positioning method and apparatus, an electronic device, and a storage medium, which can more accurately implement indoor positioning.
In a first aspect, an embodiment of the present invention provides an indoor positioning method, where the method includes:
acquiring user input information, wherein the user input information is image information or video information;
determining at least one piece of reference image information matched with the user input information in a predetermined database, wherein the database stores a plurality of pieces of reference image information, each comprising first feature information and corresponding position information; and
determining user position information according to the matched at least one piece of reference image information.
Preferably, in response to the user input information being image information, determining the at least one piece of reference image information matched with the user input information in the predetermined database comprises:
extracting second feature information from the image information;
determining first feature information matched with the second feature information in the database; and
determining the reference image information corresponding to the matched first feature information as the matched reference image information.
Preferably, determining the user position information according to the matched at least one piece of reference image information comprises determining the position information corresponding to the matched reference image information as the user position information.
Preferably, in response to the user input information being video information, determining the at least one piece of reference image information matched with the user input information in the predetermined database comprises:
acquiring a first image from the video information;
extracting second feature information from the first image;
determining first feature information matched with the second feature information in the database; and
determining the reference image information corresponding to the matched first feature information as the matched reference image information.
Preferably, determining the user position information from the matched at least one piece of reference image information comprises:
determining the position information corresponding to the matched reference image information as intermediate position information;
acquiring a second image from the video information;
acquiring a depth map from the first image and the second image, wherein the depth map encodes the distance between the user's actual position and the intermediate position; and
determining the user position information according to the depth map and the intermediate position information.
Preferably, the first feature information matched with the second feature information is determined in the database according to a brute-force (BF) matching algorithm, a K-nearest-neighbor (KNN) matching algorithm, or a fast library for approximate nearest neighbors (FLANN) algorithm.
Preferably, the method further comprises:
acquiring a motion trajectory through an inertial sensor; and
acquiring actual position information according to the user position information and the motion trajectory.
Preferably, the method further comprises:
acquiring target position information; and
acquiring navigation information according to the target position information and the actual position information.
In a second aspect, an embodiment of the present invention provides an indoor positioning apparatus, including:
the input unit is used for acquiring user input information, and the user input information is image information or video information;
a matching unit configured to determine at least one reference image information matched with the user input information in a predetermined database in which a plurality of reference image information are stored, each of the reference image information including first feature information and corresponding position information; and
the positioning unit is used for determining the user position information according to the matched at least one piece of reference image information.
Preferably, the matching unit includes:
the feature extraction module is used for extracting second feature information from the image information;
the feature matching module is used for determining first feature information matched with the second feature information in the database; and
the information determining module is used for determining the reference image information corresponding to the matched first feature information as the matched reference image information.
Preferably, the positioning unit is configured to determine location information corresponding to the matched at least one reference image information as the user location information.
Preferably, the matching unit includes:
the first image acquisition module is used for acquiring a first image from the video information;
the feature extraction module is used for extracting second feature information from the first image;
the feature matching module is used for determining first feature information matched with the second feature information in the database; and
the information determining module is used for determining the reference image information corresponding to the matched first feature information as the matched reference image information.
Preferably, the positioning unit includes:
the intermediate position determining module is used for determining the position information corresponding to the matched at least one piece of reference image information as intermediate position information;
the second image acquisition module is used for acquiring a second image from the video information;
the depth map acquisition module is used for acquiring a depth map from the first image and the second image, wherein the depth map encodes the distance between the user's actual position and the intermediate position; and
the user position determining module is used for determining the user position information according to the depth map and the intermediate position information.
Preferably, the feature matching module is configured to determine, in the database, at least one piece of first feature information matching the second feature information according to a brute-force matching algorithm, a K-nearest-neighbor matching algorithm, or a FLANN algorithm.
Preferably, the apparatus further comprises:
the motion trajectory acquisition unit is used for acquiring a motion trajectory through an inertial sensor; and
the actual position acquiring unit is used for acquiring actual position information according to the user position information and the motion trajectory.
Preferably, the apparatus further comprises:
a target position acquisition unit for acquiring target position information; and
the navigation unit is used for acquiring navigation information according to the target position information and the actual position information.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory and a processor, wherein the memory is used to store one or more computer program instructions, and the one or more computer program instructions are executed by the processor to implement the method according to the first aspect.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium on which computer program instructions are stored, which when executed by a processor implement the method according to the first aspect.
According to the technical solution of the embodiments of the invention, the image or video information input by the user is matched against the reference images in the database, and the position information corresponding to the matched reference image is determined as the user position information. Indoor positioning can thus be achieved more accurately.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following description of the embodiments of the present invention with reference to the accompanying drawings, in which:
fig. 1 is a flowchart of an indoor positioning method according to a first embodiment of the present invention;
FIG. 2 is a flow chart of determining matching reference image information according to a first embodiment of the present invention;
fig. 3 is a flowchart of an indoor positioning method according to a second embodiment of the present invention;
FIG. 4 is a flow chart of determining matching reference image information according to a second embodiment of the present invention;
FIG. 5 is a flow chart for determining user location information in accordance with a second embodiment of the present invention;
FIG. 6 is a flow chart of acquiring actual location information and navigation information in accordance with an embodiment of the present invention;
FIG. 7 is a schematic view of an indoor positioning apparatus according to a first embodiment of the present invention;
FIG. 8 is a schematic view of an indoor positioning apparatus of a second embodiment of the invention;
fig. 9 is a schematic diagram of an electronic device of an embodiment of the invention.
Detailed Description
The present disclosure is described below based on examples, but the present disclosure is not limited to only these examples. In the following detailed description of the present disclosure, certain specific details are set forth. It will be apparent to those skilled in the art that the present disclosure may be practiced without these specific details. Well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the present disclosure.
Further, those of ordinary skill in the art will appreciate that the drawings provided herein are for illustrative purposes and are not necessarily drawn to scale.
Unless the context clearly requires otherwise, throughout this specification, the words "comprise", "comprising", and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is, what is meant is "including, but not limited to".
In the description of the present disclosure, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present disclosure, "a plurality" means two or more unless otherwise specified.
According to embodiments of the invention, a user takes a picture or records a short video with a terminal device such as a mobile phone or a tablet computer; the terminal device retrieves, from a predetermined database, a reference image matching the captured picture or video, and the user's position information is determined from that reference image. Meanwhile, the user's motion trajectory is obtained through an inertial sensor built into the terminal device, and the user's actual position information is obtained in real time from the positioned location and the motion trajectory. After the user sets a target position, the terminal device can derive a navigation route from the target position information and the actual position information and generate navigation information.
Fig. 1 is a flowchart of an indoor positioning method according to a first embodiment of the present invention. As shown in fig. 1, the indoor positioning method of the embodiment of the present invention includes the following steps:
and step S110, acquiring image information input by a user.
In this embodiment, the user input information is image information, that is, a user takes a picture through a terminal device. Due to the fact that the image shooting time is short and the occupied memory is small, the terminal device can achieve positioning more quickly.
Step S120, determining at least one piece of reference image information matching the image information in a predetermined database.
In this embodiment, the database stores a plurality of pieces of reference image information, each comprising first feature information and corresponding position information.
Further, the database is obtained by acquiring image data of the indoor environment with capture equipment. Specifically, images of the various indoor positions are collected as reference images, the first feature information of each reference image is obtained through an image feature extraction algorithm, and each reference image is labeled with the position where it was taken as the corresponding position information, thereby yielding the database.
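By way of illustration only, the sketch below assembles such a database with SIFT descriptors as the first feature information (HOG would work equally well per the text below); the function name, record layout, and position format are assumptions, not details from the patent.

```python
import cv2

def build_reference_database(labeled_images):
    """labeled_images: iterable of (image_path, position) pairs, where
    position is whatever labeling the deployment uses, e.g. (x, y, floor)."""
    sift = cv2.SIFT_create()
    database = []
    for path, position in labeled_images:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        keypoints, descriptors = sift.detectAndCompute(img, None)
        if descriptors is not None:
            # one record of reference image information: first feature
            # information plus its corresponding position information
            database.append({"descriptors": descriptors, "position": position})
    return database
```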
Further, when a reference image of a scene is captured, a person or a moving object (e.g., a vehicle) may appear in it, which can introduce large matching errors. In this embodiment, reference images of the same area are therefore acquired multiple times over different time periods and compared against one another to remove the feature points that change. This reduces the influence of people and moving objects on the reference images and makes positioning more accurate.
Further, automated equipment can be used to capture the reference images, for example a panoramic camera cart or an indoor unmanned aerial vehicle, so that reference images of the indoor positions can be collected more comprehensively.
Further, the first feature information is a feature vector.
Further, the image feature extraction algorithm may be any of various existing algorithms; the embodiments of the present invention take the HOG (Histogram of Oriented Gradients) algorithm and the SIFT (Scale-Invariant Feature Transform) algorithm as examples.
The HOG algorithm constructs features by computing and accumulating histograms of gradient orientations over local regions of an image. Specifically, the image to be processed is converted to grayscale, and the grayscale image is normalized with a Gamma (power-law transform) correction to adjust its contrast, reduce the influence of local shadows and illumination changes, and suppress noise. The gradient (magnitude and direction) of each pixel is then computed to capture contour information while further attenuating illumination effects. The image is divided into small cells (e.g., 6×6 pixels), and a histogram of gradients (the count of each gradient orientation) is accumulated over each cell to form that cell's feature. Cells are grouped into blocks (e.g., 3×3 cells per block), and the HOG feature of a block is obtained by concatenating the features of all cells within it. Concatenating the HOG features of all blocks in the image yields the HOG feature of the image to be processed. Because HOG operates on local grid cells of the image, it remains fairly invariant to both geometric and photometric distortions.
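As a concrete illustration, the sketch below computes a HOG feature vector with OpenCV. It is a minimal example, not the patented implementation: the file name is a placeholder, and OpenCV's default descriptor uses a 64×128 window with 8×8-pixel cells and 2×2-cell blocks rather than the 6×6-pixel/3×3-cell sizes used as examples above.

```python
import cv2

img = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name
img = cv2.resize(img, (64, 128))       # match the default detection window
hog = cv2.HOGDescriptor()              # default cell/block/bin layout
features = hog.compute(img)            # one flat HOG feature vector
print(features.shape)                  # (3780,) with the default parameters
```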
The SIFT algorithm searches for keypoints (feature points) across different scale spaces and computes their orientations. The keypoints SIFT finds are highly salient points that are unaffected by illumination, affine transformation, noise, and similar factors, such as corner points, edge points, bright points in dark regions, and dark points in bright regions. The algorithm proceeds as follows. First, the multi-scale character of the image data is simulated: coarse scales capture the overall appearance, while fine scales emphasize detail. A Gaussian pyramid is constructed so that corresponding feature points exist at every scale, i.e., scale invariance is guaranteed. Second, candidate keypoints are identified: each point is compared with its neighbors in the same scale space under different sigma (Gaussian blur) values, and it is kept as a keypoint if it is a maximum or minimum; after all feature points are found, points with low contrast and unstable edge responses are removed, leaving the representative keypoints. Third, to achieve rotation invariance, each keypoint is assigned an orientation from the local image structure around it via a histogram of gradient directions; when the histogram is computed, each contributing sample is weighted with a circular Gaussian function (Gaussian smoothing) so that gradient magnitudes near the keypoint carry larger weight, which partially compensates for the instability caused by not modeling affine deformation. Fourth, a keypoint descriptor is generated, covering not only the keypoint itself but also the surrounding pixels that contribute to it; this gives the keypoints more invariant characteristics and improves matching efficiency. When the descriptor's sampling region is processed, bilinear interpolation is applied after rotation to prevent artifacts from rotating the image, and, to preserve rotation invariance, the neighborhood is rotated by the orientation angle θ around the feature point before the gradient histogram of the sampling region is computed, forming an n-dimensional SIFT feature vector (e.g., 128 dimensions). Fifth, the feature vector is normalized to remove the influence of illumination changes. The SIFT features are thereby obtained.
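A minimal sketch of SIFT extraction with OpenCV follows; the file name is a placeholder, and cv2.SIFT_create requires opencv-python 4.4 or later.

```python
import cv2

img = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
# Each keypoint carries position, scale and orientation; each row of
# `descriptors` is one 128-dimensional SIFT vector, i.e. the
# n-dimensional feature vector described above.
print(len(keypoints), None if descriptors is None else descriptors.shape)
```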
Specifically, fig. 2 is a flowchart of determining matched reference image information according to the first embodiment of the present invention. As shown in fig. 2, determining reference image information in the database that matches the user input information includes the steps of:
and step S210, extracting second characteristic information from the image information.
In this embodiment, second feature information of the user input information is acquired through an image feature extraction algorithm.
Further, the second feature information is a feature vector.
Further, the second feature information of the user input information may be acquired according to the HOG algorithm or the SIFT algorithm.
Optionally, to improve positioning accuracy, the method of the embodiment of the present invention further includes filtering the second feature information. Specifically, human-body features in the image are obtained through an image-based human detection technique, and the features belonging to the human body are filtered out of the second feature information. This reduces the influence of changing features on feature matching and improves positioning accuracy.
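One plausible realization of this filtering step is sketched below; the patent does not name a detector, so the use of OpenCV's default HOG pedestrian model here is an assumption.

```python
import cv2

def filter_person_features(img, keypoints, descriptors):
    """Drop features that fall inside detected person boxes. The detector
    used here (OpenCV's default HOG pedestrian model) is an assumption;
    the patent only speaks of an image human-body detection technique."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    boxes, _ = hog.detectMultiScale(img)
    keep = [i for i, kp in enumerate(keypoints)
            if not any(x <= kp.pt[0] <= x + w and y <= kp.pt[1] <= y + h
                       for (x, y, w, h) in boxes)]
    return [keypoints[i] for i in keep], descriptors[keep]
```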
Step S220, determining at least one first feature information matching the second feature information in the database.
In this embodiment, based on the second feature information of the image information input by the user, the reference images in the database are traversed and each is matched against the user input to obtain the matched reference image. Specifically, the first feature information matched with the second feature information may be determined in the database according to, for example, the Brute-Force (BF) algorithm, the K-Nearest Neighbor (KNN) matching algorithm, or the Fast Library for Approximate Nearest Neighbors (FLANN) algorithm.
The BF algorithm compares the feature vectors of two images by computing the distance between the vectors; the smaller the distance, the better the match. Brute-force matching is implemented with the brute-force matcher object provided in OpenCV (the open-source computer vision library). Specifically, the feature vectors of the two images are traversed, the distances between vectors are computed, the feature points are sorted by distance, and the matching results of the first N features are reported at a given confidence level. To match two images, the Euclidean distances from each feature point in one image to all feature points in the other image are computed, the point with the smallest Euclidean distance is selected as the best match, and the best matches across the two images form matched point pairs. The image with the most matched points is selected as the best-matching image.
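A sketch of this brute-force search over the reference database (continuing the database sketch above); the distance threshold is illustrative, not a value from the patent.

```python
import cv2

def best_matching_reference(query_descriptors, database):
    """Brute-force match the query against each reference image and return
    the position of the reference with the most good matches."""
    bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)  # Euclidean distance
    best_entry, best_count = None, 0
    for entry in database:
        matches = bf.match(query_descriptors, entry["descriptors"])
        good = [m for m in matches if m.distance < 250]  # illustrative cut-off
        if len(good) > best_count:
            best_entry, best_count = entry, len(good)
    return None if best_entry is None else best_entry["position"]
```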
The KNN matching algorithm assumes a labeled sample data set. When new, unlabeled data is input, the distances between the new sample's feature vector and all sample feature vectors in the existing set are computed, the results are sorted, the k samples closest to the new sample are taken, and the label that appears most often among those k samples is assigned to the new sample.
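A minimal sketch of plain k-NN classification as just described; the choice of k=5 is arbitrary, not from the patent.

```python
import numpy as np
from collections import Counter

def knn_label(new_vec, sample_vecs, sample_labels, k=5):
    """Distance from the new sample to every stored sample, take the k
    closest, return the most frequent label among them."""
    dists = np.linalg.norm(np.asarray(sample_vecs) - np.asarray(new_vec), axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(sample_labels[i] for i in nearest).most_common(1)[0][0]
```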
The FLANN matching algorithm selects different algorithms to process a data set depending on the data. Specifically, FLANN is configured with two parameters, index parameters and a search object, which can be chosen when the matching is computed; the number of times the index trees are to be traversed can be specified, and the larger the value, the longer the matching takes but the more accurate it is.
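A sketch of FLANN matching with OpenCV follows; the KD-tree index, checks=50, and the 0.75 ratio test are common heuristics rather than values given by the patent.

```python
import cv2

FLANN_INDEX_KDTREE = 1           # KD-tree index suits float SIFT descriptors
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)  # index-tree traversals: higher = slower, more accurate
flann = cv2.FlannBasedMatcher(index_params, search_params)

def flann_matches(query_descriptors, reference_descriptors, ratio=0.75):
    """k-NN search through FLANN with Lowe's ratio test."""
    pairs = flann.knnMatch(query_descriptors, reference_descriptors, k=2)
    good = []
    for pair in pairs:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return good
```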
In this way, the matching results between the second feature information and the first feature information are obtained, and the best match is determined as the first feature information matched with the second feature information.
Step S230, determining the reference image information corresponding to the matched first feature information as the matched reference image information.
In this embodiment, the reference image information corresponding to the first feature information that matches the second feature information is determined as the matched reference image information.
Step S130, determining user position information according to the matched at least one piece of reference image information.
In this embodiment, after the matched reference image information is acquired, the position information corresponding to the reference image information is determined as the user position information.
The embodiment of the invention matches the image information input by the user with the reference image in the database, and determines the position information corresponding to the reference image matched with the information input by the user as the user position information. Thus, indoor positioning can be achieved more accurately.
Fig. 3 is a flowchart of an indoor positioning method according to a second embodiment of the present invention. As shown in fig. 3, the indoor positioning method according to the embodiment of the present invention includes the following steps:
and step S310, acquiring video information input by a user.
In this embodiment, the user input information is video information. A video can be regarded as a sequence of consecutive images that also carries the motion information between those images.
Step S320, determining at least one reference image information matched with the video information in a predetermined database.
In this embodiment, the database stores a plurality of pieces of reference image information, each comprising first feature information and corresponding position information.
Further, the database is obtained by acquiring image data of the indoor environment with capture equipment. Specifically, images of the various indoor positions are collected as reference images, the first feature information of each reference image is obtained through an image feature extraction algorithm, and each reference image is labeled with the position where it was taken as the corresponding position information, thereby yielding the database.
Further, when a reference image of a scene is captured, a person or a moving object (e.g., a vehicle) may appear in it, which can introduce large matching errors. In this embodiment, reference images of the same area are therefore acquired multiple times over different time periods and compared against one another to remove the feature points that change. This reduces the influence of people and moving objects on the reference images and makes positioning more accurate.
Further, automated equipment can be used to capture the reference images, for example a panoramic camera cart or an indoor unmanned aerial vehicle, so that reference images of the indoor positions can be collected more comprehensively.
Specifically, fig. 4 is a flowchart of determining matched reference image information according to the second embodiment of the present invention. As shown in fig. 4, determining reference image information in the database that matches the user input information includes the steps of:
and step S410, acquiring a first image in the video information.
In this embodiment, an image is cut out of the video information as a first image.
And step S420, extracting second characteristic information from the first image.
Further, the first feature information is a feature vector.
Further, the second feature information of the user input information may be acquired according to the HOG algorithm or the SIFT algorithm.
Optionally, to improve positioning accuracy, the method of the embodiment of the present invention further includes filtering the second feature information. Specifically, human-body features in the image are obtained through an image-based human detection technique, and the features belonging to the human body are filtered out of the second feature information. This reduces the influence of changing features on feature matching and improves positioning accuracy.
Step S430, determining first feature information matched with the second feature information in the database.
In this embodiment, based on the second feature information of the first image in the video information, the reference images in the database are traversed and each is matched against the first image to obtain the matched reference image. Specifically, the first feature information matched with the second feature information may be determined in the database according to, for example, the brute-force algorithm, the K-nearest-neighbor matching algorithm, or the FLANN algorithm.
In this way, the matching results between the second feature information and the first feature information are obtained, and the best match is determined as the first feature information matched with the second feature information.
Step S440, determining the reference image information corresponding to the matched first feature information as the matched reference image information.
In this embodiment, the reference image information corresponding to the first feature information that matches the second feature information is determined as the matched reference image information.
Step S330, determining user position information according to the matched at least one piece of reference image information.
Specifically, fig. 5 is a flowchart for determining user location information according to a second embodiment of the present invention. As shown in fig. 5, determining the user location information according to the matched at least one reference image information comprises the following steps:
step S510, determining the position information corresponding to the matched at least one reference image information as middle position information.
In this embodiment, after the reference image information matched with the first image is acquired through the above steps S410 to S430, the position information corresponding to the reference image information is determined as the middle position information.
Step S520, acquiring a second image from the video information.
In this embodiment, another frame is extracted from the video information as the second image; the first image and the second image are different images.
Step S530, acquiring the depth map from the first image and the second image.
In this embodiment, acquiring the depth map from the first image and the second image includes the following steps:
and step S531, performing internal reference calibration. The internal reference reflects the projection relation between the terminal equipment coordinate system and the image coordinate system.
Optionally, the intrinsic calibration may be performed with Zhang Zhengyou's calibration method (Zhang's method). Specifically, a checkerboard pattern with known square spacing is printed and attached to a flat board, a number of pictures (generally 10 to 20) are taken of the checkerboard, feature points are detected in the pictures, the intrinsic and extrinsic parameters are estimated with an analytic solution, and an optimization objective is designed and the parameters are refined according to a maximum-likelihood estimation strategy.
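A compact sketch of this calibration with OpenCV's implementation of Zhang's method; the 9×6 inner-corner pattern and 25 mm square size are illustrative assumptions.

```python
import cv2
import numpy as np

def calibrate_intrinsics(image_paths, pattern=(9, 6), square_size=0.025):
    """Intrinsic calibration from checkerboard photos (typically 10-20)."""
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_size
    obj_pts, img_pts, size = [], [], None
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist   # intrinsic matrix and distortion coefficients
```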
Step S532, performing extrinsic calibration.
The extrinsic parameters reflect the rotation and translation between the terminal device coordinate system and the world coordinate system.
Optionally, the embodiment of the invention adopts an extrinsic self-calibration method. Specifically, the first image and the second image are undistorted using the obtained intrinsic parameters, matched feature-point pairs are acquired from the two undistorted images, the essential matrix is solved from the acquired point pairs and the intrinsic matrix, and the rotation and translation between the two images, i.e., the extrinsic parameters, are recovered from the essential matrix.
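A sketch of this self-calibration step using OpenCV's essential-matrix routines; note that with only two views the translation is recovered up to an unknown scale.

```python
import cv2
import numpy as np

def relative_pose(pts1, pts2, K):
    """Estimate the essential matrix from matched (undistorted) point pairs
    and decompose it into the rotation R and translation t between the two
    shots; t is only recovered up to scale from two views."""
    pts1 = np.asarray(pts1, np.float64)
    pts2 = np.asarray(pts2, np.float64)
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t
```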
Step S533, performing stereo rectification.
In this embodiment, after the rotation and translation are obtained, the two images are rectified so that their epipolar lines are aligned. First, the rotation parameter R1, translation parameter T1, and perspective projection matrix P1 of the first image, and the rotation parameter R2, translation parameter T2, and perspective projection matrix P2 of the second image, are obtained from the above step S532. The epipolar rectification operation is then performed with these parameters, and the rectification result is stored.
Step S534, performing stereo matching on the two images.
In this embodiment, after the two rectified images are obtained, matching points lie on the same image row, and a disparity map is computed with a feature matching algorithm. The embodiment of the invention obtains the disparity map with the Semi-Global Block Matching (SGBM) algorithm, a semi-global matching algorithm for computing disparity in binocular vision.
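A minimal SGBM sketch with OpenCV; the parameter values follow common usage and are not specified by the patent.

```python
import cv2

def compute_disparity(rect_left, rect_right):
    """SGBM disparity on a rectified image pair."""
    block = 5
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,      # must be a multiple of 16
        blockSize=block,
        P1=8 * block * block,    # smoothness penalties
        P2=32 * block * block,
    )
    # compute() returns fixed-point disparity scaled by 16
    return sgbm.compute(rect_left, rect_right).astype("float32") / 16.0
```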
Step S535, acquiring a depth map from the disparity map.
In this embodiment, the depth map encodes the distance between the user's actual position and the intermediate position. In general, a depth map is an image or image channel containing information about the distance from the viewpoint to the surfaces of scene objects; here, each pixel value is the actual distance from the terminal device to the corresponding object.
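For a rectified pair, depth follows from disparity as Z = f·B/d, where f is the focal length in pixels and B the baseline between the two capture positions. A sketch, assuming the baseline (how far the user moved between the two frames) is known or estimated:

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline):
    """Convert a disparity map to a depth map with Z = f * B / d.
    The baseline here must be supplied or estimated -- an assumption
    of this sketch, since two monocular frames give scale only if the
    camera displacement is known."""
    depth = np.zeros_like(disparity, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = focal_px * baseline / disparity[valid]
    return depth   # same units as the baseline
```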
Step S540, determining user position information according to the depth map and the intermediate position information.
In this way, the distance from the terminal device to the intermediate position can be obtained from the depth map, and the user position information can then be derived from the intermediate position information and that distance.
The embodiment of the invention extracts two images from the video information input by the user, obtains their disparity map, and performs positioning with it. In this way, the error between the user's actual position and the position represented by the feature points in the captured image can be reduced, and indoor positioning can be achieved more accurately.
Further, in order to let the user view their location in real time, the method of the embodiment of the present invention further includes acquiring the user's actual location information, specifically as shown in fig. 6, through the following steps:
and step S610, acquiring a motion track through an inertial sensor.
In this embodiment, the terminal device obtains the motion trajectory through its own inertial sensor.
In particular, inertial sensors are important components for detecting and measuring acceleration, tilt, shock, vibration, rotation, and multiple degrees of freedom (DoF) motion, and for addressing navigation, orientation, and motion carrier control. The embodiment acquires the motion track in real time through the inertial sensor.
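As a rough illustration of how a trajectory can be integrated from inertial samples, consider the naive double integration below; it assumes gravity-compensated accelerometer readings and drifts quickly in practice, which is exactly why the method corrects the position against image-based fixes.

```python
import numpy as np

def dead_reckon(accels, dt, start_pos, start_vel=(0.0, 0.0, 0.0)):
    """Double-integrate accelerometer samples (gravity already removed)
    into a motion trajectory. Real systems add filtering and step
    detection on top of this."""
    pos = np.array(start_pos, dtype=float)
    vel = np.array(start_vel, dtype=float)
    trajectory = [pos.copy()]
    for a in np.asarray(accels, dtype=float):
        vel += a * dt        # integrate acceleration -> velocity
        pos += vel * dt      # integrate velocity -> position
        trajectory.append(pos.copy())
    return np.array(trajectory)
```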
Step S620, acquiring actual position information according to the user position information and the motion trajectory.
In this embodiment, the user position information is obtained through the above steps S110 to S130, while the motion trajectory is obtained through the above step S610, so that the user's actual position information can be obtained in real time from the user position information and the motion trajectory.
Optionally, the motion trajectory is displayed in the form of a map together with the user's actual position information.
In this way, the motion trajectory and the actual position information can be provided to the user in real time through the terminal device.
Further, for a building scene with a complex environment, in order to enable a user to accurately reach a destination, the method of the embodiment of the present invention further includes:
and step S630, acquiring target position information.
In this embodiment, the user inputs, through the terminal device, target position information characterizing the destination to be reached.
Alternatively, the user may input the target location through an input interface of the terminal device, or the user may select the corresponding target location through a map provided by the terminal device.
Further, the map may be an online map or an offline map.
Step S640, acquiring navigation information according to the target position information and the actual position information.
In this embodiment, after the target position information is obtained, route planning is performed according to the user's actual position information and target position information to obtain one or more pieces of route planning information, and the corresponding navigation information is generated from the route planning information.
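The patent does not name a planning algorithm; one plausible choice is Dijkstra's shortest path over a graph of indoor walkways, sketched below with an assumed adjacency-list format.

```python
import heapq

def plan_route(graph, start, goal):
    """Dijkstra over an indoor walkway graph.
    `graph` maps node -> list of (neighbor, cost) edges."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    if goal != start and goal not in prev:
        return []                         # unreachable
    route = [goal]
    while route[-1] != start:
        route.append(prev[route[-1]])
    return route[::-1]
```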
Optionally, the navigation information is voice information, and voice prompts for the walking route, the current position, and the like are given according to the route planning information. In this way, the user can accurately follow the walking route according to the voice prompts and quickly reach the target position.
According to the embodiment of the invention, the user's current position is determined from the image or video information input by the user, and route planning information is determined from the target position to prompt the walking route, so that the user can accurately obtain the walking route and quickly reach the target position.
Further, since the actual position information obtained in real time through the inertial sensor carries a certain error, the position information needs to be corrected during navigation.
Specifically, the user re-acquires user input information through the terminal device, positioning is performed according to the above steps S110 to S130 to correct the current actual position information, and navigation information is re-acquired according to the corrected actual position information. The position drift can thus be corrected, making navigation more accurate.
The embodiment of the invention matches the image information or video information input by the user with the reference image in the database, and determines the position information corresponding to the reference image matched with the information input by the user as the user position information. Thus, indoor positioning can be achieved more accurately.
Fig. 7 is a schematic view of an indoor positioning apparatus according to a first embodiment of the present invention. As shown in fig. 7, the indoor positioning device of the embodiment of the present invention includes an input unit 71, a matching unit 72, and a positioning unit 73. The input unit 71 is configured to obtain user input information, where the user input information is image information. The matching unit 72 is configured to determine at least one reference image information matching the user input information in a predetermined database. The positioning unit 73 is configured to determine user position information according to the matched at least one reference image information.
Preferably, the matching unit 72 includes a feature extraction module 72a, a feature matching module 72b, and an information determination module 72c. The feature extraction module 72a is configured to extract second feature information from the image information. The feature matching module 72b is configured to determine first feature information matching the second feature information in the database. The information determining module 72c is configured to determine the reference image information corresponding to the matched first feature information as the matched reference image information.
Preferably, the positioning unit 73 is configured to determine position information corresponding to the matched at least one reference image information as the user position information.
Preferably, the feature matching module 72b is configured to determine, in the database, at least one piece of first feature information matching the second feature information according to a brute-force matching algorithm, a K-nearest-neighbor matching algorithm, or a FLANN algorithm.
Preferably, the apparatus further comprises:
the motion trajectory acquisition unit is used for acquiring a motion trajectory through an inertial sensor; and
the actual position acquiring unit is used for acquiring actual position information according to the user position information and the motion trajectory.
Preferably, the apparatus further comprises:
a target position acquisition unit for acquiring target position information; and
the navigation unit is used for acquiring navigation information according to the target position information and the actual position information.
The embodiment of the invention matches the image information input by the user with the reference image in the database, and determines the position information corresponding to the reference image matched with the information input by the user as the user position information. Thus, indoor positioning can be accurately achieved.
Fig. 8 is a schematic view of an indoor positioning apparatus according to a second embodiment of the present invention. As shown in fig. 8, the indoor positioning device of the embodiment of the present invention includes an input unit 81, a matching unit 82, and a positioning unit 83. The input unit 81 is configured to obtain user input information, where the user input information is video information. The matching unit 82 is configured to determine at least one reference image information matching the user input information in a predetermined database. The positioning unit 83 is configured to determine user location information according to the matched at least one reference image information.
Preferably, the matching unit includes a first image acquisition module 82a, a feature extraction module 82b, a feature matching module 82c, and an information determination module 82d. The first image obtaining module 82a is configured to obtain a first image from the video information. The feature extraction module 82b is configured to extract second feature information from the first image. The feature matching module 82c is configured to determine first feature information matching the second feature information in the database. The information determining module 82d is configured to determine the reference image information corresponding to the matched first feature information as the matched reference image information.
Preferably, the positioning unit comprises an intermediate position determining module 83a, a second image acquisition module 83b, a depth map acquisition module 83c, and a user position determining module 83d. The intermediate position determining module 83a is configured to determine the position information corresponding to the matched at least one piece of reference image information as intermediate position information. The second image obtaining module 83b is configured to obtain a second image from the video information. The depth map obtaining module 83c is configured to obtain the depth map of the first image and the second image, where the depth map encodes the distance between the user's actual position and the intermediate position. The user position determining module 83d is configured to determine the user position information according to the depth map and the intermediate position information.
Preferably, the feature matching module is configured to determine, in the database, at least one piece of first feature information matching the second feature information according to a brute-force matching algorithm, a K-nearest-neighbor matching algorithm, or a FLANN algorithm.
Preferably, the apparatus further comprises:
the motion trajectory acquisition unit is used for acquiring a motion trajectory through an inertial sensor; and
the actual position acquiring unit is used for acquiring actual position information according to the user position information and the motion trajectory.
Preferably, the apparatus further comprises:
a target position acquisition unit for acquiring target position information; and
the navigation unit is used for acquiring navigation information according to the target position information and the actual position information.
The embodiment of the invention extracts two images from the video information input by the user, obtains their disparity map, and performs positioning with it. In this way, the error between the user's actual position and the position represented by the feature points in the captured image can be reduced, and indoor positioning can be achieved more accurately.
Fig. 9 is a schematic diagram of an electronic device of an embodiment of the invention. As shown in fig. 9, the electronic device includes: at least one processor 91; a memory 92 communicatively coupled to the at least one processor 91; and a communication component 93 communicatively coupled to the scanning device, the communication component 93 receiving and transmitting data under control of the processor 91. The memory 92 stores instructions executable by the at least one processor 91 to perform the following:
acquiring user input information, wherein the user input information is image information or video information;
determining at least one piece of reference image information matched with the user input information in a predetermined database, wherein the database stores a plurality of pieces of reference image information, each comprising first feature information and corresponding position information; and
determining user position information according to the matched at least one piece of reference image information.
Preferably, in response to the user input information being image information, determining the at least one piece of reference image information matched with the user input information in the predetermined database comprises:
extracting second feature information from the image information;
determining first feature information matched with the second feature information in the database; and
determining the reference image information corresponding to the matched first feature information as the matched reference image information.
Preferably, determining the user position information according to the matched at least one piece of reference image information comprises determining the position information corresponding to the matched reference image information as the user position information.
Preferably, in response to the user input information being video information, determining the at least one piece of reference image information matched with the user input information in the predetermined database comprises:
acquiring a first image from the video information;
extracting second feature information from the first image;
determining first feature information matched with the second feature information in the database; and
determining the reference image information corresponding to the matched first feature information as the matched reference image information.
Preferably, determining the user position information from the matched at least one piece of reference image information comprises:
determining the position information corresponding to the matched reference image information as intermediate position information;
acquiring a second image from the video information;
acquiring a depth map from the first image and the second image, wherein the depth map encodes the distance between the user's actual position and the intermediate position; and
determining the user position information according to the depth map and the intermediate position information.
Preferably, the first feature information matched with the second feature information is determined in the database according to a brute-force matching algorithm, a K-nearest-neighbor matching algorithm, or a FLANN algorithm.
Preferably, the method further comprises:
acquiring a motion trajectory through an inertial sensor; and
acquiring actual position information according to the user position information and the motion trajectory.
Preferably, the method further comprises:
acquiring target position information; and
acquiring navigation information according to the target position information and the actual position information.
Specifically, the electronic device includes: one or more processors 91 and a memory 92, with one processor 91 being an example in fig. 9. The processor 91 and the memory 92 may be connected by a bus or other means, and fig. 9 illustrates the connection by the bus as an example. Memory 92, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The processor 91 executes various functional applications and data processing of the device, i.e. implements the above-mentioned indoor positioning method, by running non-volatile software programs, instructions and modules stored in the memory 92.
The memory 92 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store a list of options, etc. Further, memory 92 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 92 may optionally include memory located remotely from the processor 91, and such remote memory may be connected to an external device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory 92, which when executed by the one or more processors 91 perform the indoor positioning method of any of the above-described method embodiments.
The above product can execute the method provided by the embodiments of the present application and has the functional modules and beneficial effects corresponding to executing that method; for technical details not covered in this embodiment, reference may be made to the method provided by the embodiments of the present application.
Another embodiment of the invention is directed to a non-transitory storage medium storing a computer-readable program for causing a computer to perform some or all of the above method embodiments.
That is, as those skilled in the art can understand, all or part of the steps of the methods in the embodiments described above may be implemented by a program instructing the relevant hardware; the program is stored in a storage medium and includes several instructions that cause a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the invention discloses A1, an indoor positioning method, comprising:
acquiring user input information, wherein the user input information is image information or video information;
determining at least one reference image information matched with the user input information in a predetermined database, wherein the database stores a plurality of reference image information, and each reference image information comprises first characteristic information and corresponding position information; and
determining user position information according to the matched at least one piece of reference image information.
A2, the method of A1, wherein the determining at least one reference image information matching the user input information in a predetermined database includes:
extracting second characteristic information from the image information;
determining first feature information matched with the second feature information in the database; and
determining the reference image information corresponding to the matched first characteristic information as the matched reference image information.
A3, the method of A2, wherein the determining the user location information according to the matched at least one reference image information comprises determining the location information corresponding to the matched at least one reference image information as the user location information.
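Purely as an illustration of the flow in A2 and A3, the sketch below uses OpenCV ORB descriptors as a stand-in for the first and second characteristic information, a brute-force matcher for the matching step, and a plain Python list as the predetermined database; every name, field, and threshold here is hypothetical and not part of the disclosed method.

```python
# Hypothetical sketch of A2/A3: match a query image against stored
# reference features and return the position of the best reference.
import cv2

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def extract_features(image):
    """Second characteristic information: ORB descriptors of the query image."""
    _, descriptors = orb.detectAndCompute(image, None)
    return descriptors

def locate(query_image, database):
    """database: list of dicts {'descriptors': ..., 'position': (x, y, floor)}."""
    query_desc = extract_features(query_image)
    best_ref, best_score = None, 0
    for ref in database:
        matches = matcher.match(query_desc, ref['descriptors'])
        # More good matches -> this reference image more likely matches
        # the user input (the 40 distance cutoff is illustrative only).
        score = sum(1 for m in matches if m.distance < 40)
        if score > best_score:
            best_ref, best_score = ref, score
    # A3: the position stored with the matched reference image is
    # returned directly as the user position information.
    return best_ref['position'] if best_ref else None
```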
A4, the method of A1, wherein, in response to the user input information being video information, the determining at least one reference image information in a predetermined database that matches the user input information comprises:
acquiring a first image in the video information;
extracting second characteristic information from the first image;
determining first feature information matched with the second feature information in the database; and
determining the reference image information corresponding to the matched first characteristic information as the matched reference image information.
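For the video case of A4, the only new step over A2 is taking a frame out of the video. A minimal sketch with OpenCV, assuming the video arrives as a file path (the function name is hypothetical):

```python
# Hypothetical sketch of A4: take the first frame of the uploaded video
# as the "first image" and reuse the image-matching flow shown above.
import cv2

def first_frame(video_path):
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()  # first frame of the video information
    cap.release()
    return frame if ok else None
```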
A5, the method of A4, wherein the determining user location information from the matched at least one reference image information comprises:
determining the position information corresponding to the matched at least one piece of reference image information as intermediate position information;
acquiring a second image in the video information;
acquiring depth maps of the first image and the second image, wherein the depth maps comprise the distance between the actual position of the user and the intermediate position; and
determining user position information according to the depth maps and the intermediate position information.
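The disclosure does not specify how the depth maps of the two frames are computed. One possible stand-in, sketched below under strong assumptions, is stereo block matching between the two grayscale frames: without camera calibration and rectification the disparity is not metric, so the meters_per_unit scale and the fixed heading are explicitly hypothetical.

```python
# Hypothetical sketch of A5: derive a coarse disparity map from two video
# frames and shift the intermediate position by the estimated distance.
# A real system would rectify the frames and calibrate the camera;
# StereoBM here only stands in for "acquiring depth maps".
import cv2
import numpy as np

def coarse_disparity(first_gray, second_gray):
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # compute() returns fixed-point disparity scaled by 16
    return stereo.compute(first_gray, second_gray).astype(np.float32) / 16.0

def user_position(intermediate_xy, disparity, meters_per_unit=1.0, heading=(1.0, 0.0)):
    # Median valid disparity as a proxy for the user's distance from the
    # point where the matched reference image was taken.
    valid = disparity[disparity > 0]
    distance = float(np.median(valid)) * meters_per_unit if valid.size else 0.0
    return np.asarray(intermediate_xy, dtype=float) + distance * np.asarray(heading)
```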
A6, the method of A2 or A4, wherein the first feature information matching the second feature information is determined in the database according to a brute-force matching algorithm, a K-nearest neighbor matching algorithm, or a fast nearest neighbor search package algorithm.
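"Fast nearest neighbor search package" reads like a literal rendering of FLANN (Fast Library for Approximate Nearest Neighbors). A hedged sketch of the K-nearest-neighbor variant over a FLANN index in OpenCV follows; the LSH index parameters and the 0.75 ratio are conventional illustrative values, not values from the disclosure.

```python
# Hypothetical sketch of A6: K-nearest-neighbor matching over a FLANN
# LSH index (suited to binary descriptors such as ORB), with Lowe's
# ratio test to keep only confident matches.
import cv2

FLANN_INDEX_LSH = 6
flann = cv2.FlannBasedMatcher(
    dict(algorithm=FLANN_INDEX_LSH, table_number=6, key_size=12, multi_probe_level=1),
    dict(checks=50),
)

def good_matches(query_desc, ref_desc, ratio=0.75):
    matches = flann.knnMatch(query_desc, ref_desc, k=2)
    # Ratio test: accept a match only if it is clearly better than the
    # second-best candidate for the same query descriptor.
    return [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
```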
A7, the method of A1, further comprising:
acquiring a motion track through an inertial sensor; and
acquiring actual position information according to the user position information and the motion trail.
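A7 combines a vision fix with an inertially derived motion trail. As a minimal sketch, assuming planar accelerometer samples at a fixed rate: plain double integration drifts quickly and real pedestrian dead reckoning would add step detection and heading filtering, so this is illustration only.

```python
# Hypothetical sketch of A7: dead-reckon a motion track from inertial
# samples and add it to the vision-based user position.
import numpy as np

def motion_track(accel_samples, dt):
    """accel_samples: (N, 2) planar accelerations in m/s^2, sampled every dt seconds."""
    velocity = np.cumsum(np.asarray(accel_samples, dtype=float) * dt, axis=0)
    displacement = np.cumsum(velocity * dt, axis=0)
    return displacement  # track relative to the vision fix

def actual_position(user_position_xy, accel_samples, dt=0.01):
    track = motion_track(accel_samples, dt)
    return np.asarray(user_position_xy, dtype=float) + track[-1]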
A8, the method of A7, further comprising:
acquiring target position information; and
acquiring navigation information according to the target position information and the actual position information.
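The disclosure leaves the form of the navigation information open. One minimal reading, sketched below under the assumption of an indoor occupancy-grid map (the grid, coordinates, and function name are all hypothetical), is a shortest path from the actual position to the target position:

```python
# Hypothetical sketch of A8: plan navigation information as a shortest
# path on an occupancy grid (0 = free, 1 = wall) from the actual
# position to the target position, using breadth-first search.
from collections import deque

def navigate(grid, start, target):
    rows, cols = len(grid), len(grid[0])
    parents, frontier = {start: None}, deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == target:
            path, node = [], cell
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]  # navigation information: cells from start to target
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in parents:
                parents[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # no route found
```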
The embodiment of the invention also discloses B1, an indoor positioning device, comprising:
the input unit is used for acquiring user input information, and the user input information is image information or video information;
a matching unit configured to determine at least one reference image information matched with the user input information in a predetermined database in which a plurality of reference image information are stored, each of the reference image information including first feature information and corresponding position information; and
the positioning unit is used for determining the user position information according to the matched at least one piece of reference image information.
B2, the device of B1, wherein the matching unit includes:
the characteristic extraction module is used for extracting second characteristic information from the image information;
the characteristic matching module is used for determining first characteristic information matched with the second characteristic information in the database; and
the information determining module is used for determining the reference image information corresponding to the matched first characteristic information as the matched reference image information.
B3, the device of B2, wherein the positioning unit is configured to determine the position information corresponding to the matched at least one reference image information as the user position information.
B4, the device of B1, wherein the matching unit includes:
the first image acquisition module is used for acquiring a first image from the video information;
the characteristic extraction module is used for extracting second characteristic information from the first image;
the characteristic matching module is used for determining first characteristic information matched with the second characteristic information in the database; and
the information determining module is used for determining the reference image information corresponding to the matched first characteristic information as the matched reference image information.
B5, the device of B4, wherein the positioning unit comprises:
the intermediate position determining module is used for determining the position information corresponding to the matched at least one piece of reference image information as intermediate position information;
the second image acquisition module is used for acquiring a second image from the video information;
the depth map acquisition module is used for acquiring the depth maps of the first image and the second image, wherein the depth maps comprise the distance between the actual position of the user and the intermediate position; and
the user position determining module is used for determining the user position information according to the depth maps and the intermediate position information.
B6, the apparatus according to B2 or B4, wherein the feature matching module is configured to determine at least one first feature information matching the second feature information in the database according to a brute-force matching algorithm, a K-nearest neighbor matching algorithm, or a fast nearest neighbor search package algorithm.
B7, the apparatus of B1, further comprising:
the motion track acquisition unit is used for acquiring a motion track through an inertial sensor; and
the actual position acquiring unit is used for acquiring actual position information according to the user position information and the motion trail.
B8, the apparatus of B7, further comprising:
a target position acquisition unit for acquiring target position information; and
the navigation unit is used for acquiring navigation information according to the target position information and the actual position information.
The embodiment of the invention also discloses C1, an electronic device comprising a memory and a processor, wherein the memory is used for storing one or more computer program instructions, and the processor executes the one or more computer program instructions to implement the method according to any one of A1-A8.
Embodiments of the invention also disclose D1, a computer-readable storage medium having computer program instructions stored thereon, which, when executed by a processor, implement the method according to any one of A1-A8.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples of carrying out the invention, and that various changes in form and details may be made in practice without departing from the spirit and scope of the invention.

Claims (10)

1. An indoor positioning method, characterized in that the method comprises:
acquiring user input information, wherein the user input information is image information or video information;
determining at least one reference image information matched with the user input information in a predetermined database, wherein the database stores a plurality of reference image information, and each reference image information comprises first characteristic information and corresponding position information; and
determining user position information according to the matched at least one piece of reference image information.
2. The method of claim 1, wherein determining at least one reference image information in a predetermined database that matches the user input information comprises:
extracting second characteristic information from the image information;
determining first feature information matched with the second feature information in the database; and
determining the reference image information corresponding to the matched first characteristic information as the matched reference image information.
3. The method according to claim 2, wherein determining user location information according to the matched at least one reference image information comprises determining the location information corresponding to the matched at least one reference image information as the user location information.
4. The method of claim 1, wherein in response to the user input information being video information, determining at least one reference image information in a predetermined database that matches the user input information comprises:
acquiring a first image in the video information;
extracting second characteristic information from the first image;
determining first feature information matched with the second feature information in the database; and
determining the reference image information corresponding to the matched first characteristic information as the matched reference image information.
5. The method of claim 4, wherein determining user location information from the matched at least one reference image information comprises:
determining the position information corresponding to the matched at least one piece of reference image information as intermediate position information;
acquiring a second image in the video information;
acquiring depth maps of the first image and the second image, wherein the depth maps comprise the distance between the actual position of the user and the intermediate position; and
determining user position information according to the depth maps and the intermediate position information.
6. The method according to claim 2 or 4, characterized in that the first feature information matching the second feature information is determined in the database according to a brute-force matching algorithm, a K-nearest neighbor matching algorithm, or a fast nearest neighbor search package algorithm.
7. The method of claim 1, further comprising:
acquiring a motion track through an inertial sensor; and
acquiring actual position information according to the user position information and the motion trail.
8. An indoor positioning device, the device comprising:
the input unit is used for acquiring user input information, and the user input information is image information or video information;
a matching unit configured to determine at least one reference image information matched with the user input information in a predetermined database in which a plurality of reference image information are stored, each of the reference image information including first feature information and corresponding position information; and
the positioning unit is used for determining the user position information according to the matched at least one piece of reference image information.
9. An electronic device comprising a memory and a processor, wherein the memory is configured to store one or more computer program instructions, wherein the one or more computer program instructions are executed by the processor to implement the method of any of claims 1-7.
10. A computer-readable storage medium on which computer program instructions are stored, which, when executed by a processor, implement the method of any one of claims 1-7.
CN201911150111.XA 2019-11-21 2019-11-21 Indoor positioning method and device, electronic equipment and storage medium Pending CN110986916A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911150111.XA CN110986916A (en) 2019-11-21 2019-11-21 Indoor positioning method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110986916A true CN110986916A (en) 2020-04-10

Family

ID=70085576

Country Status (1)

Country Link
CN (1) CN110986916A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046686A (en) * 2015-06-19 2015-11-11 奇瑞汽车股份有限公司 Positioning method and apparatus
CN105225240A (en) * 2015-09-25 2016-01-06 哈尔滨工业大学 The indoor orientation method that a kind of view-based access control model characteristic matching and shooting angle are estimated
CN105371847A (en) * 2015-10-27 2016-03-02 深圳大学 Indoor live-action navigation method and system
CN105953801A (en) * 2016-07-18 2016-09-21 乐视控股(北京)有限公司 Indoor navigation method and device
US20180039276A1 (en) * 2016-08-04 2018-02-08 Canvas Technology, Inc. System and methods of determining a geometric pose of a camera based on spatial and visual mapping
CN106647742A (en) * 2016-10-31 2017-05-10 纳恩博(北京)科技有限公司 Moving path planning method and device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111654666A (en) * 2020-05-19 2020-09-11 河南中烟工业有限责任公司 Tray cigarette material residue identification system and identification method
CN112985419A (en) * 2021-05-12 2021-06-18 中航信移动科技有限公司 Indoor navigation method and device, computer equipment and storage medium
CN112985419B (en) * 2021-05-12 2021-10-01 中航信移动科技有限公司 Indoor navigation method and device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108256574B (en) Robot positioning method and device
KR101722803B1 (en) Method, computer program, and device for hybrid tracking of real-time representations of objects in image sequence
CN109145803B (en) Gesture recognition method and device, electronic equipment and computer readable storage medium
CN111179358A (en) Calibration method, device, equipment and storage medium
CN111382613B (en) Image processing method, device, equipment and medium
CN104885098A (en) Mobile device based text detection and tracking
CN108345821B (en) Face tracking method and device
CN107423306B (en) Image retrieval method and device
JP7147753B2 (en) Information processing device, information processing method, and program
CN112598922B (en) Parking space detection method, device, equipment and storage medium
KR101879855B1 (en) Digital map generating system for performing spatial modelling through a distortion correction of image
Cvišić et al. Recalibrating the KITTI dataset camera setup for improved odometry accuracy
CN110675426A (en) Human body tracking method, device, equipment and storage medium
CN110986916A (en) Indoor positioning method and device, electronic equipment and storage medium
CN110505398A (en) A kind of image processing method, device, electronic equipment and storage medium
US9947106B2 (en) Method and electronic device for object tracking in a light-field capture
CN114943773A (en) Camera calibration method, device, equipment and storage medium
CN108369739B (en) Object detection device and object detection method
CN110991306B (en) Self-adaptive wide-field high-resolution intelligent sensing method and system
CN113610967B (en) Three-dimensional point detection method, three-dimensional point detection device, electronic equipment and storage medium
CN111046831B (en) Poultry identification method, device and server
CN113137958A (en) Lofting control method and system for RTK host and storage medium
CN110651274A (en) Movable platform control method and device and movable platform
CA3001653A1 (en) Improvements in and relating to missile targeting
KR101879858B1 (en) Spatial modelling system for modelling spatial data by extracting common feature using comparison process between coordinate information of target object extracted from corrected image and coordinate information of arial photograph

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200410