CN114812381A - Electronic equipment positioning method and electronic equipment - Google Patents

Electronic equipment positioning method and electronic equipment

Info

Publication number
CN114812381A
CN114812381A (application CN202110121715.2A)
Authority
CN
China
Prior art keywords
electronic device
pose
electronic equipment
camera
electronic
Prior art date
Legal status
Granted
Application number
CN202110121715.2A
Other languages
Chinese (zh)
Other versions
CN114812381B (en)
Inventor
朱应成
毛春静
曾以亮
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110121715.2A
Publication of CN114812381A
Application granted
Publication of CN114812381B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/002 Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates

Abstract

The embodiments of this application relate to the technical field of positioning and provide a positioning method for an electronic device, and an electronic device. The method includes: a first electronic device builds an environment map of the current environment; the first electronic device performs self-positioning based on the environment map to obtain an initial pose of the first electronic device in the environment map; the first electronic device identifies a second electronic device with a camera in the current environment; the first electronic device determines a to-be-fused pose of the first electronic device using the second electronic device; and the first electronic device fuses the initial pose and the to-be-fused pose to obtain a target pose of the first electronic device. By using the second electronic device in the current environment to assist in positioning the first electronic device, the method can improve the positioning accuracy of the first electronic device.

Description

Electronic equipment positioning method and electronic equipment
Technical Field
The embodiments of this application relate to the technical field of positioning, and in particular to a positioning method for an electronic device and an electronic device.
Background
In recent years, virtual reality (VR), augmented reality (AR), and related technologies have developed rapidly and are increasingly applied in fields such as education, training, and medical care. One of the core algorithms of VR and AR is simultaneous localization and mapping (SLAM). In SLAM, an electronic device equipped with specific sensors builds a model of the environment during motion, without prior information about the environment, while estimating its own motion through self-positioning.
Generally, to reduce errors in the self-positioning of an electronic device and improve the robustness of positioning to the environment, computer vision can be applied to self-positioning to form a visual odometer. Through feature tracking and relative motion estimation on the sequence of images captured by the camera, the visual odometer obtains the motion pose parameters of the electronic device and map information about its surroundings.
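The patent text itself contains no code; the following is a minimal visual-odometry sketch of the feature tracking and relative motion estimation described above, assuming Python with OpenCV and NumPy, two consecutive grayscale frames, and a known camera intrinsic matrix K. It is an illustration, not the claimed implementation.

```python
import cv2
import numpy as np

def relative_pose(frame_prev, frame_curr, K):
    """Estimate camera motion between two frames from tracked features."""
    # Detect and describe features in both frames.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(frame_prev, None)
    kp2, des2 = orb.detectAndCompute(frame_curr, None)

    # Track features by matching descriptors across the two frames.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Relative motion estimation: essential matrix with RANSAC, then pose recovery.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # rotation and unit-scale translation of the camera between frames
```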
In the process of tracking its motion pose with SLAM, the electronic device needs to use the feature information of objects in the real scene. Because of the limited field of view (FoV) of the camera carried by the electronic device, the portion of the scene within the field of view is often fixed in size. In some scenarios, for example when the electronic device faces a large weak-texture area such as a white wall or the floor, the positioning accuracy of SLAM is poor and large jump offsets are likely to occur in the positioning result, which seriously affects the user's use of the electronic device.
Disclosure of Invention
The positioning method for an electronic device and the electronic device provided by the embodiments of this application are intended to solve the prior-art problem that the positioning accuracy of an electronic device is poor in some scenes.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
in a first aspect, a positioning method for an electronic device is provided, including:
a first electronic device builds an environment map of the current environment;
the first electronic device performs self-positioning based on the environment map to obtain an initial pose of the first electronic device in the environment map;
the first electronic device identifies a second electronic device with a camera in the current environment;
the first electronic device determines a to-be-fused pose of the first electronic device using the second electronic device;
and the first electronic device fuses the initial pose and the to-be-fused pose to obtain a target pose of the first electronic device.
The positioning method provided by the embodiments of this application has the following beneficial effect: by using the second electronic device in the current environment to assist in positioning the first electronic device, the positioning accuracy and robustness of the first electronic device can be improved.
In a possible implementation manner of the first aspect, the determining, by the first electronic device, the to-be-fused pose of the first electronic device by using the second electronic device includes: the first electronic device determines a first pose of the camera in the environment map; the first electronic device determines a second pose of the first electronic device in a coordinate system corresponding to the camera; and the first electronic device generates the to-be-fused pose of the first electronic device according to the first pose and the second pose. By using other cameras in the current environment that face different directions, instead of relying only on the camera of the first electronic device, this embodiment can solve the problem that the first electronic device cannot be positioned when it faces textureless or weak-texture scenes such as a white wall or the floor.
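As a hedged illustration of how the first pose and the second pose could be combined into the to-be-fused pose, the sketch below treats each pose as a 4x4 homogeneous transform and simply chains them. The helper names and the matrix representation are assumptions for illustration, not the patent's notation.

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t).ravel()
    return T

def pose_to_fuse(T_map_cam, T_cam_dev):
    """Chain the two poses.

    T_map_cam: first pose  -- the second device's camera expressed in the environment map
    T_cam_dev: second pose -- the first device expressed in that camera's coordinate system
    Returns the first device's pose in the environment map, i.e. the pose to be fused.
    """
    return T_map_cam @ T_cam_dev
```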
In one possible implementation manner of the first aspect, the determining, by the first electronic device, a first pose of the camera in the environment map includes: the first electronic equipment acquires an image of the second electronic equipment; the first electronic device determines a first pose of the camera in the environment map according to the image of the second electronic device.
In a possible implementation manner of the first aspect, the determining, by the first electronic device, a first pose of the camera in the environment map further includes: the first electronic equipment acquires a positioning signal sent by the second electronic equipment; the first electronic device determines a first pose of the camera in the environment map according to the positioning signal.
In a possible implementation manner of the first aspect, the determining, by the first electronic device, a second pose of the first electronic device in a coordinate system corresponding to the camera includes: the first electronic equipment controls the camera to collect image information containing the first electronic equipment; and the first electronic equipment determines a second pose of the first electronic equipment in a coordinate system corresponding to the camera according to the image information.
In a possible implementation manner of the first aspect, the determining, by the first electronic device, a second pose of the first electronic device in a coordinate system corresponding to the camera according to the image information includes: the first electronic equipment extracts a plurality of feature points in the image information; the first electronic equipment matches the plurality of feature points with a preset feature dictionary to obtain target feature points for representing the first electronic equipment, wherein the feature dictionary is obtained by performing feature extraction on an image of the first electronic equipment by the first electronic equipment and/or second electronic equipment; and the first electronic equipment determines a second pose of the first electronic equipment in a coordinate system corresponding to the camera according to the target feature point.
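A minimal sketch of this step under stated assumptions: the feature dictionary is taken to be a set of ORB descriptors of the first electronic device together with the 3D positions of the corresponding points on the device body, and the second pose is then estimated with PnP. The data layout and the use of PnP are illustrative assumptions, not the patent's mandated method.

```python
import cv2
import numpy as np

def second_pose(image, dict_descriptors, dict_points3d, K):
    """Locate the first device in the second device's camera coordinate system.

    dict_descriptors: NxD ORB descriptors from the pre-built feature dictionary
    dict_points3d:    Nx3 matching 3D points on the first device (model frame)
    K:                3x3 intrinsic matrix of the second device's camera
    """
    # Extract feature points from the image captured by the second device's camera.
    orb = cv2.ORB_create(1000)
    kp, des = orb.detectAndCompute(image, None)

    # Match against the feature dictionary to keep only points on the first device.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des, dict_descriptors)
    img_pts = np.float32([kp[m.queryIdx].pt for m in matches])
    obj_pts = np.float32([dict_points3d[m.trainIdx] for m in matches])

    # Solve PnP to get the first device's rotation and translation in the camera frame.
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec
```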
In a possible implementation manner of the first aspect, the determining, by the first electronic device, a second pose of the first electronic device in a coordinate system corresponding to the camera includes: the first electronic device controls the second electronic device to calculate the second pose of the first electronic device in the coordinate system corresponding to the camera; and the first electronic device receives the second pose sent by the second electronic device. In this embodiment, the second pose can be calculated on the second electronic device, which then transmits the calculated second pose to the first electronic device; this reduces the amount of data transmitted in the whole process and reduces the resource usage of the first electronic device during positioning.
In a possible implementation manner of the first aspect, the second electronic device includes a plurality of electronic devices having cameras, and the to-be-fused pose includes a plurality of poses determined by using the plurality of second electronic devices. Correspondingly, the fusing, by the first electronic device, of the initial pose and the to-be-fused pose to obtain the target pose of the first electronic device includes: the first electronic device processes the plurality of to-be-fused poses to obtain a target to-be-fused pose; and the first electronic device fuses the initial pose and the target to-be-fused pose to obtain the target pose of the first electronic device. In this embodiment, a plurality of second electronic devices can be used to assist in positioning the first electronic device. A second pose can be obtained from the camera of each second electronic device; the first electronic device can process these poses, for example by weighted summation, and then fuse the result with the initial pose, which further improves positioning accuracy and robustness.
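The weighted combination mentioned above could look like the following sketch, which averages translations with per-device weights, takes a weighted rotation mean, and then blends the result with the SLAM initial pose. The weights and the blending factor alpha are assumptions for illustration; the patent does not prescribe a specific fusion formula in this passage.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def combine_to_be_fused(rotations, translations, weights):
    """Combine several to-be-fused poses (3x3 rotations, 3-vector translations) into one."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    R_mean = Rotation.from_matrix(np.stack(rotations)).mean(weights=w).as_matrix()
    t_mean = np.average(np.stack(translations), axis=0, weights=w)
    return R_mean, t_mean

def fuse_with_initial(R_init, t_init, R_aux, t_aux, alpha=0.5):
    """Blend the SLAM initial pose with the combined auxiliary pose (alpha = trust in SLAM)."""
    R_fused = Rotation.from_matrix(np.stack([R_init, R_aux])).mean(
        weights=[alpha, 1.0 - alpha]).as_matrix()
    t_fused = alpha * np.asarray(t_init) + (1.0 - alpha) * np.asarray(t_aux)
    return R_fused, t_fused
```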
In a second aspect, a positioning apparatus for an electronic device is provided, including:
the environment map building module is used for building an environment map of the current environment;
an initial pose calculation module, configured to perform self-positioning based on the environment map, and obtain an initial pose of the first electronic device in the environment map;
the electronic equipment identification module is used for identifying second electronic equipment with a camera in the current environment;
the to-be-fused pose calculation module is used for determining the to-be-fused pose of the first electronic equipment by adopting the second electronic equipment;
and the pose fusion module is used for fusing the initial pose and the pose to be fused to obtain a target pose of the first electronic equipment.
In a possible implementation manner of the second aspect, the pose to be fused calculation module is specifically configured to: determining a first pose of the camera in the environment map; determining a second pose of the first electronic equipment in a coordinate system corresponding to the camera; and generating a pose to be fused of the first electronic equipment according to the first pose and the second pose.
In a possible implementation manner of the second aspect, the pose to be fused calculation module is further specifically configured to: acquiring an image of the second electronic device; and determining a first pose of the camera in the environment map according to the image of the second electronic equipment.
In a possible implementation manner of the second aspect, the pose to be fused calculation module is further specifically configured to: acquiring a positioning signal sent by the second electronic equipment; and determining a first pose of the camera in the environment map according to the positioning signal.
In a possible implementation manner of the second aspect, the pose to be fused calculation module is further specifically configured to: controlling the camera to acquire image information containing the first electronic equipment; and determining a second pose of the first electronic equipment in a coordinate system corresponding to the camera according to the image information.
In a possible implementation manner of the second aspect, the pose to be fused calculation module is further specifically configured to: extracting a plurality of feature points in the image information; matching the plurality of feature points with a preset feature dictionary to obtain target feature points for representing the first electronic equipment; and determining a second pose of the first electronic equipment in a coordinate system corresponding to the camera according to the target feature point, wherein the feature dictionary is obtained by performing feature extraction on the image of the first electronic equipment by the first electronic equipment and/or the second electronic equipment.
In a possible implementation manner of the second aspect, the pose to be fused calculation module is further specifically configured to: controlling the second electronic equipment to calculate a second pose of the first electronic equipment in a coordinate system corresponding to the camera; and receiving the second pose sent by the second electronic equipment.
In a possible implementation manner of the second aspect, the second electronic device includes a plurality of electronic devices having the cameras, and the to-be-fused pose includes a plurality of poses determined by using the plurality of second electronic devices;
correspondingly, the pose fusion module is specifically configured to: process the plurality of to-be-fused poses to obtain a target to-be-fused pose; and fuse the initial pose and the target to-be-fused pose to obtain the target pose of the first electronic device.
In a third aspect, an electronic device is provided, which may be the first electronic device in any one of the above first aspects, and includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the computer program is executed by the processor, the positioning method of the electronic device in any one of the above first aspects is implemented.
In a fourth aspect, a computer storage medium is provided, in which computer instructions are stored, which, when run on an electronic device, cause the electronic device to perform the relevant method steps to implement the positioning method of the electronic device in any of the first aspect.
In a fifth aspect, a computer program product is provided, which when run on a computer causes the computer to perform the relevant steps to implement the positioning method of the electronic device in any of the above first aspects.
In a sixth aspect, a positioning system is provided, comprising the first electronic device and the second electronic device in any one of the above first aspects.
In a seventh aspect, a chip is provided, where the chip includes a processor, and the processor may be a general-purpose processor or a special-purpose processor. The processor is configured to support the electronic device to perform relevant steps, so as to implement the positioning method of the electronic device in any one of the above first aspects.
It is to be understood that, for the beneficial effects of the second to seventh aspects, reference may be made to the relevant description of the first aspect; details are not repeated here.
Drawings
Fig. 1 is an illustration of a prior-art four-camera VR headset;
fig. 2(a) is an application scenario diagram of a positioning method of an electronic device according to an embodiment of the present application;
fig. 2(b) is an application scenario diagram of a positioning method of an electronic device according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a block diagram of a software structure of an electronic device according to an embodiment of the present disclosure;
fig. 5 is a schematic flowchart illustrating steps of a positioning method for an electronic device according to an embodiment of the present application;
fig. 6(a) is a schematic flowchart illustrating steps of a positioning method for an electronic device according to an embodiment of the present application;
fig. 6(b) is a schematic flowchart illustrating a step of a positioning method for an electronic device according to an embodiment of the present application;
fig. 7 is a block diagram of a positioning apparatus of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In the embodiments of the present application, terms such as "first" and "second" are used to distinguish the same or similar items having substantially the same function and action. For example, the first electronic device, the second electronic device, and the like are only for distinguishing different electronic devices, and the number and execution order thereof are not limited.
It should be noted that in the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate examples, illustrations or explanations. Any embodiment or design described herein as "exemplary" or "e.g.," is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
The service scenario described in the embodiment of the present application is for more clearly illustrating the technical solution of the embodiment of the present application, and does not form a limitation on the technical solution provided in the embodiment of the present application, and it can be known by a person skilled in the art that with the occurrence of a new service scenario, the technical solution provided in the embodiment of the present application is also applicable to similar technical problems.
In the embodiments of the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
The steps involved in the positioning method provided by the embodiments of this application are only examples; not all of the steps are mandatory, and the content of each piece of information or message is not necessarily required and may be added to or reduced as needed during use.
The same steps or technical features having the same functions in the embodiments of the present application may be referred to with each other between different embodiments.
As described above, the portion of the scene within the field of view of the camera carried by an electronic device is often fixed in size due to the limited field angle. When facing a large weak-texture area such as a white wall or the floor, the electronic device positions itself with SLAM at poor accuracy. Taking a VR headset as an example of the electronic device: when a user wears the VR headset, poor SLAM positioning accuracy results in large jump offsets. Therefore, to improve the positioning accuracy of the VR headset in such scenes, some researchers have proposed increasing the number of cameras to enlarge the field angle of the VR headset. For example, positioning schemes using two-camera (binocular) or even four-camera VR headsets have appeared in the prior art.
As shown in fig. 1, which is an exemplary diagram of a prior-art four-camera VR headset, the VR headset 100 carries four cameras (cameras 101, 102, 103, and 104 in fig. 1). Tracking and positioning texture features across the four images can improve the robustness of the system. However, whether in a two-camera or a four-camera positioning scheme, the cameras on the VR headset 100 still face the same direction. When facing weak-texture areas such as white walls and the floor, the images collected by the multiple cameras may all be weak-texture images. As a result, SLAM positioning may still fail, causing drift and jumps in the VR scene.
In order to solve the above problem, an embodiment of this application provides a positioning method for an electronic device that jointly uses other cameras in the scene: the pose of the electronic device is identified and located by a visual method and fused with the pose obtained by the electronic device's own SLAM positioning, so as to improve tracking accuracy and robustness in scenes such as weak-texture areas.
Fig. 2(a) is a schematic view of an application scenario of a positioning method for an electronic device according to an embodiment of the present application, where the scenario is an indoor scenario. In the indoor scene shown in fig. 2(a), the first electronic device 21 and the second electronic device 22a are included, the second electronic device 22a has a camera 221a, and the angle of view of the camera 221a is V1. Exemplarily, the first electronic device 21 in fig. 2(a) may be a VR headset, and the second electronic device 22a may be a mobile phone. In a specific application, the first electronic device 21 may perform self-positioning based on SLAM, and obtain an initial pose in the current environment. Because the positioning accuracy of the first electronic device 21 for self-positioning based on SLAM may be low, the first electronic device 21 may perform auxiliary positioning in combination with the camera 221a of the second electronic device 22a to obtain the to-be-fused pose of the first electronic device 21. Finally, the first electronic device 21 can obtain a target pose with higher accuracy by fusing the initial pose and the pose to be fused, so that the positioning accuracy and robustness of the first electronic device 21 are improved.
In a possible implementation manner of the embodiment of the present application, with respect to an indoor scenario shown in fig. 2(a), fig. 2(b) is a schematic view of an application scenario of another positioning method for an electronic device provided in the embodiment of the present application. In this scenario, in addition to the second electronic device 22a, a second electronic device 22b is also included, the second electronic device 22b has a camera 222b thereon, and the field angle of the camera 222b is V2. The second electronic device 22b may be, for example, a television set having a camera function. When the first electronic device 21 is performing positioning, the initial pose and the pose to be fused obtained by the camera 222b of the second electronic device 22b may also be fused; alternatively, the first electronic device 21 may first process the pose to be fused obtained by the second electronic device 22a and the pose to be fused obtained by the second electronic device 22b to obtain the target pose to be fused, and then fuse the initial pose and the target pose to be fused to obtain the target pose of the first electronic device 21. The number of the second electronic devices is not limited in the embodiments of the present application.
In this embodiment, the first electronic device or the second electronic device may be an electronic device with a camera, such as a mobile phone, a tablet computer, a wearable device, an in-vehicle device, an AR/VR device, a notebook computer, a Personal Computer (PC), a netbook, and a Personal Digital Assistant (PDA). The embodiment of the present application does not limit the specific type of the first electronic device or the second electronic device.
For example, fig. 3 shows a schematic structural diagram of an electronic device 300. The structure of the first electronic device and the second electronic device may refer to the structure of the electronic device 300.
The electronic device 300 may include a processor 310, an external memory interface 320, an internal memory 321, a Universal Serial Bus (USB) interface 330, a charge management module 340, a power management module 341, a battery 342, an antenna 1, an antenna 2, a mobile communication module 350, a wireless communication module 360, an audio module 370, a speaker 370A, a receiver 370B, a microphone 370C, a headset interface 370D, a sensor module 380, keys 390, a motor 391, an indicator 392, a camera 393, a display 394, and a Subscriber Identification Module (SIM) card interface 395, and the like. Among them, the sensor module 380 may include a pressure sensor 380A, a gyro sensor 380B, an air pressure sensor 380C, a magnetic sensor 380D, an acceleration sensor 380E, a distance sensor 380F, a proximity light sensor 380G, a fingerprint sensor 380H, a temperature sensor 380J, a touch sensor 380K, an ambient light sensor 380L, a bone conduction sensor 380M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 300. In some embodiments of the present application, the electronic device 300 may include more or fewer components than illustrated, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 310 may include one or more processing units. For example, the processor 310 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate an operation control signal according to the instruction operation code and the time sequence signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 310 for storing instructions and data. In some embodiments of the present application, the memory in the processor 310 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 310. If the processor 310 needs to use the instruction or data again, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 310, thereby increasing the efficiency of the system.
In some embodiments of the present application, the processor 310 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a bus, a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments of the present application, the processor 310 may include multiple sets of I2C buses. The processor 310 may be coupled to the touch sensor 380K, charger, flash, camera 393, etc. through different I2C bus interfaces. For example, the processor 310 may be coupled to the touch sensor 380K via an I2C interface, such that the processor 310 and the touch sensor 380K communicate via an I2C bus interface to implement touch functionality of the electronic device 300.
The I2S interface may be used for audio communication. In some embodiments of the present application, the processor 310 may include multiple sets of I2S buses. The processor 310 may be coupled to the audio module 370 via an I2S bus to enable communication between the processor 310 and the audio module 370. In some embodiments of the present application, the audio module 370 may transmit the audio signal to the wireless communication module 360 through the I2S interface, so as to implement the function of answering a call through the bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments of the present application, the audio module 370 and the wireless communication module 360 may be coupled through a PCM bus interface. In some embodiments of the present application, the audio module 370 may also transmit an audio signal to the wireless communication module 360 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments of the present application, a UART interface is generally used to connect the processor 310 and the wireless communication module 360. For example, the processor 310 communicates with a bluetooth module in the wireless communication module 360 through a UART interface to implement a bluetooth function. In some embodiments of the present application, the audio module 370 may transmit an audio signal to the wireless communication module 360 through a UART interface, so as to realize a function of playing music through a bluetooth headset.
A MIPI interface may be used to connect processor 310 with peripheral devices such as display 394, camera 393, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like.
In some embodiments of the present application, processor 310 and camera 393 communicate via a CSI interface to implement the capture functionality of electronic device 300. The processor 310 and the display screen 394 communicate via the DSI interface to implement the display functions of the electronic device 300.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments of the present application, a GPIO interface may be used to connect processor 310 with camera 393, display 394, wireless communication module 360, audio module 370, sensor module 380, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 330 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 330 may be used to connect a charger to charge the electronic device 300, and may also be used to transmit data between the electronic device 300 and peripheral devices. The USB interface 330 may also be used to connect to a headset through which audio may be played. The interface may also be used to connect other electronic devices, such as AR devices and the like.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only an illustration, and does not limit the structure of the electronic device 300. In other embodiments of the present application, the electronic device 300 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 340 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 340 may receive charging input from a wired charger via the USB interface 330. In some wireless charging embodiments, the charging management module 340 may receive a wireless charging input through a wireless charging coil of the electronic device 300. The charging management module 340 may also supply power to the electronic device through the power management module 341 while charging the battery 342.
The power management module 341 is configured to connect the battery 342, the charging management module 340 and the processor 310. The power management module 341 receives input from the battery 342 and/or the charge management module 340, and provides power to the processor 310, the internal memory 321, the display 394, the camera 393, the wireless communication module 360, and the like. The power management module 341 may also be configured to monitor parameters such as battery capacity, battery cycle count, and battery state of health (leakage, impedance).
In other embodiments, the power management module 341 may also be disposed in the processor 310. In other embodiments, the power management module 341 and the charging management module 340 may be disposed in the same device.
The wireless communication function of the electronic device 300 may be implemented by the antenna 1, the antenna 2, the mobile communication module 350, the wireless communication module 360, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 300 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example, the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 350 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 300. The mobile communication module 350 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 350 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 350 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave.
In some embodiments of the present application, at least some of the functional modules of the mobile communication module 350 may be disposed in the processor 310. In some embodiments of the present application, at least some of the functional modules of the mobile communication module 350 may be disposed in the same device as at least some of the modules of the processor 310.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 370A, the receiver 370B, etc.) or displays images or video through the display 394.
In some embodiments of the present application, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be separate from the processor 310, and may be disposed in the same device as the mobile communication module 350 or other functional modules.
The wireless communication module 360 may provide a solution for wireless communication applied to the electronic device 300, including Wireless Local Area Networks (WLANs), such as wireless fidelity (Wi-Fi) networks, Bluetooth (BT), Global Navigation Satellite Systems (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 360 may be one or more devices integrating at least one communication processing module. The wireless communication module 360 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 310. The wireless communication module 360 may also receive a signal to be transmitted from the processor 310, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments of the present application, the antenna 1 of the electronic device 300 is coupled to the mobile communication module 350 and the antenna 2 is coupled to the wireless communication module 360, such that the electronic device 300 can communicate with a network and other devices through wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), time-division code division multiple access (TD-SCDMA), Long Term Evolution (LTE), LTE, BT, GNSS, WLAN, NFC, FM, and/or IR technologies, among others. The GNSS may include a Global Positioning System (GPS), a global navigation satellite system (GLONASS), a beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a Satellite Based Augmentation System (SBAS).
The electronic device 300 implements display functions via the GPU, the display screen 394, and the application processor, among other things. The GPU is an image processing microprocessor coupled to a display 394 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 310 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 394 is used to display images, video, and the like. The display screen 394 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments of the present application, the electronic device 300 may include 1 or N display screens 394, N being a positive integer greater than 1.
Electronic device 300 may implement a capture function via the ISP, camera 393, video codec, GPU, display 394, application processor, etc.
The ISP is used to process the data fed back by the camera 393. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments of the present application, the ISP may be located in camera 393.
Camera 393 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV or other format. In some embodiments of the present application, electronic device 300 may include 1 or N cameras 393, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 300 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 300 may support one or more video codecs. In this way, the electronic device 300 can play or record video in a variety of encoding formats, such as Moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 300, for example, image recognition, face recognition, voice recognition, text understanding, and the like, may be implemented by the NPU.
The external memory interface 320 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 300. The external memory card communicates with the processor 310 through the external memory interface 320 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 321 may be used to store computer-executable program code, which includes instructions. The internal memory 321 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, a phone book, etc.) created during use of the electronic device 300, and the like.
In addition, the internal memory 321 may include a high-speed random access memory and may also include a nonvolatile memory. Such as at least one magnetic disk storage device, flash memory device, Universal Flash Storage (UFS), etc.
The processor 310 executes various functional applications of the electronic device 300 and data processing by executing instructions stored in the internal memory 321 and/or instructions stored in a memory provided in the processor.
The electronic device 300 may implement audio functions through the audio module 370, the speaker 370A, the receiver 370B, the microphone 370C, the earphone interface 370D, and the application processor, etc. Such as music playing, recording, etc.
The audio module 370 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 370 may also be used to encode and decode audio signals. In some embodiments of the present application, the audio module 370 may be disposed in the processor 310, or some functional modules of the audio module 370 may be disposed in the processor 310.
The speaker 370A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic device 300 can listen to music through the speaker 370A or listen to a hands-free conversation.
The receiver 370B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic device 300 receives a call or voice information, it can receive voice by placing the receiver 370B close to the ear of the person.
The microphone 370C, also called "mike", is used to convert a sound signal into an electrical signal. When making a call or sending voice information, the user can input a sound signal into the microphone 370C by speaking close to the microphone 370C. The electronic device 300 may be provided with at least one microphone 370C. In other embodiments, the electronic device 300 may be provided with two microphones 370C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 300 may further include three, four, or more microphones 370C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
The headphone interface 370D is used to connect wired headphones. The headset interface 370D may be the USB interface 330, or may be a 3.5mm open mobile electronic device platform (OMTP) standard interface, a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 380A is used for sensing a pressure signal, and converting the pressure signal into an electrical signal. In some embodiments, the pressure sensor 380A may be disposed on the display screen 394. The pressure sensor 380A can be of a wide variety, such as a resistive pressure sensor, an inductive pressure sensor, a capacitive pressure sensor, or the like. The capacitive pressure sensor may be a sensor comprising at least two parallel plates having an electrically conductive material. When a force acts on the pressure sensor 380A, the capacitance between the electrodes changes. The electronic device 300 determines the intensity of the pressure from the change in capacitance. When a touch operation is applied to the display screen 394, the electronic apparatus 300 detects the intensity of the touch operation according to the pressure sensor 380A. The electronic apparatus 300 may also calculate the touched position from the detection signal of the pressure sensor 380A.
In some embodiments of the present application, touch operations that are applied to the same touch position but have different touch operation intensities may correspond to different operation instructions. For example, when a touch operation having a touch operation intensity smaller than a first pressure threshold is applied to the short message application icon, an instruction to view the short message is executed. And when the touch operation with the touch operation intensity larger than or equal to the first pressure threshold value acts on the short message application icon, executing an instruction of newly building the short message.
The gyro sensor 380B may be used to determine the motion pose of the electronic device 300. In some embodiments of the present application, the angular velocity of the electronic device 300 about three axes (i.e., x, y, and z axes) may be determined by the gyroscope sensor 380B. The gyro sensor 380B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 380B detects the shake angle of the electronic device 300, calculates the distance to be compensated for by the lens module according to the shake angle, and allows the lens to counteract the shake of the electronic device 300 through a reverse movement, thereby achieving anti-shake. The gyro sensor 380B may also be used for navigation, body sensing game scenes.
The air pressure sensor 380C is used to measure air pressure. In some embodiments of the present application, the electronic device 300 calculates altitude, assisted positioning, and navigation from barometric pressure values measured by barometric pressure sensor 380C.
The magnetic sensor 380D includes a hall sensor. The electronic device 300 may detect the opening and closing of the flip holster using the magnetic sensor 380D. In some embodiments of the present application, when the electronic device 300 is a flip cover machine, the electronic device 300 may detect the opening and closing of the flip cover according to the magnetic sensor 380D, and further set the automatic unlocking of the flip cover according to the detected opening and closing state of the holster or the detected opening and closing state of the flip cover.
The acceleration sensor 380E may detect the magnitude of acceleration of the electronic device 300 in various directions (typically three axes). The magnitude and direction of gravity can be detected when the electronic device 300 is stationary. The method can also be used for recognizing the posture of the electronic equipment, and is applied to horizontal and vertical screen switching, pedometers and other applications.
A distance sensor 380F for measuring distance. The electronic device 300 may measure the distance by infrared or laser. In some embodiments of the present application, such as shooting a scene, the electronic device 300 may utilize the range sensor 380F to range for fast focus.
The proximity light sensor 380G may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 300 emits infrared light outward through the light-emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, the electronic device 300 can determine that there is an object nearby; when insufficient reflected light is detected, it can determine that there is no object nearby. The electronic device 300 can use the proximity light sensor 380G to detect that the user is holding the electronic device 300 close to the ear for a call, so as to automatically turn off the screen and save power. The proximity light sensor 380G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 380L is used to sense the ambient light level. The electronic device 300 may adaptively adjust the brightness of the display 394 based on the perceived ambient light level. The ambient light sensor 380L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 380L may also cooperate with the proximity light sensor 380G to detect whether the electronic device 300 is in a pocket for preventing inadvertent touches.
The fingerprint sensor 380H is used to capture a fingerprint. The electronic device 300 may utilize the collected fingerprint characteristics to implement fingerprint unlocking, access to an application lock, fingerprint photographing, fingerprint incoming call answering, and the like.
The temperature sensor 380J is used to detect temperature. In some embodiments of the present application, the electronic device 300 implements a temperature processing strategy using the temperature detected by the temperature sensor 380J. For example, when the temperature reported by the temperature sensor 380J exceeds a threshold, the electronic device 300 performs a reduction in performance of a processor located near the temperature sensor 380J, so as to reduce power consumption and implement thermal protection. In other embodiments, the electronic device 300 heats the battery 342 when the temperature is below another threshold to avoid the low temperature causing the electronic device 300 to shut down abnormally. In other embodiments, when the temperature is below a further threshold, the electronic device 300 performs a boost on the output voltage of the battery 342 to avoid an abnormal shutdown due to low temperature.
The touch sensor 380K is also referred to as a "touch device". The touch sensor 380K may be disposed on the display screen 394, and the touch sensor 380K and the display screen 394 form a touch screen, which is also referred to as a "touch screen". The touch sensor 380K is used to detect a touch operation applied thereto or thereabout. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided via the display 394. In other embodiments, the touch sensor 380K can be disposed on a surface of the electronic device 300 at a different location than the display 394.
The bone conduction sensor 380M can acquire a vibration signal. In some embodiments of the present application, the bone conduction transducer 380M can acquire a vibration signal of the vibrating bone mass of the vocal part of the human body. The bone conduction sensor 380M may also contact the human body pulse to receive the blood pressure pulsation signal.
In some embodiments of the present application, the bone conduction transducer 380M may also be disposed in a headset, integrated into a bone conduction headset. The audio module 370 may analyze a voice signal based on the vibration signal of the bone mass vibrated by the sound part acquired by the bone conduction sensor 380M, so as to implement a voice function. The application processor can analyze heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 380M, so that the heart rate detection function is realized.
Keys 390 include a power-on key, a volume key, etc. The keys 390 may be mechanical keys or touch keys. The electronic device 300 may receive a key input, and generate a key signal input related to user setting and function control of the electronic device 300.
Motor 391 may generate a vibration cue. The motor 391 may be used for both incoming call vibration prompting and touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 391 may also respond to different vibration feedback effects in response to touch operations applied to different areas of the display screen 394. Different application scenarios (e.g., time reminders, received messages, alarms, games, etc.) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 392 may be an indicator light, and may be used to indicate a charging status or a change in battery level, or to indicate a message, a missed call, a notification, and the like.
The SIM card interface 395 is used to connect a SIM card. A SIM card can be connected to or detached from the electronic device 300 by inserting it into or removing it from the SIM card interface 395. The electronic device 300 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 395 may support a Nano SIM card, a Micro SIM card, a SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 395 at the same time. The types of the multiple cards may be the same or different. The SIM card interface 395 may also be compatible with different types of SIM cards. The SIM card interface 395 may also be compatible with an external memory card. The electronic device 300 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments of the present application, the electronic device 300 employs an eSIM (i.e., an embedded SIM card). The eSIM card can be embedded in the electronic device 300 and cannot be separated from the electronic device 300.
The software system of the electronic device 300 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present application takes the Android system with a layered architecture as an example to illustrate the software structure of the electronic device 300.
Fig. 4 is a block diagram of a software structure of an electronic device 300 according to an embodiment of the present application.
The layered architecture divides the software into several layers, each layer having clear roles and division of labor. The layers communicate with each other through a software interface. In some embodiments of the present application, the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, the Android runtime and system libraries, and a kernel layer.
The application layer may include a series of application packages.
As shown in fig. 4, the application packages may include applications such as camera, gallery, calendar, phone, map, navigation, WLAN, Bluetooth, music, video, and short message.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 4, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and answered, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide communication functions of the electronic device 300, for example, management of call status (including connected, hung up, and the like).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so forth.
The notification manager enables the application to display notification information in the status bar. It can be used to convey notification-type messages, which may disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify download completion, provide message alerts, and the like. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or present notifications on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is sounded, the electronic device vibrates, or an indicator light flashes.
The Android runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core library comprises two parts: one part contains the functions that need to be called by the java language, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules, for example, a surface manager, media libraries, a three-dimensional graphics processing library (e.g., OpenGL ES), a 2D graphics engine (e.g., SGL), and so on.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording in a variety of commonly used audio and video formats, as well as still image files, and the like. The media library may support a variety of audio-video encoding formats, e.g., MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, and the like.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises a display driver, a camera driver, an audio driver, and a sensor driver.
The following embodiments take an electronic device with the above hardware structure/software structure as an example, and describe the positioning method of the electronic device provided in the embodiments of the present application.
Referring to fig. 5, a schematic step diagram of a positioning method for an electronic device provided in an embodiment of the present application is shown, where the method specifically includes the following steps:
S501, the first electronic device constructs an environment map of the current environment.
In the embodiment of the present application, the first electronic device may refer to an electronic device that currently needs to perform self-positioning. At least one camera should be configured on the first electronic device. The current environment may refer to a scene in which the first electronic device is currently located, and the scene may be an indoor scene or an outdoor scene. When the first electronic device performs self-positioning in the current environment, an environment map of the current environment may be first constructed, and the environment map may be a three-dimensional map.
In a possible implementation manner of the embodiment of the application, the first electronic device may call a camera of the first electronic device to shoot the current environment, and an environment map of the current environment is constructed by identifying an image obtained by shooting.
S502, self-positioning is carried out on the first electronic device based on the environment map, and the initial pose of the first electronic device in the environment map is obtained.
Self-positioning means that, without prior information about the environment, the electronic device builds a model of the environment during motion and, at the same time, estimates its own motion.
In this embodiment of the application, the first electronic device may perform self-positioning based on the environment map constructed in S501, and determine an initial pose of the first electronic device in the environment map.
It should be noted that the pose refers to a position and a posture, and the initial pose of the first electronic device refers to a position and a posture where the first electronic device is currently located. The "initial pose" is used in this step only to distinguish the current position and posture of the first electronic device from the position and posture in the subsequent step, and there is no other special meaning in this embodiment of the present application.
In a possible implementation manner of the embodiment of the present application, the first electronic device may be configured with a SLAM (simultaneous localization and mapping) function. Therefore, after the first electronic device is placed in the current environment, the SLAM module can start working, automatically perform positioning and mapping, and obtain the initial pose of the first electronic device.
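By way of illustration only, a pose as used below can be thought of as a position plus an orientation. The following Python sketch (not part of this application; the quaternion convention and variable names are assumptions) turns such a pose into a 4x4 homogeneous transform, a form convenient for the compositions described in later steps.

```python
import numpy as np

def pose_to_matrix(position, quat_xyzw):
    """Build a 4x4 homogeneous transform from a position and a unit quaternion (x, y, z, w)."""
    x, y, z, w = quat_xyzw
    rotation = np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - z * w),     2 * (x * z + y * w)],
        [2 * (x * y + z * w),     1 - 2 * (x * x + z * z), 2 * (y * z - x * w)],
        [2 * (x * z - y * w),     2 * (y * z + x * w),     1 - 2 * (x * x + y * y)],
    ])
    transform = np.eye(4)
    transform[:3, :3] = rotation
    transform[:3, 3] = position
    return transform

# Hypothetical initial pose reported by SLAM: 0.5 m along x, 1.2 m along z, no rotation
T_map_device_initial = pose_to_matrix([0.5, 0.0, 1.2], [0.0, 0.0, 0.0, 1.0])
```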
S503, the first electronic equipment identifies second electronic equipment with a camera in the current environment.
In the embodiment of the present application, the second electronic device may be the same type of electronic device as the first electronic device, or may be a different type of electronic device from the first electronic device. The second electronic device should have a camera or the like capable of image acquisition.
Illustratively, the first electronic device may be a VR headset. Therefore, the second electronic device may be another VR headset, or the second electronic device may be an electronic device with a camera, such as a mobile phone, a notebook computer, or the like. The embodiments of the present application do not limit this.
In a possible implementation manner of the embodiment of the application, the first electronic device may identify the second electronic device in the current environment through deep learning or a visual manner. For example, the first electronic device may capture a current environment, and then process the captured image to identify whether a second electronic device having a camera exists in the current environment.
In another possible implementation manner of the embodiment of the present application, the first electronic device may also determine, in a wired or wireless manner, whether a connectable second electronic device with a camera exists in the current environment. Illustratively, the first electronic device may send a connection request to the outside in a Bluetooth pairing manner. If there is another connectable electronic device, the first electronic device may, after completing the Bluetooth connection with that electronic device, determine through message interaction whether the electronic device has a camera.
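As a hedged sketch of the message interaction mentioned above (the port number, message fields, and transport are assumptions introduced here purely for illustration, not a protocol defined by this application), a device could ask an already reachable peer whether it exposes a camera roughly as follows:

```python
import json
import socket

def peer_has_camera(host: str, port: int = 5000, timeout: float = 2.0) -> bool:
    """Ask a reachable peer device whether it has a camera (hypothetical message format)."""
    with socket.create_connection((host, port), timeout=timeout) as conn:
        query = {"type": "capability_query", "capability": "camera"}
        conn.sendall(json.dumps(query).encode("utf-8") + b"\n")
        reply = json.loads(conn.makefile("r", encoding="utf-8").readline())
    return bool(reply.get("has_camera", False))
```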
In the embodiment of the present application, after recognizing that a second electronic device with a camera exists in the current environment, the first electronic device may request permission from the second electronic device to invoke its camera for image capture. After the first electronic device obtains the permission to invoke the camera of the second electronic device, S504 may be executed, in which the second electronic device is used to determine the to-be-fused pose of the first electronic device.
And S504, the first electronic device determines the to-be-fused pose of the first electronic device by adopting the second electronic device.
In this embodiment of the application, the to-be-fused pose of the first electronic device may be a pose obtained by processing some data of the first electronic device, and the to-be-fused pose may be used for being fused with an initial pose, so as to obtain a target pose of the first electronic device in a current environment. Some of the data may include image data captured by a camera of the first electronic device and a camera of the second electronic device.
In a possible implementation manner of the embodiment of the application, when the first electronic device determines the to-be-fused pose by using the second electronic device, the first pose of the camera of the second electronic device in the environment map may be determined first.
In one example, the first electronic device may adopt a visual method to position a camera of the second electronic device, and obtain a first pose of the camera of the second electronic device in the environment map. For example, the first electronic device may acquire an image of the second electronic device by using its own camera, and then determine a first pose of the camera of the second electronic device in the environment map according to the acquired image.
In another example, the first electronic device may also determine the first pose of the camera of the second electronic device in the environment map by means of wireless positioning. For example, the first electronic device may obtain a positioning signal sent by the second electronic device, and then determine, according to the positioning signal, a first pose of the second electronic device and a camera thereof in the environment map.
In this embodiment of the application, when the first electronic device determines the to-be-fused pose by using the second electronic device, the second pose of the first electronic device in a coordinate system corresponding to a camera of the second electronic device may also be determined.
It should be noted that the process of determining the second pose of the first electronic device in the coordinate system corresponding to the camera of the second electronic device may be performed at the first electronic device side or at the second electronic device side.
In an example, taking the process of determining the second posture as an example performed at the first electronic device side, after the first electronic device obtains the right to call the camera of the second electronic device, the camera may be controlled to collect image information including the first electronic device. For example, the first electronic device may send a control instruction to the second electronic device, instruct a camera of the second electronic device to perform image capturing, and obtain image information including the first electronic device. Then, the image information can be sent to the first electronic device by the second electronic device, the first electronic device processes the image information, and according to the image information, the second pose of the first electronic device in the coordinate system corresponding to the camera of the second electronic device is determined.
In a possible implementation manner of the embodiment of the present application, when determining, according to the received image information, a second pose of the first electronic device in a coordinate system corresponding to a camera of the second electronic device, the first electronic device may first extract a plurality of feature points from the image information. These feature points may be extracted by any feature extraction algorithm such as Features from Accelerated Segment Test (FAST), Oriented FAST and Rotated BRIEF (ORB), or scale-invariant feature transform (SIFT), and the corresponding feature descriptors may be BRIEF or the like. Then, the first electronic device may match the plurality of feature points with a preset feature dictionary to obtain target feature points for characterizing the first electronic device. The first electronic device can determine a second pose of the first electronic device in the coordinate system corresponding to the camera of the second electronic device according to the target feature points obtained through matching. The feature dictionary may be obtained by feature extraction performed by the first electronic device and/or the second electronic device on images of the first electronic device; illustratively, the first electronic device and/or the second electronic device may extract ORB feature points from images containing the first electronic device in advance and train a feature dictionary that can be used for characterizing the first electronic device. It should be noted that, when the first electronic device extracts feature points from the image information, the feature extraction algorithm and descriptor used should be consistent with those used when constructing the feature dictionary.
When the first electronic device matches the extracted feature points with the feature dictionary, if a certain feature point is matched with a feature point in the feature dictionary, the feature point can be considered as belonging to a target feature point which can be used for representing the first electronic device.
In this embodiment of the application, the first electronic device may determine whether the two feature points match by calculating a euclidean distance or a hamming distance between descriptors of the feature points. For example, if the euclidean distance between the descriptor of a certain feature point extracted from the image information and the descriptor of a certain feature point in the feature dictionary is smaller than a certain threshold, the first electronic device may determine that the two match. The first electronic device may take the feature point as a target feature point.
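A minimal sketch of the extraction-and-matching step just described, using OpenCV's ORB detector and a brute-force Hamming matcher; the Hamming threshold and the idea of storing the feature dictionary as a matrix of binary descriptors are illustrative assumptions, not values fixed by this application.

```python
import cv2

def find_target_feature_points(image_bgr, dictionary_descriptors, max_hamming=40):
    """Extract ORB features and keep those close (in Hamming distance) to the feature dictionary."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    if descriptors is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.match(descriptors, dictionary_descriptors)
    # A feature point whose best dictionary match is below the threshold is taken as a target feature point
    return [keypoints[m.queryIdx] for m in matches if m.distance < max_hamming]
```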
According to the target feature points obtained by matching, the first electronic device can determine the second pose of the first electronic device in the coordinate system corresponding to the camera of the second electronic device by using a geometric method or an optimization solution mode.
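As one hedged example of the geometric route, a perspective-n-point (PnP) solve over known 3D points on the first electronic device (for example, marker or model points, assumed here to be available) and their detected 2D projections yields such a pose; the function and variable names below are placeholders.

```python
import cv2
import numpy as np

def second_pose_from_pnp(model_points_3d, image_points_2d, camera_matrix, dist_coeffs=None):
    """Estimate the first device's pose in the environment camera's coordinate system via PnP."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(4)
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(model_points_3d, dtype=np.float64),
        np.asarray(image_points_2d, dtype=np.float64),
        camera_matrix, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP solution did not converge")
    rotation, _ = cv2.Rodrigues(rvec)      # rotation vector -> 3x3 rotation matrix
    T_cam_device = np.eye(4)
    T_cam_device[:3, :3] = rotation
    T_cam_device[:3, 3] = tvec.ravel()
    return T_cam_device                    # the "second pose" in the camera coordinate system
```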
In another example, taking the process of determining the second pose as an example performed at the second electronic device side, after obtaining the right to invoke the camera of the second electronic device, the first electronic device may send a control instruction to the second electronic device, instructing the second electronic device to calculate the second pose of the first electronic device in the coordinate system corresponding to the camera of the second electronic device. The second electronic device completes the calculation process, and after the information of the second pose is obtained, the information of the second pose can be sent to the first electronic device.
It should be noted that the process of calculating the second pose by the second electronic device is similar to the process of calculating the second pose by the first electronic device, and reference may be made to the foregoing description.
In this embodiment of the application, after determining a first pose of a camera of a second electronic device in an environment map and a second pose of the first electronic device in a coordinate system corresponding to the camera of the second electronic device, the first electronic device may generate a pose to be fused of the first electronic device according to the first pose and the second pose.
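Written with homogeneous transforms (purely illustrative; this application does not fix a specific formula), one natural reading of this composition is: if the first pose places the environment camera in the map and the second pose places the first device in that camera's frame, chaining the two gives the to-be-fused pose of the first device in the map.

```python
import numpy as np

def compose_to_be_fused_pose(T_map_camera: np.ndarray, T_camera_device: np.ndarray) -> np.ndarray:
    """Chain the first pose (camera in map) with the second pose (device in camera frame)."""
    return T_map_camera @ T_camera_device  # 4x4 pose of the first device in the environment map
```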
And S505, the first electronic equipment fuses the initial pose and the pose to be fused to obtain a target pose of the first electronic equipment.
In the embodiment of the application, the initial pose is a pose obtained by the first electronic device during self-positioning, and the pose to be fused is a pose calculated by combining other cameras in the current environment. The first electronic device can obtain the target pose of the first electronic device by fusing the initial pose and the pose to be fused.
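This application does not prescribe a particular fusion rule; as one hedged illustration, the two poses can be blended with a weighted average of their translations and a spherical interpolation of their rotations (the weight below is an assumed tuning parameter).

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def fuse_poses(T_initial: np.ndarray, T_to_fuse: np.ndarray, weight_to_fuse: float = 0.5) -> np.ndarray:
    """Blend two 4x4 poses: weighted translation average plus slerp between the two rotations."""
    translation = (1.0 - weight_to_fuse) * T_initial[:3, 3] + weight_to_fuse * T_to_fuse[:3, 3]
    rotations = Rotation.from_matrix(np.stack([T_initial[:3, :3], T_to_fuse[:3, :3]]))
    rotation = Slerp([0.0, 1.0], rotations)([weight_to_fuse]).as_matrix()[0]
    fused = np.eye(4)
    fused[:3, :3] = rotation
    fused[:3, 3] = translation
    return fused
```

Increasing weight_to_fuse trusts the environment-camera estimate more, which matches the weak-texture situation discussed next.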
In one example, if the first electronic device faces a weak texture area such as a white wall or the ground, the initial pose obtained by positioning with an image acquired by the camera of the first electronic device may not be accurate. The first electronic device obtains the pose to be fused by using other cameras in the current environment for auxiliary positioning; by fusing the initial pose and the pose to be fused, a target pose with higher accuracy can be obtained, and the positioning accuracy and robustness are improved.
It should be noted that the definition of the weak texture region may differ depending on the algorithm used in practice. For example, a region in which the number of extractable feature points is smaller than a certain value can be regarded as a weak texture region; alternatively, the weak texture region may be determined from the gradient, and a region whose gradient average value falls within a certain interval may be regarded as a weak texture region, which is not limited in the embodiment of the present application.
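A hedged sketch of the two criteria just mentioned; the keypoint count and gradient thresholds below are illustrative values, not limits defined by this application.

```python
import cv2
import numpy as np

def is_weak_texture(gray, min_keypoints=50, gradient_interval=(0.0, 8.0)):
    """Flag a weak-texture view: too few FAST keypoints, or mean gradient magnitude inside a given interval."""
    keypoints = cv2.FastFeatureDetector_create().detect(gray, None)
    grad_x = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    grad_y = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mean_gradient = float(np.mean(np.hypot(grad_x, grad_y)))
    low, high = gradient_interval
    return len(keypoints) < min_keypoints or low <= mean_gradient <= high
```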
In a possible implementation manner of the embodiment of the present application, the second electronic device in the current environment may include a plurality of electronic devices, that is, there are a plurality of electronic devices in the current environment, and each electronic device has a camera and can be used to assist in positioning the first electronic device. Therefore, when the first electronic device determines the target pose, each camera can be used for assisting in positioning, and the positioning accuracy and robustness are further improved.
In this embodiment of the application, for each second electronic device, the first electronic device may perform auxiliary positioning by using the camera on that electronic device to obtain one pose to be fused. Therefore, with the plurality of second electronic devices, a plurality of poses to be fused can be obtained. It should be noted that, for the specific processing procedure of obtaining one pose to be fused with each electronic device, reference may be made to the descriptions of the foregoing steps, and details are not described here again.
Then, the first electronic device can process the multiple poses to be fused to obtain a target pose to be fused. For example, for the multiple poses to be fused, the first electronic device may perform processing in a weighted summation manner to obtain the target pose to be fused.
The first electronic device can then fuse the initial pose and the target pose to be fused to obtain a target pose with higher accuracy.
For convenience of understanding, the following describes a positioning method of an electronic device according to an embodiment of the present application with reference to a specific example.
Fig. 6(a) is a schematic flowchart illustrating steps of a positioning method for an electronic device according to an embodiment of the present application. Taking the first electronic device as a VR headset and the second electronic device as a mobile phone as an example, the positioning method of the electronic device includes the following steps:
in S601a, the VR headset SLAM works to perform positioning and mapping, and an initial pose is obtained.
In this step, the VR helmet is configured with an SLAM function, and after the VR helmet is started, the SLAM starts working, and the VR helmet can obtain an initial pose. The initial pose is obtained by automatically positioning the VR headset in the current environment and constructing an environment map (i.e., SLAM map) of the current environment.
In S602a, the VR headset identifies an electronic device in the current environment that includes a camera.
In this step, the electronic device having the camera in the current environment may be a mobile phone in the environment. The VR helmet can identify the electronic equipment comprising the camera in the environment through a deep learning or visual identification mode.
In S603a, the VR headset acquires the authority of the environmental camera.
In this step, for the identified mobile phone, the VR headset can send information to the mobile phone, requesting permission to invoke the mobile phone camera (environment camera) to assist in positioning the VR headset.
In S604a, the VR headset records a first pose of the environmental camera in the SLAM map.
In the step, the VR helmet can adopt a visual method, a camera of the VR helmet is used for collecting images of an environment camera, and then a first pose of the environment camera in an SLAM map is determined according to the collected images; alternatively, the VR headset can also determine the first pose of the environment camera in the SLAM map by using a wireless positioning manner. It should be noted that, when the VR headset determines the first pose of the environment camera in the SLAM map by using a visual method, it should be ensured that the environment camera is within the visual field of the VR headset.
In S605a, the VR headset acquires image information of the environmental camera.
In this step, the VR headset can send a control instruction to the mobile phone, instructing the mobile phone to invoke its camera to photograph the VR headset and obtain image information containing an image of the VR headset. The image information is then sent by the mobile phone to the VR headset, and the subsequent processing is also completed in the VR headset.
In S606a, the VR headset extracts feature points in the image information.
In this step, the VR headset may extract feature points from the received image information by using any feature point extraction algorithm, e.g., ORB or FAST, and the feature descriptors may be BRIEF.
In S607a, the feature points of the VR headset are collected offline, and a feature dictionary is trained and saved.
In this step, the algorithm and descriptor used for offline acquisition of the feature points of the VR headset should be consistent with the algorithm and descriptor used in S606a. The offline acquisition of the feature points of the VR headset may be performed after images of the VR headset have been acquired. The images of the VR headset may be captured by the mobile phone in the above steps and transmitted to the VR headset, or may be collected by other devices and then transmitted to the VR headset. The purpose of training the feature dictionary is to match feature points in subsequent steps, so as to find target feature points that can be used for characterizing the VR headset.
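This application does not fix a training procedure for the feature dictionary. As a rough, hedged illustration, descriptors gathered offline from headset images can be clustered into a small vocabulary; binary descriptors such as ORB are usually clustered with dedicated bag-of-words tooling, so the Euclidean k-means below (scikit-learn) is only a stand-in, and the cluster count is an assumption.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def train_feature_dictionary(headset_images_gray, n_words=64):
    """Cluster ORB descriptors from offline headset images into a small visual-word dictionary."""
    orb = cv2.ORB_create(nfeatures=500)
    all_descriptors = []
    for image in headset_images_gray:
        _, descriptors = orb.detectAndCompute(image, None)
        if descriptors is not None:
            all_descriptors.append(descriptors)
    stacked = np.vstack(all_descriptors).astype(np.float32)
    kmeans = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(stacked)
    # Round the centers back to uint8 so they can later be matched with a Hamming-distance matcher
    return np.clip(np.rint(kmeans.cluster_centers_), 0, 255).astype(np.uint8)
```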
In S608a, the VR headset matches the feature points.
In this step, the VR headset may match the feature points extracted in S606a with the feature dictionary trained in S607 a. For the feature point hit by the match, it can be regarded as a target feature point for characterizing the VR headset.
In S609a, a second pose of the VR headset in the environmental camera coordinate system is calculated.
In this step, the VR headset can perform pose calculation on an image of the VR headset captured by the environment camera by using a geometric method or an optimization solution method, so as to obtain a second pose of the VR headset in the environment camera coordinate system.
In S610a, the VR headset calculates a pose to be fused in combination with the first pose and the second pose.
In this step, the VR headset may process the first pose obtained in S604a and the second pose obtained in S609a to obtain a pose to be fused. The pose to be fused can be regarded as the pose of the VR helmet obtained by utilizing the environment camera for auxiliary calculation.
In S611a, the initial pose and the pose to be fused are fused to obtain the target pose of the VR headset.
In the step, the VR helmet can fuse the initial pose and the pose to be fused, so that a target pose with higher accuracy is obtained, and the positioning accuracy and robustness are improved.
In the positioning method of the electronic device shown in fig. 6(a), the electronic device with the environment camera transmits the image acquired by the environment camera to the VR headset, and the processing of each step is performed in the VR headset. That is, each of the above steps, including S605a-S609a, is performed by a VR headset.
In the embodiment of the present application, the image acquired by a camera in the environment is transmitted to the VR headset to jointly calculate the current pose of the VR headset, which can improve the positioning accuracy and robustness of the VR headset. By acquiring images with other cameras in the environment that face different directions, rather than relying only on the VR headset's own camera, the problem that the VR headset cannot be positioned in scenes with no texture or weak texture, such as a white wall or the ground, can be solved.
In a possible implementation manner of the embodiment of the present application, after the camera of the mobile phone acquires the image information of the VR headset, the pose can also be calculated directly on the mobile phone to obtain the second pose. The mobile phone then transmits the calculated second pose to the VR headset for further processing. Taking the first electronic device as a VR headset and the second electronic device as a mobile phone as an example, referring to fig. 6(b), a schematic flowchart of steps of another positioning method for an electronic device provided in the embodiment of the present application is shown, where the method includes the following steps:
in S601b, the VR headset SLAM works to perform positioning and mapping, and an initial pose is obtained.
In this step, the VR helmet is configured with an SLAM function, and after the VR helmet is started, the SLAM starts working, and the VR helmet can obtain an initial pose. The initial pose is obtained by the VR headset automatically positioning in the current environment and constructing an environment map (SLAM map) of the current environment.
In S602b, the VR headset identifies an electronic device in the current environment that includes a camera.
In this step, the electronic device having the camera in the current environment may be a mobile phone in the environment. The VR helmet can identify the electronic equipment comprising the camera in the environment through a deep learning or visual identification mode.
In S603b, the VR headset acquires the authority of the environmental camera.
In this step, for the identified mobile phone, the VR headset can send information to the mobile phone, requesting permission to invoke the mobile phone camera (environment camera) to assist in positioning the VR headset.
In S604b, the VR headset records a first pose of the environmental camera in the SLAM map.
In the step, the VR helmet can adopt a visual method, a camera of the VR helmet is used for collecting images of an environment camera, and then a first pose of the environment camera in an SLAM map is determined according to the collected images; alternatively, the VR headset can also determine the first pose of the environment camera in the SLAM map by using a wireless positioning manner. It should be noted that, when the VR headset determines the first pose of the environment camera in the SLAM map by using a visual method, it should be ensured that the environment camera is within the visual field of the VR headset.
In S605b, the cellular phone acquires image information of the environment camera.
In this step, the VR headset can send a control instruction to the mobile phone, instructing the mobile phone to invoke its camera to photograph the VR headset and to calculate a second pose of the VR headset in the environment camera coordinate system.
In S606b, the mobile phone extracts feature points in the image information.
In this step, the mobile phone may extract feature points of the acquired image information by using an arbitrary feature point extraction algorithm. E.g., ORB, FAST, etc., the feature descriptors may be BRIEF.
In S607b, the feature points of the VR headset are collected offline, and a feature dictionary is trained and saved.
In this step, the algorithm and descriptor used for offline acquisition of the feature points of the VR headset should be consistent with the algorithm and descriptor used in S606b. The offline acquisition of the feature points of the VR headset may be performed after images of the VR headset have been acquired. For example, multiple images of the VR headset can be captured by the environment camera, and the mobile phone then performs feature point extraction and training on these images to obtain a feature dictionary that can be used for characterizing the VR headset. The purpose of training the feature dictionary is to match feature points in subsequent steps, so as to find target feature points that can be used for characterizing the VR headset.
In S608b, the cell phone matches the feature points.
In this step, the mobile phone can match the feature points extracted in S606b with the feature dictionary trained in S607 b. For the feature point hit by the match, it can be regarded as a target feature point for characterizing the VR headset.
In S609b, a second pose of the VR headset in the environmental camera coordinate system is calculated.
In this step, the mobile phone may perform pose calculation on the image of the VR headset captured by the environment camera by using a geometric method or an optimization solution method, so as to obtain a second pose of the VR headset in the environment camera coordinate system.
It should be noted that, in the positioning method of the electronic device shown in fig. 6(b), the second pose is calculated by the mobile phone. That is, S605b-S609b above are all completed in the mobile phone having the environment camera.
After the mobile phone calculates the second pose, the second pose can be transmitted to the VR headset and further processed by the VR headset.
In S610b, the VR headset calculates a pose to be fused in combination with the first pose and the second pose.
In this step, after receiving the information of the second pose transmitted by the mobile phone, the VR headset may process the second pose and the first pose obtained in S604b to obtain a pose to be fused. The pose to be fused can be regarded as the pose of the VR helmet obtained by utilizing the environment camera for auxiliary calculation.
In S611b, the initial pose and the pose to be fused are fused to obtain the target pose of the VR headset.
In the step, the VR helmet can fuse the initial pose and the pose to be fused, so that a target pose with higher accuracy is obtained, and the positioning accuracy and robustness are improved.
In the embodiment of the application, the images are acquired by using other cameras with different directions in the environment, the pose of the VR helmet in the environment camera coordinate system is calculated according to the images, and the calculated pose is transmitted to the VR helmet, so that the data transmission amount in the positioning process is reduced, and the occupation of VR helmet resources in the pose calculation process is reduced.
In the embodiment of the present application, the electronic device may be divided into the functional modules according to the above method examples, for example, each functional module may be divided for each function, or one or more functions may be integrated into one functional module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. It should be noted that, in the embodiment of the present application, the division of the module is schematic, and is only one logic function division, and there may be another division manner in actual implementation. The following description will be given taking as an example that each function module is divided for each function.
Corresponding to the foregoing embodiments, referring to fig. 7, a block diagram of a positioning apparatus for an electronic device according to an embodiment of the present application is shown, where the apparatus may be applied to a first electronic device in the foregoing embodiments, and the apparatus may specifically include the following modules: an environment map building module 701, an initial pose calculation module 702, an electronic device identification module 703, a pose to be fused calculation module 704, and a pose fusion module 705, wherein:
the environment map building module is used for building an environment map of the current environment;
an initial pose calculation module, configured to perform self-positioning based on the environment map, and obtain an initial pose of the first electronic device in the environment map;
the electronic equipment identification module is used for identifying second electronic equipment with a camera in the current environment;
the to-be-fused pose calculation module is used for determining the to-be-fused pose of the first electronic equipment by adopting the second electronic equipment;
and the pose fusion module is used for fusing the initial pose and the pose to be fused to obtain a target pose of the first electronic equipment.
In an embodiment of the present application, the pose to be fused calculation module is specifically configured to: determining a first pose of the camera in the environment map; determining a second pose of the first electronic equipment in a coordinate system corresponding to the camera; and generating a pose to be fused of the first electronic equipment according to the first pose and the second pose.
In this embodiment of the application, the pose to be fused calculation module is further specifically configured to: acquiring an image of the second electronic device; and determining a first pose of the camera in the environment map according to the image of the second electronic equipment.
In this embodiment of the application, the pose to be fused calculation module is further specifically configured to: acquiring a positioning signal sent by the second electronic equipment; and determining a first pose of the camera in the environment map according to the positioning signal.
In this embodiment of the application, the pose to be fused calculation module is further specifically configured to: controlling the camera to acquire image information containing the first electronic equipment; and determining a second pose of the first electronic equipment in a coordinate system corresponding to the camera according to the image information.
In this embodiment of the application, the pose to be fused calculation module is further specifically configured to: extracting a plurality of feature points in the image information; matching the plurality of feature points with a preset feature dictionary to obtain target feature points for representing the first electronic equipment; and determining a second pose of the first electronic equipment in a coordinate system corresponding to the camera according to the target feature point, wherein the feature dictionary is obtained by performing feature extraction on the image of the first electronic equipment by the first electronic equipment and/or the second electronic equipment.
In this embodiment of the application, the pose to be fused calculation module is further specifically configured to: controlling the second electronic equipment to calculate a second pose of the first electronic equipment in a coordinate system corresponding to the camera; and receiving the second pose sent by the second electronic equipment.
In an embodiment of the application, the second electronic device includes a plurality of electronic devices having the cameras, and the to-be-fused poses include a plurality of poses determined by using the plurality of second electronic devices;
correspondingly, the pose fusion module is specifically configured to: process the plurality of poses to be fused to obtain a target pose to be fused; and fuse the initial pose and the target pose to be fused to obtain the target pose of the first electronic device.
It should be noted that all relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
The present application further provides an electronic device, which may be the first electronic device in the foregoing embodiments, and the electronic device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the positioning method of the electronic device in the foregoing embodiments is implemented.
The embodiment of the present application further provides a computer storage medium, where a computer instruction is stored in the computer storage medium, and when the computer instruction runs on an electronic device, the computer instruction causes the electronic device to execute the above related method steps to implement the positioning method of the electronic device in the above embodiments.
The embodiments of the present application further provide a computer program product, which when running on a computer, causes the computer to execute the relevant steps described above, so as to implement the positioning method for an electronic device in the foregoing embodiments.
The embodiment of the present application further provides a positioning system, which includes the first electronic device and the second electronic device in the above embodiments.
The embodiment of the present application further provides a chip, where the chip includes a processor, and the processor may be a general-purpose processor or a special-purpose processor. The processor is configured to support the electronic device to perform the related steps, so as to implement the positioning method of the electronic device in the foregoing embodiments.
Optionally, the chip further includes a transceiver, where the transceiver is configured to receive control of the processor, and is configured to support the electronic device to perform the relevant steps, so as to implement the positioning method for the electronic device in the foregoing embodiments.
Optionally, the chip may further include a storage medium.
It should be noted that the chip may be implemented by using the following circuits or devices: one or more Field Programmable Gate Arrays (FPGAs), Programmable Logic Devices (PLDs), controllers, state machines, gate logic, discrete hardware components, any other suitable circuitry, or any combination of circuitry capable of performing the various functions described throughout this application.
Finally, it should be noted that: the above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application.

Claims (12)

1. A method for locating an electronic device, comprising:
the method comprises the steps that a first electronic device builds an environment map of a current environment;
the first electronic device carries out self-positioning based on the environment map, and an initial pose of the first electronic device in the environment map is obtained;
the first electronic equipment identifies second electronic equipment with a camera in the current environment;
the first electronic device determines a to-be-fused pose of the first electronic device by adopting the second electronic device;
and the first electronic equipment fuses the initial pose and the pose to be fused to obtain a target pose of the first electronic equipment.
2. The method of claim 1, wherein the first electronic device determining, with the second electronic device, the pose to be fused of the first electronic device comprises:
the first electronic device determines a first pose of the camera in the environment map;
the first electronic equipment determines a second pose of the first electronic equipment in a coordinate system corresponding to the camera;
and the first electronic equipment generates a pose to be fused of the first electronic equipment according to the first pose and the second pose.
3. The method of claim 2, wherein the first electronic device determines a first pose of the camera in the environmental map, comprising:
the first electronic equipment acquires an image of the second electronic equipment;
the first electronic device determines a first pose of the camera in the environment map according to the image of the second electronic device.
4. The method of claim 2, wherein the first electronic device determines a first pose of the camera in the environmental map, comprising:
the first electronic equipment acquires a positioning signal sent by the second electronic equipment;
the first electronic device determines a first pose of the camera in the environment map according to the positioning signal.
5. The method of any one of claims 2-4, wherein the first electronic device determining a second pose of the first electronic device in a coordinate system corresponding to the camera comprises:
the first electronic equipment controls the camera to collect image information containing the first electronic equipment;
and the first electronic equipment determines a second pose of the first electronic equipment in a coordinate system corresponding to the camera according to the image information.
6. The method of claim 5, wherein the first electronic device determines, according to the image information, a second pose of the first electronic device in a coordinate system corresponding to the camera, and the determining comprises:
the first electronic equipment extracts a plurality of feature points in the image information;
the first electronic equipment matches the plurality of feature points with a preset feature dictionary to obtain target feature points for representing the first electronic equipment, wherein the feature dictionary is obtained by performing feature extraction on an image of the first electronic equipment by the first electronic equipment and/or second electronic equipment;
and the first electronic equipment determines a second pose of the first electronic equipment in a coordinate system corresponding to the camera according to the target feature point.
7. The method of any one of claims 2-4, wherein the first electronic device determining a second pose of the first electronic device in a coordinate system corresponding to the camera comprises:
the first electronic equipment controls the second electronic equipment to calculate a second pose of the first electronic equipment in a coordinate system corresponding to the camera;
and the first electronic equipment receives the second pose sent by the second electronic equipment.
8. The method according to any one of claims 1-7, wherein the second electronic device comprises a plurality of electronic devices having the cameras, and the pose to be fused comprises a plurality of poses determined using the plurality of second electronic devices;
correspondingly, the fusing the initial pose and the pose to be fused by the first electronic device to obtain the target pose of the first electronic device includes:
the first electronic equipment processes a plurality of poses to be fused to obtain target poses to be fused;
and the first electronic equipment fuses the initial pose and the target pose to be fused to obtain the target pose of the first electronic equipment.
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the positioning method of the electronic device according to any of claims 1-8 when executing the computer program.
10. A positioning system comprising a first electronic device and a second electronic device as claimed in any one of claims 1-8.
11. A computer storage medium comprising computer instructions which, when run on an electronic device, perform a positioning method of the electronic device according to any one of claims 1-8.
12. A computer program product, characterized in that, when the computer program product is run on a computer, the computer performs the positioning method of the electronic device according to any of claims 1-8.
CN202110121715.2A 2021-01-28 2021-01-28 Positioning method of electronic equipment and electronic equipment Active CN114812381B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110121715.2A CN114812381B (en) 2021-01-28 2021-01-28 Positioning method of electronic equipment and electronic equipment

Publications (2)

Publication Number Publication Date
CN114812381A true CN114812381A (en) 2022-07-29
CN114812381B CN114812381B (en) 2023-07-18

Family

ID=82526127

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110121715.2A Active CN114812381B (en) 2021-01-28 2021-01-28 Positioning method of electronic equipment and electronic equipment

Country Status (1)

Country Link
CN (1) CN114812381B (en)

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150310310A1 (en) * 2014-04-25 2015-10-29 Google Technology Holdings LLC Electronic device localization based on imagery
US20160260251A1 (en) * 2015-03-06 2016-09-08 Sony Computer Entertainment Inc. Tracking System for Head Mounted Display
US20170192232A1 (en) * 2016-01-01 2017-07-06 Oculus Vr, Llc Non-overlapped stereo imaging for virtual reality headset tracking
CN108734736A (en) * 2018-05-22 2018-11-02 腾讯科技(深圳)有限公司 Camera posture method for tracing, device, equipment and storage medium
CN110544280A (en) * 2018-05-22 2019-12-06 腾讯科技(深圳)有限公司 AR system and method
CN109087359A (en) * 2018-08-30 2018-12-25 网易(杭州)网络有限公司 Pose determines method, pose determining device, medium and calculates equipment
US20200097770A1 (en) * 2018-09-26 2020-03-26 Apple Inc. Localization For Mobile Devices
US20200105015A1 (en) * 2018-09-28 2020-04-02 Apple Inc. Localization and mapping using images from multiple devices
CN109949422A (en) * 2018-10-15 2019-06-28 华为技术有限公司 Data processing method and equipment for virtual scene
WO2020127185A1 (en) * 2018-12-21 2020-06-25 Koninklijke Kpn N.V. Cloud-based camera calibration
US10748302B1 (en) * 2019-05-02 2020-08-18 Apple Inc. Multiple user simultaneous localization and mapping (SLAM)
CN111880644A (en) * 2019-05-02 2020-11-03 苹果公司 Multi-user instant location and map construction (SLAM)
CA3137709A1 (en) * 2019-05-21 2020-11-26 Microsoft Technology Licensing, Llc Image-based localization
CN110956571A (en) * 2019-10-10 2020-04-03 华为终端有限公司 SLAM-based virtual-real fusion method and electronic equipment
CN111338474A (en) * 2020-02-19 2020-06-26 Oppo广东移动通信有限公司 Virtual object pose calibration method and device, storage medium and electronic equipment
CN111442722A (en) * 2020-03-26 2020-07-24 达闼科技成都有限公司 Positioning method, positioning device, storage medium and electronic equipment
CN111862213A (en) * 2020-07-29 2020-10-30 Oppo广东移动通信有限公司 Positioning method and device, electronic equipment and computer readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116664684A (en) * 2022-12-13 2023-08-29 荣耀终端有限公司 Positioning method, electronic device and computer readable storage medium
CN116664684B (en) * 2022-12-13 2024-04-05 荣耀终端有限公司 Positioning method, electronic device and computer readable storage medium

Also Published As

Publication number Publication date
CN114812381B (en) 2023-07-18

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant