WO2022011517A1 - Infrared positioning method, infrared positioning apparatus and infrared positioning system - Google Patents

Infrared positioning method, infrared positioning apparatus and infrared positioning system

Info

Publication number
WO2022011517A1
Authority
WO
WIPO (PCT)
Prior art keywords
infrared
points
isosceles triangle
image
object image
Prior art date
Application number
PCT/CN2020/101725
Other languages
English (en)
Chinese (zh)
Inventor
冯涛
宋来喜
余扬
Original Assignee
深圳盈天下视觉科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳盈天下视觉科技有限公司
Priority to PCT/CN2020/101725
Priority to CN202080001234.1A
Publication of WO2022011517A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/20: Analysis of motion

Definitions

  • The application belongs to the technical field of virtual reality and positioning, and in particular relates to an infrared positioning method, an infrared positioning device and an infrared positioning system.
  • Virtual Reality (VR) technology is a computer simulation technology that can create and let users experience virtual worlds. A computer generates a virtual environment that provides users with an interactive, three-dimensional dynamic view fusing multiple sources of information, immersing the user in that environment.
  • Virtual reality applications are now widespread: for example, obtaining a car's position in real time in simulated virtual driving, generating a heat map of human positions in an exhibition hall, or obtaining the relative position of a physical object in an interactive game. In these applications, a virtual system must be developed to match the real position of the object, so the object has to be located according to its real position.
  • Some existing positioning methods applied to virtual reality can only locate and identify objects but cannot determine their direction of movement, which makes it difficult to meet users' immersion and interaction needs in virtual reality applications.
  • One of the purposes of the embodiments of the present application is to provide an infrared positioning method, an infrared positioning device and an infrared positioning system that can accurately locate an object and obtain its moving direction in a virtual reality application, so as to satisfy the user's needs for immersion and interaction in the virtual reality application.
  • In a first aspect, an infrared positioning method is provided, which includes:
  • performing infrared point identification processing on the object to be located based on an object image, and obtaining infrared point information corresponding to the object to be located, wherein the infrared point information includes coordinate data of three infrared points, and the three infrared points are distributed in an isosceles triangle; after this step, the method includes:
  • calculating the position of the center of gravity of the isosceles triangle according to the principles of triangle geometry, and determining the position of the center of gravity of the isosceles triangle as the position of the object to be positioned.
  • In an embodiment, the step of performing infrared point identification processing on the object to be located based on the object image and obtaining infrared point information corresponding to the object to be located, wherein the infrared point information includes coordinate data of three infrared points and the three infrared points are distributed in an isosceles triangle, includes:
  • acquiring the coordinate data of the three infrared points.
  • In an embodiment, the method further includes: adjusting the angle of the camera picture opened through the CCV computer vision library according to a preset adjustment rule, and re-acquiring a second object image based on the adjusted camera picture;
  • acquiring the coordinate data of the three infrared points.
  • In an embodiment, the method includes:
  • selecting three infrared points distributed in an isosceles triangle from all infrared points identified from the first object image, and obtaining the coordinate data of the three infrared points.
  • In a second aspect, an infrared positioning device is provided, which includes:
  • an acquisition module, configured to perform infrared point identification processing on the object to be located based on the object image and acquire infrared point information corresponding to the object to be located, wherein the infrared point information includes coordinate data of three infrared points, and the three infrared points are distributed in an isosceles triangle;
  • a generating module, used to identify the vertex and the base midpoint of the isosceles triangle according to the coordinate data of the three infrared points, and to generate a direction vector pointing to the vertex from the vertex and the base midpoint of the isosceles triangle, where the direction vector represents the movement direction of the object to be positioned;
  • a calculation module, configured to calculate the vector angle between the direction vector of the object to be positioned and the preset coordinate system of the virtual environment, and to determine the moving direction of the object to be positioned according to the vector angle.
  • In a third aspect, an infrared positioning system is provided.
  • The infrared positioning system is used to implement the steps of the infrared positioning method according to any one of the above-mentioned first aspects, and includes a camera end, a detection end and a positioning calculation end, wherein:
  • the camera end is used to capture an image of the object to be positioned and obtain infrared point information from the image, wherein the infrared point information contains coordinate data of the infrared points;
  • the detection end is configured to receive the infrared point information obtained by the camera end, and to identify, from the obtained infrared point information, three infrared points corresponding to the object to be located, wherein the three infrared points are distributed in an isosceles triangle;
  • the positioning calculation end is used, in combination with the coordinate data of the three infrared points distributed in an isosceles triangle, to determine the position of the object to be located in the virtual reality environment by calculating the position of the center of gravity of the isosceles triangle, and to determine the moving direction of the object to be positioned in the virtual reality environment by calculating the vector angle between the direction vector representing the moving direction of the object and the preset coordinate system of the virtual environment, wherein the direction vector representing the moving direction of the object to be positioned points from the midpoint of the base of the isosceles triangle to the vertex of the isosceles triangle.
  • In an embodiment, the positioning calculation end may also be used to perform verification processing on the calculated position and movement direction of the object to be positioned in virtual reality; if the verification fails, the verification result is fed back to the detection end, which is triggered to re-identify the three infrared points corresponding to the object to be located from the acquired infrared point information, so that the positioning calculation end can recalculate the position and movement direction of the object in virtual reality until the verification passes.
  • In an embodiment, the positioning calculation end performs positioning calculation in a queue working mode: it acquires multiple groups of infrared point data in real time and, according to these groups, performs verification processing on the calculated position and movement direction of the object to be located in virtual reality, wherein each group of infrared point data includes the coordinate data of three infrared points distributed in an isosceles triangle, and the verification processing includes a verification calculation over the multiple groups of infrared point data.
  • In an embodiment, the detection end regularly receives the infrared point information obtained by the camera end through a preset data template, wherein the data type in the preset data template is configured as a list data type.
  • In a fourth aspect, an electronic device is provided, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the infrared positioning method described in the first aspect.
  • In a fifth aspect, a computer-readable storage medium is provided, which stores a computer program that, when executed by a processor, implements the steps of the infrared positioning method according to any one of the first aspect.
  • The embodiments of the present application have the beneficial effect that infrared lamps distributed in an isosceles triangle are mounted on the object, infrared point identification processing is performed on the object to be located based on the image of the object, the positions in the image of the three infrared points corresponding to the three infrared lamps and distributed in an isosceles triangle are identified, and the coordinate data of the three infrared points are obtained from these positions.
  • According to the coordinate data of the three infrared points, the vertex and the midpoint of the base of the isosceles triangle are identified, and a direction vector pointing to the vertex is generated from the midpoint of the base and the vertex.
  • The vector angle between this direction vector and the preset coordinate system of the virtual environment is then calculated, and the current movement direction of the object to be positioned is determined from the vector angle.
  • In this way, the motion direction of the object to be positioned is determined in the virtual reality application, so that the user can still identify directions while immersed in the virtual environment, which enhances the user's sense of immersion; by identifying directions in the virtual environment, interaction between people and between people and things can also be realized, satisfying users' interaction needs in the virtual environment.
  • FIG. 1 is a schematic flowchart of a basic method of an infrared positioning method provided by an embodiment of the present application
  • FIG. 2 is a schematic flowchart of a method for obtaining infrared point information corresponding to an object in the infrared positioning method provided by the embodiment of the present application;
  • FIG. 3 is a schematic flowchart of another method for obtaining infrared point information corresponding to an object in the infrared positioning method provided by the embodiment of the present application;
  • FIG. 4 is a schematic structural diagram of an infrared positioning device according to an embodiment of the present application.
  • FIG. 5 is a basic schematic block diagram of an infrared positioning system provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an electronic device implementing an infrared positioning method according to an embodiment of the present application.
  • FIG. 1 is a schematic flowchart of a basic method of an infrared positioning method provided by an embodiment of the present application. Details are as follows:
  • In step S101, infrared point identification processing is performed on the object to be located based on the object image, and infrared point information corresponding to the object to be located is obtained, wherein the infrared point information includes coordinate data of three infrared points, and the three infrared points are distributed in an isosceles triangle.
  • In the embodiments of the present application, the object to be positioned is a VR (Virtual Reality) device used in a virtual reality application, such as VR glasses, a VR helmet, and the like.
  • Three infrared lamps are mounted on the object to be positioned, and when the infrared lamps are mounted, the three infrared lamps are distributed in an isosceles triangle.
  • In some embodiments, an image of the object to be positioned can be captured by a camera, and infrared point recognition processing is then performed on the object based on this image. For example, by configuring an infrared filter on the camera end, the image captured by the camera is filtered so as to identify the positions in the image of the three infrared points corresponding to the three infrared lamps mounted on the object to be positioned; the combination of these three positions is distributed in an isosceles triangle.
  • the coordinate data of the three infrared points can be acquired based on the positions of the three infrared points in combination with the coordinate system preset in the virtual environment.
  • In step S102, the vertex and the midpoint of the base of the isosceles triangle are identified, and a direction vector pointing to the vertex is generated from the vertex and the midpoint of the base of the isosceles triangle; the direction vector represents the movement direction of the object to be positioned.
  • Specifically, the vertex and the base of the isosceles triangle can be identified from the lengths of its sides, computed from the coordinate data of the three infrared points: the two equal sides meet at the vertex, and the remaining long side is the base. After the long side is identified, its length is divided into two equal parts, and the dividing point is the midpoint of the base of the isosceles triangle, as illustrated in the sketch below.
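  • As an illustration only, the following is a minimal Python sketch of this step (it is not part of the original disclosure, and the function and point names are assumptions): given three points distributed in an isosceles triangle, it finds the vertex where the two equal sides meet, computes the base midpoint, and builds the direction vector from the midpoint to the vertex.

        import math

        def direction_vector(p1, p2, p3):
            """Given three (x, y) points forming an isosceles triangle, return
            (base_midpoint, vertex, direction vector from midpoint to vertex)."""
            pts = [p1, p2, p3]
            dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
            for i in range(3):
                a = pts[i]                                  # candidate vertex
                b, c = pts[(i + 1) % 3], pts[(i + 2) % 3]   # candidate base ends
                # at the vertex, the two sides meeting there are equal
                if math.isclose(dist(a, b), dist(a, c), rel_tol=1e-6):
                    mid = ((b[0] + c[0]) / 2.0, (b[1] + c[1]) / 2.0)  # base midpoint
                    return mid, a, (a[0] - mid[0], a[1] - mid[1])
            raise ValueError("points are not distributed in an isosceles triangle")

        # Example: base from (-1, 0) to (1, 0), vertex at (0, 2)
        print(direction_vector((-1, 0), (1, 0), (0, 2)))
        # -> ((0.0, 0.0), (0, 2), (0, 2))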
  • In step S103, the vector angle between the direction vector of the object to be positioned and the preset coordinate system of the virtual environment is calculated, and the movement direction of the object to be positioned in the virtual environment is determined according to the vector angle.
  • the preset coordinate system of the virtual environment is a two-dimensional coordinate system set in the virtual reality application based on the top view of the camera, and the two-dimensional coordinate system and the direction vector of the object are located on the same plane.
  • The origin of the two-dimensional coordinate system can be specified in the virtual environment constructed by the virtual reality application according to the calculation requirements, the guiding principle being convenience of calculation.
  • The X-axis direction and the Y-axis direction of the two-dimensional coordinate system can be determined according to the four directions of east, south, west and north; for example, the east direction is the positive direction of the X axis, and the north direction is the positive direction of the Y axis.
  • In this way, the moving direction of the object to be positioned in the virtual environment can be determined.
  • For example, when determining the moving direction of the object to be positioned in the virtual environment by calculating the vector angle between the direction vector of the object and the positive direction of the X-axis of the two-dimensional coordinate system, the following determination rule can be preset, with the angle measured counterclockwise: when 0° ≤ vector angle < 90°, the movement direction of the object is east by north + vector angle; when 90° ≤ vector angle < 180°, it is north by west + (vector angle - 90°); when 180° ≤ vector angle < 270°, it is west by south + (vector angle - 180°); and when 270° ≤ vector angle < 360°, it is south by east + (vector angle - 270°).
  • For example, if the vector angle between the direction vector of the object to be positioned and the positive direction of the X-axis of the two-dimensional coordinate system is calculated to be 30°, it can be determined that the movement direction of the object to be positioned is 30° north of east, as the sketch below illustrates.
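  • A minimal Python sketch of such a determination rule (assumed names; it merely restates the rule above in code form and is not from the original disclosure):

        import math

        def movement_direction(angle_deg):
            """Map a counterclockwise vector angle in degrees, measured from the
            positive X axis (east), to a compass-style movement description."""
            a = angle_deg % 360.0
            if a < 90:
                return "east by north %.1f deg" % a           # 0 <= angle < 90
            elif a < 180:
                return "north by west %.1f deg" % (a - 90)    # 90 <= angle < 180
            elif a < 270:
                return "west by south %.1f deg" % (a - 180)   # 180 <= angle < 270
            else:
                return "south by east %.1f deg" % (a - 270)   # 270 <= angle < 360

        # The angle of a direction vector (vx, vy) relative to the positive X axis:
        vx, vy = 0.866, 0.5                                   # roughly 30 degrees
        print(movement_direction(math.degrees(math.atan2(vy, vx))))
        # -> east by north 30.0 deg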
  • In other embodiments, the moving direction of the object to be positioned in the virtual environment may instead be determined by calculating the vector angle between the direction vector of the object to be positioned and the positive direction of the Y-axis of the two-dimensional coordinate system, or by combining the vector angle with the positive direction of the X-axis and the vector angle with the positive direction of the Y-axis.
  • In such cases, the determination rule for the movement direction of the object to be positioned can likewise be preset, and the movement direction of the object in the virtual environment is then determined based on that rule.
  • the setting of the determination rule of the movement direction is similar to the above example, and will not be repeated here.
  • To summarize, infrared lamps distributed in an isosceles triangle are mounted on the object, infrared point recognition processing is performed on the object to be located based on the image of the object, the positions in the image of the three infrared points corresponding to the three infrared lamps and distributed in an isosceles triangle are identified, and the coordinate data of the three infrared points are obtained from these positions.
  • According to the coordinate data of the three infrared points, the vertex and the midpoint of the base of the isosceles triangle are identified, and a direction vector pointing to the vertex is generated from the midpoint of the base and the vertex.
  • The vector angle between this direction vector and the preset coordinate system of the virtual environment is then calculated, and the current movement direction of the object to be positioned is determined from the vector angle.
  • In this way, the motion direction of the object to be positioned is determined in the virtual reality application, so that the user can still identify directions while immersed in the virtual environment, which enhances the user's sense of immersion; by identifying directions in the virtual environment, interaction between people and between people and things can also be realized, satisfying users' interaction needs in the virtual environment.
  • In some embodiments, after the coordinate data of the three infrared points are obtained, the position of the center of gravity of the isosceles triangle can be calculated from them according to the principles of triangle geometry, and the position of the center of gravity of the isosceles triangle is thereby determined as the position of the object to be positioned. Specifically, according to triangle geometry, the center of gravity is the intersection of the medians of the sides, and its coordinates are the averages of the coordinates of the three vertices; see the sketch below.
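  • For instance, a minimal sketch of the centroid calculation (assumed names, not code from the original disclosure), using the fact that the centroid coordinates are the averages of the vertex coordinates:

        def centroid(p1, p2, p3):
            """Centroid (intersection of the medians) of triangle p1-p2-p3."""
            return ((p1[0] + p2[0] + p3[0]) / 3.0,
                    (p1[1] + p2[1] + p3[1]) / 3.0)

        print(centroid((-1, 0), (1, 0), (0, 2)))  # -> (0.0, 0.666...)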
  • FIG. 2 is a schematic flowchart of a method for acquiring infrared point information corresponding to an object in the infrared positioning method provided by the embodiments of the present application. Details are as follows:
  • In step S201, the camera picture is opened through the CCV computer vision library, and a first object image containing the object to be positioned is obtained from the picture captured by the camera;
  • in step S202, infrared point identification processing is performed on the first object image, and all infrared points in the first object image are identified;
  • in step S203, the infrared points in the first object image are counted to obtain the number of infrared points identified from the first object image;
  • in step S204, if the number of infrared points identified from the first object image is three, it is verified whether the three infrared points are distributed in an isosceles triangle;
  • in step S205, if the three infrared points are distributed in an isosceles triangle, the coordinate data of the three infrared points are acquired.
  • The CCV computer vision library is a modern, cache-based computer vision library written in C. It has a built-in cache mechanism, a simple functional interface, transparent caching and image preprocessing.
  • In the embodiments of the present application, infrared lamps are mounted on the object to be located, and the infrared points corresponding to the object in the first object image are obtained by combining CCV with infrared recognition. This makes full use of the accuracy of the CCV computer vision library in processing image data and, at the same time, by cooperating with infrared recognition, overcomes CCV's susceptibility to external interference and its low discrimination of designated objects.
  • Because the first object image may be disturbed by external factors, infrared points other than those of the infrared lamps may appear in it.
  • Therefore, the infrared points identified in the first object image are counted to determine whether exactly three infrared points are present. If the count is three, it is verified whether the three infrared points are distributed in an isosceles triangle. It should be noted that the three infrared lamps mounted on the object to be positioned are distributed in an isosceles triangle, so the infrared points corresponding to the three infrared lamps identified from the first object image should likewise be distributed in an isosceles triangle.
  • When the number of infrared points in the first object image is three, it is further verified whether they are the three infrared points corresponding to the three infrared lamps mounted on the object to be positioned. Specifically, verification can be performed according to the characteristics of an isosceles triangle: the distances between the three infrared points are calculated from their coordinate data to obtain the lengths of the three sides of the triangle, and it is then checked whether exactly two of the three sides are equal, as in the sketch below.
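  • A minimal sketch of this verification (assumed names; the tolerance is an assumption to cope with noisy pixel coordinates, not a value from the disclosure):

        import math

        def is_isosceles(p1, p2, p3, tol=1e-3):
            """Check that exactly two of the three side lengths are (nearly) equal."""
            d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
            s = sorted([d(p1, p2), d(p2, p3), d(p3, p1)])
            eq_low = math.isclose(s[0], s[1], rel_tol=tol)
            eq_high = math.isclose(s[1], s[2], rel_tol=tol)
            return eq_low != eq_high  # exactly one equal pair: isosceles, not equilateral

        print(is_isosceles((-1, 0), (1, 0), (0, 2)))  # -> True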
  • It should be noted that the coordinate data obtained here are coordinates in the camera coordinate system determined by the camera configuration.
  • In some embodiments, a custom coordinate system that is convenient for calculation can also be defined in the virtual environment; the coordinate data obtained in the camera coordinate system are then converted into coordinates in the custom coordinate system of the virtual environment by a coordinate mapping, for example as in the sketch below.
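  • As an illustration, a minimal sketch of such a coordinate mapping (a simple per-axis scale-and-translate transform; the scale and offset values are assumptions, not values from the disclosure):

        def camera_to_virtual(pt, scale=(0.01, -0.01), offset=(-3.2, 2.4)):
            """Map a camera-pixel coordinate into the virtual-environment coordinate
            system (the Y scale is negative because image Y grows downward)."""
            return (pt[0] * scale[0] + offset[0],
                    pt[1] * scale[1] + offset[1])

        print(camera_to_virtual((320, 240)))  # -> (0.0, 0.0) with the assumed parameters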
  • FIG. 3 is a schematic flowchart of another method for acquiring infrared point information corresponding to an object in the infrared positioning method provided by the embodiments of the present application. Details are as follows:
  • In step S301, if the number of infrared points is less than three, or equal to three but the points are not distributed in an isosceles triangle, the angle of the camera picture opened through the CCV computer vision library is adjusted according to a preset adjustment rule, and a second object image is re-acquired based on the adjusted camera picture;
  • in step S302, infrared point identification processing is performed on the second object image, and all infrared points in the second object image are identified;
  • in step S303, the infrared points in the second object image are counted to obtain the number of infrared points identified from the second object image;
  • in step S304, if the number of infrared points identified from the second object image is three, it is verified whether the three infrared points are distributed in an isosceles triangle;
  • in step S305, if the three infrared points are distributed in an isosceles triangle, the coordinate data of the three infrared points are acquired.
  • In practice, the camera may fail to properly capture the positions where the infrared lamps are mounted on the object to be positioned, which may cause the number of infrared points in the first object image to be less than three, or equal to three but not distributed in an isosceles triangle.
  • It is also possible that, because of interference, infrared points other than those of the infrared lamps are generated; in that case, more than three infrared points will be identified from the first object image.
  • In this case, all infrared points in the first object image can be identified first and then filtered through a preset filtering rule based on the defining feature of an isosceles triangle (exactly two equal sides): a triangle is drawn for every combination of three infrared points and, based on the coordinate data of the infrared points, it is determined whether exactly two sides of that triangle are equal, so that the three infrared points distributed in an isosceles triangle can be screened out, as in the sketch below.
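  • A minimal sketch of this screening step (assumed names; it reuses the exactly-two-equal-sides test and is not code from the original disclosure):

        import math
        from itertools import combinations

        def screen_isosceles(points, tol=1e-3):
            """Return the first triple of points whose triangle has exactly two
            (nearly) equal sides, or None if no such triple exists."""
            d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
            for p1, p2, p3 in combinations(points, 3):
                s = sorted([d(p1, p2), d(p2, p3), d(p3, p1)])
                if math.isclose(s[0], s[1], rel_tol=tol) != math.isclose(s[1], s[2], rel_tol=tol):
                    return (p1, p2, p3)
            return None

        # Four detected points: three lamp points plus one interference point
        pts = [(-1, 0), (1, 0), (0, 2), (5, 5)]
        print(screen_isosceles(pts))  # -> ((-1, 0), (1, 0), (0, 2))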
  • FIG. 4 is a schematic structural diagram of an infrared positioning device provided by an embodiment of the present application, and the details are as follows:
  • The infrared positioning device includes: an acquisition module 401, a generation module 402, and a calculation module 403.
  • the acquisition module 401 is configured to perform infrared point identification processing on the object to be located based on the object image, and acquire infrared point information corresponding to the object to be located, wherein the infrared point information includes coordinate data of three infrared points, And the three infrared points are distributed in an isosceles triangle.
  • The generating module 402 is configured to identify the vertex and the midpoint of the base of the isosceles triangle according to the coordinate data of the three infrared points, and to generate, from the vertex and the midpoint of the base, a direction vector pointing to the vertex, where the direction vector represents the moving direction of the object to be positioned.
  • the calculation module 403 is configured to calculate the vector angle between the direction vector of the object to be positioned and the preset coordinate system of the virtual environment, and determine the moving direction of the object to be positioned according to the vector angle.
  • The modules of the infrared positioning device correspond one-to-one with the steps of the above-mentioned infrared positioning method, and details are not repeated here.
  • FIG. 5 is a basic schematic block diagram of an infrared positioning system provided by an embodiment of the present application.
  • The infrared positioning system 50 includes a camera end 51, a detection end 52 and a positioning calculation end 53, wherein:
  • the camera end 51 is used to capture an image of the object to be positioned and obtain infrared point information from the image, wherein the infrared point information contains coordinate data of the infrared point.
  • The camera end 51 uses infrared recognition combined with the open-source CCV computer vision library to perform object recognition, making full use of the accuracy of the CCV computer vision library in processing image data and, at the same time, by cooperating with infrared recognition, overcoming CCV's susceptibility to external interference and its low discrimination of specified objects.
  • Specifically, the camera end is started by connecting it, the CCV computer vision library is used to open the camera picture, the parameters of the CCV computer vision library are adjusted to tune the camera picture, and the object image and the infrared points are obtained from the camera picture.
  • In addition, the recognition area of the object image can be set to avoid interference from external factors, and the infrared point information can be transmitted to the detection end 52 in real time.
  • The detection end 52 is configured to receive the infrared point information obtained by the camera end, and to identify, from the obtained infrared point information, three infrared points corresponding to the object to be located, wherein the three infrared points are distributed in an isosceles triangle.
  • In the embodiments of the present application, the detection end 52 uses the TUIO protocol to receive infrared point information from the camera end 51. Specifically, after the infrared point information is received from the camera end 51, the infrared points are first counted to confirm the number of infrared points obtained from the object image.
  • It is then determined whether this number of infrared points can form an isosceles triangle (that is, whether it equals three); if so, the coordinate data of the three infrared points are sent to the positioning calculation end 53 for calculation. If the number of infrared points is more than three, they can be filtered through a preset filtering rule, for example by drawing a triangle for every three infrared points and determining, based on the coordinate data of the infrared points, whether exactly two sides of the triangle are equal; the three infrared points distributed in an isosceles triangle are thereby screened out, and their coordinate data are sent to the positioning calculation end 53 for calculation. If the infrared points are fewer than three, or equal to three but not distributed in an isosceles triangle, the detection operation is ended and the TUIO messages continue to be monitored, waiting for the next infrared point information from the camera end 51.
  • the detection end 52 may receive infrared point information from the camera end 51 through a preset data template.
  • Specifically, the data template is designed to convert standard TUIO protocol data into a list data type that is easy to parse, and the data template is associated with a timer so that it acquires infrared point information at the time interval set by the timer.
  • For example, the timer is set to 20 ms, that is, 50 groups of infrared point information are acquired per second, which keeps the data delay between the actual environment and the virtual environment low and avoids data asymmetry between the actual and virtual environments in virtual reality applications; a sketch of such a loop follows.
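  • A minimal sketch of such a timer-driven acquisition loop (the receive function and message format here are hypothetical placeholders, not the actual TUIO API):

        import time

        def poll_infrared_points(receive_message, interval_s=0.02):
            """Every interval_s seconds (20 ms = 50 groups per second), pull one
            message from a user-supplied source and yield it as a plain list."""
            while True:
                start = time.monotonic()
                msg = receive_message()                    # hypothetical data source
                if msg is not None:
                    # convert to an easy-to-parse list of (x, y) tuples
                    yield [(p["x"], p["y"]) for p in msg]
                # sleep for the remainder of the 20 ms slot
                time.sleep(max(0.0, interval_s - (time.monotonic() - start)))

        # usage, with some real receiver function:
        # for points in poll_infrared_points(my_receiver):
        #     handle(points)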
  • The positioning calculation end 53 is used, in combination with the coordinate data of the three infrared points distributed in an isosceles triangle, to determine the position of the object to be positioned in the virtual reality environment by calculating the position of the center of gravity of the isosceles triangle, and to determine the moving direction of the object to be positioned in the virtual reality environment by calculating the vector angle between the direction vector representing the moving direction of the object and the preset coordinate system of the virtual environment, wherein the direction vector representing the moving direction of the object to be positioned points from the midpoint of the base of the isosceles triangle to the vertex of the isosceles triangle.
  • In the embodiments of the present application, the positioning calculation end 53 may also perform verification processing on the calculated position and movement direction of the object to be positioned in virtual reality. If the verification fails, the verification result is fed back to the detection end 52, which is triggered to re-identify the three infrared points corresponding to the object to be located from the acquired infrared point information; the positioning calculation end 53 then recalculates the position and movement direction of the object in virtual reality, until the verification passes.
  • Specifically, the positioning calculation end 53 uses a first-in-first-out queue working mode to perform data calculation. During the calculation process, n groups of infrared point data are stored in real time, wherein each group contains the coordinate data of three infrared points distributed in an isosceles triangle. The expected value and variance of the n groups (for example, 10 groups) of data are then calculated in real time so as to verify the calculated position and movement direction of the object to be located in virtual reality; with the help of the expected value and variance, unexpected data are excluded, which makes the positioning information of the object to be positioned more accurate and reliable. A sketch of this idea follows.
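  • A minimal sketch of this queue-based verification (n = 10 groups as in the example above; the threshold of 2 standard deviations is an assumption, since the disclosure only says that expectation and variance are used to exclude unexpected data):

        from collections import deque
        from statistics import mean, pvariance

        class PositionVerifier:
            """FIFO queue of the last n accepted positions; a new sample is
            rejected if it lies too far from the per-axis running mean."""
            def __init__(self, n=10, k=2.0):
                self.xs = deque(maxlen=n)
                self.ys = deque(maxlen=n)
                self.k = k

            def check(self, x, y):
                ok = True
                for val, q in ((x, self.xs), (y, self.ys)):
                    if len(q) >= 3:                        # need some history first
                        mu, var = mean(q), pvariance(q)
                        if var > 0 and abs(val - mu) > self.k * var ** 0.5:
                            ok = False                     # unexpected data
                if ok:                                     # accepted samples enter the queue
                    self.xs.append(x)
                    self.ys.append(y)
                return ok

        v = PositionVerifier()
        for pos in [(0.0, 0.0), (0.1, 0.1), (0.2, 0.1), (5.0, 5.0)]:
            print(pos, v.check(*pos))                      # the far-away sample prints False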
  • FIG. 6 is a schematic diagram of an electronic device implementing an infrared positioning method according to an embodiment of the present application.
  • the electronic device 6 of this embodiment includes: a processor 61 , a memory 62 , and a computer program 63 stored in the memory 62 and executable on the processor 61 , such as an infrared positioning program.
  • When the processor 61 executes the computer program 63, the steps in each of the foregoing infrared positioning method embodiments are implemented.
  • Alternatively, when the processor 61 executes the computer program 63, the functions of the modules/units in the foregoing device embodiments are implemented.
  • The computer program 63 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 62 and executed by the processor 61 to complete this application.
  • the one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program 63 in the electronic device 6 .
  • the computer program 63 can be divided into:
  • an acquisition module, configured to perform infrared point identification processing on the object to be located based on the object image and acquire infrared point information corresponding to the object to be located, wherein the infrared point information includes coordinate data of three infrared points, and the three infrared points are distributed in an isosceles triangle;
  • a generating module, used to identify the vertex and the base midpoint of the isosceles triangle according to the coordinate data of the three infrared points, and to generate a direction vector pointing to the vertex from the vertex and the base midpoint of the isosceles triangle, where the direction vector represents the movement direction of the object to be positioned;
  • a calculation module, configured to calculate the vector angle between the direction vector of the object to be positioned and the preset coordinate system of the virtual environment, and to determine the moving direction of the object to be positioned according to the vector angle.
  • the electronic device may include, but is not limited to, the processor 61 and the memory 62 .
  • FIG. 6 is only an example of the electronic device 6 and does not constitute a limitation on the electronic device 6, which may include more or fewer components than shown, combine some components, or use different components.
  • the electronic device may further include an input and output device, a network access device, a bus, and the like.
  • The so-called processor 61 may be a central processing unit (Central Processing Unit, CPU), and may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory 62 may be an internal storage unit of the electronic device 6 , such as a hard disk or a memory of the electronic device 6 .
  • The memory 62 may also be an external storage device of the electronic device 6, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card equipped on the electronic device 6.
  • the memory 62 may also include both an internal storage unit of the electronic device 6 and an external storage device.
  • the memory 62 is used to store the computer program and other programs and data required by the electronic device.
  • the memory 62 may also be used to temporarily store data that has been output or is to be output.
  • Embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the steps in the foregoing method embodiments can be implemented.
  • the embodiments of the present application provide a computer program product, when the computer program product runs on a mobile terminal, the steps in the foregoing method embodiments can be implemented when the mobile terminal executes the computer program product.
  • the integrated modules/units if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • The present application can implement all or part of the processes in the methods of the above embodiments, which can also be completed by instructing the relevant hardware through a computer program.
  • The computer program can be stored in a computer-readable storage medium, and when the computer program is executed by the processor, the steps of the foregoing method embodiments can be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file or some intermediate form, and the like.
  • The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electric carrier signal, a telecommunication signal, a software distribution medium, etc.
  • the content contained in the computer-readable media may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction, for example, in some jurisdictions, according to legislation and patent practice, the computer-readable media Excluded are electrical carrier signals and telecommunication signals.
  • the disclosed apparatus/terminal device and method may be implemented in other manners.
  • the apparatus/terminal device embodiments described above are only illustrative.
  • The division of the modules or units is only a logical function division; in actual implementation, there may be other division methods, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • The mutual coupling, direct coupling or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an infrared positioning method, an infrared positioning apparatus and an infrared positioning system. The method comprises the steps of: performing, based on an object image, infrared point recognition processing on an object to be positioned to obtain infrared point information corresponding to said object, the infrared point information comprising coordinate data of three infrared points, and the three infrared points being distributed in the form of an isosceles triangle (S101); identifying the vertex of the isosceles triangle and the midpoint of its base from the coordinate data of the three infrared points, and generating, from the vertex and the midpoint of the base, a direction vector pointing to the vertex, the direction vector representing the movement direction of the object to be positioned (S102); and calculating a vector angle between the direction vector of the object to be positioned and a preset coordinate system in a virtual environment, and determining, from the vector angle, the current movement direction of the object to be positioned (S103). By means of the method, a user can discern direction even while immersed in the virtual environment, thereby enhancing the user's immersion in the virtual environment; and interaction between one person and another or between a person and an object in the virtual environment is realized, thereby satisfying the user's interaction needs in the virtual environment.
PCT/CN2020/101725 2020-07-13 2020-07-13 Infrared positioning method, infrared positioning apparatus and infrared positioning system WO2022011517A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/101725 WO2022011517A1 (fr) 2020-07-13 2020-07-13 Infrared positioning method, infrared positioning apparatus and infrared positioning system
CN202080001234.1A CN112136158A (zh) 2020-07-13 2020-07-13 Infrared positioning method, infrared positioning device and infrared positioning system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/101725 WO2022011517A1 (fr) 2020-07-13 2020-07-13 Infrared positioning method, infrared positioning apparatus and infrared positioning system

Publications (1)

Publication Number Publication Date
WO2022011517A1 (fr) 2022-01-20

Family

Family ID: 73852112

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/101725 WO2022011517A1 (fr) 2020-07-13 2020-07-13 Infrared positioning method, infrared positioning apparatus and infrared positioning system

Country Status (2)

Country Link
CN (1) CN112136158A (fr)
WO (1) WO2022011517A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112835449A (zh) * 2021-02-03 2021-05-25 青岛航特教研科技有限公司 Safety somatosensory education system based on interaction between virtual reality and a somatosensory device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2420171A1 (fr) * 2010-08-20 2012-02-22 LG Electronics, Inc. Cleaning apparatus
CN102442248A (zh) * 2010-10-11 2012-05-09 现代自动车株式会社 System and method for warning of a forward collision danger combined with the driver's viewing direction, and vehicle using the same
CN104052987A (zh) * 2013-03-11 2014-09-17 佳能株式会社 Image display device and image display method
CN104792312A (zh) * 2014-01-20 2015-07-22 广东工业大学 Indoor automatic guided vehicle positioning system using three fixed-distance balls as visual markers
CN105094127A (zh) * 2014-05-15 2015-11-25 Lg电子株式会社 Vacuum cleaner and control method therefor
CN108510545A (zh) * 2018-03-30 2018-09-07 京东方科技集团股份有限公司 Spatial positioning method, spatial positioning device, spatial positioning system and computer-readable storage medium
CN111028266A (zh) * 2019-12-16 2020-04-17 洛阳语音云创新研究院 Livestock and poultry counting method and apparatus, electronic device and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108917758B (zh) * 2018-02-24 2021-10-01 石化盈科信息技术有限责任公司 AR-based navigation method and system
US11526568B2 (en) * 2018-05-25 2022-12-13 Yellcast, Inc. User interfaces and methods for operating a mobile computing device for location-based transactions
CN109529318A (zh) * 2018-11-07 2019-03-29 艾葵斯(北京)科技有限公司 Virtual vision system


Also Published As

Publication number Publication date
CN112136158A (zh) 2020-12-25

Similar Documents

Publication Publication Date Title
US11842438B2 (en) Method and terminal device for determining occluded area of virtual object
WO2020119684A1 (fr) Procédé, appareil et dispositif de mise à jour de carte sémantique de navigation 3d
CN107820593B (zh) 一种虚拟现实交互方法、装置及系统
WO2018119889A1 (fr) Procédé et dispositif de positionnement de scène tridimensionnelle
CN112927363B (zh) 体素地图构建方法及装置、计算机可读介质和电子设备
WO2019196745A1 (fr) Procédé de modélisation de visage et produit associé
CN106325509A (zh) 三维手势识别方法及系统
CN111414879B (zh) 人脸遮挡程度识别方法、装置、电子设备及可读存储介质
CN108363995A (zh) 用于生成数据的方法和装置
US20220414910A1 (en) Scene contour recognition method and apparatus, computer-readable medium, and electronic device
CN112927362A (zh) 地图重建方法及装置、计算机可读介质和电子设备
EP3855386A2 (fr) Procédé, appareil, dispositif et support de stockage pour transformation de coiffure et produit programme informatique
CN109582122A (zh) 增强现实信息提供方法、装置及电子设备
CN112766027A (zh) 图像处理方法、装置、设备及存储介质
CN113362445B (zh) 基于点云数据重建对象的方法及装置
US20240303921A1 (en) Virtual-camera-based image acquisition method and related apparatus
KR20220100813A (ko) 자율주행 차량 정합 방법, 장치, 전자 기기 및 차량
WO2022011517A1 (fr) Procédé de positionnement infrarouge, appareil de positionnement infrarouge et système de positionnement infrarouge
CN108573192A (zh) 匹配人脸的眼镜试戴方法和装置
CN112258647B (zh) 地图重建方法及装置、计算机可读介质和电子设备
CN115994944A (zh) 三维关键点预测方法、训练方法及相关设备
CN116245731A (zh) 一种扫描数据拼接方法、装置、设备及介质
CN110097061A (zh) 一种图像显示方法及装置
KR102534449B1 (ko) 이미지 처리 방법, 장치, 전자 장치 및 컴퓨터 판독 가능 저장 매체
CN112464753A (zh) 图像中关键点的检测方法、检测装置及终端设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20945486

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07.06.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20945486

Country of ref document: EP

Kind code of ref document: A1