WO2019024498A1 - Positioning system and positioning method - Google Patents

Positioning system and positioning method

Info

Publication number
WO2019024498A1
WO2019024498A1 (PCT/CN2018/077742, CN2018077742W)
Authority
WO
WIPO (PCT)
Prior art keywords
optical pattern
target
sub
area
projection device
Prior art date
Application number
PCT/CN2018/077742
Other languages
English (en)
French (fr)
Inventor
王丙福
刘辰辰
李靖
张军平
林海波
曾重阳
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP18841158.1A (EP3660452B1)
Publication of WO2019024498A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20: Instruments for performing navigational calculations
    • G01C21/206: Instruments for performing navigational calculations specially adapted for indoor navigation
    • G01C11/00: Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/02: Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G01C21/26: Navigation; Navigational instruments specially adapted for navigation in a road network
    • G01C21/34: Route searching; Route guidance
    • G01C21/3407: Route searching; Route guidance specially adapted for specific applications
    • G01C21/343: Calculating itineraries, i.e. routes leading from a starting point to a series of categorical destinations using a global route restraint, round trips, touristic trips
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02: Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06: Systems determining position data of a target
    • G01S17/46: Indirect determination of position data
    • G01S17/48: Active triangulation systems, i.e. using the transmission and reflection of electromagnetic waves other than radio waves
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30108: Industrial image inspection
    • G06T2207/30164: Workpiece; Machine component

Definitions

  • the present invention relates to the field of computers, and in particular, to a positioning system and a positioning method.
  • Precise positioning technology is a prerequisite for information processing, big data, and intelligent production organization, and can be applied in fields such as industry, virtual reality, sports, and robotics.
  • the accurate real-time positioning of personnel, materials, or vehicles is an important component of Industry 4.0 (the fourth industrial revolution).
  • the location information of people, objects, and vehicles is accurately reported to the control center, which facilitates material queries, inventory, vehicle monitoring, and the like, and improves enterprise management through reasonable scheduling.
  • GPS positioning cannot be used in indoor environments such as factories and warehouses, and its accuracy cannot meet the needs of industrial production.
  • wireless positioning can be used indoors. For example, angular positioning measures the angle of arrival (AoA) or angle of departure (AoD) between the target to be located and an anchor point, obtaining the relative orientation or angle between them. The target to be located is thus determined to lie on a straight line passing through the anchor point in a known direction, and the intersection of multiple such lines is the position of the target to be located.
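  • as a hedged illustration of the angular positioning described above (not part of the original disclosure), the following Python sketch computes the least-squares intersection of bearing lines from anchors at known positions; all names and values are invented for the example:

```python
# Hypothetical sketch of AoA/AoD positioning: each anchor at a known position
# measures a bearing to the target; the target lies at the intersection of
# the resulting bearing lines.
import math

def locate_from_bearings(anchors, bearings_deg):
    """Least-squares intersection of bearing lines from known anchors.

    anchors      -- list of (x, y) anchor positions
    bearings_deg -- bearing from each anchor to the target, in degrees
    """
    # Each bearing line through anchor a with direction d constrains the
    # target p via n . p = n . a, where n is the normal to d.
    sxx = sxy = syy = bx = by = 0.0
    for (ax, ay), theta in zip(anchors, bearings_deg):
        dx, dy = math.cos(math.radians(theta)), math.sin(math.radians(theta))
        nx, ny = -dy, dx                      # normal to the bearing direction
        sxx += nx * nx; sxy += nx * ny; syy += ny * ny
        c = nx * ax + ny * ay
        bx += nx * c;  by += ny * c
    det = sxx * syy - sxy * sxy               # singular if all lines are parallel
    return ((syy * bx - sxy * by) / det, (sxx * by - sxy * bx) / det)

# Two anchors whose bearing lines intersect at (5, 5):
print(locate_from_bearings([(0.0, 0.0), (10.0, 0.0)], [45.0, 135.0]))
```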
  • the deployment location and placement posture of each anchor point need to be carefully designed, and the deployment process is complicated.
  • the deployment of the anchor points and their subsequent maintenance require a large amount of human resources, thereby increasing the positioning cost.
  • the embodiment of the invention provides a positioning system and a positioning method, which can reduce the positioning cost and effectively save manpower investment.
  • a first aspect of the embodiments of the present invention provides a positioning system, including at least one projection device, at least one camera, and a positioning engine;
  • the positioning engine is connected to any one of the at least one projection device, and the positioning engine is further connected to any one of the at least one camera;
  • the positioning system shown in the present application is applied to open indoor environments such as warehouse logistics and factory floors.
  • the object to be positioned may be a person, a material, a vehicle, etc.
  • the camera and the projection device may be hung from a ceiling in the indoor environment, and the target area is the ground in the indoor environment.
  • the positioning engine is configured to send a projection instruction to a projection device.
  • the projection instruction is used to instruct the projection device to project a first optical pattern to the target area, and the first optical pattern projected to the target area includes a plurality of sub-areas; specifically, the projection instruction indicates the pattern of each sub-area included in the first optical pattern, the area of each sub-area, and the like.
  • the projection device is configured to project the first optical pattern to a target area according to the projection instruction.
  • the projection device may use liquid crystal on silicon (LCOS), high-temperature polysilicon liquid crystal transmissive projection technology, digital micro-mirror device (DMD), or galvanometer projection technology to project the first optical pattern onto the target area.
  • the at least one camera is configured to collect and send a second picture to the positioning engine, the second picture comprising the first optical pattern and an object to be located within the first optical pattern.
  • the positioning engine is configured to receive the second picture and determine, according to the position of the object to be located in the first optical pattern in the second picture, the position of the object to be located in the target area.
  • the positioning engine can determine the position of the object to be located in the first optical pattern from the second picture collected by the camera, and can therefore locate the object to be located within the target area according to that position. The positioning engine adaptively controls the projection device and the camera according to the application scenario and the positioning requirements to achieve precise positioning of the object to be located; as can be seen, no excessive manpower is required in the positioning process, which saves considerable labor cost.
  • the at least one camera is further configured to acquire a first picture, where the first picture includes the first optical pattern projected by the at least one projection device to the target area;
  • the at least one camera is further configured to send the first picture to the positioning engine
  • the positioning engine is further configured to receive the first picture and calibrate the target area according to the first picture.
  • the projection device projects a first optical pattern in the target area, and the first optical pattern is used to calibrate the second picture, so that the calibrated first optical pattern effectively guarantees precise positioning of the object to be located; moreover, the positioning engine, projection device, and camera are simple to deploy, reducing deployment costs.
  • the system includes two projection devices, which are a first projection device and a second projection device, respectively;
  • the first projection device and the second projection device are configured to respectively project the first optical pattern to the target area;
  • the at least one camera is configured to collect two of the first pictures
  • the two first pictures are respectively: a first picture including the first optical pattern projected by the first projection device onto the target area, and a first picture including the first optical pattern projected by the second projection device onto the target area;
  • the at least one camera is configured to send the collected two first pictures to the positioning engine
  • the positioning engine is configured to receive two of the first pictures
  • the positioning engine is configured to respectively identify a plurality of sub-regions included in the first optical pattern from two of the first pictures;
  • the positioning engine is configured to send adjustment information to the first projection device and/or the second projection device;
  • the first projection device and/or the second projection device adjust according to the adjustment information, so that the sub-areas of the first optical patterns in the two first pictures are aligned; the adjustment information is used to control the alignment of each sub-area of the first optical patterns in the two first pictures.
  • the positioning system shown in this aspect is capable of adaptively adjusting the first optical patterns projected onto the target area, enabling the positioning engine to acquire first optical patterns whose sub-areas are aligned, thereby effectively ensuring the accuracy of calibrating the target area with the first optical pattern and, in turn, accurate positioning of the object to be located.
  • the positioning engine is configured to determine a target sub-area, where the target sub-area is a sub-area in which the to-be-positioned object is located in multiple sub-areas of the first optical pattern;
  • the positioning engine is further configured to send control signaling to the at least one projection device, where the control signaling is used to control the at least one projection device to project a second optical pattern to the target sub-region;
  • the second optical pattern includes a plurality of sub-regions, and an area of the sub-region included in the second optical pattern is smaller than an area of the sub-region included in the first optical pattern;
  • the at least one projection device is configured to receive the control signaling and project the second optical pattern to the target sub-area according to the control signaling.
  • the positioning system shown in this aspect can further improve the accuracy of positioning the object to be located: the first optical pattern is used to roughly locate the object and determine the position of the object's target positioning point within the first optical pattern, after which the positioning engine may control the projection device to project a finer second optical pattern inside the first optical pattern. This avoids projecting the higher-precision second optical pattern over the entire target area, which not only achieves more accurate positioning but also reduces the complexity of positioning the object to be located, thereby improving positioning efficiency, as sketched below.
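  • the coarse-to-fine strategy above can be outlined as follows; this is an illustrative sketch only, and `engine`, `projector`, and `camera` with their methods are hypothetical APIs, not interfaces defined by the application:

```python
# Minimal sketch (hypothetical API names) of coarse-to-fine positioning:
# locate the object in the coarse first optical pattern, then project a
# finer second optical pattern only into the coarse cell that holds it.
def coarse_to_fine_locate(engine, projector, camera, coarse_m=1.0, fine_m=0.1):
    projector.project_grid(cell_size=coarse_m)           # first optical pattern
    row, col = engine.find_target_cell(camera.capture()) # coarse target sub-area
    # Project the fine grid only inside that cell, instead of covering the
    # whole target area at high precision.
    projector.project_grid(cell_size=fine_m,
                           region=(col * coarse_m, row * coarse_m,
                                   coarse_m, coarse_m))  # second optical pattern
    fine_row, fine_col = engine.find_target_cell(camera.capture())
    # Absolute position = coarse cell origin + fine cell offset.
    x = col * coarse_m + fine_col * fine_m
    y = row * coarse_m + fine_row * fine_m
    return x, y
```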
  • the second picture further includes the second optical pattern and the object to be located within the second optical pattern;
  • the positioning engine is further configured to determine the position of the object to be located in the target area according to the position of the object to be located in the second optical pattern in the second picture.
  • the positioning system of the present invention can accurately position the object to be located according to its position in the second optical pattern, reducing the complexity of positioning the object and thereby improving positioning efficiency.
  • the positioning engine is further configured to acquire the spacing of the target sub-area;
  • the target sub-region is a sub-region in which the object to be located is located among the plurality of sub-regions included in the target optical pattern;
  • in one implementation, the target sub-area is a sub-area included in the first optical pattern;
  • in another implementation, the target sub-area is a sub-area included in the second optical pattern;
  • the positioning engine further determines a position of the object to be located in the target area according to a position of the target sub-area in the second picture and a spacing of the target sub-area.
  • the at least one camera is further configured to collect height information, where the height information is used to indicate a height of the object to be located within the target area;
  • the positioning engine is further configured to acquire the height information collected by the at least one camera.
  • the positioning system shown in this aspect can also realize three-dimensional positioning of the object to be positioned, thereby achieving more comprehensive positioning of the object to be positioned.
  • the positioning engine is further configured to acquire the spacing d2 of the target sub-area;
  • the target sub-region is a sub-region in which the object to be located is located among the plurality of sub-regions included in the target optical pattern;
  • in one implementation, the target optical pattern is the first optical pattern;
  • in another implementation, the target optical pattern is the second optical pattern;
  • the positioning engine is configured to acquire, in a case where the object to be located is located in the target optical pattern, the spacing d1 of a sub-area projected by the at least one projection device onto the surface of the object to be located, and to obtain the height h of the object to be located according to a formula using d1, d2, and L;
  • L is the height of the at least one projection device from the target area.
  • the positioning system shown in this aspect can also realize three-dimensional positioning of the object to be positioned, thereby achieving more comprehensive positioning of the object to be positioned.
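  • the formula referred to above is rendered as an image in the source and is not reproduced; under the stated definitions (d2 is the sub-area spacing in the target optical pattern on the ground, d1 is the spacing projected onto the object's surface, L is the projection device's height), a similar-triangles argument gives the following reconstruction, offered as an assumption consistent with those definitions rather than as the patent's own formula:

```latex
% The projected pattern spreads linearly with distance from the projector,
% so the spacing on the object's top (at height h) relates to the spacing
% on the ground (at distance L) by:
\frac{d_1}{d_2} = \frac{L - h}{L}
\qquad\Longrightarrow\qquad
h = L\left(1 - \frac{d_1}{d_2}\right)
```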
  • the at least one projection device is further configured to project a target ray;
  • the target ray is one of a plurality of rays projected by the at least one projection device;
  • the positioning engine is further configured to acquire the position of the projection point of the target ray on the target area;
  • if the object to be located is positioned based on the first optical pattern, the target optical pattern is the first optical pattern;
  • if the object to be located is positioned based on the second optical pattern, the target optical pattern is the second optical pattern;
  • the set position is the position of the virtual image formed in the target optical pattern after the reflected light enters one of the at least one camera;
  • the reflected light is the light produced when the target ray is reflected by the surface of the object to be located;
  • the positioning engine is further configured to obtain, according to a formula, the height h of the object to be located in the target area;
  • L1 is the spacing between the position of the projection point and the set position;
  • L2 is the horizontal spacing between the at least one projection device and the camera, which are at the same horizontal plane, and L is the height of the at least one projection device from the target area.
  • the positioning system shown in this aspect can also realize three-dimensional positioning of the object to be positioned, thereby achieving more comprehensive positioning of the object to be positioned.
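  • here too the formula is an image in the source; assuming the projection device and camera sit at the same height L, a horizontal distance L2 apart, and using the stated definitions of the projection point and the set position, similar triangles yield the following reconstruction (an assumption consistent with the definitions, not a quotation of the patent):

```latex
% The target ray from the projector and the camera's line of sight both pass
% through the lit spot at height h; extending both to the ground separates
% the projection point and the virtual image by L1, giving:
\frac{L_1}{L_2} = \frac{h}{L - h}
\qquad\Longrightarrow\qquad
h = \frac{L \cdot L_1}{L_1 + L_2}
```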
  • a second aspect of the embodiments of the present invention provides a positioning method, including:
  • Step A The positioning engine acquires a second picture collected by at least one camera.
  • the second picture includes a first optical pattern projected by the at least one projection device toward the target area and an object to be positioned within the first optical pattern, the first optical pattern including a plurality of sub-areas.
  • Step B The positioning engine determines the position of the object to be located in the target area according to the position of the object to be located in the first optical pattern in the second picture.
  • the positioning engine can determine the position of the object to be located in the first optical pattern from the second picture collected by the camera, and can therefore locate the object to be located within the target area according to that position. The positioning engine adaptively controls the projection device and the camera according to the application scenario and the positioning requirements to achieve precise positioning of the object to be located; as can be seen, no excessive manpower is required in the positioning process, which saves considerable labor cost.
  • Step A01 Acquire a first picture collected by the at least one camera
  • the first picture includes the first optical pattern projected by the at least one projection device to the target area;
  • Step A02 calibrate the target area according to the first picture.
  • the projection device projects a first optical pattern in the target area, and the first optical pattern is used to calibrate the second picture, so that the calibrated first optical pattern effectively guarantees precise positioning of the object to be located; moreover, the positioning engine, projection device, and camera are simple to deploy, reducing deployment costs.
  • the step A02 specifically includes the following steps:
  • two first pictures are collected by the at least one camera, the two first pictures being respectively: a first picture including the first optical pattern projected by the first projection device onto the target area, and a first picture including the first optical pattern projected by the second projection device onto the target area;
  • the positioning method shown in this aspect can adaptively adjust the first optical patterns projected onto the target area, enabling the positioning engine to collect first optical patterns whose sub-areas are aligned, thereby effectively ensuring the accuracy of calibrating the target area with the first optical pattern and, in turn, accurate positioning of the object to be located.
  • Step A11 determining a target sub-area
  • the target sub-region is a sub-region where the object to be located is located in a plurality of sub-regions of the first optical pattern
  • Step A12 Send control signaling to the at least one projection device.
  • the control signaling is configured to control the at least one projection device to project a second optical pattern to the target sub-area; the second optical pattern includes a plurality of sub-areas, and the area of the sub-areas included in the second optical pattern is smaller than the area of the sub-areas included in the first optical pattern.
  • the positioning method shown in this aspect can further improve the accuracy of positioning the object to be located: the first optical pattern is used to roughly locate the object and determine the position of the object's target positioning point within the first optical pattern, after which the positioning engine may control the projection device to project a finer second optical pattern inside the first optical pattern. This avoids projecting the higher-precision second optical pattern over the entire target area, which not only achieves more accurate positioning but also reduces the complexity of positioning the object to be located, thereby improving positioning efficiency.
  • the step B specifically includes:
  • precise positioning of the object to be located can be achieved according to the position of the object to be located in the second optical pattern, which reduces the complexity of positioning the object and thereby improves positioning efficiency.
  • the step B specifically includes:
  • Step B11 Determine the target sub-area;
  • the target sub-region is a sub-region in which the object to be located is located among the plurality of sub-regions included in the target optical pattern;
  • in one implementation, the target sub-area is a sub-area included in the first optical pattern;
  • in another implementation, the target sub-area is a sub-area included in the second optical pattern;
  • Step B12 Obtain the spacing of the target sub-area;
  • Step B13 Determine a location of the object to be located in the target area according to a location of the target sub-area in the second picture and a spacing of the target sub-area.
  • the method further includes:
  • the positioning engine acquires height information collected by the at least one camera, and the height information is used to indicate a height of the object to be located within the target area.
  • the positioning method shown in this aspect can also achieve three-dimensional positioning of the object to be positioned, thereby achieving a more comprehensive positioning of the object to be positioned.
  • the step B specifically includes the following steps:
  • Step B21 The positioning engine acquires the spacing d2 of the target sub-area;
  • the target sub-region is a sub-region in which the object to be located is located among the plurality of sub-regions included in the target optical pattern;
  • in one implementation, the target optical pattern is the first optical pattern;
  • in another implementation, the target optical pattern is the second optical pattern;
  • Step B22 In a case where the object to be located is located in the target optical pattern, the positioning engine acquires the spacing d1 of a sub-area projected by the at least one projection device onto the surface of the object to be located;
  • the positioning engine then obtains the height h of the object to be located according to a formula using d1, d2, and L, where L is the height of the at least one projection device from the target area.
  • the positioning method shown in this aspect can also realize three-dimensional positioning of the object to be located, thereby achieving more comprehensive positioning of the object to be located.
  • the step B specifically includes:
  • Step B31 The positioning engine acquires the position of the projection point of the target ray projected by the at least one projection device on the target area;
  • the target ray is one of a plurality of rays projected by the at least one projection device
  • Step B32 In a case where the object to be located is located in the target optical pattern, acquire a set position;
  • if the object to be located is positioned based on the first optical pattern, the target optical pattern is the first optical pattern;
  • if the object to be located is positioned based on the second optical pattern, the target optical pattern is the second optical pattern;
  • the set position is the position of the virtual image formed in the target optical pattern after the reflected light enters one of the at least one camera, where the reflected light is the light produced when the target ray is reflected by the surface of the object to be located;
  • Step B33 The positioning engine obtains, according to a formula, the height h of the object to be located in the target area;
  • L1 is the spacing between the position of the projection point and the set position;
  • L2 is the horizontal spacing between the at least one projection device and the camera, which are at the same horizontal plane, and L is the height of the at least one projection device from the target area.
  • the projection device projects a first optical pattern in the target area, and the first optical pattern is used to calibrate the second picture;
  • the positioning engine can determine, from the second picture collected by the camera, the position of the object to be located in the first optical pattern, so that the positioning engine can locate the object to be located in the target area according to that position;
  • the deployment of the positioning engine, the projection device, and the camera is simple, thereby reducing the deployment cost, and the positioning engine adaptively controls the projection device and the camera according to the application scenario and the positioning requirements to position the object to be located; as can be seen, no excessive manpower is required in the positioning process, which saves considerable labor cost.
  • FIG. 1 is a schematic structural view of an embodiment of a positioning system shown in the present application.
  • FIG. 3 is a schematic structural view of an embodiment of a first optical pattern included in a target area shown in the present application;
  • FIG. 4 is a schematic structural view of another embodiment of a first optical pattern included in a target area shown in the present application;
  • FIG. 5 is a flow chart of steps of another embodiment of the positioning method shown in the present application.
  • FIG. 6 is a schematic structural view of an embodiment of a first optical pattern included in a target area shown in the present application
  • FIG. 7 is a schematic structural diagram of another embodiment of a first optical pattern included in a target area shown in the present application.
  • FIG. 8 is a schematic structural view of another embodiment of a first optical pattern included in a target area shown in the present application.
  • FIG. 9 is a schematic diagram of an embodiment of acquiring a spacing of a target sub-area in a target area according to the present application.
  • FIG. 10 is a schematic diagram of another embodiment of acquiring a spacing of a target sub-area in a target area according to the present application.
  • FIG. 11 is a schematic diagram of another embodiment of acquiring a spacing of a target sub-area in a target area according to the present application.
  • FIG. 13 is a schematic structural view of an embodiment of a second optical pattern included in a target area shown in the present application.
  • FIG. 14 is a schematic structural view of another embodiment of a second optical pattern included in the target area shown in the present application.
  • the present application first provides a positioning system capable of positioning a target to be positioned.
  • the specific structure of the positioning system is described below in conjunction with FIG. 1 :
  • the positioning system includes a positioning engine 101, at least one camera 103, and at least one projection device 102.
  • the at least one camera 103 shown in this embodiment is connected to the positioning engine 101 by wire or wirelessly, and the at least one projection device 102 is connected to the positioning engine 101 by wire or wirelessly.
  • the positioning engine 101 shown in the present application may be integrally disposed in the camera 103 on a physical entity, and the camera 103 and the projection device 102 may be connected by wire or wirelessly.
  • the positioning engine 101 shown in the present application may be integrally disposed in the projection device 102 on a physical entity, and the projection device 102 and the camera 103 may be connected by wire or wirelessly.
  • the wired manner shown in the present application may be Ethernet, cable, twisted pair, optical fiber, or the like.
  • the wireless manner may be wireless fidelity (Wi-Fi), Bluetooth, Internet of Things (IoT), or other wireless methods.
  • the at least one projection device 102 is configured to perform projection of the optical pattern 100 to the target area under the control of the positioning engine 101;
  • the at least one camera 103 is configured to photograph a target area under the control of the positioning engine 101 to collect a picture including the optical pattern 100.
  • the positioning engine 101 can analyze the picture to achieve positioning of the object to be located in the target area.
  • the object to be located may be a person, a material, a vehicle, etc., and the camera 103 and the projection device 102 may be hung from a ceiling in the indoor environment, the target area being the ground in the indoor environment.
  • the application field of the positioning system is exemplified and not limited.
  • the positioning system can also be applied to the field of virtual reality and the like.
  • the positioning system may further include a computer device 104. The positioning engine 101 may send related information to the computer device 104; the related information may be positioning information, namely the position information, determined by the positioning engine 101, of the object to be located in the target area, so that the computer device 104 can query, inventory, and monitor the object to be located according to the position information, and then make reasonable scheduling arrangements for the object to be located.
  • the related information may also be deployment information, which is used to indicate location information and the like of the camera 103 and the projection device 102 deployed in a factory warehouse, thereby facilitating the computer device 104 to the camera 103 and the The projection device 102 performs statistics, management, and the like.
  • the first optical pattern projected by the projection device to the target area needs to be corrected, so that the corrected first optical pattern can calibrate the first picture collected by the camera.
  • based on the positioning system shown in FIG. 1, a specific process of how to correct the first optical pattern projected by the projection device into the target area is described below in conjunction with the embodiment shown in FIG. 2.
  • Step 201 The positioning engine sends a projection instruction to the projection device.
  • the projection device shown in this step is any one of a plurality of projection devices included in the positioning system.
  • the projection instruction shown in this embodiment is used to instruct the projection device to project a first optical pattern to the target area, and the first optical pattern projected to the target area includes a plurality of sub-areas.
  • the positioning engine shown in this embodiment may predetermine the pattern of each sub-area included in the first optical pattern.
  • the pattern of each sub-area shown in this embodiment may be a diamond, a circle, a sector, a hexagon, or a square.
  • taking the square pattern shown in FIG. 1 as an example, the projection instruction generated by the positioning engine instructs the projection device to project, to the target area, sub-areas whose pattern is square.
  • the description of the projection instruction in this embodiment is an optional example, which is not limited.
  • the projection instruction may also be used to indicate the area and the like of the sub-area projected by the projection device.
  • the positioning engine can determine different sub-area areas according to different application scenarios. For example, if high-precision positioning of the object to be located is required, the area of the sub-areas indicated by the projection instruction may be relatively small; if the object only needs to be roughly located, the area of the sub-areas indicated by the projection instruction may be relatively large.
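  • purely as an illustration of how such a projection instruction might be encoded (the field and function names below are invented, not defined by the application), consider this sketch:

```python
# Hypothetical encoding of the projection instruction described above: the
# positioning engine indicates the sub-area pattern and size per scenario.
from dataclasses import dataclass

@dataclass
class ProjectionInstruction:
    pattern: str        # e.g. "square", "diamond", "circle", "sector", "hexagon"
    cell_size_m: float  # sub-area size: small for fine positioning, large for coarse

def instruction_for(precision: str) -> ProjectionInstruction:
    # Coarse positioning tolerates large sub-areas; high precision needs small ones.
    return ProjectionInstruction("square", 0.1 if precision == "high" else 1.0)
```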
  • Step 202 The projection device projects a first optical pattern to the target area according to the projection instruction.
  • the projection device may project the first optical pattern to the target area, and the pattern of the first optical pattern projected by the projection device is identical to the pattern indicated by the projection instruction.
  • the first optical pattern includes a plurality of sub-regions that enable calibration of the target region, thereby enabling the positioning engine to implement positioning of the object to be positioned on the target region that has been calibrated.
  • the projection device shown in this embodiment may adopt liquid crystal on silicon (LCOS), high-temperature polysilicon liquid crystal transmissive projection technology, digital micro-mirror device (DMD), or galvanometer projection technology to project the first optical pattern to the target area.
  • the light projected by the projection device shown in this embodiment may be visible light.
  • when the projection device shown in this embodiment is deployed on a ceiling in an indoor environment, a projection device that projects visible light can replace the illumination lamps in the indoor environment, simultaneously providing indoor illumination and the positioning of the object to be located shown in the present application, which saves the deployment of illumination lamps and reduces cost.
  • the light projected by the projection device in this embodiment may also be invisible light, such as infrared light or ultraviolet light. If the light projected by the projection device is infrared light, the camera shown in this embodiment is required to have the ability to acquire infrared images, so that it can capture the first optical pattern projected by the projection device through infrared light.
  • alternatively, the projection device shown in this embodiment may project dark lines with a brightness less than a preset threshold to the target area; the target area is illuminated over a large area and appears bright, and the dark lines projected by the projection device form the first optical pattern on the target area.
  • Step 203 The camera collects and sends the first picture.
  • the positioning engine may control the camera to capture the target area, and the camera may thereby acquire the first picture, where the first picture includes the first optical pattern projected by the at least one projection device onto the target area.
  • the one or more cameras included in the positioning system shown in this embodiment may simultaneously capture the target area to acquire the first picture.
  • Step 204 The positioning engine receives the first picture.
  • the positioning engine shown in this embodiment may receive multiple first pictures; the multiple first pictures may be collected by one camera included in the positioning system continuously capturing the target area, or may be collected by the multiple cameras included in the positioning system separately capturing the target area.
  • a first picture may be interfered with by noise or other information, in which case the positioning engine cannot detect the first optical pattern when analyzing that first picture; the positioning engine may then detect the acquired first pictures one by one until it successfully detects the first optical pattern included in a first picture.
  • if the positioning engine shown in this embodiment successfully detects the first optical pattern in multiple first pictures, the positioning engine may randomly select one first picture including the first optical pattern, or may select the first picture in which the first optical pattern is clearest; this is not limited in this embodiment.
  • Step 205 The positioning engine closes the projection device.
  • the first picture captured by the camera may include an object to be located within the first optical pattern; the first picture in that case is the second picture described in the present application.
  • the positioning engine may identify attribute information such as a pattern, a position, and a direction of the first optical pattern.
  • the positioning engine can store the attribute information.
  • the positioning engine may send a closing instruction to the projection device, the closing instruction being used to instruct the projection device to turn off, and the projection device turns off upon receiving the closing instruction.
  • taking a positioning system including a plurality of projection devices as an example: after step 205 is performed, the process may return to step 201 until the positioning engine has polled, through steps 201 to 205, all the projection devices included in the positioning system, so that the positioning engine stores the attribute information corresponding to each projection device.
  • Step 206 The positioning engine determines whether the first optical pattern meets a target condition. If not, step 207 is performed.
  • the target condition shown in this embodiment is that any two adjacent sub-areas of the first optical patterns projected by the projection devices to the target area are aligned with each other, and that the patterns of any two adjacent sub-areas are the same.
  • the pattern refers to the shape of each sub-area and the area of each sub-area.
  • the positioning engine may determine, according to the attribute information of each projection device, whether the first optical patterns meet the target condition. If not, the positioning engine may analyze the first optical pattern projected by each projection device; if the patterns of the first optical patterns projected by different projection devices are inconsistent, the positioning engine may control, through adjustment information, the pattern of the first optical pattern projected by each projection device until the patterns of the first optical patterns projected by all projection devices are identical.
  • specifically, the positioning engine may binarize the first picture, that is, reset the brightness values of the pixels in the first picture so that the brightness value of the first optical pattern differs from the brightness value of the background of the first picture; the positioning engine can thereby extract the first optical pattern from the first picture and determine whether the patterns of the first optical patterns projected by different projection devices are consistent.
  • this description of the process by which the positioning engine extracts the first optical pattern from the first picture is an optional example and is not limiting; in a specific application, it is sufficient that the positioning engine can successfully extract the first optical pattern.
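  • a minimal sketch of this extraction step using OpenCV is shown below; the file name is illustrative, and Otsu thresholding is one concrete choice of binarization consistent with the description:

```python
# Sketch of pattern extraction: binarize the first picture so the optical
# pattern's brightness differs from the background, then keep only the
# pattern pixels.
import cv2

first_picture = cv2.imread("first_picture.png", cv2.IMREAD_GRAYSCALE)
# Otsu's method picks a threshold separating pattern from background.
_, pattern_mask = cv2.threshold(first_picture, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# Contours of the mask approximate the sub-area boundaries of the pattern.
contours, _ = cv2.findContours(pattern_mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
print(f"detected {len(contours)} candidate sub-areas")
```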
  • if the first optical patterns are not aligned, the positioning engine may control the projection device, through adjustment information, to perform translation, rotation, scaling, and the like until two adjacent first optical patterns are aligned together.
  • the positioning engine 101 determines the first optical pattern 301 and the first optical pattern 302 according to the attribute information sent by the first projection device L1 and the second projection device L2.
  • the first projection device may be any projection device included in the positioning system; the first optical pattern 301 is the optical pattern projected by the first projection device L1 included in the positioning system, the first optical pattern 302 is the optical pattern projected by the second projection device L2 included in the positioning system, and the first optical pattern 301 and the first optical pattern 302 are adjacent in the target area.
  • the positioning engine may determine, according to the attribute information, that any two sub-areas included in the first optical pattern 301 and the first optical pattern 302 have the same pattern, but that the first optical pattern 301 and the first optical pattern 302 are not aligned together; the mutual alignment of the first optical pattern 301 and the first optical pattern 302 can then be achieved by performing step 208.
  • if the positioning engine determines, according to the attribute information, that any two sub-areas included in the first optical pattern 301 and the first optical pattern 302 have the same pattern, and that the first optical pattern 301 and the first optical pattern 302 are aligned, this indicates that there is no need to correct the first optical patterns projected by the projection devices.
  • Step 207 The positioning engine sends the adjustment information to the projection device.
  • the adjustment information shown in this embodiment includes: an indication of the pattern the projection device should select for the first optical pattern, and instructions to translate, rotate, scale, and the like the projected first optical pattern.
  • the adjustment information shown in this embodiment may be used to indicate the distance that the first projection device translates the pattern along the X axis and the distance it translates the pattern along the Y axis.
  • Step 208 The projection device adjusts the first optical pattern according to the adjustment information.
  • the first projection device shown in this embodiment translates the first optical pattern it projects along the X axis and the Y axis according to the adjustment information, so that after translation the first optical pattern 301 projected by the first projection device L1 and the first optical pattern 302 projected by the second projection device L2 are aligned.
  • steps 206 to 208 shown in this embodiment may be repeated multiple times, so that the first optical patterns projected by the projection devices into the target area are more accurately aligned, and it is ensured that the patterns of any two of the multiple first optical patterns located in the target area are the same.
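  • one simple way to compute such X/Y translation adjustment information (an illustrative sketch, not the application's defined algorithm) is to average the displacement between corresponding sub-area centroids of the two patterns; the centroid values below are invented:

```python
# Derive the X/Y translation that brings pattern B's grid into alignment
# with pattern A's, given matched sub-area centroids from both patterns.
import numpy as np

def translation_adjustment(centroids_a, centroids_b):
    """Mean offset (dx, dy) that maps pattern B's centroids onto pattern A's."""
    a = np.asarray(centroids_a, dtype=float)
    b = np.asarray(centroids_b, dtype=float)
    # With matching sub-areas in the same order, the required translation is
    # the average displacement between corresponding centroids.
    return tuple((a - b).mean(axis=0))

dx, dy = translation_adjustment([(0, 0), (0, 10), (10, 0)],
                                [(2, 3), (2, 13), (12, 3)])
print(dx, dy)  # -> -2.0 -3.0: shift pattern B by -2 along X and -3 along Y
```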
  • steps 201 to 208 of this embodiment illustrate the specific process of correcting the first optical patterns projected into the target area when the positioning system includes a plurality of projection devices.
  • the positioning system shown in the present application may also include only one projection device. In that case, in the process of performing step 207 shown in this embodiment, the target condition may be that the patterns of any two sub-areas in the first optical pattern are the same, and the positioning engine may send adjustment information to the projection device so that the projection device changes the pattern of the first optical pattern until the patterns of any two sub-areas in the first optical pattern are the same.
  • the above takes the positioning engine sequentially sending the projection instruction to a plurality of projection devices as an example; the positioning engine may instead send the projection instruction to a plurality of projection devices simultaneously. In that case, in order to distinguish the first optical patterns projected by the different projection devices, the positioning engine may control, through differently configured projection instructions, different projection devices to project different first optical patterns.
  • for example, the projection instructions are used to indicate that different projection devices project first optical patterns of different colors;
  • or the projection instructions are used to indicate that different projection devices project first optical patterns with different brightness;
  • or the projection instructions are used to indicate that different projection devices project first optical patterns with different blinking frequencies.
  • after the positioning engine has distinguished the first optical patterns projected by the different projection devices, it can obtain the attribute information of the first optical pattern projected by each projection device; for a specific description of the attribute information, refer to the above, and details are not described again.
  • the positioning engine performs the correction of the first picture according to the attribute information of each of the projection devices. Please refer to the above description, and details are not described herein.
  • because the positioning engine sends the projection instruction to a plurality of projection devices at the same time, the time consumed by the process of correcting the first optical patterns can be greatly reduced, thereby improving the efficiency of correcting the first optical patterns.
  • note that, in the process of positioning the object to be located, the embodiment shown in FIG. 2 does not need to be performed every time positioning is carried out; it needs to be executed only when the positioning environment changes, for example, when a camera or a projection device is added to the positioning system, or when a camera or a projection device is re-deployed. When the positioning environment does not change, the embodiment shown in FIG. 2 may be skipped; of course, the embodiment shown in FIG. 2 may also be executed once before positioning the object to be located, which is not limited in the present application.
  • through the above process, the first optical pattern can be corrected; during the correction, the positioning engine adaptively controls the correction of the first optical pattern, and the corrected first optical pattern enables accurate calibration of the first picture, thereby improving the accuracy of positioning the object to be located.
  • the calibration process requires no manual intervention, which greatly saves labor costs, and the positioning engine, the projection device, and the camera are simple and convenient to deploy, eliminating the need for expensive equipment, saving the cost of positioning system deployment, and thus being suitable for large-scale industrial application.
  • if the positioning environment changes, the positioning engine can re-correct the first optical pattern, thereby effectively ensuring the accuracy of subsequent positioning of the object to be located, while saving considerable manpower and reducing the deployment cost.
  • Step 501 The camera collects and sends a second picture to the positioning engine.
  • the second picture includes the first optical pattern and an object to be positioned within the first optical pattern.
  • the second picture 600 includes a first optical pattern 601 projected by a projection device to a target area, where the first optical pattern shown in this embodiment has been corrected in the embodiment shown in FIG. 2.
  • the positioning engine may control the camera to collect the second picture; specifically, the positioning engine sends an acquisition instruction to the camera, and the camera captures the target area to collect the second picture upon receiving the acquisition instruction.
  • the positioning engine may analyze the collected second picture; if the second picture includes an object to be located 602 within the first optical pattern 601, as shown in FIG. 6, the subsequent steps may be performed to carry out the positioning process of the object to be located.
  • optionally, a user may input positioning indication information to the positioning engine, and the positioning engine may receive the positioning indication information input by the user; when the positioning indication information is received, step 501 shown in this embodiment may be performed.
  • the positioning engine may directly receive a positioning indication operation input by the user; for example, the positioning engine may generate an operation interface configured to receive the positioning indication operation input by the user, or the positioning engine may receive the positioning indication operation input by the user through an external device connected to the positioning engine, where the external device may be the computer device 104 shown in FIG. 1 or a terminal device connected to the positioning engine, such as a smartphone.
  • Step 502 The positioning engine receives the second picture.
  • multiple second pictures may be collected by the at least one camera, and the positioning engine may analyze the multiple second pictures; if the positioning engine successfully detects the first optical pattern and the object to be located in multiple second pictures, it may randomly select one second picture, or may select the second picture in which the first optical pattern and the object to be located are clearest; this is not limited in this embodiment.
  • Step 503 The positioning engine determines a target sub-area where the object to be located is located.
  • the first optical pattern includes a plurality of sub-regions, and the sub-region in which the object to be located is located in the plurality of sub-regions is a target sub-region.
  • the following is an exemplary illustration of how the positioning engine determines the target sub-area:
  • the positioning engine identifies the object to be located in the second picture according to the feature of the object to be located, and determines a target sub-area in which the object to be located is located in the second picture.
  • in order to accurately identify the object to be located, the positioning engine may pre-store the features corresponding to the object to be located; when the positioning engine acquires the second picture, it may perform feature extraction on the second picture and determine whether the extracted features match the features corresponding to the object to be located, and if so, the positioning engine identifies the object to be located in the second picture.
  • if multiple second pictures are acquired, the positioning engine may perform feature extraction on the multiple second pictures one by one until the positioning engine successfully identifies the object to be located in a second picture.
  • the first optical pattern projected onto the target area shown in this embodiment includes a plurality of sub-areas; in order to achieve precise positioning of the object to be located, the relative position of the first optical pattern shown in this embodiment in the second picture is unchanged from the relative position, in the first picture, of the corrected first optical pattern shown in the embodiment of FIG. 2.
  • the first optical pattern 601 includes a plurality of sub-areas 603, and the target sub-area 604 is the sub-area, among the plurality of sub-areas 603, in which the object to be located 602 is located.
  • the above describes how the positioning engine determines the target sub-area; the following describes the specific process by which the positioning engine determines the target sub-area when the object to be located is located in multiple sub-areas:
  • the to-be-positioned object 701 is located in eight sub-areas, and the positioning engine may determine that the sub-area in which the target positioning point is located is the target sub-area.
  • the target location point is a pixel point corresponding to the object to be located 701.
  • for example, the target positioning point may be any pixel point on the object to be located 701, or it may be the geometric center pixel point of the object to be located 701.
  • the target positioning point may be any pixel located around the object 701 to be located.
  • the target positioning point is not limited, as long as the positioning of the object to be located can be achieved by using the target positioning point.
  • in this embodiment, the target positioning point is exemplified as the geometric center pixel point of the object to be located 701.
  • the target sub-area is a sub-area in which the geometric center pixel point of the object to be located 701 is located.
  • the positioning engine shown in this embodiment is configured to implement precise positioning of the object to be located, and the positioning engine may determine coordinates of the object to be located in the second picture.
  • the positioning engine can set a coordinate origin in the second picture; the setting position of the coordinate origin is not limited, as long as the coordinate origin is a pixel in the second picture. As shown in FIG. 6, the pixel in the upper left corner of the second picture is taken as the coordinate origin for exemplary description:
  • the positioning engine may determine the position of the target sub-area in the second picture, that is, the positioning engine may determine the coordinates of the target sub-area in the second picture.
  • optionally, the positioning engine may acquire a second picture sequence, where the second picture sequence includes at least one second picture captured by the camera, and the second pictures in the sequence are sorted by shooting time from earliest to latest; when the object to be located moves, the relative position of the target positioning point of the object to be located differs in each second picture.
  • The positioning engine may then determine how the position of the object to be positioned, relative to the second picture, changes across different shooting times, allowing the positioning engine to determine the movement track, movement speed, and the like of the object to be positioned.
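  • A minimal sketch of deriving the movement track and speed from the ordered second pictures, assuming the positions of the target positioning point have already been mapped to target-area coordinates; the timestamps and coordinates below are illustrative:

    import numpy as np

    # One entry per second picture, ordered by shooting time (seconds).
    times = np.array([0.0, 0.5, 1.0, 1.5])
    positions = np.array([[5.0, 3.0], [5.4, 3.1], [5.9, 3.2], [6.3, 3.4]])

    # Consecutive displacements give the movement track; dividing by the
    # time deltas gives the movement speed between shots.
    segments = np.diff(positions, axis=0)
    speeds = np.linalg.norm(segments, axis=1) / np.diff(times)
    print("speeds (m/s):", speeds)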
  • Step 504 The positioning engine determines, according to the location of the target sub-area in the second picture, a location of the object to be located in the target area.
  • A position mapping relationship exists between the second picture and the target area; once the positioning engine has determined the location of the target sub-area in the second picture, it can determine the location of the object to be located in the target area.
  • The positioning engine has stored the relative position of the target area with respect to the ground of the indoor environment, so the positioning engine may determine, according to the position of the object to be positioned in the target area, the position of the object on the ground of the indoor environment.
  • the position of the object to be positioned in the target area shown in this embodiment may be a two-dimensional position of the object to be positioned in the target area.
  • For example, the position of the object to be positioned in the target area shown in this embodiment may be the two-dimensional position of the object to be positioned on the ground of the indoor environment.
  • Specifically, the positioning engine may obtain the position of the object to be located in the target area according to the location of the target sub-area in the second picture and the spacing of the sub-areas. Taking FIG. 6 as an example, the positioning engine can obtain the spacing of each sub-area; for instance, when the projection device projects the sub-areas onto the target area, the spacing of each sub-area is 1 meter;
  • If the positioning engine determines that the target sub-area lies in the 5th row and 3rd column of the first optical pattern, the positioning engine can determine that the coordinates, in the target area, of the target sub-area in which the object to be located is situated are (5 × 1, 3 × 1); by determining the coordinates of the target sub-area in the target area, positioning of the object to be located is realized.
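  • As a minimal sketch of this coordinate computation, with the row and column indices and the 1 meter spacing taken from the example above:

    def object_position(row, col, spacing_m):
        # Row/column of the target sub-area in the first optical pattern,
        # multiplied by the projected sub-area spacing in metres.
        return (row * spacing_m, col * spacing_m)

    print(object_position(5, 3, 1.0))  # -> (5.0, 3.0)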
  • This embodiment takes square sub-areas as an example; in a specific application, the sub-areas may also have other shapes. When positioning the object to be located, it is only necessary to determine the position of the target sub-area in the second picture and the spacing of the sub-areas to obtain the position of the object to be located in the target area.
  • the first way to determine the spacing of the target sub-areas is:
  • the positioning engine may acquire at least four preset pixel points in the second picture.
  • the positioning engine knows coordinates of the preset pixel in the second picture and coordinates of the preset pixel in the target area.
  • This embodiment takes as an example the case in which the coordinates of the preset pixel points in the second picture and their coordinates in the target area are input by the user in advance. It should be made clear that this embodiment does not limit the specific manner in which the positioning engine acquires the coordinates of the preset pixel points in the second picture and in the target area.
  • The coordinate system in the second picture and the coordinate system of the target area may be coordinate systems that correspond to each other; for example, if the coordinate origin in the second picture is the upper-left corner of the second picture, then correspondingly the coordinate origin in the target area is the upper-left corner of the target area.
  • As shown in FIG. 8, each of the sub-areas is taken to be a square image by way of example;
  • Suppose the preset pixel points determined by the positioning engine in the second picture 800 are E, B, C, and D. In the second picture 800, the positioning engine may determine the coordinates (x1, y1) of the sub-area in which the preset pixel point E is located, that is, the row and the column of the first optical pattern included in the second picture 800 in which that sub-area lies; taking FIG. 8 as an example, (x1, y1) is (3, 3).
  • Likewise, the positioning engine can determine the coordinates (x2, y2), (x3, y3), and (x4, y4) of the sub-areas in which the preset pixel points B, C, and D are located in the second picture 800.
  • The positioning engine may also obtain the actual coordinates of the preset pixel points E, B, C, and D in the target area, namely (X1, Y1), (X2, Y2), (X3, Y3), and (X4, Y4).
  • the positioning engine can then obtain the preset formula: (X·W', Y·W', W')ᵀ = A·(x, y, 1)ᵀ
  • The positioning engine shown in this embodiment may substitute the acquired coordinates of each preset pixel point in the second picture and the coordinates of each preset pixel point in the target area into the preset formula, and thereby solve for the matrix A and the parameter W'.
  • the matrix A is a matrix of three rows and three columns.
  • the positioning engine may determine a preset correspondence according to the matrix A and the parameter W'.
  • the preset correspondence relationship is: X = (a11·x + a12·y + a13)/W', Y = (a21·x + a22·y + a23)/W', W' = a31·x + a32·y + a33, where the aij are the entries of the matrix A. Through the preset correspondence, a correspondence is established between the coordinates of each sub-area in the second picture and its coordinates in the target area.
  • the positioning engine may store the preset correspondences that have been acquired.
  • the positioning engine may acquire coordinates of the first pixel in the second picture and coordinates of the second pixel in the second picture.
  • the first pixel point and the second pixel point are respectively pixel points at two ends of any one of the plurality of sub-areas included in the second picture.
  • the first pixel point may be a pixel point in an upper left corner of the sub-region
  • the second pixel point may be a pixel point in an upper right corner of the same sub-region.
  • The positioning engine can substitute the coordinates of the first pixel point and the second pixel point in the second picture into the preset correspondence; the positioning engine can thereby determine the coordinates of the first pixel point in the target area and the coordinates of the second pixel point in the target area, and subtract the one from the other to determine the spacing of the sub-areas.
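  • A minimal sketch of this first way, assuming four user-supplied point correspondences and NumPy; it solves for the matrix A (with its last entry fixed to 1, a common convention that is an assumption here, not stated in the patent) and then measures one sub-area edge in target-area coordinates. All numeric values are illustrative:

    import numpy as np

    def solve_matrix_a(pixel_pts, target_pts):
        # Eight linear equations from four (x, y) -> (X, Y) correspondences,
        # following the preset formula with a33 fixed to 1.
        rows, rhs = [], []
        for (x, y), (X, Y) in zip(pixel_pts, target_pts):
            rows.append([x, y, 1, 0, 0, 0, -X * x, -X * y]); rhs.append(X)
            rows.append([0, 0, 0, x, y, 1, -Y * x, -Y * y]); rhs.append(Y)
        a = np.linalg.solve(np.array(rows, float), np.array(rhs, float))
        return np.append(a, 1.0).reshape(3, 3)

    def to_target(A, x, y):
        # W' is the third homogeneous component produced by A.
        X, Y, w = A @ np.array([x, y, 1.0])
        return X / w, Y / w

    A = solve_matrix_a([(100, 80), (100, 400), (420, 80), (420, 400)],
                       [(3.0, 3.0), (3.0, 7.0), (7.0, 3.0), (7.0, 7.0)])
    p1 = to_target(A, 100, 80)   # first pixel point (one corner of a cell)
    p2 = to_target(A, 180, 80)   # second pixel point (adjacent corner)
    spacing = float(np.hypot(p2[0] - p1[0], p2[1] - p1[1]))  # 1.0 m here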
  • the second way to determine the spacing of the target sub-areas is:
  • the positioning engine obtains the first formula: 1/h1 + 1/h2 = 1/f, d2/d1 = h2/h1, h = h1 + h2, where:
  • the parameter d1 in the first formula is the pitch of the grid on the imaging device of the projection device, wherein the grid on the imaging device of the projection device is used to project the sub-region on the target region;
  • the parameter d2 in the first formula is a pitch of the sub-area projected by the projection device into the target area
  • the parameter h1 in the first formula is the distance from the imaging device of the projection device to the lens, where the imaging device may be a liquid crystal panel, a liquid crystal projector, a digital micromirror, or a galvanometer, etc.; this embodiment takes the imaging device being a liquid crystal panel as an example:
  • the parameter h2 in the first formula is the distance from the plane of the target area to the lens
  • the parameter h in the first formula is the height of the liquid crystal panel above the plane of the target area; h can be measured when the projection device is deployed, and the user can input the measured h into the projection device;
  • the parameter f in the first formula is the focal length of the lens of the projection device.
  • the parameter d1, the parameter h1, the parameter h2, the parameter h, and the parameter f are known parameters.
  • the positioning engine can derive the second formula from the first formula: d2 = d1(h - h1)/h1
  • The positioning engine may substitute the known parameter d1, parameter h1, parameter h2, parameter h, and parameter f into the second formula, and thereby solve for the pitch of the sub-areas projected by the projection device onto the target area.
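  • A minimal sketch of this computation, under the relation reconstructed above (the magnification h2/h1 with h2 = h - h1 is an inference from the listed parameters, not a formula quoted from the source); the numbers are illustrative:

    def projected_pitch(d1, h1, h):
        # Grid pitch on the imaging device, magnified by the ratio of the
        # lens-to-floor distance (h - h1) to the imager-to-lens distance h1.
        return d1 * (h - h1) / h1

    d2 = projected_pitch(d1=0.001, h1=0.005, h=5.0)  # ~1.0 m on the floor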
  • the location of the object to be located in the target area shown in this embodiment may also be a three-dimensional position of the object to be located in the target area.
  • the location of the object to be located in the target area in the embodiment may be a two-dimensional coordinate of the object to be located in the ground of the indoor environment and a height of the object to be located.
  • the first way to determine the height of the object to be located is:
  • In this first way, the camera included in the positioning system is a camera capable of depth-information measurement, such as the camera of a depth camera (for example, a Kinect sensor) or the camera of a binocular camera.
  • the positioning engine may directly instruct the camera to collect height information, wherein the height information is used to indicate a height of the object to be positioned within the target area.
  • the positioning engine can directly acquire the height information, so that the positioning engine can acquire the height of the object to be positioned within the target area.
  • the second way to determine the height of the object to be located is:
  • As shown in FIG. 10, taking the target sub-area to be the sub-area projected by the projection device 1001 in FIG. 10 as an example, the height of the projection device 1001 above the plane of the target area is L.
  • When the object to be positioned 1003 has not entered the target sub-area 1004, the pitch of the target sub-area 1004 projected by the projection device 1001 in the target area is d2;
  • When the object to be positioned 1003 enters the target sub-area 1004, the pitch of the sub-areas projected by the projection device 1001 on the surface of the object to be positioned 1003 is d1;
  • the height L, the spacing d2, and the spacing d1 can be measured by the positioning engine.
  • the positioning engine may calculate the height h of the object to be positioned 1003 according to the third formula;
  • the positioning engine determines the third formula according to the triangle similarity principle: d1/d2 = (L - h)/L
  • the positioning engine may then derive the fourth formula from the third formula, the fourth formula being: h = (1 - d1/d2)L
  • In the specific process of determining the height of the object to be located, the positioning engine may substitute the acquired height L, spacing d2, and spacing d1 into the fourth formula, so that the positioning engine obtains the height h of the object to be positioned 1003.
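  • The fourth formula is simple enough to state as a one-line function; a minimal sketch with illustrative sample values:

    def height_from_pitch_ratio(d1, d2, L):
        # Fourth formula: h = (1 - d1/d2) * L.
        return (1.0 - d1 / d2) * L

    print(height_from_pitch_ratio(d1=0.8, d2=1.0, L=5.0))  # 1.0 m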
  • a third way to determine the height of the object to be located is:
  • As shown in FIG. 11, point A is the position of the projection device 1101;
  • point B is the position of the camera 1102
  • d is the distance between the camera 1102 and the projection device 1101.
  • the camera 1102 and the projection device 1101 are mounted on a plane parallel to the target area, and the height difference between the horizontal plane where the camera 1102 and the projection device 1101 are located and the horizontal plane where the target area is located is L.
  • After the object to be positioned 1103 enters the target area, the projection point E is reflected at the surface point C of the object to be positioned 1103; the set position D is the position of the virtual image formed after the reflected light enters the camera 1102. With the surface point C of the object at height h above the target area, the similarity of triangles ABC and DEC gives L2/L1 = (L - h)/h, where:
  • L1 is the distance between point D and point E
  • L2 is the distance between point A and point B
  • the positioning engine can thus determine the fifth formula: h = (L1/(L1 + L2))L
  • In the specific process of determining the height of the object to be located, the positioning engine may substitute the detected L1, L2, and L into the fifth formula, so that the positioning engine solves for the height h of the object to be positioned.
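  • A minimal sketch of this third way, using the fifth formula as derived above from the triangle similarity; the sample values are illustrative:

    def height_from_reflection(L1, L2, L):
        # Fifth formula: h = L1 / (L1 + L2) * L, from triangles ABC ~ DEC.
        return L1 / (L1 + L2) * L

    print(height_from_reflection(L1=0.5, L2=2.0, L=5.0))  # 1.0 m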
  • The description in this embodiment of how to locate the object to be located is an optional example; any approach suffices as long as the positioning engine can determine the spacing of each sub-area the projection device projects into the target area. For example, the camera may simultaneously capture the object to be positioned and a calibration object located in the target area; since the positioning engine has stored the size of the calibration object, the positioning engine can determine the spacing of the sub-areas from the size of the calibration object and its position relative to the target area. For example, if the positioning engine has determined that the calibration object is 1 meter long and the calibration object spans the length of 2 sub-areas, then the spacing of each sub-area is 0.5 meters.
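  • The calibration-object arithmetic, as a minimal sketch (the function name is illustrative):

    def spacing_from_calibration(object_length_m, sub_areas_spanned):
        # A 1 m calibration object spanning 2 sub-areas -> 0.5 m spacing.
        return object_length_m / sub_areas_spanned

    print(spacing_from_calibration(1.0, 2))  # 0.5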
  • The beneficial effect of the positioning method shown in this embodiment is that, when the positioning engine, the projection device, and the camera cooperate with one another, accurate positioning of the object to be positioned can be realized, and only data interaction among the positioning engine, the camera, and the projection device is required to position the object. The dependence on the environment is therefore low; the positioning engine can flexibly adjust the first optical pattern according to the environment, and can control the projection device to project first optical patterns of different sizes onto the target area, thereby meeting positioning requirements of different precision for the object to be positioned.
  • Step 1201 The camera collects and sends a second picture to a positioning engine.
  • Step 1202 The positioning engine receives the second picture.
  • For the specific execution process of step 1201 to step 1202 in this embodiment, refer to step 501 to step 502 shown in FIG. 5; the details are not repeated in this embodiment.
  • Step 1203 The positioning engine determines a target sub-area in which the object to be located is located in the first optical pattern, where the target sub-area is a sub-area in which the object to be located is located in the plurality of sub-areas of the first optical pattern.
  • the target sub-area is a sub-area where the target positioning point of the object to be located is located in multiple sub-areas of the first optical pattern.
  • This embodiment takes as an example the case in which the target sub-area is the sub-area, among the multiple sub-areas of the first optical pattern, in which the target positioning point of the object to be located is located:
  • Taking FIG. 13 as an example, the positioning engine may determine that the object to be located 1304 is located in sub-area 1303 among the plurality of sub-areas of the first optical pattern 1300; that is, sub-area 1303 is the target sub-area.
  • Step 1204 The positioning engine controls the projection device to project a second optical pattern to the target sub-area 1303.
  • the positioning engine may send a projection instruction to the projection device, where the projection instruction is used to instruct the projection device to project a second optical pattern to the target sub-area 1303, wherein the second optical pattern is also A plurality of sub-regions are included, and an area of the sub-region included in the second optical pattern is smaller than an area of the sub-region included in the first optical pattern.
  • Taking FIG. 13 as an example, the projection instruction is used to instruct the projection device to project a second optical pattern onto the target sub-area 1303, where the area of each sub-area of the second optical pattern is smaller than the area of the sub-areas included in the first optical pattern.
  • For example, suppose the positioning engine determines that the area of each sub-area 1302 included in the first optical pattern is 1 m*1 m. When the positioning engine determines that the target positioning point of the object to be located lies within the target sub-area 1303, the positioning engine may project a second optical pattern into the target sub-area 1303, the second optical pattern comprising a plurality of sub-areas each with an area of 10 cm*10 cm.
  • Of course, the positioning engine shown in this embodiment may also project an optical pattern comprising a plurality of 10 cm*10 cm sub-areas both into the target sub-area 1303 and around the target sub-area 1303.
  • Taking FIG. 14 as an example, 10 sub-areas 1401 can be projected within the target sub-area 1302, changing the positioning accuracy for the object to be located from 1 m to 10 cm and thereby improving the precision of positioning the object to be positioned.
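  • A minimal sketch of combining the coarse grid (first optical pattern) and the fine grid (second optical pattern) into one position estimate; the indices and pitches below are illustrative:

    def refine_position(coarse_cell, coarse_pitch_m, fine_cell, fine_pitch_m):
        # The coarse cell index locates the 1 m target sub-area; the fine
        # cell index locates the 10 cm sub-area of the second optical
        # pattern inside it.
        x = coarse_cell[0] * coarse_pitch_m + fine_cell[0] * fine_pitch_m
        y = coarse_cell[1] * coarse_pitch_m + fine_cell[1] * fine_pitch_m
        return (x, y)

    print(refine_position((5, 3), 1.0, (4, 7), 0.1))  # -> (5.4, 3.7)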
  • Step 1205 The positioning engine determines a target sub-area where the object to be located is located in the second optical pattern, where the target sub-area is a sub-area in which the object to be located is located in the plurality of sub-areas of the second optical pattern.
  • In a case where the positioning engine has had the second optical pattern projected onto the target sub-area 1303, the target sub-area 1303 is subdivided by the second optical pattern into a plurality of smaller sub-areas, and the positioning engine may further determine the sub-area, among the plurality of sub-areas of the second optical pattern, in which the object to be positioned is located.
  • Step 1206 The positioning engine determines, according to the location of the target sub-area in the second picture determined in step 1205, the location of the object to be located in the target area.
  • With the positioning method shown in this embodiment, the accuracy of positioning the object to be positioned can be further improved. In this embodiment, the object to be positioned can first be roughly positioned by means of the first optical pattern; once the target sub-area of the first optical pattern containing the target positioning point of the object has been determined, the positioning engine may control the projection device to project a finer second optical pattern into that target sub-area. Projecting the high-precision second optical pattern over the entire target area is thereby avoided, which achieves more accurate positioning while avoiding unnecessary complexity, and so improves the efficiency of positioning the object to be positioned.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • The division of units is merely a logical function division; in actual implementation there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • Based on such an understanding, the technical solution of the present invention essentially, or the part of it that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
  • a number of instructions are included to cause a computer device (which may be a personal computer, server, or network device, etc.) to perform all or part of the steps of the methods described in various embodiments of the present invention.
  • The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Projection Apparatus (AREA)

Abstract

A positioning system and a positioning method, wherein the positioning system includes at least one projection device, at least one camera, and a positioning engine. During positioning, the projection device projects a first optical pattern within a target area, the first optical pattern being used to calibrate a second picture; the positioning engine can determine, by means of the second picture captured by the camera, the position of an object to be located within the first optical pattern, so that the positioning engine can locate the object to be located within the target area according to its position in the first optical pattern. As can be seen, no excessive manual effort is required in the positioning process, saving substantial labor costs.

Description

一种定位系统以及定位方法
本申请要求于2017年7月31日提交中国专利局、申请号为201710640551.8、发明名称为“一种定位系统以及定位方法”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本发明涉及计算机领域,尤其涉及一种定位系统以及定位方法。
背景技术
精准的定位技术是生产组织过程信息化、大数据化以及智能化的前提,可以应用于工业、虚拟现实、体育和机器人等各个领域。例如,在工业领域,现代智慧工厂中,实时精确地定位人员、物资或车辆位置是工业4.0(第四次工业革命)重要组成。人、物、车的位置信息准确地将反映到控制中心,可以方便物资查询、盘点、车辆监控等,通过合理的调度安排,从而提高企业的管理水平。
常见的定位技术包括全球定位系统(global positioning system,缩写:GPS)定位和无线定位等。其中,GPS定位不能用于在工厂和仓库等室内环境下,并且GPS的定位精度也不能满足工业生产中的需求;无线定位可以用于室内定位,例如,无线定位中的测角定位,通过测量待定位目标与锚点之间收发信号的到达角(angle of arrival,缩写:AoA)或者发射角(angle of departure,AoD)方向性信息,得到待定位目标和锚点之间的相对方位或角度,从而确定待定位目标在一条通过锚点的已知方向的直线上,多条直线的交点即为待定位目标的位置。
锚点的部署位置和安放姿态需要精心设计和布放,部署过程复杂,在锚点的部署过程中以及后续对锚点的维护需要大量的人力资源,从而提升了定位成本。
发明内容
本发明实施例提供了一种定位系统以及定位方法,其能够降低定位成本,且有效的节省人力的投入。
本发明实施例第一方面提供了一种定位系统,包括至少一个投射设备、至少一个摄像头以及定位引擎;
所述定位引擎与所述至少一个投射设备中的任一投射设备连接,所述定位引擎还与至少一个摄像头中的任一摄像头连接;
本申请所示的定位系统应用至仓储物流,工厂车间等空旷的室内环境中,则所述待定位对象可为人员、物资、车辆等,所述摄像头以及所述投射设备可悬挂在所述室内环境中的屋顶,所述目标区域为室内环境中的地面。
所述定位引擎用于将投射指令发送给投射设备。
其中,所述投射指令用于指示所述投射设备向所述目标区域投射第一光学图案,投射到所述目标区域的所述第一光学图案包括多个子区域,具体的,所述投射指令用于指示第一光学图案所包括的各子区域的图样以及各子区域的面积等。
所述投射设备用于根据所述投射指令向目标区域投射所述第一光学图案。
所述投射设备可通过反射式成像技术(Liquid Crystal on Silicon,LCOS)、高温多 晶硅液晶穿透式投影技术、数字微镜反射式技术(Digital Micro-mirror Device,DMD)或振镜投影技术等以实现向目标区域投射所述第一光学图案。
所述至少一个摄像头用于采集并向所述定位引擎发送第二图片,所述第二图片包含所述第一光学图案以及位于所述第一光学图案内的待定位对象。
所述定位引擎用于接收所述第二图片,并根据所述第二图片中的所述待定位对象在所述第一光学图案内的位置确定所述待定位对象在所述目标区域中的位置。
可见,本方面所提供的定位系统,在定位过程中,定位引擎可通过摄像头采集的第二图片确定出待定位对象在所述第一光学图案中的位置,从而使得定位引擎可根据待定位对象在所述第一光学图案中的位置对所述待定位对象在目标区域内进行定位,而且实现了定位引擎根据应用场景以及定位需要自适应的对所述投射设备以及摄像头进行控制,以实现对所述待定位对象的精确定位,可见,在定位过程中无需过多人力的投入,节省了大量的人力成本。
本发明实施例第一方面的一种可选的实现方式中,
所述至少一个摄像头还用于采集第一图片,所述第一图片包含所述至少一个投射设备投射到所述目标区域的所述第一光学图案;
所述至少一个摄像头还用于向所述定位引擎发送所述第一图片;
所述定位引擎还用于接收所述第一图片,并根据所述第一图片标定所述目标区域。
可见,所述投射设备在目标区域内投射第一光学图案,所述第一光学图案用于对第二图片进行标定,从而使得标定好的第一光学图案能够有效的保障对所述待定位对象的精确定位,而且定位引擎、投射设备以及摄像头的部署简单,从而降低了部署成本。
本发明实施例第一方面的一种可选的实现方式中,
所述系统包括两个投射设备,分别为第一投射设备和第二投射设备;
所述第一投射设备以及所述第二投射设备用于分别向所述目标区域投射所述第一光学图案;
所述至少一个摄像头用于采集两张所述第一图片,
其中,两张所述第一图片分别为:包含所述第一投射设备投射到所述目标区域的所述第一光学图案的第一图片,和,包含所述第二投射设备投射到所述目标区域的所述第一光学图案的第一图片;
所述至少一个摄像头用于将已采集到的两张所述第一图片发送给所述定位引擎;
所述定位引擎用于接收两张所述第一图片;
所述定位引擎用于分别从两张所述第一图片中识别出所述第一光学图案所包括的多个子区域;
所述定位引擎用于将调整信息发送给第一投射设备和/或所述第二投射设备;
所述第一投射设备和/或所述第二投射设备根据所述调整信息做调整,以使得两张所述第一图片的所述第一光学图案的各子区域对齐,所述调整信息用于控制两张所述第一图片的所述第一光学图案的各子区域对齐。
本方面所示的定位系统能够自适应的对投射到所述目标区域上的所述第一光学图案进 行调整,从而使得所述定位引擎能够采集到各子区域对齐的所述第一光学图案,从而有效的保障了通过所述第一光学图案对所述目标区域进行标定的准确性,从而有效的保障对待定位对象的精确定位。
本发明实施例第一方面的一种可选的实现方式中,
所述定位引擎用于确定目标子区域,所述目标子区域为所述待定位对象在所述第一光学图案的多个子区域中所在的子区域;
所述定位引擎还用于向所述至少一个投射设备发送控制信令,所述控制信令用于控制所述至少一个投射设备向所述目标子区域投射第二光学图案;
其中,所述第二光学图案包括多个子区域,且所述第二光学图案所包括的子区域的面积小于所述第一光学图案所包括的子区域的面积;
所述至少一个投射设备用于接收所述控制信令,并用于根据所述控制指令向所述目标子区域投射所述第二光学图案。
采用本方面所示的定位系统,能够进一步的提供对待定位对象进行定位的定位精度,而且本实施例可通过第一光学图案对所述待定位对象进行粗略的定位,在确定出所述待定位对象的目标定位点位于所述第一光学图案中的情况下,所述定位引擎可控制投射设备在所述第一光学图案中投射更为精细的第二光学图案,避免对整个目标区域投射精度较高的第二光学图案,既能实现更精确的定位,又能避免了对待定位对象进行定位的复杂度,从而提升了对待定位对象进行定位的效率。
本发明实施例第一方面的一种可选的实现方式中,
所述第二图片中还包括所述第二光学图案以及位于所述第二光学图案内的待定位对象;
所述定位引擎还用于根据所述第二图片中的所述待定位对象在所述第二光学图案内的位置确定所述待定位对象在所述目标区域中的位置。
采用本方面所示的定位系统,能够根据所述待定位对象在所述第二光学图案中的位置,实现对所述待定位对象的精确定位,而且降低了对待定位对象进行定位的复杂度,从而提升了对待定位对象进行定位的效率。
本发明实施例第一方面的一种可选的实现方式中,
所述定位引擎还用于,获取目标子区域的间距;
所述目标子区域为目标光学图案所包括的多个子区域中所述待定位对象所位于的子区域;
若基于所述第一光学图案对所述待定位对象进行定位,则本实现方式中所示的所述目标子区域为所述第一光学图案中所包括的子区域;
若基于所述第二光学图案对所述待定位对象进行精确的定位,则本实现方式中所示的所述目标子区域为所述第二光学图案中所包括的子区域;
所述定位引擎还用根据所述目标子区域在所述第二图片中的位置以及所述目标子区域的间距确定所述待定位对象在所述目标区域中的位置。
本发明实施例第一方面的一种可选的实现方式中,
所述至少一个摄像头还用于采集高度信息,所述高度信息用于指示所述待定位对象在所述目标区域内的高度;
所述定位引擎还用于获取所述至少一个摄像头采集的所述高度信息。
本方面所示的定位系统还能够实现对所述待定位对象的三维定位,从而实现了对待定位对象的更为全面的定位。
本发明实施例第一方面的一种可选的实现方式中,
所述定位引擎还用于,获取目标子区域的间距d2;
所述目标子区域为目标光学图案所包括的多个子区域中所述待定位对象所位于的子区域;
若基于所述第一光学图案对所述待定位对象进行定位,则本实现方式中所示的所述目标光学图案为所述第一光学图案;
若基于所述第二光学图案对所述待定位对象进行精确的定位,则本实现方式中所示的所述目标光学图案为所述第二光学图案;
在所述待定位对象位于所述目标光学图案内的情况下,所述定位引擎用于获取所述至少一个投射设备在所述待定位对象的表面所投射的子区域的间距d1;
所述定位引擎用于根据公式h=(1-d 1/d 2)L获取所述待定位对象在所述目标区域内的高度h;
其中,L为所述至少一个投射设备距离所述目标区域的高度。
本方面所示的定位系统还能够实现对所述待定位对象的三维定位,从而实现了对待定位对象的更为全面的定位。
本发明实施例第一方面的一种可选的实现方式中,
所述至少一个投射设备还用于投射目标光线;
所述目标光线为所述至少一个投射设备所投射的多个光线中的一个光线;
所述定位引擎还用于,获取所述目标光线在所述目标区域上所投射的投射点的位置;
在所述待定位对象位于目标光学图案内的情况下,用于获取设定位置;
其中,若基于所述第一光学图案对所述待定位对象进行定位,则本实现方式中所示的所述目标光学图案为所述第一光学图案;
若基于所述第二光学图案对所述待定位对象进行精确的定位,则本实现方式中所示的所述目标光学图案为所述第二光学图案;
所述设定位置为反射光线进入所述至少一个摄像头中的一个摄像头后在所述目标光学图案中所形成的虚像所在的位置;
所述反射光线为所述目标光线经由所述待定位对象的表面反射后所生成的光线;
所述定位引擎还用于根据公式
h=(L 1/(L 1+L 2))L
获取所述待定位对象在所述目标区域内的高度h;
其中,L1为所述投射点的位置和所述设定位置之间的间距,所述L2为位于相同水平 面的所述至少一个投射设备和所述摄像头在水平方向上的间距,所述L为所述至少一个投射设备距离所述目标区域的高度。
本方面所示的定位系统还能够实现对所述待定位对象的三维定位,从而实现了对待定位对象的更为全面的定位。
本发明实施例第二方面提供了一种定位方法,包括:
步骤A、定位引擎获取至少一个摄像头采集的第二图片。
其中,所述第二图片包含至少一个投射设备向目标区域投射的第一光学图案以及位于所述第一光学图案内的待定位对象,所述第一光学图案包括多个子区域。
步骤B、所述定位引擎根据所述第二图片中的所述待定位对象在所述第一光学图案内的位置确定所述待定位对象在所述目标区域中的位置。
可见,本方面所提供的定位方法,在定位过程中,定位引擎可通过摄像头采集的第二图片确定出待定位对象在所述第一光学图案中的位置,从而使得定位引擎可根据待定位对象在所述第一光学图案中的位置对所述待定位对象在目标区域内进行定位,而且实现了定位引擎根据应用场景以及定位需要自适应的对所述投射设备以及摄像头进行控制,以实现对所述待定位对象的精确定位,可见,在定位过程中无需过多人力的投入,节省了大量的人力成本。
本发明实施例第一方面的一种可选的实现方式中,所述步骤A之前,还需执行如下步骤:
步骤A01、获取所述至少一个摄像头采集的第一图片;
其中,所述第一图片包含所述至少一个投射设备投射到所述目标区域的所述第一光学图案;
步骤A02、根据所述第一图片标定所述目标区域。
可见,所述投射设备在目标区域内投射第一光学图案,所述第一光学图案用于对第二图片进行标定,从而使得标定好的第一光学图案能够有效的保障对所述待定位对象的精确定位,而且定位引擎、投射设备以及摄像头的部署简单,从而降低了部署成本,
本发明实施例第二方面的一种可选的实现方式中,所述步骤A02具体包括如下步骤:
接收两张所述第一图片,两张所述第一图片由所述至少一个摄像头采集,两张所述第一图片分别为:包含第一投射设备投射到所述目标区域的所述第一光学图案的第一图片,和,包含第二投射设备投射到所述目标区域的所述第一光学图案的第一图片;
分别从两张所述第一图片中识别出所述第一光学图案所包括的多个子区域;
发送调整信息给所述第一投射设备和/或所述第二投射设备,以使所述第一投射设备和/或所述第二投射设备根据所述调整信息做调整使得两张所述第一图片的所述第一光学图案的各子区域对齐,所述调整信息用于控制两张所述第一图片的所述第一光学图案的各子区域对齐。
本方面所示的定位方法能够自适应的对投射到所述目标区域上的所述第一光学图案进行调整,从而使得所述定位引擎能够采集到各子区域对齐的所述第一光学图案,从而有效的保障了通过所述第一光学图案对所述目标区域进行标定的准确性,从而有效的保障对待 定位对象的精确定位。
本发明实施例第二方面的一种可选的实现方式中,执行本方面所示的步骤A之前,还需执行如下步骤:
步骤A11、确定目标子区域;
所述目标子区域为所述待定位对象在所述第一光学图案的多个子区域中所在的子区域;
步骤A12、向所述至少一个投射设备发送控制信令;
所述控制信令用于控制所述至少一个投射设备向所述目标子区域投射第二光学图案,所述第二光学图案包括多个子区域,且所述第二光学图案所包括的子区域的面积小于所述第一光学图案所包括的子区域的面积。
采用本方面所示的定位方法,能够进一步的提供对待定位对象进行定位的定位精度,而且本实施例可通过第一光学图案对所述待定位对象进行粗略的定位,在确定出所述待定位对象的目标定位点位于所述第一光学图案中的情况下,所述定位引擎可控制投射设备在所述第一光学图案中投射更为精细的第二光学图案,避免对整个目标区域投射精度较高的第二光学图案,既能实现更精确的定位,又能避免了对待定位对象进行定位的复杂度,从而提升了对待定位对象进行定位的效率。
本发明实施例第二方面的一种可选的实现方式中,所述步骤B具体包括:
根据所述第二图片中的所述待定位对象在所述第二光学图案内的位置确定所述待定位对象在所述目标区域中的位置。
采用本方面所示的定位方法,能够根据所述待定位对象在所述第二光学图案中的位置,实现对所述待定位对象的精确定位,而且降低了对待定位对象进行定位的复杂度,从而提升了对待定位对象进行定位的效率。
本发明实施例第二方面的一种可选的实现方式中,所述步骤B具体包括:
步骤B11、获取目标子区域的间距;
所述目标子区域为目标光学图案所包括的多个子区域中所述待定位对象所位于的子区域;
若基于所述第一光学图案对所述待定位对象进行定位,则本实现方式中所示的所述目标子区域为所述第一光学图案中所包括的子区域;
若基于所述第二光学图案对所述待定位对象进行精确的定位,则本实现方式中所示的所述目标子区域为所述第二光学图案中所包括的子区域;
步骤B12、获取目标子区域的间距;
步骤B13、根据所述目标子区域在所述第二图片中的位置以及所述目标子区域的间距确定所述待定位对象在所述目标区域中的位置。
本发明实施例第二方面的一种可选的实现方式中,所述方法还包括:
所述定位引擎获取所述至少一个摄像头采集的高度信息,所述高度信息用于指示所述待定位对象在所述目标区域内的高度。
本方面所示的定位方法还能够实现对所述待定位对象的三维定位,从而实现了对待定 位对象的更为全面的定位。
本发明实施例第二方面的一种可选的实现方式中,所述步骤B具体还包括如下步骤:
步骤B21、所述定位引擎获取目标子区域的间距d2;
所述目标子区域为目标光学图案所包括的多个子区域中所述待定位对象所位于的子区域;
若基于所述第一光学图案对所述待定位对象进行定位,则本实现方式中所示的所述目标光学图案为所述第一光学图案;
若基于所述第二光学图案对所述待定位对象进行精确的定位,则本实现方式中所示的所述目标光学图案为所述第二光学图案;
步骤B22、在所述待定位对象位于所述目标光学图案内的情况下,所述定位引擎获取所述至少一个投射设备在所述待定位对象的表面所投射的子区域的间距d1;
步骤B23、根据公式h=(1-d 1/d 2)L获取所述待定位对象在所述目标区域内的高度h;
其中,L为所述至少一个投射设备距离所述目标区域的高度。
本方面所示的定位系统还能够实现对所述待定位对象的三维定位,从而实现了对待定位对象的更为全面的定位。
本发明实施例第二方面的一种可选的实现方式中,所述步骤B具体包括:
步骤B31、所述定位引擎获取所述至少一个投射设备所投射的目标光线在所述目标区域上所投射的投射点的位置;
所述目标光线为所述至少一个投射设备所投射的多个光线中的一个光线;
步骤B32、在所述待定位对象位于目标光学图案内的情况下,获取设定位置;
其中,若基于所述第一光学图案对所述待定位对象进行定位,则本实现方式中所示的所述目标光学图案为所述第一光学图案;
若基于所述第二光学图案对所述待定位对象进行精确的定位,则本实现方式中所示的所述目标光学图案为所述第二光学图案;
所述设定位置为反射光线进入所述至少一个摄像头中的一个摄像头后在所述目标光学图案中所形成的虚像所在的位置,所述反射光线为所述目标光线经由所述待定位对象的表面反射后所生成的光线;
步骤B33、所述定位引擎根据公式
h=(L 1/(L 1+L 2))L
获取所述待定位对象在所述目标区域内的高度h;
其中,L1为所述投射点的位置和所述设定位置之间的间距,所述L2为位于相同水平面的所述至少一个投射设备和所述摄像头在水平方向上的间距,所述L为所述至少一个投射设备距离所述目标区域的高度。
从以上技术方案可以看出,本发明实施例具有以下优点:
本申请所示的定位系统以及定位方法,在定位过程中,投射设备在目标区域内投射第一光学图案,所述第一光学图案用于对第二图片进行标定,定位引擎可通过摄像头采集的第二图片确定出待定位对象在所述第一光学图案中的位置,从而使得定位引擎可根据待定 位对象在所述第一光学图案中的位置对所述待定位对象在目标区域内进行定位,而且定位引擎、投射设备以及摄像头的部署简单,从而降低了部署成本,而且实现了定位引擎根据应用场景以及定位需要自适应的对所述投射设备以及摄像头进行控制,以实现对所述待定位对象的定位,可见,在定位过程中无需过多人力的投入,节省了大量的人力成本。
附图说明
为了更清楚地说明本发明实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域技术人员来讲,还可以根据这些附图获得其他的附图。
图1为本申请所示的定位系统的一种实施例结构示意图;
图2为本申请所示的定位方法的一种实施例步骤流程图;
图3为本申请所示的目标区域包括的第一光学图案的一种实施例结构示意图;
图4为本申请所示的目标区域包括的第一光学图案的另一种实施例结构示意图;
图5为本申请所示的定位方法的另一种实施例步骤流程图;
图6为本申请所示的目标区域包括的第一光学图案的一种实施例结构示意图;
图7为本申请所示的目标区域包括的第一光学图案的另一种实施例结构示意图;
图8为本申请所示的目标区域包括的第一光学图案的另一种实施例结构示意图;
图9为本申请所示的获取目标子区域在目标区域中的间距的一种实施例示意图;
图10为本申请所示的获取目标子区域在目标区域中的间距的另一种实施例示意图;
图11为本申请所示的获取目标子区域在目标区域中的间距的另一种实施例示意图;
图12为本申请所示的定位方法的另一种实施例步骤流程图;
图13为本申请所示的目标区域包括的第二光学图案的一种实施例结构示意图;
图14为本申请所示的目标区域包括的第二光学图案的另一种实施例结构示意图。
具体实施方式
本申请首先提供了一种能够实现对待定位对象进行定位的定位系统,以下结合图1所示对所述定位系统的具体结构进行说明:
所述定位系统包括定位引擎101、至少一个摄像头103、至少一个投射设备102。
其中,本实施例所示的至少一个摄像头103通过有线或无线的方式与所述定位引擎101连接,所述至少一个投射设备102通过有线或无线的方式与所述定位引擎101连接。
可选的,本申请所示的所述定位引擎101在物理实体上可集成设置在所述摄像头103中,则所述摄像头103和所述投射设备102可通过有线或无线的方式连接。
可选的,本申请所示的所述定位引擎101在物理实体上可集成设置在所述投射设备102中,则所述投射设备102和所述摄像头103可通过有线或无线的方式连接。
本申请所示的有线的方式可为以太网、电缆Cable、双绞线或光纤等有线方式。
无线的方式可为无线保真(WIreless-Fidelity,WI-FI),或蓝牙Bluetooth,物联网(Internet of Things,IoT)等无线的方式。
所述至少一个投射设备102用于在所述定位引擎101的控制下向目标区域进行光学图 案100的投射;
所述至少一个摄像头103用于在所述定位引擎101的控制下对目标区域进行拍照,从而采集到包括有光学图案100的图片。
所述定位引擎101能够对图片进行分析,从而在目标区域内实现对所述待定位对象的定位。
若本申请所示的定位系统应用至仓储物流,工厂车间等空旷的室内环境中,则所述待定位对象可为人员、物资、车辆等,所述摄像头103以及所述投射设备102可悬挂在所述室内环境中的屋顶,所述目标区域为室内环境中的地面。
需明确的是,本申请对所述定位系统的应用领域为示例性说明,不做限定,例如,还可将所述定位系统应用至虚拟现实等领域。
所述定位系统还可包括计算机设备104,所述定位引擎101可将相关信息发送给所述计算机设备104,所述相关信息可为定位信息,所述定位信息为所述定位引擎101所确定的所述待定位对象在所述目标区域内的位置信息,从而使得所述计算机设备104根据所述位置信息能够便于对待定位对象进行查询、盘点、监控,进而对所述待定位对象进行合理的调度安排。
所述相关信息还可为部署信息,所述部署信息用于指示所述摄像头103以及所述投射设备102在工厂仓库中部署的位置信息等,从而便于计算机设备104对所述摄像头103以及所述投射设备102进行统计以及管理等。
为了实现对待定位对象进行精准的定位,则首先需要对投射设备投射到目标区域的第一光学图案进行校正,从而使得校正后的所述第一光学图案能够对摄像头采集到的第一图片进行标定,基于图1所示的定位系统,以下结合图2所示的实施例说明如何对投射设备投射到目标区域内的第一光学图案进行校正的具体过程。
步骤201、所述定位引擎将投射指令发送给投射设备。
本步骤所示的投射设备为所述定位系统所包括的多个投射设备中的任一投射设备。
本实施例所示的所述投射指令用于指示所述投射设备向所述目标区域投射第一光学图案,投射到所述目标区域的所述第一光学图案包括多个子区域。
以下对所述投射指令进行具体说明:
本实施例所示的所述定位引擎可预先确定第一光学图案所包括的各子区域的图样,本实施例所示的各子区域的图样可为菱形、圆形、扇形、六边形或方形。
本实施例以各子区域的图样为如图1所示的方形为例进行示例性说明,则所述定位引擎所生成的所述投射指令用于指示所述投射设备向所述目标区域所投射的子区域的图样为方形。
本实施例对所述投射指令的说明为可选的示例,不做限定,例如,所述投射指令还可用于指示所述投射设备所投射的所述子区域的面积等。
本实施例中,若所述投射设备所指示的所述子区域的面积越小,则对待定位对象的定位越精确,可见,所述定位引擎可根据应用场景的不同确定不同的子区域的面积,例如, 若需要对待定位对象进行高精度的定位,则所述投射指令所指示的所述子区域的面积会比较小,若需要对待定位对象进行粗略的定位,则所述投射指令所指示的所述子区域的面积会比较大。
步骤202、所述投射设备根据所述投射指令,向所述目标区域投射第一光学图案。
本实施例中,在所述投射设备接收到所述投射指令的情况下,所述投射设备即可向所述目标区域投射第一光学图案,且所述投射设备所投射的所述第一光学图案的图样与所述投射指令所指示的图样一致。
所述第一光学图案包括有多个子区域,多个子区域可实现对目标区域的标定,从而使得所述定位引擎能够在已标定的所述目标区域上,实现对待定位对象的定位。
具体的,本实施例所示的所述投射设备可通过反射式成像技术(Liquid Crystal on Silicon,LCOS)、高温多晶硅液晶穿透式投影技术、数字微镜反射式技术(Digital Micro-mirror Device,DMD)或振镜投影技术等以实现向目标区域投射所述第一光学图案。
可选的,本实施例所示的所述投射设备所投射的光可为可见光。
其中,在将本实施例所示的投射设备部署在室内环境中的屋顶上时,则投射有可见光的所述投射设备可替代室内环境中的照明灯,在提供室内照片的同时,还能够实现本申请所示的对待定位对象的定位流程,从而节省了照明灯的部署,节省了成本。
可选的,本实施例所示的所述投射设备所投射的光还可为不可见光,如红外光或紫外光等,若所述投射设备所投射的光为红外光,则本实施例所示的摄像头需要具有采集红外图像的能力,以使具有采集红外图像的能力的摄像头能够对所述投射设备通过红外光所投射的第一光学图案进行拍摄。
可选的,本实施例所示的所述投射设备可向所述目标区域投射亮度小于预设阈值的暗线,而所述目标区域被大面积光照以呈现光亮的效果,则投射设备所投射的暗线可在所述目标区域上形成所述第一光学图案。
步骤203、摄像头采集并发送第一图片。
本实施例中,所述定位引擎可控制摄像头对所述目标区域进行拍摄,摄像头即可采集到第一图片,其中,所述第一图片包含所述至少一个投射设备投射到所述目标区域的第一光学图案。
本实施例所示的所述定位系统所包括的一个或多个摄像头可同时对所述目标区域进行拍摄以获取所述第一图片。
步骤204、所述定位引擎接收所述第一图片。
可选的,本实施例所示的所述定位引擎可接收到多张第一图片,多张第一图片可由所述定位系统所包括的任一摄像头对目标区域进行连续拍摄以采集到的,或多张第一图片可由所述定位系统所包括的多个摄像头对目标区域分别拍摄以采集到的。
本实施例中,所述第一图片可能由于噪声点等其他信息的干扰,则定位引擎在对第一图片进行分析时无法检测到摄像头采集的所述第一光学图案,则所述定位引擎可在获取到多张第一图片中逐一进行检测,直至成功检测到所述第一图片中所包括的所述第一光学图 案。
可选的,若本实施例所示的所述定位引擎成功在多张第一图片中检测到所述第一光学图案,则所述定位引擎即可在成功检测到所述第一光学图案的多张第一图片中,随机挑选出一张包含有所述第一光学图案的第一图片,或,所述定位引擎可挑选出第一光学图案最清晰的所述第一图片,具体在本实施例中不做限定。
步骤205、所述定位引擎关闭所述投射设备。
本实施例在所述摄像头对所述目标区域进行拍摄的过程中,若所述待定位对象出现在所述第一光学图案内,则所述摄像头所拍摄到的第一图片中就会包括位于所述第一光学图案内的待定位对象,此时的第一图片即为本申请中所述的第二图片。
在所述定位引擎获取到包含有所述第一光学图案的所述第一图片的情况下,所述定位引擎即可识别出所述第一光学图案的图样、位置和方向等属性信息,所述定位引擎即可将所述属性信息进行存储。
本实施例中,在所述定位引擎成功存储有所述第一光学图案的属性信息的情况下,所述定位引擎即可将关闭指令发送给所述投射设备,所述关闭指令用于指示所述投射设备进行关闭,所述投射设备在接收到所述关闭指令的情况下,即可进行关闭。
本实施例中,以所述定位系统包括有多个投射设备为例进行示例性说明,则在执行完步骤205的情况下,即可返回执行步骤201,直至所述定位引擎对所述定位系统所包括的多个投射设备通过步骤201至步骤205轮询一遍,从而使得所述定位引擎能够存储分别与各投射设备对应的所述属性信息。
步骤206、所述定位引擎判断所述第一光学图案是否满足目标条件,若否,则执行步骤207。
本实施例所示的所述目标条件为各所述投射设备向所述目标区域所投射的第一光学图案中任意相邻的两个子区域相互对齐,且任意相邻的两个子区域的样式相同。
具体的,所述样式为各子区域的图样以及各子区域的面积。
具体的,所述定位引擎可根据各投射设备的属性信息判断所述第一光学图案是否满足所述目标条件,若否,则所述定位引擎可对各投射设备所投射的第一光学图案进行分析,若不同的投射设备所投射的第一光学图案的图样不一致,则所述定位引擎可通过调整信息控制投射设备所投射的第一光学图案的图样,直至所有所述投射设备所投射的第一光学图案的图样一致。
其中,所述定位引擎可对所述第一图片进行二值化,即所述定位引擎将第一图片上的像素点的亮度值进行重新设定,从而使得第一光学图案的亮度值和第一图片的背景的亮度值不同,从而使得所述定位引擎能够在所述第一图片中将所述第一光学图案进行提取,所述定位引擎即可判断出不同的投射设备所投射的第一光学图案的图样是否一致。
本实施例对所述定位引擎在第一图片中提取所述第一光学图案的过程的说明为可选的示例,不做限定,在具体应用中,只要所述定位引擎能够成功提取出所述第一光学图案即可。
具体的,若投射设备所投射的相邻的两个第一光学图案没有对齐,则所述定位引擎可 通过调整信息控制所述投射设备进行平移,旋转,缩放,放大等,直至相邻的两个第一光学图案对齐在一起。
结合图1和图3所示进行示例性的说明,所述定位引擎101根据第一投射设备L1以及第二投射设备L2所发送的属性信息确定出第一光学图案301以及第一光学图案302,其中,所述第一投射设备可为所述定位系统所包括的任一投射设备,所述第一光学图案301为所述定位系统所包括的第一投射设备L1所投射的光学图案,所述第一光学图案302为所述定位系统所包括的第二投射设备L2所投射的光学图案,且所述第一光学图案301与所述第一光学图案302在所述目标区域内相邻。
如图3所示,所述定位引擎即可根据所述属性信息确定出所述第一光学图案301和所述第一光学图案302所包括的任意两个子区域的样式相同,但是所述第一光学图案301和所述第一光学图案302没有对齐在一起,则可通过执行步骤208实现第一光学图案301和所述第一光学图案302的相互对齐。
若所述定位引擎根据所述属性信息确定出所述第一光学图案301和所述第一光学图案302所包括的任意两个子区域的样式相同,且所述第一光学图案301和所述第一定光学图案302对齐在一起,则说明无需对投射设备所投射的第一光学图案进行校正。
步骤207、所述定位引擎将调整信息发送给投射设备。
本实施例所示的所述调整信息包括:指示所述投射设备选择第一光学图案的样式,对投射的第一光学图案进行平移、旋转、缩放等。
具体的,如图3所示,本实施例所示的所述调整信息可用于指示所述第一投射设备沿X轴平移的距离,以及沿Y轴平移的距离。
步骤208、所述投射设备根据调整信息对所述第一光学图案进行调整。
如图4所示,本实施例所示的第一投射设备根据所述调整信息,对所述第一投射设备所投射的第一光学图案沿X轴以及Y轴进行平移,以使平移后的所述第一投射设备L1所投的第一光学图案301和所述第二投射设备L2所投所述第一光学图案302对齐在一起。
本实施例所示的步骤206至步骤208可重复执行多次,从而使得各投射设备投射到所述目标区域内的第一光学图案能够更准确的对齐在一起,且保证位于所述目标区域内的多个第一光学图案中任意两个子区域的样式相同。
本实施例步骤201至步骤208所示的过程说明了在所述定位系统包括有多个投射设备时,是如何对投射到所述目标区域内的第一光学图案进行校正的具体过程。
在具体应用中,本申请所示的所述定位系统也可仅包括一个投射设备,则在所述定位系统包括有一个投射设备的情况下,在执行本实施例所示的步骤207的过程中,所述目标条件可为所述第一光学图案中任意两个子区域的样式相同即可。
若所述定位引擎经由步骤206判断出所述第一光学图案中任意两个子区域的样式不同,则所述定位引擎可将调整信息发送给所述投射设备,以使投射设备能够更改第一光学图案的样式,直至所述第一光学图案中任意两个子区域的样式相同为止。
本实施例所示以所述定位引擎向多个所述投射设备,依次发送所述投射指令为例进行示例性说明:
在本申请中,所述定位引擎也可向多个所述投射设备,同时发送所述投射指令,所述定位引擎为了区分各所述投射设备所投射的第一光学图案,则所述定位引擎通过所配置的所述投射指令控制不同的所述投射设备投射不同的所述第一光学图案,例如,所述投射指令用于指示不同的投射设备投射颜色不同的第一光学图案,又如,所述投射指令用于指示不同的投射设备投射亮度不同的第一光学图案,又如,所述投射指令用于指示不同的投射设备投射闪烁频率不同的第一光学图案。
在定位引擎区分出不同的投射设备所投射的第一光学图案的情况下,所述定位引擎即可获取到不同的投射设备所投射的第一光学图案的属性信息,属性信息的具体说明请详见上述所示,具体不做赘述,所述定位引擎根据各所述投射设备的属性信息如何进行第一图片的校正的,请详见上述所示,具体不做赘述。
在所述定位引擎向多个所述投射设备,同时发送所述投射指令的情况下,可大大减少对第一光学图案进行校正的过程所消耗的时间,从而提升了对第一光学图案进行校正的效率。
本申请图2所示的实施例在定位引擎对待定位对象进行定位的过程中,无需每次对待定位对象进行定位时,均执行一次图2所示的实施例,只有在定位环境变化时,例如定位系统所包括摄像头或投射设备歪了,或者对摄像头或投射设备进行了重新的部署等情况,则需要执行图2所示的实施例,在定位环境没有变化时,可不执行图2所示的实施例,当然,若为了有效的保障对待定位对象的精确定位,也可每次对待定位对象进行定位时,均执行一次图2所示的实施例,具体在本申请中不做限定。
采用本实施例所示的方法,能够实现对第一光学图案进行校正,且在校正的过程中,可实现定位引擎自适应的对摄像头和投射设备进行控制以进行第一光学图案的校正,校正后的所述第一光学图案能够对第一图片进行精确的标定,从而提升了对待定位对象定位的精确性。校正过程不需要人工的干预,极大的节约了人力成本,而且定位引擎、投射设备以及摄像头的部署简单方便,无需部署价格昂贵的设备,节省了定位系统部署的成本,从而适合大规模工业应用。而且即便摄像头以及投射设备的位置发生了改变,所述定位引擎也能够重新进行校正,从而有效的保障了后续对待定位对象进行定位的精度,而且大量的节约了人力,降低了部署成本。
基于图2所示说明了如何对投射设备投射到目标区域内的第一光学图案进行校正的具体过程,以下结合图5所示的实施例说明了如何对待定位对象进行定位的具体过程:
步骤501、所述摄像头采集并向定位引擎发送第二图片。
所述第二图片包含所述第一光学图案以及位于所述第一光学图案内的待定位对象。
如图6所示,所述第二图片600包含投射设备投射到目标区域的第一光学图案601,其中,本实施例所示的第一光学图案为图2所示的实施例中经过校正的第一光学图案。
可选的,在需要对所述待定位对象进行定位时,所述定位引擎可控制所述摄像头采集所述第二图片,具体可为,所述定位引擎向所述摄像头发送采集指令,从而使得所述摄像头在接收到所述采集指令的情况下,对所述目标区域进行拍摄以采集所述第二图片。
所述定位引擎可对采集到的第二图片进行分析,若所述第二图片中包括如图6所示的位于所述第一光学图案601内的待定位对象602,则可执行后续步骤以进行对待定位对象的定位过程。
可选的,用户在确定需要对待定位对象进行定位时,用户即可向所述定位引擎输入定位指示信息,则所述定位引擎即可接收用户输入的所述定位指示信息,则所述定位引擎在接收到所述定位指示信息的情况下,即可执行本实施例所示的步骤501。
其中,所述定位引擎可直接接收用户输入的所述定位指示操作,如所述定位引擎可生成操作界面,所述操作界面用于接收用户输入的所述定位指示操作,或所述定位引擎可通过与所述定位引擎连接的外部设备接收用户输入的所述定位指示操作,所述外部设备可为图1所示的计算机设备104,或与所述定位引擎连接的终端设备,如智能手机等。
步骤502、所述定位引擎接收所述第二图片。
可选的,本实施例所示可由至少一个摄像头可采集到多张第二图片,所述定位引擎可对多张第二图片进行分析,从而使得所述定位引擎在成功检测到包括所述第一光学图案的多张第二图片中,随机挑选出一张第二图片,或,所述定位引擎可挑选出第一光学图案以及所述待定位对象最清晰的第二图片,具体在本实施例中不做限定。
步骤503、所述定位引擎确定待定位对象所在的目标子区域。
所述第一光学图案包括多个子区域,所述待定位对象在所述多个子区域中所位于的子区域为目标子区域。
以下对所述定位引擎如何确定所述目标子区域的过程进行示例性说明:
所述定位引擎根据待定位对象的特征在所述第二图片中识别出所述待定位对象,并确定出待定位对象在第二图片中所在的目标子区域。
具体的,所述定位引擎为实现对待定位对象在目标子区域内的精确定位,则所述定位引擎可预先存储有与所述待定位对象对应的特征,所述定位引擎在获取到所述第二图片的情况下,所述定位引擎即可对所述第二图片进行特征提取,从而确定出已提取出的特征是否与所述待定位对象对应的特征相匹配,若是,则所述定位引擎即可在所述第二图片中识别出所述待定位对象。
可选的,在所述定位引擎获取到至少一个摄像头所采集到的多张第二图片的情况下,则所述定位引擎可在多张所述第二图片中逐一进行特征提取,直至所述定位引擎在所述第二图片中成功识别出所述待定位对象。
本实施例所示的投射到所述目标区域上的所述第一光学图案包括多个子区域,其中,为实现对待定位对象的精确定位,则本实施例所示的第一光学图案在所述第二图片中的相对位置,与图2所示的实施例所示的经过校正的第一光学图案在所述第一图片中的相对位置不变。
如图6所示,在所述第二图片中,所述第一光学图案601包括多个子区域的603,所述目标子区域604为所述待定位对象602在所述多个子区域603中所在的子区域。
如图6所示说明了在所述待定位对象位于一个所述子区域内的情况下,所述定位引擎是如何确定所述目标子区域的,以下说明在待定位对象位于多个子区域内的情况下,所述 定位引擎是如何确定所述目标子区域的具体过程:
如图7所示,在所述第二图片700中,所述待定位对象701位于8个子区域内,则所述定位引擎可确定目标定位点所位于的子区域为所述目标子区域。
其中,所述目标定位点为与所述待定位对象701对应的像素点。
例如,所述目标定位点可为所述待定位对象701上的任一像素点,如所述目标定位点可为所述待定位对象701的几何中心像素点。
又如,所述目标定位点可为位于所述待定位对象701周围的任一像素点。
本实施例对所述目标定位点不做限定,只要能够通过所述目标定位点实现对待定位对象的定位即可,本实施例以所述目标定位点为所述待定位对象701的几何中心像素点为例进行示例性说明,则所述目标子区域为所述待定位对象701的几何中心像素点所位于的子区域。
具体的,本实施例所示的定位引擎为实现对所述待定位对象的精确定位,则所述定位引擎可确定出所述待定位对象在所述第二图片中的坐标。
其中,所述定位引擎可在所述第二图片中设定坐标原点。
本实施例对所述坐标原点的设定位置不做限定,只要所述坐标原点为所述第二图片中的任一像素点即可,以图6所示为例,所述定位引擎可以所述第二图片中左上角的像素点为坐标原点为例进行示例性说明:
通过本实施所示的步骤503,所述定位引擎即可确定出所述目标子区域在所述第二图片中的位置,即所述定位引擎可确定所述目标子区域位于所述第二图片中的第几行和第几列。
本实施例中,所述定位引擎可获取第二图片序列,所述第二图片序列中包括至少一个所述摄像头所拍摄的多张第二图片,且位于所述第二图片序列中的各第二图片以拍摄时间由前到后的顺序进行排序,在所述待定位对象进行运动的情况下,则所述待定位对象的目标定位点在各所述第二图片中的相对位置是不同的,则所述定位引擎即可确定出所述待定位对象在不同的拍摄时间,相对于所述第二图片的位置的变化,从而使得所述定位引擎确定出所述待定位对象的移动轨迹或移动速度等。
步骤504、所述定位引擎根据所述目标子区域在第二图片中的位置确定出所述待定位对象在所述目标区域中的位置。
本实施例中,第二图片与目标区域之间存在位置映射关系,所述定位引擎在确定出所述目标子区域在所述第二图片中的位置,即可确定出所述待定位对象在所述目标区域中的位置。
所述定位引擎已存储了目标区域相对于室内环境的地面的相对位置,则所述定位引擎即可根据所述待定位对象在所述目标区域中的位置,确定出所述待定位对象在室内环境的地面中的位置。
本实施例所示的所述待定位对象在所述目标区域中的位置可为所述待定位对象在所述目标区域中二维位置。
例如,本实施例所示的所述待定位对象在所述目标区域中的位置可为所述待定位对象 在室内环境的地面上的二维位置。
具体的,所述定位引擎即可根据所述目标子区域在所述第二图片中的位置以及所述目标子区域的间距获取所述待定位对象在所述目标区域中的位置。以图6所示为例,所述定位引擎可获取到各所述子区域的间距,如所述投射设备将子区域投射到所述目标区域中时,各所述子区域的间距为1米;
所述定位引擎确定出所述目标子区域位于所述第一光学图案中的第5行第3列的子区域中,则所述定位引擎即可确定出所述待定位对象所位于的所述目标子区域在所述目标区域中的坐标为(5×1,3×1),通过确定出所述目标子区域在所述目标区域内的坐标,从而实现了对所述待定位对象的定位。
本实施例以各所述子区域的形状为方形为例进行示例性说明,在具体应用中,所述子区域也可为其他形状,在对所述待定位对象进行定位时,只要确定出所述目标子区域在所述第二图片中的位置以及所述目标子区域的间距即可获取到所述待定位对象在所述目标区域中的位置。
以下对如何确定所述目标子区域的间距的过程进行示例性说明:
第一种确定所述目标子区域的间距的方式为:
所述定位引擎可获取所述第二图片中至少有四个预设像素点。
具体的,所述定位引擎已知所述预设像素点在所述第二图片中的坐标以及所述预设像素点在所述目标区域内的坐标。
本实施例以所述定位引擎所获取到的所述预设像素点在所述第二图片中的坐标以及所述预设像素点在所述目标区域内的坐标可为用户预先输入的为例进行示例性说明。
需明确的是,本实施例对所述定位引擎获取所述预设像素点在所述第二图片中的坐标以及所述预设像素点在所述目标区域内的坐标的具体方式在本实施例中不做限定。
其中,所述第二图片中的坐标系和所述目标区域的坐标系可为相互对应的坐标系,如,在所述第二图片中的坐标原点为所述第二图片中的左上角,对应的,在所述目标区域内的坐标原点为所述目标区域中的左上角。
如图8所示,以各所述子区域为方形图像为例进行示例性说明;
本实施例以所述定位引擎在所述第二图片800中所确定的预设像素点为E、B、C、D,则在所述第二图片800中,所述定位引擎即可确定出所述预设像素点E所位于的子区域在所述第二图片800中的坐标(x1,y1),即所述预设像素点E所位于的子区域在所述第二图片800所包括的第一光学图案中的第几行第几列,以图8为例,则所述(x1,y1)为(3,3)。
可见,所述定位引擎即可确定出预设像素点B、C以及D所位于的子区域在所述第二图片800中的坐标(x2,y2),(x3,y3)以及(x4,y4)。
所述定位引擎还可获取到所述预设像素点为E、B、C、D在所述目标区域内的实际坐标,分别为(X1,Y1)、(X2,Y2)、(X3,Y3)、(X4,Y4)。
所述定位引擎即可获取到预设公式为:
(X·W′, Y·W′, W′)ᵀ=A·(x, y, 1)ᵀ
本实施例所示的定位引擎可将已获取到的各预设像素点在所述第二图片中的坐标以及各预设像素点在所述目标区域内的坐标分别带入至所述预设公式中即可求解出矩阵A以及参数W′。
其中,所述矩阵A是一个三行三列的矩阵。
所述定位引擎即可根据所述矩阵A以及参数W′确定出预设对应关系。
其中,所述预设对应关系为:
X=(a 11x+a 12y+a 13)/W′, Y=(a 21x+a 22y+a 23)/W′, W′=a 31x+a 32y+a 33
可见,通过所述预设对应关系建立了在所述第二图片中各子区域在所述第二图片中的坐标与各子区域在目标区域内的坐标的对应关系。
所述定位引擎即可将已获取到的所述预设对应关系进行存储。
在所述定位引擎为所述待定位对象进行定位时,所述定位引擎可获取第一像素点在所述第二图片中的坐标以及第二像素点在所述第二图片中的坐标。
具体的,所述第一像素点和所述第二像素点分别为所述第二图片所包括的多个子区域中的任一子区域图像两端的像素点。
如,所述第一像素点可为子区域左上角的像素点,所述第二像素点可为相同子区域的右上角的像素点。
所述定位引擎即可将所述第一像素点和所述第二像素点在所述第二图片中的坐标带入至所述预设对应关系中,则所述定位引擎即可确定出所述第一像素点在所述目标区域内的坐标,以及所述第二像素点在所述目标区域内的坐标,所述定位引擎在所述第一像素点在所述目标区域中的坐标和所述第二像素点在所述目标区域中的坐标之间做减法,从而确定出所述子区域的间距。
第二种确定所述目标子区域的间距的方式为:
所述定位引擎获取第一公式:
1/h 1+1/h 2=1/f, d 2/d 1=h 2/h 1, h=h 1+h 2
本种确定所述目标子区域的间距的方式中,如图9所示为例,以一个投射设备投射出所述目标子区域为例进行示例性说明:
所述第一公式中的参数d1是所述投射设备的成像器件上格子的间距,其中,所述投射设备的成像器件上格子用于在所述目标区域上投射出所述子区域;
所述第一公式中的参数d2是所述投射设备投射到所述目标区域内的子区域的间距;
所述第一公式中的参数h1是所述投射设备的成像设备到透镜的距离,其中,所述成像设备可为液晶片,或液晶投影,或数字微镜,或振镜等,本实施例以所述成像设备可为液晶片为例进行示例性说明:
所述第一公式中的参数h2是所述目标区域所在平面到透镜的距离;
所述第一公式中的参数h是所述液晶片距离所述目标区域平面的高度,所述h可在部署所述投射设备时进行测量,用户即可将测量得到的所述h输入至所述投射设备;
所述第一公式中的参数f是所述投射设备的透镜的焦距。
其中,所述参数d1、所述参数h1、所述参数h2、所述参数h以及所述参数f为已知的参数。
所述定位引擎即可根据第一公式推导出第二公式:
d 2=d 1(h-h 1)/h 1
可见,在所述定位引擎确定所述目标子区域的间距的过程中,所述定位引擎可将已知的参数d1、所述参数h1、所述参数h2、所述参数h以及所述参数f带入至所述第二公式中,所述定位引擎即可求解出所述投射设备投射到所述目标区域内的子区域的间距。
可选的,本实施例所示的所述待定位对象在所述目标区域中的位置还可为所述待定位对象在所述目标区域中三维位置。
具体的,本实施例所示的所述待定位对象在所述目标区域中的位置可为所述待定位对象在室内环境的地面中的二维坐标以及所述待定位对象的高度。
确定所述待定位对象在室内环境的地面中的二维坐标的具体过程请详见上述所示,具体不做赘述。
本实施例所示的定位引擎获取所述待定位对象的高度的具体过程可为:
确定所述待定位对象的高度的第一种方式为:
所述定位系统所包括的摄像头为用于进行深度信息测量的摄像头,如深度相机Kinect Sensor所具有的摄像头或双目相机所具有的摄像头等。
所述定位引擎即可直接指示所述摄像头采集高度信息,其中,所述高度信息用于指示所述待定位对象在所述目标区域内的高度。
所述定位引擎即可直接获取所述高度信息,从而使得所述定位引擎能够获取到所述待定位对象在所述目标区域内的高度。
确定所述待定位对象的高度的第二种方式为:
如图10所示,以所述目标子区域为图10中的投射设备1001所投射的子区域为例,所述投射设备1001距离所述目标区域平面的高度是L。
当待定位对象1003没有进入目标子区域1004时,所述投射设备1001在所述目标区域 内所投射出的所述目标子区域1004的间距是d2;
当所述待定位对象1003进入目标子区域1004时,所述投射设备1001在所述待定位对象1003的表面所投射的子区域的间距是d1;
其中,高度L、间距d2以及间距d1可由所述定位引擎测量以得到。
所述定位引擎即可根据第三公式计算出所述待定位对象1003的高度h;
所述定位引擎根据三角形相似原理确定出第三公式:
d 1/d 2=(L-h)/L
所述定位引擎即可根据所述第三公式推导出第四公式:
所述第四公式为:
h=(1-d 1/d 2)L
在所述定位引擎确定所述待定位对象的高度的具体过程中,所述定位引擎可将已获取到的高度L、间距d2以及间距d1带入至所述第四公式,从而使得所述定位引擎即可获取到所述待定位对象1003的高度h。
确定所述待定位对象的高度的第三种方式为:
如图11所示为例,点A为投射设备1101的位置,点B是摄像头1102的位置,d是所述摄像头1102与所述投射设备1101的距离。
所述摄像头1102和所述投射设备1101在同一平行于所述目标区域的平面上安装,则所述摄像头1102和所述投射设备1101所位于的水平面和所述目标区域所在的水平面的高度差是L。
获取所述投射设备1101所投射的目标光线在所述目标区域上所投射的投射点E的位置,所述目标光线为所述投射设备1101所投射的多个光线中的一个光线;
当待定位对象1103进入目标区域后,投射点E经所述待定位对象1103表面点C反射,设定位置D是反射光线进入摄像头1102后形成的虚像所在位置,物体表面点C距离目标区域高度为h,则根据三角形ABC和DEC的相似性可得:
L 2/L 1=(L-h)/h
其中,L1为点D和点E之间的距离,L2为点A和点B之间的距离。
所述定位引擎即可确定出第五公式:
h=(L 1/(L 1+L 2))L
所述定位引擎在确定所述待定位对象的高度的具体过程中,所述定位引擎可将检测到的L1、L2以及L带入至所述第五公式中,从而使得所述定位引擎求解出所述待定位对象的高度h。
本实施例对如何对所述待定位对象进行定位的说明为可选的示例,只要所述定位引擎 能够确定出所述投射设备向所述目标区域内所投射的各子区域的间距即可,如所述摄像头可同时对位于所述目标区域内的待定位对象和标定物进行拍摄,其中,所述定位引擎已存储了标定物的尺寸,则所述定位引擎即可根据标定物的尺寸和目标区域之间的相对位置,确定出所述各子区域的间距。例如,若所述定位引擎已确定所述标定物的长度为1米,而所述标定物占据了2个子区域的长度,则说明各子区域的间距为0.5米。
采用本实施例所示的定位方法的有益效果在于:
本实施例所示的定位方法,在所述定位引擎、投射设备以及摄像头相互配合的情况下,能够实现对待定位对象的精确定位,而且只需要定位引擎、摄像头以及投射设备之间的数据交互即可实现对待定位对象的定位,从而对环境的依赖低,定位引擎可依据环境灵活的调整第一光学图案,而且定位引擎可控制所述投射设备向目标区域投射尺寸不同的第一光学图案,从而实现对待定位对象不同精度的定位需求。
基于图2所示说明了如何对投射设备投射到目标区域内的第一光学图案进行校正的具体过程,以下结合图12所示的实施例说明了如何对待定位对象进行快速定位的具体过程:
步骤1201、所述摄像头采集并向定位引擎发送第二图片。
步骤1202、所述定位引擎接收所述第二图片。
本实施例所述的步骤1201至步骤1202的具体执行过程,请详见图5所示的步骤501至步骤502所示,具体执行过程在本实施例中不做赘述。
步骤1203、所述定位引擎确定待定位对象在第一光学图案中所在的目标子区域,该目标子区域为待定位对象在所述第一光学图案的多个子区域中所在的子区域。
可选的,所述目标子区域为所述待定位对象的目标定位点在所述第一光学图案的多个子区域中所在的子区域。
所述目标定位点的具体说明请详见上述实施例所示,具体在本实施例中不做赘述。
本实施例中,以所述目标子区域为所述待定位对象的目标定位点在所述第一光学图案的多个子区域中所在的子区域为例进行示例性说明:
若以图13所示为例,则所述定位引擎可确定出所述待定位对象1304位于第一光学图案1300的多个子区域中所在子区域1303,即子区域1303为目标子区域。
步骤1204、所述定位引擎控制所述投射设备向所述目标子区域1303投射第二光学图案。
具体的,所述定位引擎可向所述投射设备发送投射指令,所述投射指令用于指示所述投射设备向所述目标子区域1303投射第二光学图案,其中,所述第二光学图案也包括多个子区域,且所述第二光学图案所包括的子区域的面积小于所述第一光学图案所包括的子区域的面积。
如图13所示为例,所述投射指令用于指示所述投射设备向所述目标子区域1303所投射第二光学图案,其中,第二光学图案的各子区域的面积小于所述第一光学图案所包括的子区域的面积。
以所述定位引擎确定出所述第一光学图案所包括的子区域1302的面积为1m*1m为例,在所述定位引擎确定出所述待定位对象的目标定位点位于目标子区域1303内时,所述定位引擎即可在所述目标子区域1303内投射第二光学图案,第二光学图案包含多个面积为10cm*10cm的子区域。
当然,本实施例所示的定位引擎也可向所述目标子区域1303内以及所述目标子区域1303的周围投射包括多个面积为10cm*10cm的子区域光学图案。
以图14所示为例,在所述目标子区域1302内可投射10个子区域1401,对所述待定位对象的定位精度从1m变到了10cm,从而提升了对待定位对象进行定位的精度。
步骤1205、所述定位引擎确定待定位对象在第二光学图案中所在的目标子区域,该目标子区域为待定位对象在所述第二光学图案的多个子区域中所在的子区域。
本实施例中,在所述定位引擎已向目标子区域1303投射第二光学图案的情况下,目标子区域1303被第二光学图案标定为多个更小的子区域,所述定位引擎可进一步确定出待定位对象在第二光学图案的多个子区域中所在的子区域。
步骤1206、所述定位引擎根据步骤1205中确定出的目标子区域在第二图片中的位置,确定待定位对象在目标区域内的位置。
本实施例所示的定位引擎根据所述第二光学图案确定所述待定位对象在所述目标区域中的位置的具体过程的说明,请详见上述实施例所示根据所述第一光学图案确定所述待定位对象在所述目标区域中的位置的具体过程,具体在本实施例中不做赘述。
采用本实施例所示的定位方法,能够进一步的提供对待定位对象进行定位的定位精度,而且本实施例可通过第一光学图案对所述待定位对象进行粗略的定位,在确定出所述待定位对象的目标定位点所位于的第一光学图案中的目标子区域的情况下,所述定位引擎可控制投射设备在所述第一光学图案中的目标子区域中投射更为精细的第二光学图案,避免对整个目标区域投射精度较高的第二光学图案,既能实现更精确的定位,又能避免了对待定位对象进行定位的复杂度,从而提升了对待定位对象进行定位的效率。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既 可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,以上实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的精神和范围。

Claims (18)

  1. 一种定位系统,其特征在于,包括至少一个投射设备、至少一个摄像头以及定位引擎,且所述定位引擎与所述至少一个投射设备中的任一投射设备连接,所述定位引擎还与至少一个摄像头中的任一摄像头连接;
    所述至少一个投射设备用于向目标区域投射第一光学图案,所述第一光学图案包括多个子区域;
    所述至少一个摄像头用于采集并向所述定位引擎发送第二图片,所述第二图片包含所述第一光学图案以及位于所述第一光学图案内的待定位对象;
    所述定位引擎用于接收所述第二图片,并根据所述第二图片中的所述待定位对象在所述第一光学图案内的位置确定所述待定位对象在所述目标区域中的位置。
  2. 根据权利要求1所述的系统,其特征在于,所述至少一个摄像头还用于采集并向所述定位引擎发送第一图片,所述第一图片包含所述至少一个投射设备投射到所述目标区域的所述第一光学图案;
    所述定位引擎还用于接收所述第一图片,并根据所述第一图片标定所述目标区域。
  3. 根据权利要求2所述的系统,其特征在于,所述系统包括两个投射设备,分别为第一投射设备和第二投射设备,所述定位引擎根据所述第一图片标定所述目标区域,具体为:所述定位引擎接收两张所述第一图片,两张所述第一图片由所述至少一个摄像头采集,两张所述第一图片分别为:包含所述第一投射设备投射到所述目标区域的第一光学图案的第一图片,和,包含所述第二投射设备投射到所述目标区域的第一光学图案的第一图片;
    所述定位引擎分别从两张所述第一图片中识别出所述第一光学图案所包括的多个子区域;
    所述定位引擎发送调整信息给所述第一投射设备和/或所述第二投射设备,以使所述第一投射设备和/或所述第二投射设备根据所述调整信息做调整使得两张所述第一图片的所述第一光学图案的各子区域对齐,所述调整信息用于控制两张所述第一图片的所述第一光学图案的各子区域对齐。
  4. 根据权利要求1至3任一项所述的系统,其特征在于,
    所述定位引擎用于确定目标子区域,所述目标子区域为所述待定位对象在所述第一光学图案的多个子区域中所在的子区域;
    所述定位引擎还用于向所述至少一个投射设备发送控制信令,所述控制信令用于控制所述至少一个投射设备向所述目标子区域投射第二光学图案,所述第二光学图案包括多个子区域,且所述第二光学图案所包括的子区域的面积小于所述第一光学图案所包括的子区域的面积;
    所述至少一个投射设备用于接收所述控制信令,并用于根据所述控制指令向所述目标子区域投射所述第二光学图案。
  5. 根据权利要求4所述的系统,其特征在于,所述第二图片中还包括所述第二光学图案以及位于所述第二光学图案内的待定位对象;
    所述定位引擎还用于根据所述第二图片中的所述待定位对象在所述第二光学图案内的位置确定所述待定位对象在所述目标区域中的位置。
  6. 根据权利要求1至5任一项所述的系统,其特征在于,
    所述定位引擎还用于,获取目标子区域的间距,所述目标子区域为目标光学图案所包括的多个子区域中所述待定位对象所位于的子区域,所述目标光学图案为所述第一光学图案或第二光学图案,所述第二光学图案所包括的子区域的面积小于所述第一光学图案所包括的子区域的面积,根据所述目标子区域在所述第二图片中的位置以及所述目标子区域的间距确定所述待定位对象在所述目标区域中的位置。
  7. 根据权利要求1至6任一项所述的系统,其特征在于,
    所述至少一个摄像头还用于采集高度信息,所述高度信息用于指示所述待定位对象在所述目标区域内的高度;
    所述定位引擎还用于获取所述至少一个摄像头采集的所述高度信息。
  8. 根据权利要求1至6任一项所述的系统,其特征在于,
    所述定位引擎还用于,获取目标子区域的间距d2,所述目标子区域为目标光学图案所包括的多个子区域中所述待定位对象所位于的子区域,所述目标光学图案为所述第一光学图案或第二光学图案,所述第二光学图案所包括的子区域的面积小于所述第一光学图案所包括的子区域的面积,在所述待定位对象位于所述目标光学图案内的情况下,获取所述至少一个投射设备在所述待定位对象的表面所投射的子区域的间距d1,根据公式h=(1-d 1/d 2)L获取所述待定位对象在所述目标区域内的高度h,其中,L为所述至少一个投射设备距离所述目标区域的高度。
  9. 根据权利要求1至6任一项所述的系统,其特征在于,
    所述至少一个投射设备还用于投射目标光线,所述目标光线为所述至少一个投射设备所投射的多个光线中的一个光线;
    所述定位引擎还用于,获取所述目标光线在所述目标区域上所投射的投射点的位置,在所述待定位对象位于目标光学图案内的情况下,所述目标光学图案为所述第一光学图案或第二光学图案,所述第二光学图案所包括的子区域的面积小于所述第一光学图案所包括的子区域的面积,所述定位引擎用于获取设定位置,所述设定位置为反射光线进入所述至少一个摄像头中的一个摄像头后在所述目标光学图案中所形成的虚像所在的位置,所述反射光线为所述目标光线经由所述待定位对象的表面反射后所生成的光线,根据公式
    h=(L 1/(L 1+L 2))L
    获取所述待定位对象在所述目标区域内的高度h,其中,L1为所述投射点的位置和所述设定位置之间的间距,所述L2为位于相同水平面的所述至少一个投射设备和所述摄像头在水平方向上的间距,所述L为所述至少一个投射设备距离所述目标区域的高度。
  10. 一种定位方法,其特征在于,包括:
    获取至少一个摄像头采集的第二图片,所述第二图片包含至少一个投射设备向目标区域投射的第一光学图案以及位于所述第一光学图案内的待定位对象,所述第一光学图案包括多个子区域;
    根据所述第二图片中的所述待定位对象在所述第一光学图案内的位置确定所述待定位对象在目标区域中的位置。
  11. 根据权利要求10所述的方法,其特征在于,所述获取至少一个摄像头采集的第二图片之前,所述方法还包括:
    获取所述至少一个摄像头采集的第一图片,所述第一图片包含所述至少一个投射设备投射到目标区域的所述第一光学图案,所述第一光学图案包括多个子区域;
    根据所述第一图片标定所述目标区域。
  12. 根据权利要求11所述的方法,其特征在于,所述根据所述第一图片标定所述目标区域包括:
    接收两张所述第一图片,两张所述第一图片由所述至少一个摄像头采集,两张所述第一图片分别为:包含第一投射设备投射到所述目标区域的所述第一光学图案的第一图片,和,包含第二投射设备投射到所述目标区域的所述第一光学图案的第一图片;
    分别从两张所述第一图片中识别出所述第一光学图案所包括的多个子区域;
    发送调整信息给所述第一投射设备和/或所述第二投射设备,以使所述第一投射设备和/或所述第二投射设备根据所述调整信息做调整使得两张所述第一图片的所述第一光学图案的各子区域对齐,所述调整信息用于控制两张所述第一图片的所述第一光学图案的各子区域对齐。
  13. 根据权利要求10至12任一项所述的方法,其特征在于,所述方法还包括:
    确定目标子区域,所述目标子区域为所述待定位对象在所述第一光学图案的多个子区域中所在的子区域;
    向所述至少一个投射设备发送控制信令,所述控制信令用于控制所述至少一个投射设备向所述目标子区域投射第二光学图案,所述第二光学图案包括多个子区域,且所述第二光学图案所包括的子区域的面积小于所述第一光学图案所包括的子区域的面积。
  14. 根据权利要求13所述的方法,其特征在于,所述根据所述第二图片中的所述待定位对象在所述第一光学图案内的位置确定所述待定位对象在目标区域中的位置包括:
    根据所述第二图片中的所述待定位对象在所述第二光学图案内的位置确定所述待定位对象在所述目标区域中的位置。
  15. 根据权利要求10-14任一项所述的方法,其特征在于,所述根据所述第二图片中的所述待定位对象在所述第一光学图案内的位置确定所述待定位对象在所述目标区域中的位置包括:
    获取目标子区域的间距,所述目标子区域为目标光学图案所包括的多个子区域中所述待定位对象所位于的子区域,所述目标光学图案为所述第一光学图案或第二光学图案,所述第二光学图案所包括的子区域的面积小于所述第一光学图案所包括的子区域的面积;
    根据所述目标子区域在所述第二图片中的位置以及所述目标子区域的间距确定所述待定位对象在所述目标区域中的位置。
  16. 根据权利要求10至15任一项所述的方法,其特征在于,所述方法还包括:
    获取所述至少一个摄像头采集的高度信息,所述高度信息用于指示所述待定位对象在所述目标区域内的高度。
  17. 根据权利要求10至15任一项所述的方法,其特征在于,所述根据所述目标子区域在所述第二图片中的位置以及所述目标子区域的间距确定所述待定位对象在所述目标区域中的位置包括:
    获取目标子区域的间距d2,所述目标子区域为目标光学图案所包括的多个子区域中所述待定位对象所位于的子区域,所述目标光学图案为所述第一光学图案或第二光学图案,所述第二光学图案所包括的子区域的面积小于所述第一光学图案所包括的子区域的面积;
    在所述待定位对象位于所述目标光学图案内的情况下,获取所述至少一个投射设备在所述待定位对象的表面所投射的子区域的间距d1;
    根据公式h=(1-d 1/d 2)L获取所述待定位对象在所述目标区域内的高度h,其中,L为所述至少一个投射设备距离所述目标区域的高度。
  18. 根据权利要求10至15任一项所述的方法,其特征在于,所述根据所述目标子区域在所述第二图片中的位置以及所述目标子区域的间距确定所述待定位对象在所述目标区域中的位置包括:
    获取所述至少一个投射设备所投射的目标光线在所述目标区域上所投射的投射点的位置,所述目标光线为所述至少一个投射设备所投射的多个光线中的一个光线;
    在所述待定位对象位于目标光学图案内的情况下,所述目标光学图案为所述第一光学图案或第二光学图案,所述第二光学图案所包括的子区域的面积小于所述第一光学图案所包括的子区域的面积,获取设定位置,所述设定位置为反射光线进入所述至少一个摄像头中的一个摄像头后在所述目标光学图案中所形成的虚像所在的位置,所述反射光线为所述目标光线经由所述待定位对象的表面反射后所生成的光线;
    根据公式
    h=(L 1/(L 1+L 2))L
    获取所述待定位对象在所述目标区域内的高度h,其中,L1为所述投射点的位置和所述设定位置之间的间距,所述L2为位于相同水平面的所述至少一个投射设备和所述摄像头在水平方向上的间距,所述L为所述至少一个投射设备距离所述目标区域的高度。
PCT/CN2018/077742 2017-07-31 2018-03-01 一种定位系统以及定位方法 WO2019024498A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP18841158.1A EP3660452B1 (en) 2017-07-31 2018-03-01 Positioning system and positioning method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710640551.8 2017-07-31
CN201710640551.8A CN109323691B (zh) 2017-07-31 2017-07-31 一种定位系统以及定位方法

Publications (1)

Publication Number Publication Date
WO2019024498A1 true WO2019024498A1 (zh) 2019-02-07

Family

ID=65232811

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/077742 WO2019024498A1 (zh) 2017-07-31 2018-03-01 一种定位系统以及定位方法

Country Status (3)

Country Link
EP (1) EP3660452B1 (zh)
CN (1) CN109323691B (zh)
WO (1) WO2019024498A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110708524A (zh) * 2019-04-24 2020-01-17 广州星博科仪有限公司 一种基于光谱成像的目标投影指示装置
DE102021205659B4 (de) 2021-06-03 2023-08-10 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung eingetragener Verein Verfahren zum Prüfen von Bauteiloberflächen auf Fehler und/oder Bestimmen von Eigenschaften von Bauteilbeschichtungen

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0458283A1 (en) * 1990-05-23 1991-11-27 Nec Corporation Distance information obtaining device
CN101661098A (zh) * 2009-09-10 2010-03-03 上海交通大学 机器人餐厅多机器人自动定位系统
CN103322943A (zh) * 2012-03-22 2013-09-25 维蒂克影像国际公司 激光投影系统及方法
CN205909828U (zh) * 2016-08-06 2017-01-25 中科院合肥技术创新工程院 一种用于室内移动机器人定位的红外路标
CN106403941A (zh) * 2016-08-29 2017-02-15 上海智臻智能网络科技股份有限公司 一种定位方法及装置
CN106461378A (zh) * 2014-08-08 2017-02-22 塞姆布有限公司 具有用于非接触式测量的扫描系统的车辆装备
CN106781476A (zh) * 2016-12-22 2017-05-31 中国人民解放军第三军医大学第三附属医院 交通事故中车辆动态位置分析方法

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10212364A1 (de) * 2002-03-20 2003-10-16 Steinbichler Optotechnik Gmbh Verfahren und Vorrichtung zur Bestimmung der Absolut-Koordinaten eines Objekts
WO2008149923A1 (ja) * 2007-06-07 2008-12-11 The University Of Electro-Communications 物体検出装置とそれを適用したゲート装置
US9857167B2 (en) * 2015-06-23 2018-01-02 Hand Held Products, Inc. Dual-projector three-dimensional scanner

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0458283A1 (en) * 1990-05-23 1991-11-27 Nec Corporation Distance information obtaining device
CN101661098A (zh) * 2009-09-10 2010-03-03 上海交通大学 机器人餐厅多机器人自动定位系统
CN103322943A (zh) * 2012-03-22 2013-09-25 维蒂克影像国际公司 激光投影系统及方法
CN106461378A (zh) * 2014-08-08 2017-02-22 塞姆布有限公司 具有用于非接触式测量的扫描系统的车辆装备
CN205909828U (zh) * 2016-08-06 2017-01-25 中科院合肥技术创新工程院 一种用于室内移动机器人定位的红外路标
CN106403941A (zh) * 2016-08-29 2017-02-15 上海智臻智能网络科技股份有限公司 一种定位方法及装置
CN106781476A (zh) * 2016-12-22 2017-05-31 中国人民解放军第三军医大学第三附属医院 交通事故中车辆动态位置分析方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3660452A4

Also Published As

Publication number Publication date
EP3660452B1 (en) 2022-07-27
EP3660452A1 (en) 2020-06-03
CN109323691A (zh) 2019-02-12
EP3660452A4 (en) 2020-11-25
CN109323691B (zh) 2022-08-09

Similar Documents

Publication Publication Date Title
US10083522B2 (en) Image based measurement system
CN110717942B (zh) 图像处理方法和装置、电子设备、计算机可读存储介质
WO2021259151A1 (zh) 一种激光标定系统的标定方法、装置以及激光标定系统
TWI555379B (zh) 一種全景魚眼相機影像校正、合成與景深重建方法與其系統
US9769443B2 (en) Camera-assisted two dimensional keystone correction
CN110809786B (zh) 校准装置、校准图表、图表图案生成装置和校准方法
JP6363863B2 (ja) 情報処理装置および情報処理方法
KR20190021342A (ko) 개선된 카메라 캘리브레이션 시스템, 타겟 및 프로세스
EP3115741B1 (en) Position measurement device and position measurement method
CN111345029B (zh) 一种目标追踪方法、装置、可移动平台及存储介质
JP6352208B2 (ja) 三次元モデル処理装置およびカメラ校正システム
US9881377B2 (en) Apparatus and method for determining the distinct location of an image-recording camera
US9992486B2 (en) Method of enhanced alignment of two means of projection
CN109242900B (zh) 焦平面定位方法、处理装置、焦平面定位系统及存储介质
US10319105B2 (en) Method and system for calibrating an image acquisition device and corresponding computer program product
EP4217693A1 (en) Systems and methods for temperature measurement
EP4220547A1 (en) Method and apparatus for determining heat data of global region, and storage medium
US10154249B2 (en) System and method for capturing horizontal disparity stereo panorama
WO2019024498A1 (zh) 一种定位系统以及定位方法
CN110909571B (zh) 一种高精度面部识别空间定位方法
CN112433641B (zh) 一种多rgbd深度传感器的自动校准桌面道具互动系统的实现方法
CN112711982B (zh) 视觉检测方法、设备、系统以及存储装置
US10339702B2 (en) Method for improving occluded edge quality in augmented reality based on depth camera
JPWO2015141185A1 (ja) 撮像制御装置、撮像制御方法およびプログラム
CN114037758A (zh) 基于图像的摄像机姿态感知系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18841158

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018841158

Country of ref document: EP

Effective date: 20200225