CN113822930B - System and method for locating objects in a parking lot with high accuracy - Google Patents


Info

Publication number
CN113822930B
Authority
CN
China
Prior art keywords
vehicle
image
parking
parking lot
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010564075.8A
Other languages
Chinese (zh)
Other versions
CN113822930A (en)
Inventor
章涛
刘卫红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Black Sesame Intelligent Technology Chongqing Co Ltd
Original Assignee
Black Sesame Intelligent Technology Chongqing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Black Sesame Intelligent Technology Chongqing Co Ltd
Priority to CN202010564075.8A
Publication of CN113822930A
Application granted
Publication of CN113822930B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • G06T2207/10021Stereoscopic video; Stereoscopic image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • G06T2207/30264Parking

Abstract

A method of locating objects in a parking lot, comprising: capturing a template image and an entry video of a vehicle entering a parking lot by an entry stereo camera; measuring the time-varying distance of the vehicle entering the parking lot based on the entry video to obtain a distance-versus-time relationship; line scanning the time-varying profile of the vehicle entering the parking lot by at least one LIDAR to obtain a profile-versus-time relationship; constructing a scanned image based on the measured distance-versus-time relationship and the profile-versus-time relationship; and forming a three-dimensional structure of the vehicle based on the template image and the scanned image.

Description

System and method for locating objects in a parking lot with high accuracy
Technical Field
The present invention relates to accurate position detection, and more particularly to accurate position detection of vehicles in a parking lot.
Background
In autonomous driving, locating the vehicle is essential. One method of location detection uses the Global Positioning System (GPS). Standard GPS positioning accuracy is about 5 meters, and when GPS signals are blocked, accuracy degrades or positioning fails entirely. Another variant, known as differential GPS, can locate objects to within 1 cm. However, differential GPS is very expensive and likewise suffers from reduced accuracy or failure when signals are blocked.
Simultaneous localization and mapping (SLAM) uses images captured from the vehicle. A high-resolution map is constructed in advance, and a camera on the vehicle captures images of the environment. By comparing those images with the high-resolution map, the position of the vehicle can be determined, with an accuracy of only about 20 cm. This approach also requires mounting a camera on the vehicle.
Disclosure of Invention
In one embodiment, a method of locating an object in a parking lot includes: capturing a template image and an entry video of a vehicle entering a parking lot by an entry stereo camera; measuring the time-varying distance of the vehicle entering the parking lot based on the entry video to obtain a distance-versus-time relationship; line scanning the time-varying profile of the vehicle entering the parking lot by at least one LIDAR to obtain a profile-versus-time relationship; constructing a scanned image based on the measured distance-versus-time relationship and the profile-versus-time relationship; and forming a three-dimensional structure of the vehicle based on the template image and the scanned image.
The method may further comprise: capturing a parking lane image of the vehicle with at least one parking lane camera; matching the parking lane image with the template image; determining an angle of the vehicle in the parking lane image relative to the template image; determining a scale of the vehicle in the parking lane image relative to the template image; determining a transformation matrix based on the angle of the vehicle and the scale of the vehicle; and determining a position of the vehicle based on the transformation matrix and the three-dimensional structure of the vehicle.
In another embodiment, a system for locating objects in a parking lot includes: an entry stereo camera that captures a template image and an entry video of a vehicle entering a parking lot; at least one LIDAR that line-scans the time-varying profile of the vehicle entering the parking lot to obtain a profile-versus-time relationship; and a non-transitory computer readable medium comprising instructions that, when read by a processor, cause the processor to: measure the time-varying distance of the vehicle entering the parking lot based on the entry video to obtain a distance-versus-time relationship; construct a scanned image based on the measured distance-versus-time relationship and the profile-versus-time relationship; and form a three-dimensional structure of the vehicle based on the template image and the scanned image.
The system may further include at least one parking lane camera that captures a parking lane image of the vehicle, the non-transitory computer readable medium comprising instructions that, when read by the processor, further cause the processor to: match the parking lane image with the template image; determine an angle of the vehicle relative to the template image; determine a scale of the vehicle relative to the template image; determine a transformation matrix based on the angle of the vehicle and the scale of the vehicle; and determine a position of the vehicle based on the transformation matrix and the three-dimensional structure of the vehicle.
Drawings
In the drawings:
FIG. 1 is a schematic diagram of a system according to one embodiment of the invention;
FIG. 2 is a layout overview illustrating a layout of four image capture devices according to one embodiment of the invention;
FIG. 3 depicts a side view illustrating a capture angle of an image capture device in accordance with one embodiment of the present invention;
FIG. 4 depicts a top view of a vehicle entering a parking lot and sensed by a stereo camera and two LIDARs in accordance with one embodiment of the present invention;
FIG. 5 depicts a side view of a vehicle-side LIDAR sensing according to one embodiment of the present invention;
FIG. 6 depicts the conversion of an image into a model according to one embodiment of the invention;
FIG. 7 depicts a mobile translation and transformation portal in accordance with one embodiment of the present invention;
FIG. 8 depicts image-to-model tracking within a parking lot in accordance with one embodiment of the invention;
FIG. 9 is a first schematic flow chart diagram of precisely locating an object in accordance with one embodiment of the present invention; and
FIG. 10 is a second schematic flow chart diagram of accurately positioning an object according to one embodiment of the invention.
Detailed Description
The examples set forth below are intended to illustrate the application of the apparatus and method and are not intended to limit the scope of the invention. Equivalent modifications of the apparatus and method are intended to fall within the scope of the claims.
Certain terms are used throughout the following description and claims to refer to particular system components. As will be appreciated by one of skill in the art, different companies may refer to a component and/or a method by different names. This document does not intend to distinguish between components and/or methods that differ in name but not function.
In the following discussion and in the claims, the terms "include" and "comprise" are used in an open-ended fashion and thus should be interpreted to mean "including, but not limited to". Furthermore, the term "couple" or "couples" is intended to mean either an indirect or a direct connection. Thus, if a first device couples to a second device, that connection may be through a direct connection or through an indirect connection via other devices and connections.
FIG. 1 depicts an exemplary electronic system for use with a system having three image capture devices. Electronic system 100 may be a computing device for executing software associated with one or more portions, steps, or operations of the processes of FIGs. 9 and 10. Electronic system 100 may be an embedded computer, a personal computer, or a mobile device (e.g., a tablet, a notebook, a smartphone, a PDA, another touch-screen or television device having one or more processors embedded in or coupled to it, or any other kind of computer-related electronic device).
Electronic system 100 may include various types of computer-readable media and interfaces for various other types of computer-readable media. In the depicted example, electronic system 100 includes bus 112, one or more processors 120, system memory 114, read-only memory (ROM) 118, persistent storage 110, input device interface 122, output device interface 116, and one or more network interfaces 124. In some implementations, electronic system 100 may include or be integrated with other computing devices or circuits for operating the various components and processes previously described. In one embodiment of the invention, the one or more processors 120 are coupled through bus 112 to a light detection and ranging device (LIDAR) 126, an entry stereo camera 128, and a parking lane camera 130. Additionally, a position transmitter 132 is connected to bus 112 and provides the vehicle with feedback on its position in the parking lot.
Bus 112 collectively represents all of the system buses, peripheral buses, and chipset buses that communicatively connect the numerous internal devices of electronic system 100. For example, bus 112 communicatively connects the one or more processors 120 with ROM 118, system memory 114, persistent storage 110, LIDAR 126, entry stereo camera 128, and parking lane camera 130.
One or more processors 120 retrieve instructions to be executed and data to be processed from these various memory units in order to perform the processes of the subject disclosure. The one or more processing units may be single-core processors or multi-core processors in different embodiments.
ROM 118 stores static data and instructions required by the one or more processors 120 and other modules of the electronic system. Persistent storage 110, on the other hand, is a read-and-write memory device: a non-volatile memory unit that stores instructions and data even when electronic system 100 is powered off. Some embodiments of the invention use a mass storage device (such as a magnetic or optical disk and its corresponding disk drive) as persistent storage 110.
Other embodiments use removable storage devices, such as floppy disks, flash memory drives, and their corresponding disk drives, as the permanent storage device 110. Similar to persistent storage 110, system memory 114 is a read-write memory device. However, unlike persistent storage 110, system memory 114 is a volatile read-write memory, such as random access memory. The system memory 114 stores some of the instructions and data that the processor needs at runtime. In some implementations, the processes of the subject disclosure are stored in system memory 114, persistent storage 110, and/or ROM 118. The one or more processors 120 retrieve instructions to be executed and data to be processed from these various memory units in order to perform the processes of some embodiments.
Bus 112 is also connected to input device interface 122 and output device interface 116. The input device interface 122 enables a user to communicate information and select commands to the electronic system. Input devices for use with input device interface 122 include, for example, alphanumeric keyboards and pointing devices (also referred to as "cursor control devices"). The output device interface 116 can, for example, display images generated by the electronic system 100. Output devices used with output device interface 116 include, for example, printers and display devices (e.g., cathode Ray Tubes (CRTs) or Liquid Crystal Displays (LCDs)). Some implementations include devices such as touch screens that function as input devices and output devices.
Finally, as illustrated in FIG. 1, bus 112 may also couple electronic system 100 to a network (not shown) through network interface 124. The network interface 124 may include, for example, a wireless access point (e.g., Bluetooth or WiFi) or radio circuitry for connecting to a wireless access point. The network interface 124 may also include hardware (e.g., Ethernet hardware) for connecting the computer to a part of a computer network, such as a local area network (LAN), a wide area network (WAN), a wireless LAN, or an intranet, or to a network of networks such as the Internet. Any or all of the components of electronic system 100 may be used with the subject disclosure.
While the above discussion primarily refers to a microprocessor or multi-core processor executing software, some embodiments are performed by one or more integrated circuits, such as an Application Specific Integrated Circuit (ASIC) or a Field Programmable Gate Array (FPGA). In some embodiments, such integrated circuits execute instructions stored on the circuits themselves.
As used in this specification and any claims of this application, the terms "computer," "server," "processor," and "memory" refer to electronic or other technical equipment. These terms exclude a person or group of people. For purposes of this description, the term display or displaying means displaying on an electronic device.
To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor for displaying information to the user), a keyboard, and a pointing device (e.g., a mouse or a trackball by which the user can provide input to the computer). Other types of devices may also be used to provide interaction with a user; for example, feedback provided to the user may be any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback; and input from the user may be received in any form, including acoustic, speech, or tactile input.
Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a client computer having a graphical user interface through which a user can interact with an implementation of the subject matter described in this specification), or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include local area networks (LANs) and wide area networks (WANs), inter-networks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks).
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. Data generated at the client device (e.g., a result of a user interaction) may be received from the client device at the server.
FIG. 2 depicts parking lane cameras in a parking lot; cameras 210 and 212 are arranged side by side and, together with cameras 222 and 224, form an array. The images of the parking lane cameras overlap so as to cover the parking lot. The figure shows a bird's-eye view of the four parking lane cameras.
In the embodiment of FIG. 2, at least one parking lane camera captures a parking lane image of the vehicle, and a processor matches the parking lane image with the template image, determines an angle of the vehicle relative to the template image, and determines a scale of the vehicle relative to the template image. The system then determines a transformation matrix based on the angle and the scale of the vehicle, and determines the position of the vehicle based on the transformation matrix and the three-dimensional structure of the vehicle.
FIG. 3 depicts the geometric relationship between two image capture devices in an array of image capture devices. Image capture devices 310 and 312 are disposed side by side and have a vertical field of view (FOV), a tilt angle, and an overlapping portion; image capture devices 318 and 320, also in a side-by-side relationship, illustrate the horizontal field of view and its overlapping portion.
The cameras may be mounted in a side-by-side configuration in a parking lot. The images received by the cameras have overlapping areas so that together they cover the entire driving area. The deployment of exemplary parking lane cameras is illustrated in FIGs. 2 and 3. FIG. 2 is a bird's-eye view of the cameras disposed in a parking lot with four cameras; FIG. 3 is a detailed view showing the relationship between two parking lane cameras. A parking lane camera may be a stereo camera or a monocular camera.
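As a rough illustration of the overlap requirement, the width of the ground strip seen by one camera can be estimated from its mounting height and vertical FOV. The sketch below assumes a simple pinhole model and a camera aimed straight down; the 3 m height and 90° FOV are hypothetical values chosen for the example, not parameters from this disclosure.

```python
import math

def ground_coverage(height_m: float, fov_deg: float) -> float:
    """Approximate width of the ground strip covered by a downward-facing
    camera mounted at height_m with the given vertical field of view."""
    return 2.0 * height_m * math.tan(math.radians(fov_deg) / 2.0)

# Hypothetical example: cameras at 3 m with a 90-degree FOV cover a
# roughly 6 m strip, so adjacent cameras must be spaced closer than
# 6 m for their images to overlap.
print(ground_coverage(3.0, 90.0))  # ~6.0
```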
FIG. 4 presents an exemplary embodiment of image capture and scanning of a vehicle. In this example, a stereo camera 410 and two single-line-scan LIDARs 412, 414 are used. As the vehicle passes through the entrance, the two line-scan LIDARs scan the vehicle. The stereo camera 410 continuously measures the distance to the vehicle, from which the vehicle speed can be calculated. Given the vehicle speed and the LIDAR scans of both sides of the vehicle, a three-dimensional structure of the vehicle can be accurately constructed. The three-dimensional structure may then be matched to an image captured by the stereo camera.
When a vehicle enters the parking lot, the entry stereo camera captures a template image and an entry video of the vehicle. At least one LIDAR line-scans the time-varying profile of the vehicle entering the parking lot to obtain a profile-versus-time relationship. The processor receives the entry stereo camera video and measures the time-varying distance of the vehicle entering the parking lot to obtain a distance-versus-time relationship. The processor receives the LIDAR line scans, constructs a scanned image based on the distance-versus-time relationship from the entry stereo camera and the profile-versus-time relationship from the LIDAR, and forms a three-dimensional structure of the vehicle based on the template image and the scanned image. The LIDARs may be positioned to the right and left of the vehicle's travel path as it enters the parking lot.
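A minimal sketch of the distance-versus-time measurement follows, assuming the standard pinhole stereo relation depth = f × B / disparity; the focal length, baseline, and the `frames` input (per-frame disparities for one tracked point on the vehicle) are placeholder assumptions, not values from this disclosure.

```python
import numpy as np

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    # Pinhole stereo relation: depth = f * B / disparity.
    return focal_px * baseline_m / disparity_px

def distance_vs_time(frames, focal_px=1200.0, baseline_m=0.12):
    """frames: iterable of (timestamp_s, disparity_px) pairs for one
    tracked point on the approaching vehicle."""
    return [(t, stereo_depth(focal_px, baseline_m, d)) for t, d in frames]

def estimate_speed(dist_vs_t):
    """Least-squares slope of distance over time; its magnitude is the
    approach speed V used to place the LIDAR scan lines along the vehicle."""
    t = np.array([p[0] for p in dist_vs_t])
    d = np.array([p[1] for p in dist_vs_t])
    slope, _intercept = np.polyfit(t, d, 1)
    return abs(slope)
```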
FIG. 5 illustrates the three-dimensional construction process with a LIDAR 510 at the side of the vehicle. At time t0, line-scan laser beam 512 measures the distances of the white points on the laser line in image t0. As the vehicle moves, at time t1 the laser measures the distances of the white points on the laser line in image t1. Since the vehicle speed V is measured by the entry stereo camera, the horizontal distance between successive laser measurement points can be calculated as (t_{n+1} - t_n) × V; the vertical distance is determined by the LIDAR itself. In this way, the x, y, z coordinates of each laser point 514 can be determined, and the three-dimensional structure of that side of the vehicle can be modeled. The other side may be scanned in the same way, and the overall three-dimensional structure of the vehicle then constructed.
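The coordinate construction just described can be sketched as follows. The scan-tuple layout, the elevation-angle convention, and the function name are illustrative assumptions; `speed_mps` is the speed V measured by the entry stereo camera.

```python
import math

def build_side_points(scans, speed_mps):
    """scans: [(timestamp_s, [(range_m, elevation_rad), ...]), ...] from a
    vertical single-line-scan LIDAR facing the side of the vehicle.
    Returns (x, y, z) points in a frame whose x axis is the direction
    of travel."""
    t0 = scans[0][0]
    points = []
    for t, line in scans:
        x = (t - t0) * speed_mps      # along-track spacing: (t_n - t_0) * V
        for rng, elev in line:
            y = rng * math.cos(elev)  # lateral distance to the vehicle side
            z = rng * math.sin(elev)  # height of the laser return
            points.append((x, y, z))
    return points
```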
FIG. 6 depicts an example of the mapping between a template image 612 and a three-dimensional structure 610.
An image of the vehicle may be captured in the parking lot using a monocular or stereo camera. This image is matched with the template image to obtain the angle and scale relative to the template image. The angle and scale are then used to calculate the distance between the vehicle and the current camera. The precise location of the vehicle can be determined by combining the angle, scale, distance, and the three-dimensional structure of the vehicle. Since the template image and the three-dimensional structure can be generated with high accuracy, the position of the vehicle can be accurately determined using an ordinary camera.
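One plausible way to recover the angle and scale is to match features between the two images and fit a similarity (partial affine) transform, for example with OpenCV. ORB features, RANSAC, and the equal-intrinsics assumption behind `distance_from_scale` are illustrative choices, not requirements of this disclosure.

```python
import cv2
import numpy as np

def angle_and_scale(template_img, lane_img):
    """Match the parking lane image to the entry template and recover the
    relative rotation and scale from a similarity transform."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(template_img, None)
    kp2, des2 = orb.detectAndCompute(lane_img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # M is None if too few matches survive RANSAC.
    M, _inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    # Similarity transform: M = [[s*cos a, -s*sin a, tx], [s*sin a, s*cos a, ty]].
    scale = float(np.hypot(M[0, 0], M[1, 0]))
    angle_deg = float(np.degrees(np.arctan2(M[1, 0], M[0, 0])))
    return angle_deg, scale

def distance_from_scale(template_distance_m: float, scale: float) -> float:
    # Pinhole approximation: apparent size falls off inversely with distance,
    # so the current distance is roughly the template capture distance / scale.
    return template_distance_m / scale
```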
FIG. 7 illustrates an exemplary process in which a template image 710 is captured by the entry stereo camera and a three-dimensional structure 712 is matched to the template image. A parking lane image 714 is captured by a parking lane camera in the parking lot. The parking lane image 714 differs somewhat from the template image 710 because the two are captured by different cameras from different perspectives. The parking lane image 714 is matched with the template image 710 to obtain a transformation matrix that converts the perspective of the template image 710 into that of the parking lane image 714. The three-dimensional structure 712 is converted into the perspective of the parking lane image 714 using this transformation matrix, yielding a transformed image 716. Because the template image 710, the three-dimensional structure 712, and the parking lane image 714 all have high resolution, the transformed image 716 also has high accuracy. This transformation allows the location of the vehicle to be determined, along with the occupied spaces 812, 816 observed by cameras 810, 814, as shown in FIG. 8.
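The perspective conversion can be sketched with a homography estimated from the same kind of point correspondences. Here `src_pts`/`dst_pts` (matched pixels in the template and parking lane images) and `structure_render` (the three-dimensional structure already projected into the template viewpoint) are hypothetical inputs.

```python
import cv2
import numpy as np

def warp_structure_to_lane_view(src_pts, dst_pts, structure_render, lane_shape):
    """Estimate the template-to-parking-lane perspective change and
    re-render the projected 3D structure into the parking lane camera's
    viewpoint, analogous to transformed image 716."""
    H, _mask = cv2.findHomography(np.float32(src_pts), np.float32(dst_pts),
                                  cv2.RANSAC, 5.0)
    h, w = lane_shape[:2]
    return cv2.warpPerspective(structure_render, H, (w, h))
```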
FIG. 9 depicts an exemplary method of locating objects in a parking lot with high accuracy. A template image and an entry video of a vehicle entering the parking lot are captured 910 by an entry stereo camera. The method then measures 912 the time-varying distance of the vehicle entering the parking lot based on the entry video to obtain a distance-versus-time relationship, and line-scans 914 the time-varying profile of the vehicle entering the parking lot with at least one LIDAR to obtain a profile-versus-time relationship. The method then constructs 916 a scanned image based on the distance-versus-time and profile-versus-time relationships, and forms 918 a three-dimensional structure of the vehicle based on the template image and the scanned image. A transformed image is constructed using the template image and the three-dimensional structure, from which the precise location of the vehicle in the parking lot can be determined.
FIG. 10 depicts the second portion of the exemplary method, which further comprises: capturing 1010 a parking lane image of the vehicle with at least one parking lane camera and matching 1012 it with the template image. Based on the differences between the template image and the parking lane image, the method determines 1014 the angle of the vehicle in the parking lane image relative to the template image and determines 1016 the scale of the vehicle in the parking lane image relative to the template image. The method then determines 1018 a transformation matrix based on the angle and the scale of the vehicle, and determines 1020 the position of the vehicle based on the transformation matrix and the three-dimensional structure of the vehicle.
Communication with vehicles located in the parking lot may be accomplished by any of several means, and the position data may be transmitted directly to an autonomous driving system, a global positioning system, or the like.
Those of skill in the art will appreciate that the various illustrative blocks, modules, elements, components, methods, and algorithms described herein may be implemented as electronic hardware, computer software, or combinations of both. To illustrate this interchangeability of hardware and software, various illustrative blocks, modules, elements, components, methods, and algorithms have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application. The various components and blocks may be arranged differently (e.g., arranged in a different order, or divided in a different manner) without departing from the scope of the subject technology.
It should be understood that the specific order or hierarchy of steps in the processes disclosed is one illustration of example approaches. Based on design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged. Some steps may be performed simultaneously. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.
The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. The foregoing description provides various examples of the subject technology, and the subject technology is not limited to these examples. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more". The term "some" means one or more unless specifically stated otherwise. Pronouns in the masculine (e.g., his) include the feminine and neuter (e.g., her and its) and vice versa. Headings and subheadings, if any, are used for convenience only and do not limit the invention. The predicates "configured to", "operable to", and "programmed to" do not imply any particular tangible or intangible modification of the subject but are intended to be used interchangeably. For example, a processor configured to monitor and control an operation or a component may also mean the processor is programmed to monitor and control the operation, or that the processor is operable to monitor and control the operation. Likewise, a processor configured to execute code may be construed as a processor programmed to execute code or operable to execute code.
Phrases such as "an aspect" do not imply that such aspect is essential to the present technology or that such aspect applies to all configurations of the subject technology. The disclosure relating to an aspect may apply to all configurations, or one or more configurations. One aspect may provide one or more examples. A phrase such as an "aspect" may refer to one or more aspects and vice versa. Phrases such as "an embodiment" do not imply that such an embodiment is essential to the subject technology or that such an embodiment applies to all configurations of the subject technology. The disclosure directed to one embodiment may be applicable to all embodiments, or one or more embodiments. One embodiment may provide one or more examples. A phrase such as an "embodiment" may refer to one or more embodiments and vice versa. Phrases such as "configuration" do not imply that such configuration is essential to the subject technology, or that such configuration applies to all configurations of the subject technology. The disclosure relating to one configuration may apply to all configurations, or one or more configurations. One or more examples may be provided for one configuration. A phrase such as "configured" may refer to one or more configurations and vice versa.
The word "example" is used herein to mean "serving as an example or illustration. Any aspect or design described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects or designs.
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public, regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. §112 unless the element is expressly recited using the phrase "means for" or, in the case of a method claim, the phrase "step for". Furthermore, to the extent that the terms "includes", "including", "has", or similar terms are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising", as "comprising" is interpreted when employed as a transitional word in a claim.
Reference to "one embodiment," "an embodiment," "some embodiments," "various embodiments," or similar language means that a particular element or feature is included in at least one embodiment of the present invention. Although a phrase may appear in multiple places, the phrase does not necessarily refer to the same embodiment. In connection with the present invention, those skilled in the art will be able to devise and incorporate any of a variety of mechanisms suitable for carrying out the functions described above.
It should be understood that this disclosure teaches merely one example of an illustrative embodiment, that many variations of the invention can readily be devised by those skilled in the art after reading this disclosure, and that the scope of the invention is determined by the claims that follow.

Claims (14)

1. A method of locating objects in a parking lot, comprising:
capturing a template image and an entry video of a vehicle entering a parking lot by an entry stereo camera;
measuring a time-varying distance of the vehicle entering the parking lot based on the entry video to obtain a distance-versus-time relationship;

line scanning a time-varying profile of the vehicle entering the parking lot by at least one LIDAR to obtain a profile-versus-time relationship;

constructing a scanned image based on the measured distance-versus-time relationship and the profile-versus-time relationship;

forming a three-dimensional structure of the vehicle based on the template image and the scanned image;
capturing a parking lane image of the vehicle with at least one parking lane camera;
matching the parking lane image with the template image;
determining an angle of the vehicle in the parking lane image relative to the template image;

determining a scale of the vehicle in the parking lane image relative to the template image;

determining a transformation matrix based on the angle of the vehicle and the scale of the vehicle; and
determining a position of the vehicle based on the transformation matrix and the three-dimensional structure of the vehicle.
2. The method of claim 1, wherein the line scan is performed to capture a top profile of the vehicle and a profile of at least one side of the vehicle.
3. The method of claim 1, wherein the line scan is performed with at least two LIDARs.
4. A method according to claim 3, wherein the at least two LIDARs are arranged to the right and left of the path of travel of the vehicle as the vehicle enters the parking lot.
5. The method of claim 1, wherein the at least one parking lane camera is a stereo camera.
6. The method of claim 1, wherein the at least one parking lane camera is a monocular camera.
7. The method of claim 1, wherein the at least one parking lane camera is a plurality of parking lane cameras forming an array.
8. A system for locating objects in a parking lot, comprising:
an entry stereo camera that captures a template image and an entry video of a vehicle entering a parking lot;
at least one LIDAR that line scans a time-varying profile of the vehicle entering the parking lot to obtain a profile-versus-time relationship;
at least one parking lane camera capturing a parking lane image of the vehicle; and
a non-transitory computer readable medium comprising instructions that, when read by a processor, cause the processor to:
measuring a time-varying distance of the vehicle entering the parking lot based on the entry video to obtain a distance-versus-time relationship;

constructing a scanned image based on the measured distance-versus-time relationship and the profile-versus-time relationship;

forming a three-dimensional structure of the vehicle based on the template image and the scanned image;
matching the parking lane image with the template image;
determining an angle of the vehicle in the parking lane image relative to the template image;

determining a scale of the vehicle in the parking lane image relative to the template image;

determining a transformation matrix based on the angle of the vehicle and the scale of the vehicle; and
determining a position of the vehicle based on the transformation matrix and the three-dimensional structure of the vehicle.
9. The system of claim 8, wherein the at least one LIDAR is configured to capture a profile of a roof of the vehicle and a profile of at least one side of the vehicle.
10. The system of claim 8, wherein the line scan is performed with at least two LIDARs.
11. The system of claim 10, wherein the at least two LIDARs are disposed to the right and left of the path of travel of the vehicle as the vehicle enters the parking lot.
12. The system of claim 8, wherein the at least one parking lane camera is a stereo camera.
13. The system of claim 8, wherein the at least one parking lane camera is a monocular camera.
14. The system of claim 8, wherein the at least one parking lane camera is a plurality of parking lane cameras forming an array.
CN202010564075.8A 2020-06-19 2020-06-19 System and method for locating objects in a parking lot with high accuracy Active CN113822930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010564075.8A CN113822930B (en) 2020-06-19 2020-06-19 System and method for locating objects in a parking lot with high accuracy

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010564075.8A CN113822930B (en) 2020-06-19 2020-06-19 System and method for locating objects in a parking lot with high accuracy

Publications (2)

Publication Number Publication Date
CN113822930A CN113822930A (en) 2021-12-21
CN113822930B (en) 2024-02-09

Family

ID=78911960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010564075.8A Active CN113822930B (en) 2020-06-19 2020-06-19 System and method for locating objects in a parking lot with high accuracy

Country Status (1)

Country Link
CN (1) CN113822930B (en)


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2107503A1 (en) * 2008-03-31 2009-10-07 Harman Becker Automotive Systems GmbH Method and device for generating a real time environment model for vehicles
US20160110999A1 (en) * 2014-10-15 2016-04-21 Xerox Corporation Methods and systems for parking monitoring with vehicle identification
US10606274B2 (en) * 2017-10-30 2020-03-31 Nio Usa, Inc. Visual place recognition based self-localization for autonomous vehicles

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101064065A (en) * 2007-03-29 2007-10-31 汤一平 Parking guidance system based on computer vision
CN101409019A (en) * 2007-10-11 2009-04-15 罗伯特·博世有限公司 Spatially resolved driver assistance system
CN101915991A (en) * 2009-04-02 2010-12-15 通用汽车环球科技运作公司 Rear parking assist on a full rear-window head-up display
CN101976512A (en) * 2010-10-11 2011-02-16 上海交通大学 System and method for automatically reporting vehicle position in an underground parking garage
CN110119138A (en) * 2018-02-07 2019-08-13 百度(美国)有限责任公司 Self-localization method, system, and machine-readable medium for autonomous vehicles
KR20190136238A (en) * 2018-05-30 2019-12-10 에이치디씨아이콘트롤스 주식회사 A system, method, and computer-readable medium for tracking the parking area in parking lots divided into more than one area
CN109739243A (en) * 2019-01-30 2019-05-10 东软睿驰汽车技术(沈阳)有限公司 Vehicle positioning method, automatic driving control method, and related system
CN110095061A (en) * 2019-03-31 2019-08-06 唐山百川智能机器股份有限公司 Vehicle shape and position detection system and method based on profile scanning
CN110285793A (en) * 2019-07-08 2019-09-27 中原工学院 Intelligent vehicle trajectory measurement method based on a binocular stereo vision system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on wireless ranging based on roadside equipment and integrated vehicle positioning algorithms; 刘建圻; China Doctoral Dissertations Full-text Database, Engineering Science & Technology II, No. 8; pp. C034-47 *
Design of reverse vehicle retrieval for intelligent parking lots and implementation of a management system; 褚鸿锐; China Master's Theses Full-text Database, Information Science & Technology, No. 1; pp. I138-1234 *

Also Published As

Publication number Publication date
CN113822930A (en) 2021-12-21

Similar Documents

Publication Publication Date Title
US10859682B2 (en) Telematics using a light ranging system
US11754721B2 (en) Visualization and semantic monitoring using lidar data
JP6552729B2 (en) System and method for fusing the outputs of sensors having different resolutions
CN110146869B (en) Method and device for determining coordinate system conversion parameters, electronic equipment and storage medium
EP3378033B1 (en) Systems and methods for correcting erroneous depth information
CN110927708B (en) Calibration method, device and equipment of intelligent road side unit
WO2019179417A1 (en) Data fusion method and related device
US8280107B2 (en) Method and apparatus for identification and position determination of planar objects in images
EP2458336B1 (en) Method and system for reporting errors in a geographic database
JP2019526101A (en) System and method for identifying camera posture in a scene
CN108603933B (en) System and method for fusing sensor outputs with different resolutions
CN110570449B (en) Positioning and mapping method based on millimeter wave radar and visual SLAM
KR102006291B1 (en) Method for estimating pose of moving object of electronic apparatus
US20180137607A1 (en) Processing apparatus, imaging apparatus and automatic control system
KR101880185B1 (en) Electronic apparatus for estimating pose of moving object and method thereof
US10928930B2 (en) Transparent display device and control method using the same
CN112105892A (en) Identifying map features using motion data and bin data
US11043001B2 (en) High precision object location in a parking lot
Fremont et al. Circular targets for 3d alignment of video and lidar sensors
CN111833443A (en) Landmark position reconstruction in autonomous machine applications
CN113822930B (en) System and method for locating objects in a parking lot with high accuracy
CN111238490A (en) Visual positioning method and device and electronic equipment
CN110243357A (en) Unmanned aerial vehicle localization method, device, unmanned aerial vehicle, and storage medium
CN105763859A (en) Method and system for improving aerial survey accuracy of unmanned aerial vehicle and unmanned aerial vehicle
JP2023125005A (en) Calibration processing device, calibration processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant