CN113299104A - Augmented reality reverse vehicle searching system and method - Google Patents

Augmented reality reverse vehicle searching system and method

Info

Publication number
CN113299104A
Authority
CN
China
Prior art keywords
module
parking lot
vehicle
information
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110423187.6A
Other languages
Chinese (zh)
Other versions
CN113299104B (en)
Inventor
李波
谭庆平
彭飞
王畅
李向涛
戴芹文
周蓉
赵文燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Hailong International Intelligent Technology Co ltd
Original Assignee
Hunan Hailong International Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Hailong International Intelligent Technology Co ltd filed Critical Hunan Hailong International Intelligent Technology Co ltd
Priority to CN202110423187.6A priority Critical patent/CN113299104B/en
Publication of CN113299104A publication Critical patent/CN113299104A/en
Application granted granted Critical
Publication of CN113299104B publication Critical patent/CN113299104B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/123Traffic control systems for road vehicles indicating the position of vehicles, e.g. scheduled vehicles; Managing passenger vehicles circulating according to a fixed timetable, e.g. buses, trains, trams
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/586Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of parking space
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/123Traffic control systems for road vehicles indicating the position of vehicles, e.g. scheduled vehicles; Managing passenger vehicles circulating according to a fixed timetable, e.g. buses, trains, trams
    • G08G1/127Traffic control systems for road vehicles indicating the position of vehicles, e.g. scheduled vehicles; Managing passenger vehicles circulating according to a fixed timetable, e.g. buses, trains, trams to a central station ; Indicators in a central station
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/14Traffic control systems for road vehicles indicating individual free spaces in parking areas
    • G08G1/145Traffic control systems for road vehicles indicating individual free spaces in parking areas where the indication depends on the parking areas
    • G08G1/148Management of a network of parking areas

Abstract

The invention belongs to the technical field of vehicle searching and discloses an augmented reality reverse vehicle searching system and method. The augmented reality reverse vehicle searching system comprises: a parking lot information acquisition module, a parking lot model construction module, a vehicle positioning module, a vehicle owner dynamic acquisition module, a central control module, an information feedback module, a cloud identification module, a position and direction transformation module, a three-dimensional imaging module and an AR management module. According to the invention, the vehicle positioning module acquires the position of the vehicle and determines the approximate area where it is located, after which the cloud identification module and the position and direction transformation module identify the vehicle. Dynamic feedback of the vehicle position and the vehicle owner position within the parking lot is thereby achieved: as the owner searches for the vehicle with the system, position and direction are continuously updated, and a three-dimensional registration technique provides positioning, real-time tracking and three-dimensional imaging, solving the problem that the positioning information acquired by existing vehicle searching systems is inaccurate owing to signal interference.

Description

Augmented reality reverse vehicle searching system and method
Technical Field
The invention belongs to the technical field of vehicle searching, and particularly relates to an augmented reality reverse vehicle searching system and method.
Background
At present, with the rapid development of the construction and transportation industries in China, large underground parking lots have been built to meet parking demand, which in turn has made it difficult for drivers to find their vehicles again. The market has therefore intensified research into low-cost, easily deployed reverse vehicle searching systems, for example augmented reality technology that superimposes a virtual image onto the real environment and guides the owner to the exact location of the vehicle. The present application mainly analyses common reverse vehicle searching systems and their defects in large parking lots, explains the characteristics of applying AR technology in large underground parking lots, and discusses the design and practice of AR technology for large underground parking lots, so as to improve the vehicle searching efficiency of vehicle owners.
Because an underground parking lot has a complex structure and a large area, an owner often cannot determine the exact position of the car after parking: parking is easy, but finding the car again is hard. In particular, since vehicles differ in shape and colour, parking positions cannot be identified quickly, a great deal of time is spent searching, and the throughput of the parking lot is reduced. Some parking lots have deployed reverse vehicle searching systems in response, but each has its own drawbacks and performs poorly in practice. Augmented reality technology, by contrast, can place virtual information in the real environment and allows the parking position to be navigated directly on a mobile phone, offering both realism and convenience. However, existing reverse vehicle searching systems are not combined with augmented reality technology, determine the vehicle position with poor accuracy, and provide poor navigation. A new augmented reality reverse vehicle searching system and method are therefore needed.
Through the above analysis, the problems and defects of the prior art are as follows: the existing reverse vehicle searching system is not combined with an augmented reality technology, the accuracy of determining the position of a vehicle is poor, and the vehicle navigation effect is poor.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides an augmented reality reverse vehicle searching system and method.
The invention is realized in such a way that an augmented reality reverse vehicle searching system comprises:
the parking lot information acquisition module is connected with the central control module and is used for acquiring parking lot information through a parking lot information acquisition program to obtain the parking lot information, including:
determining a database where the parking lot information is located; the database comprises one or more data elements;
acquiring information of a parking lot corresponding to the data element based on an extraction path of the information corresponding to the data element;
based on the parking lot name, associating the information of the parking lot according to the corresponding parking lot name;
obtaining corresponding structured data based on the associated information; converting the structured data based on the corresponding relation between the data elements and the information of the parking lot to obtain standard data corresponding to the data elements;
based on the parking lot name, respectively storing each standard data corresponding to the same parking lot name and each data element corresponding to each standard data in an associated manner;
the parking lot model building module is connected with the central control module and used for building the parking lot three-dimensional model through a parking lot model building program to obtain the parking lot three-dimensional model, and the parking lot three-dimensional model building module comprises:
acquiring a plurality of two-dimensional images of a parking lot, and extracting image features of the two-dimensional images by using a two-dimensional CNN;
splicing the three-dimensional coordinates of the original mesh model and the image features into vertex feature vectors of a graph structure;
carrying out convolution deformation on the vertex feature vector of the graph structure by using GCN to obtain a new vertex and a corresponding three-dimensional coordinate;
obtaining an image three-dimensional model according to the three-dimensional coordinates of the new vertex;
the vehicle positioning module is connected with the central control module and used for determining vehicle position information through a positioning device arranged in the vehicle to obtain the vehicle position information;
the vehicle owner dynamic acquisition module is connected with the central control module and is used for acquiring vehicle owner dynamic information through a vehicle owner dynamic acquisition program to obtain dynamic information of the position of a vehicle owner;
the central control module is connected with the parking lot information acquisition module, the parking lot model construction module, the vehicle positioning module and the vehicle owner dynamic acquisition module and is used for controlling the operation of each connection module through a main control computer so as to ensure the normal operation of each module;
wherein controlling the operation of each connection module through the main control computer includes:
the method comprises the steps of obtaining an error signal by making a difference between an input signal and an output signal of a controlled object;
judging whether the error signal is larger than a preset error threshold value or not; wherein the preset error threshold is:
e_f = k_1 · r(t);
wherein e_f is the preset error threshold; k_1 is a preset error threshold coefficient, and k_1 ∈ [0, 0.2]; and r(t) is the input signal;
If so, calculating a first PID control quantity by adopting a first PID control algorithm with integral regulation so as to serve as a total control quantity;
if not, calculating a second PID control quantity by adopting a second PID control algorithm for canceling integral regulation, calculating a compensation control quantity by adopting a preset compensation algorithm, and taking the sum of the second PID control quantity and the compensation control quantity as the total control quantity;
the calculating the compensation control quantity by adopting the preset compensation algorithm comprises the following steps:
calculating the compensation control quantity according to u_b = k_2 · u_0; wherein u_b is the compensation control quantity, k_2 is a preset compensation coefficient, and u_0 is the preset basic compensation amount corresponding to the controlled object;
outputting the total control quantity to the controlled object so as to adjust the output signal of the controlled object;
the transfer function of the controlled object is as follows:
G(s) = b_0/(a·s + b);
wherein b_0 is the numerator constant term, b is the denominator constant term, and a is the denominator first-order term coefficient; the preset basic compensation amount is u_0 = b/b_0;
The information feedback module is connected with the central control module and used for feeding back the dynamic information of the position of the vehicle owner through an information feedback program;
the cloud identification module is connected with the central control module and used for acquiring the area where the vehicle is located on the basis of the acquired vehicle position information through a cloud identification program and performing cloud identification on the position of the vehicle in the area where the vehicle is located to obtain vehicle identification information;
the position and direction conversion module is connected with the central control module and is used for converting the cloud identification position and the cloud identification direction through a position conversion program;
the three-dimensional imaging module is connected with the central control module and is used for carrying out three-dimensional imaging on the position of the vehicle in the parking lot three-dimensional model according to the acquired vehicle identification information through a three-dimensional imaging program;
and the AR management module is connected with the central control module and is used for carrying out AR management through the Web-based AR management background.
Further, in the parking lot information collection module, the information of the parking lot includes a name of the parking lot, position information of the parking lot, and a plurality of images of the parking lot.
Further, in the parking lot information acquisition module, the associating the information of the parking lot according to the corresponding name of the parking lot includes:
analyzing the parking lot information to obtain attribute field information;
combining the attribute field information and the parking lot name information to generate a source data set;
selecting fields needing to be correlated in the attribute field information to generate a correlated data set;
generating a multi-level data association model, which comprises a matching field set and a condition field; the matching field set comprises at least one matching field, each matching field is composed of a writing field and a reading field, the writing field corresponds to a field in the source data set, the reading field corresponds to a field in the associated data set, the condition field comprises a key value and a value attribute, the key value is a field in the associated data set, and the value attribute is a field in the source data set;
and associating the information of the parking lot according to the corresponding name of the parking lot according to the generated multi-level data association model.
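For illustration only, the following Python sketch shows one way the multi-level data association model described above could be represented and applied; the field names ("lot_name", "address", "image_id") and the in-memory data sets are assumptions made for this example, not identifiers taken from the invention.

    # Minimal sketch (not the patented implementation) of associating parking lot
    # records by name through a multi-level data association model.
    source_data = [  # parsed attribute fields combined with the parking lot name
        {"lot_name": "Lot A", "address": "1 Main St"},
        {"lot_name": "Lot B", "address": "2 High St"},
    ]
    associated_data = [  # fields selected for association
        {"lot_name": "Lot A", "image_id": "img_001"},
        {"lot_name": "Lot B", "image_id": "img_002"},
    ]
    association_model = {
        # each matching field pairs a write field (source set) with a read field (associated set)
        "matching_fields": [{"write": "lot_name", "read": "lot_name"}],
        # the condition field maps a key value (associated set) to a value attribute (source set)
        "condition_field": {"key": "image_id", "value": "address"},
    }

    def associate(source, associated, model):
        """Join records whose matching fields agree, keyed by parking lot name."""
        result = {}
        for src in source:
            for assoc in associated:
                if all(src[m["write"]] == assoc[m["read"]]
                       for m in model["matching_fields"]):
                    result[src["lot_name"]] = {**src, **assoc}
        return result

    print(associate(source_data, associated_data, association_model))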
Further, in the central control module, the first PID control algorithm and the second PID control algorithm are both parameter self-tuning PID control algorithms.
Further, in the parking lot model building module, the two-dimensional CNN includes N convolution modules connected in sequence, each convolution module includes a plurality of convolution layers connected in sequence, and each convolution module outputs an image feature matrix of a specific size.
Further, in the parking lot model building module, the step of splicing the three-dimensional coordinates of the original mesh model and the image features into vertex feature vectors of a graph structure includes:
carrying out square operation on each element in each image feature matrix respectively, assigning the obtained square operation value to the position of an original element to form a new image feature matrix, wherein the size of the new image feature matrix is [ m, m, k ], m is the size of the new image feature matrix, and k is the number of channels of the image features;
projecting the three-dimensional coordinates (x, y, z) of the vertices into two-dimensional coordinates (x, y);
splicing each new image feature matrix of size [m, m, k] with the two-dimensional coordinates (x, y) of the vertices to obtain a pre-splicing matrix of size [M, k]; wherein M represents the number of vertices of the mesh model, and k is the number of channels of image features in the new image feature matrix;
and splicing the three-dimensional coordinates (x, y, z) of the mesh model and the N pre-splicing matrices along the matrix-column dimension to form a vertex feature vector of size [M, K], wherein K is the sum of the channel numbers of the N new image feature matrices and the coordinate dimension of the vertices.
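A compact NumPy sketch of this stitching is given below; the array shapes are those stated in the text, while the clipping of projected coordinates to the feature-map bounds and the function signature are assumptions added for this example.

    import numpy as np

    # Sketch of the vertex-feature stitching step. feature_maps: list of N arrays of
    # shape (m, m, k_n); verts_3d: (M, 3) mesh vertices; verts_2d: (M, 2) projected
    # pixel coordinates (see the projection step described next).
    def stitch_vertex_features(verts_3d, verts_2d, feature_maps):
        pre_splice = []
        for fmap in feature_maps:
            fmap = np.square(fmap)                 # square each element in place of the original value
            m = fmap.shape[0]
            x = np.clip(verts_2d[:, 0].astype(int), 0, m - 1)
            y = np.clip(verts_2d[:, 1].astype(int), 0, m - 1)
            pre_splice.append(fmap[x, y, :])       # (M, k_n) pre-splicing matrix
        # concatenate the 3-D coordinates with all pre-splicing matrices along the columns:
        # the result has size (M, K) with K = 3 + sum of the channel counts
        return np.concatenate([verts_3d] + pre_splice, axis=1)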
Further, the projecting the three-dimensional coordinates (x, y, z) of the vertex into two-dimensional coordinates (x, y) comprises:
calculating the height h_i and width w_i of the volume occupied by each vertex of the mesh model:
h_i = L × [-y/(-z)] + H;
w_i = L × [x/(-z)] + H;
obtaining the two-dimensional coordinates of each vertex on the two-dimensional plane from the height h_i and width w_i of the volume it occupies:
x_i = h_i/(224/56);
y_i = w_i/(224/56);
where 224 is the length and width of the input image size; 56 is a set value, which is decreased if the feature matrix requires more channels and increased if it requires fewer; i is the index of the vertex; and L and H are, respectively, the length and height of the volume occupied by the initial mesh model.
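The projection above can be written directly as the following NumPy sketch; L, H, the 224-pixel input size and the 56-element feature-map size are taken from the description, while the function signature itself is an assumption of this example.

    import numpy as np

    # Sketch of the vertex projection: L and H are the length and height of the
    # volume occupied by the initial mesh model; 224/56 is the ratio between the
    # input image size and the set feature-map size.
    def project_vertices(verts_3d, L, H, img_size=224, fmap_size=56):
        x, y, z = verts_3d[:, 0], verts_3d[:, 1], verts_3d[:, 2]
        h = L * (-y / (-z)) + H          # h_i = L × [-y/(-z)] + H
        w = L * (x / (-z)) + H           # w_i = L × [ x/(-z)] + H
        scale = img_size / fmap_size     # 224 / 56 = 4
        return np.stack([h / scale, w / scale], axis=1)   # (x_i, y_i) on the feature map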
Further, the splicing of the three-dimensional coordinates (x, y, z) of the mesh model with the N pre-splicing matrices along the matrix-column dimension includes:
taking out the elements of all channels with the positions (x, y) from a new image feature matrix with the size [ m, m, k ] according to the two-dimensional coordinates (x, y) of the vertex;
and respectively converting the elements of all the channels into pre-splicing matrixes with specific sizes through reshape functions.
It is another object of the present invention to provide a computer program product stored on a computer readable medium, comprising a computer readable program which, when executed on an electronic device, provides a user input interface for applying the augmented reality reverse vehicle searching system.
It is another object of the present invention to provide a computer-readable storage medium storing instructions that, when executed on a computer, cause the computer to apply the augmented reality reverse vehicle seeking system.
By combining all the above technical schemes, the invention has the following advantages and positive effects: in the augmented reality reverse vehicle searching system provided by the invention, the vehicle positioning module acquires the position of the vehicle and determines the approximate area where it is located, and the cloud identification module and the position and direction conversion module then identify the vehicle. Dynamic feedback of the vehicle position and the vehicle owner position within the parking lot is thereby achieved: as the owner searches for the vehicle with the system, position and direction are continuously updated, and the three-dimensional registration technique provides positioning, real-time tracking and three-dimensional imaging, solving the problem that the positioning information acquired by existing vehicle searching systems is inaccurate owing to signal interference.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the embodiments of the present application will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained from the drawings without creative efforts.
Fig. 1 is a block diagram of a reverse car finding system for augmented reality according to an embodiment of the present invention;
in the figure: 1. a parking lot information acquisition module; 2. a parking lot model building module; 3. a vehicle positioning module; 4. a vehicle owner dynamic acquisition module; 5. a central control module; 6. an information feedback module; 7. a cloud identification module; 8. a position direction changing module; 9. a stereoscopic imaging module; 10. and an AR management module.
Fig. 2 is a flowchart of a reverse car finding method for augmented reality according to an embodiment of the present invention.
Fig. 3 is a flowchart of a method for acquiring parking lot information by a parking lot information acquisition module using a parking lot information acquisition program to obtain parking lot information according to an embodiment of the present invention.
Fig. 4 is a flowchart of a method for constructing a parking lot three-dimensional model by using a parking lot model construction program through a parking lot model construction module to obtain the parking lot three-dimensional model according to the embodiment of the present invention.
Fig. 5 is a flowchart of a method for controlling operations of each connection module by a central control module and using a master controller to ensure normal operations of each connection module according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In view of the problems in the prior art, the present invention provides an augmented reality reverse car-searching system and method, which will be described in detail below with reference to the accompanying drawings.
As shown in fig. 1, the augmented reality reverse car finding system provided in the embodiment of the present invention includes:
the parking lot information acquisition module 1 is connected with the central control module 5 and is used for acquiring parking lot information through a parking lot information acquisition program to obtain the parking lot information; the parking lot information comprises a parking lot name, position information of the parking lot and a plurality of images of the parking lot;
the parking lot model building module 2 is connected with the central control module 5 and used for building a parking lot three-dimensional model through a parking lot model building program to obtain the parking lot three-dimensional model;
the vehicle positioning module 3 is connected with the central control module 5 and used for determining vehicle position information through a positioning device arranged in the vehicle to obtain the vehicle position information;
the vehicle owner dynamic acquisition module 4 is connected with the central control module 5 and is used for acquiring vehicle owner dynamic information through a vehicle owner dynamic acquisition program to obtain dynamic information of the position of a vehicle owner;
the central control module 5 is connected with the parking lot information acquisition module 1, the parking lot model construction module 2, the vehicle positioning module 3, the vehicle owner dynamic acquisition module 4, the information feedback module 6, the cloud identification module 7, the position and direction conversion module 8, the stereo imaging module 9 and the AR management module 10, and is used for controlling the operation of each connection module through a main control computer and ensuring the normal operation of each module;
the information feedback module 6 is connected with the central control module 5 and used for feeding back the dynamic information of the position of the vehicle owner through an information feedback program;
the cloud identification module 7 is connected with the central control module 5 and used for acquiring the area where the vehicle is located on the basis of the acquired vehicle position information through a cloud identification program and performing cloud identification on the position of the vehicle in the area where the vehicle is located to obtain vehicle identification information;
the position and direction conversion module 8 is connected with the central control module 5 and is used for converting the cloud identification position and the cloud identification direction through a position conversion program;
the stereo imaging module 9 is connected with the central control module 5 and is used for carrying out stereo imaging on the vehicle position in the parking lot three-dimensional model according to the acquired vehicle identification information through a stereo imaging program;
and the AR management module 10 is connected with the central control module 5 and is used for performing AR management through an AR management background based on Web.
As shown in fig. 2, the augmented reality reverse car finding method provided by the embodiment of the present invention includes the following steps:
s101, collecting parking lot information by using a parking lot information collecting module and a parking lot information collecting program to obtain parking lot information;
s102, building a parking lot three-dimensional model by using a parking lot model building program through a parking lot model building module to obtain the parking lot three-dimensional model;
s103, determining vehicle position information by using a positioning device arranged in the vehicle through a vehicle positioning module to obtain the vehicle position information;
s104, acquiring dynamic information of the vehicle owner by using a vehicle owner dynamic acquisition program through a vehicle owner dynamic acquisition module to obtain the dynamic information of the position of the vehicle owner;
s105, controlling the operation of each connecting module by using a main control computer through a central control module to ensure the normal operation of each module; feeding back the dynamic information of the position of the vehicle owner by using an information feedback program through an information feedback module;
s106, acquiring the area where the vehicle is located on the basis of the acquired vehicle position information by using a cloud identification program through a cloud identification module; carrying out cloud identification on the position of the vehicle in the area where the vehicle is located to obtain vehicle identification information;
s107, the cloud identification position and the cloud identification direction are converted by a position direction conversion module through a position conversion program;
s108, stereo imaging of the vehicle position is carried out in the parking lot three-dimensional model by the stereo imaging module according to the acquired vehicle identification information by utilizing a stereo imaging program; and performing AR management by using an AR management background based on Web through an AR management module.
In step S101 provided by the embodiment of the present invention, the parking lot information includes a name of the parking lot, position information of the parking lot, and a plurality of images of the parking lot.
The invention is further described with reference to specific examples.
Example 1
As shown in fig. 1 and fig. 3, the method for acquiring parking lot information by using a parking lot information acquisition module and a parking lot information acquisition program according to the embodiment of the present invention includes:
s201, determining a database where the parking lot information is located; wherein the database comprises one or more data elements;
s202, acquiring information of the parking lot corresponding to the data element based on the extraction path of the information corresponding to the data element; the information of the parking lot comprises a name of the parking lot, position information of the parking lot and a plurality of images of the parking lot;
s203, associating the information of the parking lot according to the corresponding name of the parking lot based on the name of the parking lot;
s204, obtaining corresponding structured data based on the associated information; converting the structured data based on the corresponding relation between the data elements and the information of the parking lot to obtain standard data corresponding to the data elements;
and S205, based on the parking lot name, respectively associating and storing each standard data corresponding to the same parking lot name with each data element corresponding to each standard data.
In step S203 provided in the embodiment of the present invention, the associating the information of the parking lot according to the corresponding name of the parking lot includes:
analyzing the parking lot information to obtain attribute field information;
combining the attribute field information and the parking lot name information to generate a source data set;
selecting fields needing to be correlated in the attribute field information to generate a correlated data set;
generating a multi-level data association model, which comprises a matching field set and a condition field; the matching field set comprises at least one matching field, each matching field is composed of a writing field and a reading field, the writing field corresponds to a field in the source data set, the reading field corresponds to a field in the associated data set, the condition field comprises a key value and a value attribute, the key value is a field in the associated data set, and the value attribute is a field in the source data set;
and associating the information of the parking lot according to the corresponding name of the parking lot according to the generated multi-level data association model.
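To round out Example 1, the short Python sketch below illustrates steps S204-S205: converting an associated parking lot record into standard data through a data-element correspondence and storing it keyed by the parking lot name. The element names ("lot_name", "lot_position", "lot_images") are invented for illustration and are not part of the invention.

    # Assumed data-element mapping: data element -> field of the associated parking lot record
    ELEMENT_MAP = {
        "lot_name": "name",
        "lot_position": "position",
        "lot_images": "images",
    }

    store = {}  # parking lot name -> {data element: standard data}, stored in association

    def standardise_and_store(record):
        standard = {element: record.get(field) for element, field in ELEMENT_MAP.items()}
        store.setdefault(record["name"], {}).update(standard)  # keyed by the parking lot name
        return standard

    standardise_and_store({"name": "Lot A", "position": (28.2, 112.9), "images": ["img_001.jpg"]})
    print(store)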
Example 2
As shown in fig. 1 and fig. 4, in a preferred embodiment of the augmented reality reverse vehicle searching method provided by the embodiment of the present invention, the method for constructing the parking lot three-dimensional model by the parking lot model construction module using the parking lot model construction program to obtain the parking lot three-dimensional model includes:
s301, acquiring a plurality of two-dimensional images of the parking lot, and extracting image features of the two-dimensional images by using a two-dimensional CNN;
s302, splicing the three-dimensional coordinates of the original mesh model and the image features into vertex feature vectors of a graph structure;
s303, carrying out convolution deformation on the vertex feature vector of the graph structure by using GCN to obtain a new vertex and a corresponding three-dimensional coordinate;
and S304, obtaining an image three-dimensional model according to the three-dimensional coordinates of the new vertex.
In step S301 provided in the embodiment of the present invention, the two-dimensional CNN includes N convolution modules connected in sequence, each convolution module includes a plurality of convolution layers connected in sequence, and each convolution module outputs an image feature matrix of a specific size.
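A hedged PyTorch sketch of such an N-stage feature extractor follows; the channel counts, kernel sizes, two layers per module and the max-pooling downsampling are assumptions chosen for illustration, since the description does not fix them.

    import torch
    import torch.nn as nn

    class ConvModule(nn.Module):
        """One convolution module: several convolution layers followed by downsampling."""
        def __init__(self, c_in, c_out, layers=2):
            super().__init__()
            blocks = []
            for i in range(layers):
                blocks += [nn.Conv2d(c_in if i == 0 else c_out, c_out, 3, padding=1),
                           nn.ReLU(inplace=True)]
            blocks.append(nn.MaxPool2d(2))   # halve the spatial size per module
            self.body = nn.Sequential(*blocks)

        def forward(self, x):
            return self.body(x)

    class FeatureExtractor(nn.Module):
        """N convolution modules in sequence; each module's output feature matrix is kept."""
        def __init__(self, channels=(3, 32, 64, 128)):
            super().__init__()
            self.stages = nn.ModuleList(
                [ConvModule(channels[i], channels[i + 1]) for i in range(len(channels) - 1)])

        def forward(self, x):
            feats = []
            for stage in self.stages:
                x = stage(x)
                feats.append(x)              # one image feature matrix per module
            return feats

    feats = FeatureExtractor()(torch.randn(1, 3, 224, 224))
    print([tuple(f.shape) for f in feats])   # spatial sizes 112, 56, 28 with these assumptions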
In step S302 provided in the embodiment of the present invention, the stitching the three-dimensional coordinates of the original mesh model and the image features into vertex feature vectors of a graph structure includes:
carrying out square operation on each element in each image feature matrix respectively, assigning the obtained square operation value to the position of an original element to form a new image feature matrix, wherein the size of the new image feature matrix is [ m, m, k ], m is the size of the new image feature matrix, and k is the number of channels of the image features;
projecting the three-dimensional coordinates (x, y, z) of the vertices into two-dimensional coordinates (x, y);
splicing each new image feature matrix of size [m, m, k] with the two-dimensional coordinates (x, y) of the vertices to obtain a pre-splicing matrix of size [M, k]; wherein M represents the number of vertices of the mesh model, and k is the number of channels of image features in the new image feature matrix;
and splicing the three-dimensional coordinates (x, y, z) of the mesh model and the N pre-splicing matrices along the matrix-column dimension to form a vertex feature vector of size [M, K], wherein K is the sum of the channel numbers of the N new image feature matrices and the coordinate dimension of the vertices.
In step S302 provided in the embodiment of the present invention, the projecting the three-dimensional coordinates (x, y, z) of the vertex into two-dimensional coordinates (x, y) includes:
calculating the height h_i and width w_i of the volume occupied by each vertex of the mesh model:
h_i = L × [-y/(-z)] + H;
w_i = L × [x/(-z)] + H;
obtaining the two-dimensional coordinates of each vertex on the two-dimensional plane from the height h_i and width w_i of the volume it occupies:
x_i = h_i/(224/56);
y_i = w_i/(224/56);
where 224 is the length and width of the input image size; 56 is a set value, which is decreased if the feature matrix requires more channels and increased if it requires fewer; i is the index of the vertex; and L and H are, respectively, the length and height of the volume occupied by the initial mesh model.
In step S302 provided in the embodiment of the present invention, the splicing the three-dimensional coordinates (x, y, z) of the grid model and the N pre-splice matrices in the dimension of the matrix column includes:
taking out the elements of all channels with the positions (x, y) from a new image feature matrix with the size [ m, m, k ] according to the two-dimensional coordinates (x, y) of the vertex;
and respectively converting the elements of all the channels into pre-splicing matrixes with specific sizes through reshape functions.
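Steps S303-S304 then deform the initial mesh with a graph convolutional network and read off the new three-dimensional coordinates. The NumPy sketch below shows a single, untrained graph-convolution step mapping vertex feature vectors to coordinate offsets; the adjacency matrix, the random weights and the tanh activation are assumptions chosen for illustration, not the trained GCN of the invention.

    import numpy as np

    # One illustrative graph-convolution deformation step: neighbourhood-averaged
    # vertex features are mapped to 3-D offsets, producing the new vertices.
    def gcn_deform(verts_3d, vertex_features, adjacency, rng=np.random.default_rng(0)):
        A = adjacency + np.eye(adjacency.shape[0])        # add self-loops
        D_inv = np.diag(1.0 / A.sum(axis=1))              # degree normalisation
        W = rng.normal(scale=0.01, size=(vertex_features.shape[1], 3))  # features -> xyz offsets
        offsets = np.tanh(D_inv @ A @ vertex_features @ W)
        return verts_3d + offsets                         # new vertices and their 3-D coordinates

    # toy usage: four vertices of a square connected in a ring, 8-dimensional features
    verts = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 1.0], [0.0, 1.0, 1.0]])
    adj = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
    print(gcn_deform(verts, np.ones((4, 8)), adj))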
Example 3
As shown in fig. 1 and fig. 5, in a preferred embodiment of the augmented reality reverse vehicle searching method provided by the embodiment of the present invention, the method for controlling the operation of each connection module by the central control module using the main control computer to ensure the normal operation of each module includes the following steps:
s401, obtaining an error signal by making a difference between an input signal and an output signal of a controlled object;
s402, judging whether the error signal is larger than a preset error threshold value;
and S403, outputting the total control quantity to the controlled object so as to adjust the output signal of the controlled object.
In step S402 provided in the embodiment of the present invention, the determining whether the error signal is greater than a preset error threshold includes:
if so, calculating a first PID control quantity by adopting a first PID control algorithm with integral regulation so as to serve as a total control quantity;
if not, calculating a second PID control quantity by adopting a second PID control algorithm for canceling integral regulation, calculating a compensation control quantity by adopting a preset compensation algorithm, and taking the sum of the second PID control quantity and the compensation control quantity as the total control quantity;
the preset error threshold provided by the embodiment of the invention is as follows:
e_f = k_1 · r(t);
wherein e_f is the preset error threshold; k_1 is a preset error threshold coefficient, and k_1 ∈ [0, 0.2]; and r(t) is the input signal.
The method for calculating the compensation control quantity by adopting the preset compensation algorithm provided by the embodiment of the invention comprises the following steps:
calculating the compensation control quantity according to u_b = k_2 · u_0; wherein u_b is the compensation control quantity, k_2 is a preset compensation coefficient, and u_0 is the preset basic compensation amount corresponding to the controlled object.
In step S403 provided in the embodiment of the present invention, a transfer function of the controlled object is:
G(s) = b_0/(a·s + b);
wherein b_0 is the numerator constant term, b is the denominator constant term, and a is the denominator first-order term coefficient; the preset basic compensation amount is u_0 = b/b_0.
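For illustration, the Python sketch below implements the threshold-switched control law of steps S401-S403 and exercises it against a forward-Euler discretisation of the first-order plant b_0/(a·s + b); the gains, the sampling step and the simulation horizon are assumptions chosen for this example and are not values given by the invention.

    # Threshold-switched PID sketch: integral action above the error threshold
    # e_f = k1*r(t); below it, the integral term is cancelled and the preset
    # compensation u_b = k2*u0 (with u0 = b/b0) is added instead.
    def simulate(kp=2.0, ki=0.5, kd=0.1, k1=0.1, k2=1.0,
                 b0=1.0, b=2.0, a=1.0, r=1.0, dt=0.01, steps=500):
        u0 = b / b0                       # preset basic compensation amount u0 = b/b0
        y, integral, prev_e = 0.0, 0.0, 0.0
        for _ in range(steps):
            e = r - y                     # error = input signal minus output signal
            if e > k1 * r:                # error above the preset threshold
                integral += e * dt
                u = kp * e + ki * integral + kd * (e - prev_e) / dt   # first PID, with integral
            else:
                u = kp * e + kd * (e - prev_e) / dt + k2 * u0         # second PID plus compensation
            prev_e = e
            y += dt * (b0 * u - b * y) / a   # forward-Euler step of a*dy/dt + b*y = b0*u
        return y

    print(simulate())   # settles near the set-point r = 1.0 under these assumptions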
In the description of the present invention, "a plurality" means two or more unless otherwise specified; the terms "upper", "lower", "left", "right", "inner", "outer", "front", "rear", "head", "tail", and the like, indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, are only for convenience in describing and simplifying the description, and do not indicate or imply that the device or element referred to must have a particular orientation, be constructed in a particular orientation, and be operated, and thus, should not be construed as limiting the invention. Furthermore, the terms "first," "second," "third," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented wholly or partially in the form of a computer program product, the computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)), among others.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention, and the scope of the present invention is not limited thereto, and any modification, equivalent replacement, and improvement made by those skilled in the art within the technical scope of the present invention disclosed herein, which is within the spirit and principle of the present invention, should be covered by the present invention.

Claims (10)

1. An augmented reality reverse vehicle searching system, comprising:
the parking lot information acquisition module is connected with the central control module and is used for acquiring parking lot information through a parking lot information acquisition program to obtain the parking lot information, including:
determining a database where the parking lot information is located; the database comprises one or more data elements;
acquiring information of a parking lot corresponding to the data element based on an extraction path of the information corresponding to the data element;
based on the parking lot name, associating the information of the parking lot according to the corresponding parking lot name;
obtaining corresponding structured data based on the associated information; converting the structured data based on the corresponding relation between the data elements and the information of the parking lot to obtain standard data corresponding to the data elements;
based on the parking lot name, respectively storing each standard data corresponding to the same parking lot name and each data element corresponding to each standard data in an associated manner;
the parking lot model building module is connected with the central control module and used for building the parking lot three-dimensional model through a parking lot model building program to obtain the parking lot three-dimensional model, and the parking lot three-dimensional model building module comprises:
acquiring a plurality of two-dimensional images of a parking lot, and extracting image features of the two-dimensional images by using a two-dimensional CNN;
splicing the three-dimensional coordinates of the original mesh model and the image features into vertex feature vectors of a graph structure;
carrying out convolution deformation on the vertex feature vector of the graph structure by using GCN to obtain a new vertex and a corresponding three-dimensional coordinate;
obtaining an image three-dimensional model according to the three-dimensional coordinates of the new vertex;
the vehicle positioning module is connected with the central control module and used for determining vehicle position information through a positioning device arranged in the vehicle to obtain the vehicle position information;
the vehicle owner dynamic acquisition module is connected with the central control module and is used for acquiring vehicle owner dynamic information through a vehicle owner dynamic acquisition program to obtain dynamic information of the position of a vehicle owner;
the central control module is connected with the parking lot information acquisition module, the parking lot model construction module, the vehicle positioning module and the vehicle owner dynamic acquisition module and is used for controlling the operation of each connection module through a main control computer so as to ensure the normal operation of each module;
wherein controlling the operation of each connection module through the main control computer includes:
the method comprises the steps of obtaining an error signal by making a difference between an input signal and an output signal of a controlled object;
judging whether the error signal is larger than a preset error threshold value or not; wherein the preset error threshold is:
e_f = k_1 · r(t);
wherein e_f is the preset error threshold; k_1 is a preset error threshold coefficient, and k_1 ∈ [0, 0.2]; and r(t) is the input signal;
If so, calculating a first PID control quantity by adopting a first PID control algorithm with integral regulation so as to serve as a total control quantity;
if not, calculating a second PID control quantity by adopting a second PID control algorithm for canceling integral regulation, calculating a compensation control quantity by adopting a preset compensation algorithm, and taking the sum of the second PID control quantity and the compensation control quantity as the total control quantity;
the calculating the compensation control quantity by adopting the preset compensation algorithm comprises the following steps:
calculating the compensation control quantity according to u_b = k_2 · u_0; wherein u_b is the compensation control quantity, k_2 is a preset compensation coefficient, and u_0 is the preset basic compensation amount corresponding to the controlled object;
outputting the total control quantity to the controlled object so as to adjust the output signal of the controlled object;
the transfer function of the controlled object is as follows:
G(s) = b_0/(a·s + b);
wherein b_0 is the numerator constant term, b is the denominator constant term, and a is the denominator first-order term coefficient; the preset basic compensation amount is u_0 = b/b_0;
The information feedback module is connected with the central control module and used for feeding back the dynamic information of the position of the vehicle owner through an information feedback program;
the cloud identification module is connected with the central control module and used for acquiring the area where the vehicle is located on the basis of the acquired vehicle position information through a cloud identification program and performing cloud identification on the position of the vehicle in the area where the vehicle is located to obtain vehicle identification information;
the position and direction conversion module is connected with the central control module and is used for converting the cloud identification position and the cloud identification direction through a position conversion program;
the three-dimensional imaging module is connected with the central control module and is used for carrying out three-dimensional imaging on the position of the vehicle in the parking lot three-dimensional model according to the acquired vehicle identification information through a three-dimensional imaging program;
and the AR management module is connected with the central control module and is used for carrying out AR management through the Web-based AR management background.
2. The augmented reality reverse vehicle searching system according to claim 1, wherein, in the parking lot information acquisition module, the information of the parking lot comprises a name of the parking lot, position information of the parking lot, and a plurality of images of the parking lot.
3. The augmented reality reverse vehicle searching system according to claim 1, wherein the associating the information of the parking lot according to the corresponding name of the parking lot in the parking lot information collecting module comprises:
analyzing the parking lot information to obtain attribute field information;
combining the attribute field information and the parking lot name information to generate a source data set;
selecting fields needing to be correlated in the attribute field information to generate a correlated data set;
generating a multi-level data association model, which comprises a matching field set and a condition field; the matching field set comprises at least one matching field, each matching field is composed of a writing field and a reading field, the writing field corresponds to a field in the source data set, the reading field corresponds to a field in the associated data set, the condition field comprises a key value and a value attribute, the key value is a field in the associated data set, and the value attribute is a field in the source data set;
and associating the information of the parking lot according to the corresponding name of the parking lot according to the generated multi-level data association model.
4. The augmented reality reverse vehicle searching system according to claim 1, wherein, in the central control module, the first PID control algorithm and the second PID control algorithm are both parameter self-tuning PID control algorithms.
5. The augmented reality reverse vehicle searching system according to claim 1, wherein the two-dimensional CNN comprises N convolution modules connected in sequence, each convolution module comprises a plurality of convolution layers connected in sequence, and each convolution module outputs an image feature matrix with a specific size.
6. The augmented reality reverse vehicle searching system according to claim 1, wherein, in the parking lot model building module, the splicing of the three-dimensional coordinates of the original mesh model and the image features into vertex feature vectors of a graph structure comprises:
carrying out square operation on each element in each image feature matrix respectively, assigning the obtained square operation value to the position of an original element to form a new image feature matrix, wherein the size of the new image feature matrix is [ m, m, k ], m is the size of the new image feature matrix, and k is the number of channels of the image features;
projecting the three-dimensional coordinates (x, y, z) of the vertices into two-dimensional coordinates (x, y);
splicing each new image feature matrix of size [m, m, k] with the two-dimensional coordinates (x, y) of the vertices to obtain a pre-splicing matrix of size [M, k]; wherein M represents the number of vertices of the mesh model, and k is the number of channels of image features in the new image feature matrix;
and splicing the three-dimensional coordinates (x, y, z) of the mesh model and the N pre-splicing matrices along the matrix-column dimension to form a vertex feature vector of size [M, K], wherein K is the sum of the channel numbers of the N new image feature matrices and the coordinate dimension of the vertices.
7. The augmented reality reverse vehicle searching system according to claim 6, wherein the projecting of the three-dimensional coordinates (x, y, z) of the vertices into two-dimensional coordinates (x, y) comprises:
calculating the height h_i and width w_i of the volume occupied by each vertex of the mesh model:
h_i = L × [-y/(-z)] + H;
w_i = L × [x/(-z)] + H;
obtaining the two-dimensional coordinates of each vertex on the two-dimensional plane from the height h_i and width w_i of the volume it occupies:
x_i = h_i/(224/56);
y_i = w_i/(224/56);
where 224 is the length and width of the input image size; 56 is a set value, which is decreased if the feature matrix requires more channels and increased if it requires fewer; i is the index of the vertex; and L and H are, respectively, the length and height of the volume occupied by the initial mesh model.
8. The augmented reality reverse vehicle searching system according to claim 6, wherein the splicing of the three-dimensional coordinates (x, y, z) of the mesh model with the N pre-splicing matrices along the matrix-column dimension comprises:
taking out the elements of all channels with the positions (x, y) from a new image feature matrix with the size [ m, m, k ] according to the two-dimensional coordinates (x, y) of the vertex;
and respectively converting the elements of all the channels into pre-splicing matrixes with specific sizes through reshape functions.
9. A computer program product stored on a computer readable medium, comprising a computer readable program which, when executed on an electronic device, provides a user input interface for applying the augmented reality reverse vehicle searching system according to any one of claims 1 to 8.
10. A computer readable storage medium storing instructions which, when executed on a computer, cause the computer to apply the augmented reality reverse vehicle searching system according to any one of claims 1 to 8.
CN202110423187.6A 2021-04-20 2021-04-20 Augmented reality reverse vehicle searching system and method Active CN113299104B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110423187.6A CN113299104B (en) 2021-04-20 2021-04-20 Augmented reality reverse vehicle searching system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110423187.6A CN113299104B (en) 2021-04-20 2021-04-20 Augmented reality reverse vehicle searching system and method

Publications (2)

Publication Number Publication Date
CN113299104A true CN113299104A (en) 2021-08-24
CN113299104B CN113299104B (en) 2022-05-06

Family

ID=77319996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110423187.6A Active CN113299104B (en) 2021-04-20 2021-04-20 Augmented reality reverse vehicle searching system and method

Country Status (1)

Country Link
CN (1) CN113299104B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103310489A (en) * 2013-06-24 2013-09-18 中南大学 Three-dimensional model interactive method based on dynamitic depth hierarchy structure
CN105976636A (en) * 2016-05-31 2016-09-28 上海美迪索科电子科技有限公司 Parking lot vehicle searching system using augmented reality technology and vehicle searching method
CN107065196A (en) * 2017-06-16 2017-08-18 京东方科技集团股份有限公司 A kind of augmented reality display device and augmented reality display methods
WO2018066352A1 (en) * 2016-10-06 2018-04-12 株式会社アドバンスド・データ・コントロールズ Image generation system, program and method, and simulation system, program and method
CN109872556A (en) * 2018-12-27 2019-06-11 福建农林大学 Vehicle system is sought in a kind of parking garage based on augmented reality
CN111489582A (en) * 2020-03-27 2020-08-04 南京翱翔信息物理融合创新研究院有限公司 Indoor vehicle finding guiding system and method based on augmented reality
US10832476B1 (en) * 2018-04-30 2020-11-10 State Farm Mutual Automobile Insurance Company Method and system for remote virtual visualization of physical locations
US10909349B1 (en) * 2019-06-24 2021-02-02 Amazon Technologies, Inc. Generation of synthetic image data using three-dimensional models
US20210090301A1 (en) * 2019-09-24 2021-03-25 Apple Inc. Three-Dimensional Mesh Compression Using a Video Encoder


Also Published As

Publication number Publication date
CN113299104B (en) 2022-05-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant