CN110942485B - Scene perception method and device based on artificial intelligence and electronic equipment


Info

Publication number
CN110942485B
Authority
CN
China
Prior art keywords
scene
sensing
calibration
perception
relative pose
Prior art date
Legal status
Active
Application number
CN201911184312.1A
Other languages
Chinese (zh)
Other versions
CN110942485A (en)
Inventor
祝磊
陈乐
凌永根
迟万超
刘威
张正友
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201911184312.1A
Publication of CN110942485A
Application granted
Publication of CN110942485B
Active legal status
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker
    • G06T2207/30208Marker matrix

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an artificial intelligence based scene perception method and device, electronic equipment and a storage medium. The method comprises the following steps: acquiring sensing data obtained by sensing a calibration scene with sensing equipment, and determining plane equation parameters of the calibration scene according to the sensing data, wherein the sensing equipment comprises first sensing equipment and second sensing equipment, and the sensing data comprises point clouds and/or images; determining a rotation matrix between the first sensing equipment and the second sensing equipment according to the plane equation parameters, and determining a displacement vector according to the rotation matrix; determining a relative pose according to the rotation matrix and the displacement vector; and fusing the perception data of the first perception device and the perception data of the second perception device according to the relative pose, and modeling the corresponding scene according to the fused perception data to obtain a scene model. The method and the device improve the accuracy of the obtained relative pose.

Description

Scene perception method and device based on artificial intelligence and electronic equipment
Technical Field
The present invention relates to artificial intelligence technology, and in particular, to a scene sensing method and apparatus based on artificial intelligence, an electronic device, and a storage medium.
Background
Artificial Intelligence (AI) refers to theories, methods, techniques and application systems that use a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. Computer Vision (CV) technology is an important branch of artificial intelligence that attempts to build artificial intelligence systems capable of obtaining information from images or multidimensional data.
Scene perception is an important application of computer vision technology. When two sensing devices are used for scene perception, the relative pose of the two sensing devices needs to be calibrated. In schemes provided by the related art, the relative pose is usually calibrated with the aid of reflection intensity. For example, to calibrate the relative pose between a laser radar and a camera, corner points in the radar point cloud are first determined with the aid of reflection intensity, and a 3D-2D lattice point constraint is formed with the corner points detected in the camera image, from which the relative pose is obtained by optimization. However, due to the sparsity of the radar point cloud, the constraint can still be satisfied in some incorrect configurations, so the obtained relative pose is not accurate.
Disclosure of Invention
The embodiment of the invention provides a scene sensing method and device based on artificial intelligence, electronic equipment and a storage medium, which can improve the accuracy of relative pose calibration and the accuracy of an obtained scene model.
The technical scheme of the embodiment of the invention is realized as follows:
the embodiment of the invention provides a scene perception method based on artificial intelligence, which comprises the following steps:
acquiring sensing data obtained by sensing a calibration scene with sensing equipment, and determining plane equation parameters of the calibration scene according to the sensing data; wherein the sensing equipment comprises first sensing equipment and second sensing equipment, and the sensing data comprises point clouds and/or images;
determining a rotation matrix between the first sensing equipment and the second sensing equipment according to the plane equation parameters, and determining a displacement vector according to the rotation matrix;
determining a relative pose between the first sensing device and the second sensing device according to the rotation matrix and the displacement vector;
and fusing the perception data of the first perception device and the perception data of the second perception device according to the relative pose, and modeling a corresponding scene according to the fused perception data to obtain a scene model.
The embodiment of the invention provides a scene perception device based on artificial intelligence, which comprises:
the first parameter determination module is used for acquiring sensing data obtained by sensing a calibration scene with sensing equipment and determining plane equation parameters of the calibration scene according to the sensing data; wherein the sensing equipment comprises first sensing equipment and second sensing equipment, and the sensing data comprises point clouds and/or images;
the second parameter determining module is used for determining a rotation matrix between the first sensing equipment and the second sensing equipment according to the plane equation parameters and determining a displacement vector according to the rotation matrix;
a relative pose determination module, configured to determine a relative pose between the first sensing device and the second sensing device according to the rotation matrix and the displacement vector;
and the fusion module is used for fusing the perception data of the first perception device and the perception data of the second perception device according to the relative pose, and modeling a corresponding scene according to the fused perception data to obtain a scene model.
An embodiment of the present invention provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the scene perception method based on artificial intelligence provided by the embodiment of the invention when the executable instructions stored in the memory are executed.
The embodiment of the invention provides a storage medium, which stores executable instructions and is used for causing a processor to execute so as to realize the scene perception method based on artificial intelligence provided by the embodiment of the invention.
The embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, the plane equation parameters corresponding to the first sensing equipment and the second sensing equipment are determined, the rotation matrix and the displacement vector are determined according to the plane equation parameters, and the relative pose between the first sensing equipment and the second sensing equipment is thereby established. Because the relative pose is obtained from plane-based geometric constraints, errors in the calculation process are reduced, and the accuracy of the obtained relative pose and of the scene model built from the fused perception data is improved.
Drawings
FIG. 1A is a schematic diagram illustrating a calibration result of a reflection intensity assisted manner provided by an embodiment of the present invention;
FIG. 1B is a schematic diagram of a calibration result of the reflection intensity assisted mode provided by the embodiment of the invention on the front surface of a calibration board;
FIG. 1C is a schematic view of the calibration result of the reflection intensity assisted mode provided by the embodiment of the invention on the side of the calibration board;
FIG. 2A is a schematic diagram of sensing in a fitted edge mode provided by an embodiment of the present invention;
FIG. 2B is a schematic diagram of an interface for selecting a boundary point of a calibration board according to an embodiment of the present invention;
FIG. 3 is an alternative architecture diagram of an artificial intelligence based scene awareness system according to an embodiment of the present invention;
FIG. 4 is an alternative architecture diagram of an artificial intelligence based scene awareness system incorporating blockchains according to an embodiment of the present invention;
FIG. 5 is an alternative architecture diagram of a server provided by an embodiment of the invention;
FIG. 6 is an alternative architecture diagram of an artificial intelligence based scene awareness apparatus according to an embodiment of the present invention;
FIG. 7A is a schematic flow chart of an alternative method for scene awareness based on artificial intelligence according to an embodiment of the present invention;
FIG. 7B is a schematic flow chart of an alternative artificial intelligence-based scene awareness method according to an embodiment of the present invention;
FIG. 7C is a schematic flow chart of an alternative artificial intelligence based scene awareness method according to an embodiment of the present invention;
FIG. 8 is a schematic flow chart of an alternative artificial intelligence based scene awareness method according to an embodiment of the present invention;
FIG. 9 is an alternative schematic illustration of a calibration plate provided by embodiments of the present invention;
FIG. 10 is an alternative schematic diagram of a calibration scenario provided by an embodiment of the present invention;
FIG. 11 is an alternative schematic diagram of a lidar point cloud interface provided by embodiments of the present invention;
fig. 12 is an alternative schematic diagram of a calibration scene after re-projection according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be further described in detail with reference to the accompanying drawings, the described embodiments should not be construed as limiting the present invention, and all other embodiments obtained by a person of ordinary skill in the art without creative efforts shall fall within the protection scope of the present invention.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the description that follows, references to the terms "first", "second", and the like, are intended only to distinguish similar objects and not to indicate a particular ordering for the objects, it being understood that "first", "second", and the like may be interchanged under certain circumstances or sequences of events to enable embodiments of the invention described herein to be practiced in other than the order illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing embodiments of the invention only and is not intended to be limiting of the invention.
Before further detailed description of the embodiments of the present invention, terms and expressions mentioned in the embodiments of the present invention are explained, and the terms and expressions mentioned in the embodiments of the present invention are applied to the following explanations.
1) Relative pose: also called the relative extrinsic parameter, a parameter matrix with 6 degrees of freedom

$$T = \begin{bmatrix} R & \vec{t} \\ \mathbf{0} & 1 \end{bmatrix}$$

where $R$ is a rotation matrix with 3 degrees of freedom and $\vec{t}$ is a displacement vector with 3 degrees of freedom. The relative pose is used to describe the relative transformation relationship between two coordinate systems. For example, if $\vec{p}_s$ is the coordinate of any point in the source coordinate system, $\vec{p}_t$ is the coordinate of the same point in the target coordinate system, and $T$ is the relative pose between the source coordinate system and the target coordinate system, then $\vec{p}_t = R\,\vec{p}_s + \vec{t}$.
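As a minimal illustration of this definition (not part of the patent text; the rotation, displacement and point values below are arbitrary), the relative pose can be written as a 4x4 homogeneous matrix and applied to a point as follows:

```python
import numpy as np

# Relative pose T = [[R, t], [0, 1]] from a source to a target coordinate system.
R = np.eye(3)                       # 3-DOF rotation matrix (identity, purely for illustration)
t = np.array([0.1, -0.2, 0.5])      # 3-DOF displacement vector (arbitrary example values)

T = np.eye(4)
T[:3, :3] = R
T[:3, 3] = t

p_src = np.array([1.0, 2.0, 3.0])            # a point in the source coordinate system
p_tgt = R @ p_src + t                        # p_t = R * p_s + t
p_tgt_h = (T @ np.append(p_src, 1.0))[:3]    # the same transform in homogeneous form
assert np.allclose(p_tgt, p_tgt_h)
```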
2) Degree of freedom: refers to the number of independent movements that can be made relative to a coordinate system. An object can perform 3 translational and 3 rotational movements relative to a coordinate system, i.e. the object has 6 degrees of freedom.
3) Calibrating a plate: the flat plate with the fixed-spacing pattern array is used for machine vision, image measurement, photogrammetry, three-dimensional reconstruction and other directions, and particularly can be used for correcting lens distortion, determining a conversion relation between a physical size and a pixel, determining a mutual relation between a three-dimensional geometric position of a certain point on the surface of a space object and a corresponding point in an image and the like.
4) Principal Component Analysis (PCA): can be used to find a low-dimensional space such that, after data from a high-dimensional space are projected into it, the resulting data explain the variance of the original data to the maximum extent.
5) The perception device: devices for emulating human perception of external scenes, such as lidar and cameras.
6) Point cloud: refers to a collection of vectors in a three-dimensional coordinate system, typically used to represent the shape of the outer surface of an object.
7) Blockchain: an encrypted, chained transactional storage structure formed of blocks.
8) Blockchain Network: a set of nodes that incorporate new blocks into a blockchain by way of consensus.
In the process of implementing the embodiments of the present invention, the inventor found that for calibrating the relative pose between different sensing devices, the related art mainly provides two ways; for ease of understanding, calibration of the relative pose between a laser radar and a camera is taken as an example. In the first way, after the same checkerboard calibration plate is sensed by the laser radar and the camera, corner points of the checkerboard in the radar point cloud are determined with the aid of reflection intensity, a 3D-2D lattice point constraint is formed with the corner points detected in the image, and the relative pose is then obtained by optimization. FIG. 1A shows a visualization of a calibration result, including a coordinate system formed by an X axis, a Y axis and a Z axis, a rendered checkerboard calibration plate, and the point cloud on the checkerboard calibration plate; FIG. 1B shows a visualization of the front side of the calibration plate, where, in the first way, points with greater reflection intensity should fall into the white squares of the calibration plate and points with lesser reflection intensity should fall into the black squares (the darker squares in FIG. 1B); FIG. 1C shows a visualization of the side of the calibration plate.
For the first mode, because the radar point cloud itself has sparsity, even if there is a small amount of relative motion between the radar point cloud and the calibration plate, the reflection intensity constraint can still be satisfied, i.e. the point with larger reflection intensity falls into the white square, and the point with smaller reflection intensity falls into the black square, resulting in lower accuracy of corner point fitting and further inaccurate calibration result. Furthermore, in order to make the reflection intensities of the black and white squares distinguishable, strict requirements are placed on the material of the calibration plate coating, i.e. the material requirements are high.
In the second way, edge fitting, as shown in FIG. 2A, a calibration plate 23 and a calibration plate 24 are sensed by a laser radar 21 and a camera 22; calibration plate boundary points are selected from the obtained radar point cloud, four edges are determined by fitting these boundary points, and the four corner points of the calibration plate are then determined. In the camera coordinate system, the calibration plate plane is detected through machine vision, and the three-dimensional coordinates of the four corner points are determined. The relative pose between the laser radar and the camera is then solved by constructing 3D-3D corner point constraints. FIG. 2B shows an interface for selecting calibration plate boundary points. For this way, again because of the sparsity of the radar point cloud, points lying exactly on the boundary of the calibration plate almost never appear in the radar point cloud, and the selected boundary points are in fact always located inside the true edge, so the fitting of the edges and corner points has a systematic error and the determined relative pose is inaccurate.
The embodiment of the invention provides a scene sensing method and device based on artificial intelligence, an electronic device and a storage medium, which can improve the accuracy of an obtained relative pose and the accuracy of an obtained scene model, and have low requirements on a calibration scene.
Referring to fig. 3, fig. 3 is an alternative architecture diagram of the artificial intelligence based scene awareness system 100 according to an embodiment of the present invention, in which a terminal device 400 is connected to the server 200 through a network 300 to support an artificial intelligence based scene awareness application. Fig. 3 exemplarily shows a terminal device 400-1 (shown in the form of a legged robot) and a terminal device 400-2 (shown in the form of a ground cart); the terminal device 400 is provided with a first sensing device and a second sensing device, and the network 300 may be a wide area network or a local area network, or a combination of the two.
The terminal device 400 is configured to sense a calibration scene with the first sensing device and the second sensing device to obtain sensing data, and send the sensing data to the server 200, where the calibration scene includes at least three calibration boards (fig. 3 exemplarily shows a calibration board 500-1, a calibration board 500-2, and a calibration board 500-3 that are not parallel to each other), and the sensing data includes point clouds and/or images. The server 200 is configured to obtain the sensing data and determine plane equation parameters of the calibration scene according to the sensing data; determine a rotation matrix between the first sensing device and the second sensing device according to the plane equation parameters, and determine a displacement vector according to the rotation matrix; determine the relative pose between the first sensing device and the second sensing device according to the rotation matrix and the displacement vector; fuse the perception data of the first perception device and the perception data of the second perception device according to the relative pose, and model the corresponding scene according to the fused perception data to obtain a scene model; perform path planning processing according to the scene model to obtain a traveling route, and send the traveling route to the terminal device 400. The terminal device 400 is configured to display the travel route on a graphical interface 410 (a graphical interface 410-1 and a graphical interface 410-2 are exemplarily shown) and move according to the travel route.
It should be noted that the scene model obtained from the fused perception data can be used for navigation, planning, control, and the like, and is not limited to the use of determining the travel route shown in fig. 3.
The embodiment of the invention can also be realized by combining a block chain technology, and the block chain (Blockchain) is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. The blockchain is essentially a decentralized database, which is a string of data blocks associated by using cryptography, each data block contains information of a batch of network transactions, and the information is used for verifying the validity (anti-counterfeiting) of the information and generating the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, and an application services layer.
The block chain underlying platform can comprise processing modules such as user management, basic service, intelligent contract and operation monitoring. The user management module is responsible for identity information management of all blockchain participants, and comprises public and private key generation maintenance (account management), key management, user real identity and blockchain address corresponding relation maintenance (authority management) and the like, and under the authorization condition, the user management module supervises and audits the transaction condition of certain real identities and provides rule configuration (wind control audit) of risk control; the basic service module is deployed on all block chain node equipment and used for verifying the validity of the service request, recording the service request to storage after consensus on the valid request is completed, for a new service request, the basic service firstly performs interface adaptation analysis and authentication processing (interface adaptation), then encrypts service information (consensus management) through a consensus algorithm, transmits the service information to a shared account (network communication) completely and consistently after encryption, and performs recording and storage; the intelligent contract module is responsible for registering and issuing contracts, triggering the contracts and executing the contracts, developers can define contract logics through a certain programming language, issue the contract logics to a block chain (contract registration), call keys or other event triggering and executing according to the logics of contract clauses, complete the contract logics and simultaneously provide the function of upgrading and canceling the contracts; the operation monitoring module is mainly responsible for deployment, configuration modification, contract setting, cloud adaptation in the product release process and visual output of real-time states in product operation, such as: alarm, monitoring network conditions, monitoring node equipment health status, and the like.
Referring to fig. 4, fig. 4 is an alternative architecture diagram of the artificial intelligence based scene awareness system 110 according to the embodiment of the present invention, which includes a blockchain network 200 (exemplarily shown to include nodes 210-1 to 210-3), an authentication center 300, a service entity 40-1 (exemplarily shown to be a terminal device 400-1 belonging to the service entity 40-1 and a graphical interface 410-1 thereof) and a service entity 40-2 (exemplarily shown to be a terminal device 400-2 belonging to the service entity 40-2 and a graphical interface 410-2 thereof), which are respectively described below.
The type of blockchain network 200 is flexible and may be, for example, any of a public chain, a private chain, or a federation chain. Taking a public link as an example, electronic devices such as user terminals and servers of any service entity can access the blockchain network 200 without authorization; taking a federation chain as an example, an electronic device (e.g., a terminal/server) under the jurisdiction of a service entity after obtaining authorization may access the blockchain network 200, and at this time, become a special type of node in the blockchain network 200, i.e., a client node.
Note that the client node may provide only functionality that supports the initiation of transactions by the business entity (e.g., for uplink storage of data or for querying of data on the chain), and may be implemented by default or selectively (e.g., depending on the specific business requirements of the business entity) for the functionality of the conventional (native) node 210 of the blockchain network 200, such as the ranking functionality, consensus services, ledger functionality, etc., described below. Therefore, the data and the service processing logic of the service subject can be migrated into the block chain network 200 to the maximum extent, and the credibility and traceability of the data and service processing process are realized through the block chain network 200.
Blockchain network 200 receives transactions submitted by client nodes (e.g., terminal device 400-1 attributed to business entity 40-1, and terminal device 400-2 attributed to business entity 40-2, shown in fig. 4) from different business entities (e.g., business entity 40-1 and business entity 40-2 shown in fig. 4), performs the transactions to update or query the ledger, and displays various intermediate or final results of performing the transactions at the user interfaces of the terminal devices (e.g., graphical interface 410-1 of terminal device 400-1, graphical interface 410-2 of terminal device 400-2). It is to be understood that, in the above, the blockchain network 200 receiving and executing a transaction specifically refers to the native node 210 in the blockchain network 200; of course, when a client node of a business entity has functions of the native node 210 in the blockchain network 200 (e.g., the consensus function, the ledger function), the corresponding client node may also be included.
An exemplary application of the blockchain network is described below by taking an example that a business subject accesses the blockchain network to realize management of relative poses.
Referring to fig. 4, the terminal device 400-1 generates a transaction corresponding to the update operation according to the internal identifiers of the sensing devices (where the identifiers include the identifier of the first sensing device and the identifier of the second sensing device) and the determined relative pose, specifies an intelligent contract that needs to be invoked to implement the update operation and parameters transferred to the intelligent contract in the transaction, and broadcasts the transaction to the blockchain network 200, where the transaction also carries a digital signature signed by the business entity 40-1 (for example, obtained by encrypting a digest of the transaction using a private key in a digital certificate of the business entity 40-1), and the digital certificate can be obtained by the business entity 40-1 by registering with the authentication center 300.
When a node 210 in the blockchain network 200 receives a transaction, it verifies the digital signature carried by the transaction; after the digital signature is successfully verified, it determines whether the business entity 40-1 has the corresponding transaction right according to the identity of the business entity 40-1 carried in the transaction, and the transaction fails if either the digital signature verification or the right verification fails. After successful verification, node 210 appends its own digital signature (e.g., obtained by encrypting the digest of the transaction using the private key of node 210-1) and continues broadcasting in the blockchain network 200.
After the node 210 with the sorting function in the blockchain network 200 receives the transaction successfully verified, the transaction is filled into a new block and broadcasted to the node providing the consensus service in the blockchain network 200.
The node 210 providing the consensus service in the blockchain network 200 performs the consensus process on the new block to reach agreement; the node 210 providing the ledger function appends the new block to the tail of the blockchain and executes the transactions in the new block: for a submitted transaction that updates the identification and the relative pose of the sensing equipment, the key-value pair between the identification of the sensing equipment and the relative pose is updated in the state database, and a timestamp corresponding to the key-value pair is set in the state database.
Similarly, the terminal device 400-2 may generate a relative pose query request according to the identifier of the internal sensing device, generate a transaction of a corresponding query operation, specify an intelligent contract that needs to be invoked to implement the query operation and parameters transferred to the intelligent contract in the transaction, and after the verification, broadcast and consensus of the transaction by the node 210 in the block chain network 200 are consistent, the node 210 providing the ledger function adds a new block to the tail of the block chain and executes the transaction in the new block: for the transaction for querying the relative pose, a key value pair corresponding to the identifier of the sensing device in the transaction is queried from the state database, and the relative pose in the key value pair is returned to the terminal device 400-2 as a query result. It is worth noting that when the state database stores at least two key value pairs corresponding to the identifications of the sensing devices in the transaction, the relative pose in the key value pair with the latest timestamp is used as a query result. In addition to sensing the device identifier and the relative pose, the scene model sensed by the terminal device 400-1 or the terminal device 400-2 may be managed through the block chain network 200, and the management process is not described herein again.
It will be appreciated that the type of data that a business entity can query/update in blockchain network 200 can be achieved by constraining the authority of transactions that the business entity can initiate, for example, when business entity 40-1 has the authority to initiate transactions that update the identity and relative pose of a sensing device, business personnel of business entity 40-1 can enter the identity and relative pose of a sensing device in graphical interface 410-1 of terminal device 400-1 and generate corresponding transactions from terminal device 400-1 that are broadcast to blockchain network 200 to add the identity and relative pose of a sensing device to the blockchain and status database.
The following continues to illustrate exemplary applications of the electronic device provided by embodiments of the present invention. The electronic device may be implemented as various types of terminal devices such as a notebook computer, a tablet computer, a desktop computer, a set-top box, a mobile device (e.g., a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, a portable game device), and the like, and may also be implemented as a server. Next, an electronic device will be described as an example of a server.
Referring to fig. 5, fig. 5 is a schematic diagram of an architecture of a server 200 (for example, the server 200 shown in fig. 3) provided by an embodiment of the present invention, where the server 200 shown in fig. 5 includes: at least one processor 210, memory 250, at least one network interface 220, and a user interface 230. The various components in server 200 are coupled together by a bus system 240. It is understood that the bus system 240 is used to enable communications among the components. The bus system 240 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 240 in fig. 5.
The Processor 210 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 230 includes one or more output devices 231, including one or more speakers and/or one or more visual display screens, that enable the presentation of media content. The user interface 230 also includes one or more input devices 232, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 250 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 250 optionally includes one or more storage devices physically located remotely from processor 210.
The memory 250 includes volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The memory 250 described in embodiments of the invention is intended to comprise any suitable type of memory.
In some embodiments, memory 250 is capable of storing data, examples of which include programs, modules, and data structures, or a subset or superset thereof, to support various operations, as exemplified below.
An operating system 251 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 252 for communicating to other computing devices via one or more (wired or wireless) network interfaces 220, exemplary network interfaces 220 including: bluetooth, wireless compatibility authentication (WiFi), and Universal Serial Bus (USB), etc.;
a presentation module 253 to enable presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 231 (e.g., a display screen, speakers, etc.) associated with the user interface 230;
an input processing module 254 for detecting one or more user inputs or interactions from one of the one or more input devices 232 and translating the detected inputs or interactions.
In some embodiments, the artificial intelligence based scene awareness apparatus provided by the embodiments of the present invention can be implemented in software, and fig. 5 shows an artificial intelligence based scene awareness apparatus 255 stored in a memory 250, which can be software in the form of programs and plug-ins, and the like, and includes the following software modules: the first parameter determination module 2551, the second parameter determination module 2552, the relative pose determination module 2553 and the fusion module 2554 are logical, and therefore can be arbitrarily combined or further split according to the implemented functions. The functions of the respective modules will be explained below.
In other embodiments, the artificial intelligence based scene sensing apparatus provided by the embodiments of the present invention may be implemented in a hardware manner, for example, the artificial intelligence based scene sensing apparatus provided by the embodiments of the present invention may be a processor in the form of a hardware decoding processor, which is programmed to execute the artificial intelligence based scene sensing method provided by the embodiments of the present invention, for example, the processor in the form of the hardware decoding processor may employ one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
The scene awareness method based on artificial intelligence provided by the embodiment of the present invention may be executed by the server, or may be executed by a terminal device (for example, the terminal device 400-1 and the terminal device 400-2 shown in fig. 3), or may be executed by both the server and the terminal device.
In the following, a process of implementing the artificial intelligence based scene awareness method by an embedded artificial intelligence based scene awareness apparatus in an electronic device will be described in conjunction with the exemplary application and structure of the electronic device described above.
Referring to fig. 6 and fig. 7A, fig. 6 is an alternative architecture schematic diagram of the artificial intelligence based scene sensing apparatus 255 provided in the embodiment of the present invention, and illustrates a process of implementing scene sensing through a series of modules, and fig. 7A is a flowchart schematic diagram of an artificial intelligence based scene sensing method provided in the embodiment of the present invention, and the steps illustrated in fig. 7A will be described with reference to fig. 6.
In step 101, obtaining perception data obtained by perception of a calibration scene by a perception device, and determining a plane equation parameter of the calibration scene according to the perception data; wherein the perception device comprises a first perception device and a second perception device, and the perception data comprises point clouds and/or images.
For example, referring to fig. 6, in a first parameter determining module 2551, sensing data obtained by sensing the same calibration scene by a first sensing device and a second sensing device is obtained, where the calibration scene is a scene in which a calibration board is located, and the calibration boards included in the calibration scene are not parallel to each other. And then, according to the sensing data of each sensing device, constructing a plane equation corresponding to the calibration plate, and further determining plane equation parameters corresponding to the calibration plate. It is worth mentioning that the sensing device may be one of a lidar and a camera, for example, in one case the first sensing device is a lidar and the second sensing device is a camera, in another case both the first sensing device and the second sensing device are lidar. Correspondingly, the perception data comprises point clouds and/or images, the point clouds are obtained after the laser radar conducts perception, and the images are obtained after the camera conducts perception.
In step 102, a rotation matrix between the first sensing device and the second sensing device is determined according to the plane equation parameters, and a displacement vector is determined according to the rotation matrix.
As an example, referring to fig. 6, in the second parameter determining module 2552, a plane-based geometric constraint is constructed, a rotation matrix between the first sensing device and the second sensing device is determined according to plane equation parameters of the first sensing device and the second sensing device, and a displacement vector is determined according to the rotation matrix, where the rotation matrix and the displacement vector together represent a relative transformation relationship between the first sensing device and the second sensing device, which is described in detail later.
In step 103, a relative pose between the first sensing device and the second sensing device is determined according to the rotation matrix and the displacement vector.
As an example, referring to fig. 6, in the relative pose determination module 2553, a relative pose of 6 degrees of freedom is constructed according to the rotation matrix of 3 degrees of freedom and the displacement vector of 3 degrees of freedom, and the calibration of the relative pose between the first sensing device and the second sensing device is completed.
In some embodiments, after step 103, further comprising: determining a left eye relative pose between the first sensing equipment and left eye sensing equipment, and determining a right eye relative pose between the first sensing equipment and right eye sensing equipment; wherein the second perception device comprises the left eye perception device and the right eye perception device; carrying out inverse processing on the relative pose of the left eye, and carrying out product processing on the relative pose of the left eye and the relative pose of the right eye after inverse processing to obtain a binocular relative pose between the left eye sensing equipment and the right eye sensing equipment; and fusing the perception data of the left eye perception device and the perception data of the right eye perception device according to the binocular relative pose, and modeling a corresponding scene according to the fused perception data to obtain a scene model.
Here, the second perception device includes a left eye perception device and a right eye perception device, and the second perception device is, for example, a binocular camera. In this case, if the actual purpose is to determine the relative pose between the left eye sensing device and the right eye sensing device, the left eye relative pose from the first sensing device to the left eye sensing device may be determined, the right eye relative pose from the first sensing device to the right eye sensing device may be determined at the same time, the left eye relative pose may be subjected to inversion processing, the left eye relative pose subjected to inversion processing and the right eye relative pose may be subjected to multiplication processing to obtain the binocular relative pose, and the obtained binocular relative pose is the relative pose from the left eye sensing device to the right eye sensing device. Therefore, the perception data of the left eye perception device and the perception data of the right eye perception device can be fused according to the binocular relative pose, and a scene model is obtained by modeling the corresponding scene according to the fused perception data. Of course, this approach may also be extended to the case of a multi-view sensing device, which is not limited in this embodiment of the present invention. Through the mode, the relative pose in the sensing equipment with more than two eyes can be effectively determined.
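A minimal sketch of this composition follows (not from the patent; it assumes 4x4 homogeneous pose matrices and the source-to-target convention p_target = T * p_source introduced in the term definitions above, and the function name is illustrative):

```python
import numpy as np

def binocular_relative_pose(T_first_to_left: np.ndarray,
                            T_first_to_right: np.ndarray) -> np.ndarray:
    """Compose the binocular relative pose (left eye -> right eye) from the two
    poses calibrated against the first sensing device.

    Under the source-to-target convention p_target = T @ p_source, the inverted
    left-eye pose maps left-eye coordinates back to the first device's frame, and
    the right-eye pose then maps them on to the right eye:
        T_left_to_right = T_first_to_right @ inv(T_first_to_left)
    """
    return T_first_to_right @ np.linalg.inv(T_first_to_left)
```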
In some embodiments, after step 103, further comprising: and sending the identification of the sensing equipment and the corresponding relative pose to a block chain network so that the node of the block chain network fills the identification of the sensing equipment and the corresponding relative pose into a new block, and when the new blocks are identified in common, adding the new block to the tail of the block chain and responding to a relative pose query request carrying the identification of the sensing equipment.
In the embodiment of the present invention, the identifier of the sensing device and the corresponding relative pose may be sent to the blockchain network by combining with a blockchain technology, so that the node in the blockchain network stores the identifier of the sensing device and the corresponding relative pose to the blockchain and the state database, and in addition, the identifier of the terminal device loaded with the sensing device may be linked together, for example, when the terminal device loaded with the sensing device is a robot, the KEY value pair stored in the state database may be "KEY: robot type number identification-identification of first perception device-identification of second perception device VALUE: relative pose ". Therefore, other robots with the same model and loaded with the same sensing equipment can send relative pose query requests carrying the model number identification of the robot, the identification of the first sensing equipment and the identification of the second sensing equipment to the blockchain network, and nodes of the blockchain network search in a state database according to the query requests and return the searched relative poses. On the basis, the relative pose may be updated, so that the update time of the relative pose can be linked together, and when receiving the query request, the node of the blockchain network returns the relative pose corresponding to the query request and with the latest update time. In addition, the chaining object is not limited to the relative pose, for example, after data fusion is performed according to the relative pose, so as to obtain a scene model of the scene, the identifier of the scene and the scene model can be jointly chained, so that the query is facilitated. By the aid of the uplink mode, the relative pose in the block chain network can be guaranteed not to be tampered, and accuracy of the query result is improved.
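Purely as an illustration of the key-value layout described above (all identifiers, the dictionary-based state database and the pose value are hypothetical and not prescribed by the patent):

```python
import time

def make_pose_key(robot_model_id: str, first_device_id: str, second_device_id: str) -> str:
    # Key layout described above: robot model id - first device id - second device id.
    return f"{robot_model_id}-{first_device_id}-{second_device_id}"

# Hypothetical world-state entry: the relative pose is stored together with its update
# time, so a query can return the value whose timestamp is the latest.
state_db = {
    make_pose_key("robot_x1", "lidar_front", "camera_left"): {
        "relative_pose": [[1, 0, 0, 0.1], [0, 1, 0, -0.2], [0, 0, 1, 0.5], [0, 0, 0, 1]],
        "timestamp": time.time(),
    }
}
```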
In step 104, the sensing data of the first sensing device and the sensing data of the second sensing device are fused according to the relative pose, and a scene model is obtained by modeling the corresponding scene according to the fused sensing data.
For example, referring to fig. 6, in the fusion module 2554, on the basis of obtaining the relative pose between the first sensing device and the second sensing device, other scenes may be sensed according to the first sensing device and the second sensing device, the sensing data of the first sensing device and the second sensing device may be fused according to the relative pose, a scene model may be obtained by modeling the corresponding scene according to the fused sensing data, and thus the scene may be visualized at the machine vision layer. The obtained scene model can be directly output, and can also be used for navigation, planning, control and the like. It should be noted that, when data fusion is performed according to the relative pose, the sensing data of the first sensing device and the second sensing device may be superimposed, different weights may also be set for the sensing data of the first sensing device and the second sensing device, and fusion may be performed according to the weights.
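As a rough sketch of the superposition case for two point-cloud sensing devices (an assumption for illustration; the patent does not fix a data format, and weighted fusion would add per-source weights on top of this):

```python
import numpy as np

def fuse_point_clouds(cloud_first: np.ndarray,     # (N, 3) points in the first device's frame
                      cloud_second: np.ndarray,    # (M, 3) points in the second device's frame
                      T_first_to_second: np.ndarray) -> np.ndarray:
    """Superimpose the two clouds in the first device's coordinate system, using the
    calibrated relative pose (4x4 homogeneous, p_second = T @ p_first)."""
    T_second_to_first = np.linalg.inv(T_first_to_second)
    R, t = T_second_to_first[:3, :3], T_second_to_first[:3, 3]
    cloud_second_in_first = cloud_second @ R.T + t
    return np.vstack([cloud_first, cloud_second_in_first])
```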
As can be seen from the above exemplary implementation of fig. 7A, in the embodiment of the present invention, by constructing the geometric constraint based on the plane, the error in the calculation process is reduced, the accuracy of the determined relative pose is improved, and meanwhile, the accuracy of the determined scene model is also improved.
In some embodiments, referring to fig. 7B, fig. 7B is an optional flowchart of the artificial intelligence based scene awareness method provided in the embodiment of the present invention, and step 101 shown in fig. 7A may be implemented through step 201 to step 202, which will be described in conjunction with the steps.
In step 201, sensing data obtained by sensing a calibration scene by a sensing device is obtained, and in a coordinate system of the sensing data, planes corresponding to at least three calibration plates included in the calibration scene are determined.
Here, the calibration scene includes at least three calibration plates that are not parallel to each other, and for each sensing device, a plane corresponding to each calibration plate is determined in a coordinate system of corresponding sensing data.
In some embodiments, the determining, in the coordinate system of the perception data, of the planes corresponding to at least three calibration plates included in the calibration scene may be implemented in the following manner: when the sensing device is a laser radar, acquiring size features of the at least three calibration plates included in the calibration scene, and determining the planes corresponding to the calibration plates according to the size features in the coordinate system of the sensing data; or, when the sensing device is a laser radar, acquiring a preset selection frame, and determining the plane covered by the preset selection frame in the coordinate system of the sensing data as the plane corresponding to the calibration plate.
In the case that the sensing device is a laser radar, the plane corresponding to a calibration plate can be determined in the laser radar coordinate system in the following two ways. In the first way, the size features, such as the length-width ratio, of each calibration plate included in the calibration scene are obtained; all planes in the point cloud (sensing data) are then extracted, the size features of the extracted planes are matched against those of the calibration plates, and a plane whose features match successfully is determined to be the plane of a calibration plate in the laser radar coordinate system. In the second way, a manually set selection frame corresponding to the calibration plate is obtained, and the plane covered by the selection frame in the laser radar coordinate system is determined as the plane corresponding to the calibration plate. In an actual application scene, either of the above ways can be selected to determine the plane corresponding to the calibration plate, which improves the accuracy of plane determination.
In step 202, a plane equation of a plane corresponding to the calibration plate is determined, and a unit normal vector of the plane equation is determined.
As an example, referring to fig. 6, in the first parameter determining module 2551, after determining the plane corresponding to the calibration board, a plane equation of the plane in the coordinate system of the sensing data is determined, and the plane equation parameters of the plane equation are further determined, specifically including the unit normal vector and the constant term of the plane equation, but only the unit normal vector may be determined here. It should be noted that, in the case that the sensing device is a laser radar, the point cloud subsets belonging to a plane in the laser radar coordinate system may be analyzed by a principal component analysis method to obtain plane equation parameters of the plane, and in addition, unit normal vectors all point to one side of the sensing device.
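The following is a small sketch of such a PCA plane fit (an illustrative implementation, not the patent's; it assumes the point-cloud subset of one calibration plate is given as an N x 3 array and that the sensing device sits at the origin of its own coordinate system):

```python
import numpy as np

def fit_plane_pca(points: np.ndarray):
    """Fit a plane n . x + d = 0 to the point-cloud subset of one calibration plate by PCA.

    The unit normal is the eigenvector of the covariance matrix with the smallest
    eigenvalue; it is flipped so that it points toward the sensor, which is assumed
    to sit at the origin of the point-cloud coordinate system.
    """
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigenvalues, eigenvectors = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigenvectors[:, 0]                       # smallest-eigenvalue direction
    if np.dot(normal, -centroid) < 0:                 # make the normal point toward the origin
        normal = -normal
    d = -np.dot(normal, centroid)                     # constant term of the plane equation
    return normal, d
```

The returned unit normal and constant term are the plane equation parameters n . x + d = 0 referred to above.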
Step 102 shown in fig. 7A can be implemented by steps 203 to 207, and will be described with reference to each step.
In step 203, at least three unit normal vectors corresponding to the first sensing device are spliced into a first matrix, and at least three unit normal vectors corresponding to the second sensing device are spliced into a second matrix.
As an example, referring to fig. 6, in the second parameter determination module 2552, unit normal vectors of all plane equations corresponding to the first sensing device are spliced into a first matrix, and unit normal vectors of all plane equations corresponding to the second sensing device are spliced into a second matrix.
In step 204, an objective function is constructed from the first matrix and the second matrix.
As an example, referring to FIG. 6, in the second parameter determination module 2552, an objective function is constructed from the first matrix and the second matrix. For example, if the first matrix is denoted $N$, the second matrix is denoted $M$, and the rotation matrix to be solved (the rotation component of the relative pose from the first sensing device to the second sensing device) is denoted $R$, then the objective function may take the following form:

$$\min_{R} \; \lVert R N - M \rVert_F^2$$

where $\min$ denotes the minimum function, $\lVert \cdot \rVert_F$ denotes the Euclidean (Frobenius) norm of a matrix, and $\lVert \cdot \rVert_F^2$ denotes the square of that norm.
In step 205, the objective function is optimized, and a rotation matrix corresponding to the result of the optimization is determined.
For example, referring to FIG. 6, in the second parameter determination module 2552, the constructed objective function is optimized, and the value of $R$ that minimizes the objective function is determined as the rotation matrix between the first sensing device and the second sensing device.
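One common closed-form way to minimize an objective of this form is the orthogonal Procrustes (SVD) solution; the patent does not prescribe a particular optimizer, so the following numpy sketch is only one possible choice, assuming the corresponding unit normals are stacked as the columns of N and M:

```python
import numpy as np

def rotation_from_normals(N: np.ndarray, M: np.ndarray) -> np.ndarray:
    """Solve min_R || R N - M ||_F^2 over rotation matrices, where the columns of
    N and M are corresponding unit plane normals from the two sensing devices.

    Orthogonal Procrustes solution: with N @ M.T = U S V^T, the minimizer is
    R = V diag(1, 1, det(V U^T)) U^T; the det factor keeps R a proper rotation.
    """
    U, _, Vt = np.linalg.svd(N @ M.T)
    V = Vt.T
    D = np.diag([1.0, 1.0, np.linalg.det(V @ U.T)])
    return V @ D @ U.T
```

With the at least three mutually non-parallel calibration plates described above, the stacked normals provide enough constraints to determine the rotation.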
In step 206, in the coordinate system of the perception data, the plane intersection point of the planes corresponding to at least three calibration plates included in the calibration scene is determined.
Here, in the coordinate system of the sensing data of the first sensing device, the plane intersection point of the planes corresponding to the at least three calibration plates included in the calibration scene is determined. For convenience of description, taking the example that the calibration scene includes three calibration plates that are not parallel to each other, the intersection point of the three calibration-plate planes determined in the coordinate system of the first sensing device is denoted $\vec{p}_1$; similarly, the intersection point of the three calibration-plate planes determined in the coordinate system of the sensing data of the second sensing device is denoted $\vec{p}_2$.
In step 207, a coordinate transformation equation is solved according to the rotation matrix, the plane intersection point corresponding to the first sensing device, and the plane intersection point corresponding to the second sensing device, so as to obtain a displacement vector between the first sensing device and the second sensing device.
As an example, referring to FIG. 6, in the second parameter determination module 2552, since the rotation matrix is known, the rotation matrix, the plane intersection point corresponding to the first sensing device, and the plane intersection point corresponding to the second sensing device are substituted into the coordinate transformation equation $p^{(2)} = R\,p^{(1)} + t$, thereby solving for the displacement vector $t = p^{(2)} - R\,p^{(1)}$ and establishing the relative pose

$$T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}.$$

It is worth noting that the relative pose $T$ determined here refers to the relative pose from the first sensing device to the second sensing device; the relative pose from the second sensing device to the first sensing device is determined by a similar process, which is not repeated here.
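To make step 207 concrete, the sketch below assembles the displacement vector and the 4×4 relative pose from the rotation matrix and the two intersection points, following the coordinate transformation equation above; it is an illustrative sketch, not the patent's reference implementation.

```python
import numpy as np

def assemble_relative_pose(R, p1, p2):
    """Given the rotation R and the plane intersection points p1 (first
    device) and p2 (second device), return the displacement vector t and
    the homogeneous relative pose T satisfying p2 = R @ p1 + t."""
    t = np.asarray(p2, dtype=float) - R @ np.asarray(p1, dtype=float)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return t, T

# The relative pose in the opposite direction (second device to first) can
# also be read off as np.linalg.inv(T), which agrees with re-running the
# procedure with the device roles swapped, up to noise.
```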
As can be seen from the above exemplary implementation of fig. 7B, in the embodiment of the present invention, the relative pose is determined based on the unit normal vector of the plane equation of the calibration plate and the plane intersection point between the planes by constructing the geometric constraint based on the planes, so that the accuracy of the determined relative pose is improved.
In some embodiments, referring to fig. 7C, fig. 7C is an optional flowchart of the scene sensing method based on artificial intelligence provided in the embodiments of the present invention, and based on fig. 7A, after step 104, in step 301, a scene model of a scene to be detected may be identified to obtain an identification result.
On the basis that the relative pose between the first sensing device and the second sensing device has been determined, the first sensing device and the second sensing device can be used to sense the same scene to be detected to obtain sensing data, the sensing data are fused according to the relative pose, and modeling is performed on the fused sensing data to obtain a scene model of the scene to be detected. Then, the scene model of the scene to be detected is recognized by a machine learning model to obtain a recognition result, where the recognition result indicates whether an obstacle exists in the scene model. The embodiment of the present invention does not limit the machine learning model used for the recognition processing; for example, the machine learning model may be a Support Vector Machine (SVM) or a neural network model. In the training stage of the machine learning model, scene models of known scenes together with labeled recognition results can be used as training inputs.
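The patent leaves the concrete recognition model open (an SVM or a neural network). Purely as an illustration, the sketch below trains scikit-learn's SVC on hand-made features of a scene model; the feature extractor, the random training data, and all names are assumptions, not part of the patent.

```python
import numpy as np
from sklearn.svm import SVC

def scene_features(points, cell=0.5, size=20):
    """Hypothetical feature extractor: maximum height per 2D grid cell of a
    voxelized scene-model point cloud."""
    grid = np.zeros((size, size))
    ij = np.floor(points[:, :2] / cell).astype(int)
    for (i, j), z in zip(ij, points[:, 2]):
        if 0 <= i < size and 0 <= j < size:
            grid[i, j] = max(grid[i, j], z)
    return grid.ravel()

# Training on scene models of known scenes with labelled recognition results.
rng = np.random.default_rng(0)
X_train = np.stack([scene_features(rng.random((200, 3)) * 10) for _ in range(20)])
y_train = rng.integers(0, 2, 20)          # 1: obstacle present, 0: no obstacle
clf = SVC().fit(X_train, y_train)

# Recognition result for a new scene model.
print(clf.predict(scene_features(rng.random((200, 3)) * 10)[None]))
```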
In step 302, when the recognition result indicates that an obstacle exists in the scene model, performing feedback processing according to the obstacle in the recognition result.
Here, when the recognition result indicates that an obstacle exists in the scene model, feedback processing is performed according to information related to the obstacle in the recognition result, and a mechanism of the feedback processing may change according to a difference of the scene to be measured, which is not limited in the embodiment of the present invention.
In some embodiments, when the recognition result indicates that there is an obstacle in the scene model, the above-mentioned feedback processing according to the obstacle in the recognition result may be implemented in such a manner that: performing path planning processing according to the scene model to obtain a traveling route; and when the recognition result indicates that an obstacle exists in the scene model, updating the travel route according to the coordinates of the obstacle in the recognition result so as to ensure that the updated travel route and the coordinates of the obstacle do not intersect.
When the equipment carrying the sensing devices is a movable device, path planning processing can be performed in real time according to the scene model to obtain a travel route, where the scene model is equivalent to a two-dimensional map or a three-dimensional map of the scene to be detected. Since the recognition processing usually takes a certain amount of time, the recognition result may not be obtained until after the travel route has been determined. In this case, when the recognition result indicates that no obstacle exists in the scene model, the device moves according to the determined travel route; when the recognition result indicates that an obstacle exists in the scene model, the travel route is updated according to the coordinates of the obstacle in the recognition result, and the device moves according to the updated travel route, where the updated travel route does not intersect the coordinates of the obstacle. In this way, real-time route planning is achieved while obstacles in the scene to be detected are avoided, improving the accuracy of obstacle avoidance.
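As a rough illustration of this route update, the sketch below drops waypoints that come too close to the reported obstacle and inserts a simple side-step detour; the waypoint representation, the clearance value, and the detour rule are assumptions made only for the example, not the patent's planning algorithm.

```python
import numpy as np

def update_route(route, obstacle, clearance=0.5):
    """Remove waypoints within `clearance` of the obstacle and insert a
    detour point so the updated route does not intersect the obstacle."""
    pts = [np.asarray(p, dtype=float) for p in route]
    obstacle = np.asarray(obstacle, dtype=float)
    kept = [p for p in pts if np.linalg.norm(p - obstacle) > clearance]
    detour = obstacle + np.array([clearance, clearance])   # simple side-step
    nearest = int(np.argmin([np.linalg.norm(p - obstacle) for p in pts]))
    kept.insert(min(nearest, len(kept)), detour)
    return kept

# Toy usage: an obstacle sits exactly on the middle waypoint of the route.
print(update_route([(0, 0), (1, 0), (2, 0)], obstacle=(1, 0)))
```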
In some embodiments, the feedback processing according to the obstacle in the recognition result may be implemented in such a manner that: acquiring an environment map, and performing path planning processing according to the environment map to obtain a traveling route; wherein the environment map comprises a plurality of obstacles; comparing the obstacles in the identification result with the obstacles in the environment map to obtain the current position of the sensing equipment in the environment map; and generating a navigation instruction according to the traveling route and the current position.
Besides real-time route planning, another obstacle avoidance mechanism exists, namely, an environment map of the environment to be measured is obtained in advance, with a plurality of obstacles marked on the environment map. Path planning processing is then performed according to the environment map to obtain a travel route that does not intersect the global coordinates of the obstacles, where the global coordinates refer to the coordinates of the obstacles in the environment map. On the basis that the travel route has been determined, when the recognition result indicates that an obstacle exists in the scene model, the features of the obstacle in the scene model are compared for similarity with the features of the obstacles in the environment map. When the highest similarity obtained exceeds a similarity threshold, the obstacle in the scene model is determined to be the obstacle in the environment map corresponding to that similarity, the current position (global coordinates) of the sensing device is determined according to the global coordinates of that obstacle in the environment map, and a navigation instruction is generated according to the travel route and the current position, so that the movable device carrying the sensing devices moves along the travel route; the navigation instruction may include a steering angle, a moving speed, and the like. In this way, on the basis of an existing global travel route, the navigation instruction is generated by determining the global coordinates, realizing effective obstacle avoidance from another perspective, which is suitable for application scenarios such as warehousing.
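A minimal sketch of this map-based variant is given below: the detected obstacle is matched against the mapped obstacles by feature similarity, the current global position is recovered from the match, and a steering angle toward the next waypoint is produced. Feature vectors, the similarity threshold, and the offset convention are all assumptions made for illustration.

```python
import numpy as np

def localize_and_steer(det_feature, det_offset, map_obstacles, next_waypoint, thr=0.8):
    """map_obstacles: list of (feature_vector, global_position) pairs.
    det_offset: the obstacle's position relative to the sensing device."""
    det_feature = np.asarray(det_feature, dtype=float)
    sims = [float(np.dot(f, det_feature) / (np.linalg.norm(f) * np.linalg.norm(det_feature)))
            for f, _ in map_obstacles]
    best = int(np.argmax(sims))
    if sims[best] < thr:
        return None                                   # no reliable match
    # Current global position: mapped obstacle position minus the offset at
    # which the sensing device sees the obstacle.
    current = np.asarray(map_obstacles[best][1], dtype=float) - np.asarray(det_offset, dtype=float)
    heading = np.asarray(next_waypoint, dtype=float) - current
    steering_deg = float(np.degrees(np.arctan2(heading[1], heading[0])))
    return current, steering_deg

# Toy usage: one mapped obstacle at (5, 5), seen 2 m ahead of the device.
print(localize_and_steer([1.0, 0.0], (0.0, 2.0), [([1.0, 0.0], (5.0, 5.0))], (10.0, 3.0)))
```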
In some embodiments, when the recognition result indicates that there is an obstacle in the scene model, the above-mentioned feedback processing according to the obstacle in the recognition result may be implemented in such a manner that: when the identification results corresponding to the n continuous scene models indicate that the obstacles exist, carrying out similarity comparison on the obstacles in the n identification results; wherein n is an integer greater than 1; when the similarity comparison of the obstacles in the n recognition results is successful, determining a traveling route of the obstacles according to the coordinates of the obstacles in the n recognition results; acquiring an alarm area of the scene to be detected; and when the travelling route points to the alarm area, carrying out alarm processing according to the travelling route.
When sensing is performed by the sensing devices, sensing can be set to occur once every specific time interval, such as 1 second, with a corresponding scene model generated each time. When the recognition results corresponding to n consecutive scene models all indicate that an obstacle exists, the features of the obstacles in the n recognition results are extracted and compared for similarity, where n is an integer greater than 1 that can be set according to the actual application scenario. When the similarity comparison of the obstacles in the n recognition results succeeds, that is, the n recognition results all indicate the same obstacle, the travel route of the obstacle is determined in chronological order according to the coordinates of the obstacle in the n recognition results. While or before determining the travel route, an alarm area in the scene to be detected is acquired, the alarm area being an area that obstacles are forbidden to enter. When the travel route points toward the alarm area, alarm processing is performed according to the travel route so that the relevant personnel can handle the obstacle in time. Through this predictive alarm mechanism, the obstacle can be effectively prevented from entering the alarm area, which is suitable for application scenarios such as dangerous goods detection, for example, a sensing device fixedly installed at a certain position to sense a scene in which dangerous goods are transported on a conveyor belt.
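The following sketch illustrates the predictive alarm on a rectangular alarm area: the obstacle's displacement over the last two of the n recognition results is extrapolated, and the alarm condition is raised when the extrapolated position falls inside the area. The rectangle representation, the extrapolation horizon, and all names are assumptions for the example.

```python
import numpy as np

def heads_into_alarm_area(coords, area_min, area_max, horizon=3.0):
    """coords: obstacle coordinates from n >= 2 consecutive recognition
    results, oldest first; returns True if the extrapolated position lies
    inside the axis-aligned alarm rectangle [area_min, area_max]."""
    coords = np.asarray(coords, dtype=float)
    velocity = coords[-1] - coords[-2]            # per-frame displacement
    predicted = coords[-1] + horizon * velocity   # simple linear extrapolation
    return bool(np.all(predicted >= area_min) and np.all(predicted <= area_max))

# Toy usage: an obstacle moving toward the alarm area [4, 6] x [4, 6].
print(heads_into_alarm_area([(0, 0), (1, 1), (2, 2)], (4, 4), (6, 6)))  # True
```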
In some embodiments, the feedback processing according to the obstacle in the recognition result may be implemented in such a manner that: acquiring an alarm area of the scene to be detected; and when the coordinates of the obstacles in the identification result fall into the alarm area, carrying out alarm processing according to the coordinates of the obstacles.
When the identification result indicates that the obstacle exists in the scene model, besides determining the traveling route of the obstacle, the embodiment of the invention can also obtain the coordinate of the obstacle in the identification result and compare the coordinate of the obstacle with the coordinate range of the alarm area in the scene to be detected. And when the coordinates of the obstacle fall into the coordinate range of the alarm area, carrying out alarm processing according to the coordinates of the obstacle. On the basis, two alarm modes can be combined, for example, when the traveling route of the obstacle points to the alarm area and the coordinates of the obstacle do not fall into the coordinate range of the alarm area, the alarm processing of low urgency degree is carried out; and when the coordinates of the obstacle fall into the coordinate range of the alarm area, carrying out alarm processing with high urgency. The low-level emergency warning process may be sending a warning mail or short message to the relevant person, and the high-level emergency warning process may be automatically dialing the telephone of the relevant person or triggering an alarm bell. By the method, the alarm diversity is improved, and a specific alarm mechanism can be determined according to an actual application scene.
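Combining the two alarm modes described above could look like the following sketch: a high-urgency alarm when the obstacle's coordinates already fall inside the alarm area, and a low-urgency alarm when they do not but the obstacle's travel route points toward the area. The level names and alert channels are placeholders, not part of the patent.

```python
import numpy as np

def alarm_level(obstacle_xy, route_points_to_area, area_min, area_max):
    p = np.asarray(obstacle_xy, dtype=float)
    inside = bool(np.all(p >= area_min) and np.all(p <= area_max))
    if inside:
        return "high"   # e.g. dial the duty phone or trigger an alarm bell
    if route_points_to_area:
        return "low"    # e.g. send an alert mail or a short message
    return "none"

print(alarm_level((5, 5), True, (4, 4), (6, 6)))   # high
print(alarm_level((1, 1), True, (4, 4), (6, 6)))   # low
```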
As can be seen from the above exemplary implementation of fig. 7C, in the embodiment of the present invention, when the scenes to be detected belong to different types, different feedback processes are performed according to the scene models, so that the flexibility of applying the scene models is improved.
In the following, an exemplary application of the embodiments of the present invention in a practical application scenario will be described.
Referring to fig. 8, fig. 8 is a schematic view of an optional process of the artificial intelligence based scene sensing method according to the embodiment of the present invention. In fig. 8, the first sensing device is a laser radar and the second sensing device is a camera; three or more mutually non-parallel calibration plates are placed in the overlapping area of the fields of view of the laser radar and the camera, and the scene formed by the calibration plates is the above calibration scene. It should be noted that the calibration plate here may be a ChArUco calibration plate. As shown in fig. 9, the ChArUco calibration plate is a chessboard-like calibration plate in which the white squares of the chessboard are replaced by ArUco markers, i.e., two-dimensional code markers; the ChArUco calibration plate makes it easy to identify the different planes in the image and enables corner localization with higher precision. Fig. 10 is an optional schematic diagram of the calibration scene provided in the embodiment of the present invention; fig. 10 includes three mutually non-parallel ChArUco calibration plates, which are calibration plates 100, 101, and 102 in sequence, and the subsequent relative pose calibration can be performed on the basis of this calibration scene.
The calibration scene shown in fig. 10 is sensed by the laser radar to obtain a frame of laser radar point cloud, and the same calibration scene is sensed by the camera to obtain a frame of image. For the image, the ChArUco calibration plates in the image are identified according to the camera intrinsic parameters and the calibration plate parameters, and the plane equation parameters $(\mathbf{n}_i^C, d_i^C)$ of each calibration plate in the camera coordinate system are acquired, wherein the calibration plate parameters include the numbers of rows and columns and the square size of the ChArUco board, $\mathbf{n}_i^C$ is the unit normal vector of the plane equation corresponding to the $i$-th calibration plate in the camera coordinate system (the unit normal vector pointing toward the camera side), and $d_i^C$ is the constant term of the plane equation corresponding to the $i$-th calibration plate in the camera coordinate system, where $i$ takes the values 1, 2 and 3. The plane equation of the $i$-th calibration plate in the camera coordinate system can thus be expressed as $(\mathbf{n}_i^C)^{\mathsf T}\mathbf{x} + d_i^C = 0$, where $\mathbf{x}$ denotes a point in the plane corresponding to the $i$-th calibration plate.
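As an illustration of how $(\mathbf{n}_i^C, d_i^C)$ can be obtained from the image, the sketch below derives the plane parameters from a board pose estimated with OpenCV's solvePnP on detected ChArUco corners; the corner detection itself and the camera intrinsics are taken as given, and the function name is an assumption rather than the patent's API.

```python
import numpy as np
import cv2

def board_plane_in_camera(object_points, image_points, K, dist_coeffs):
    """object_points: 3D corner coordinates in the board frame (z = 0 on the
    board); image_points: the corresponding detected 2D corners."""
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)        # board-to-camera rotation
    n = R[:, 2]                       # board z-axis expressed in the camera frame
    t = tvec.reshape(3)
    if float(n @ t) > 0:              # orient the normal toward the camera side
        n = -n
    d = -float(n @ t)                 # plane equation: n . x + d = 0
    return n, d
```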
For the laser radar point cloud, the point cloud subsets belonging to the planes corresponding to the calibration plates can be determined according to the matching result, or according to a manually set selection frame. Fig. 11 is an optional schematic diagram of the point cloud interface provided by the embodiment of the present invention; fig. 11 shows a selection frame 110, which corresponds to the calibration plate 100 in fig. 10, that is, the point cloud subset covered by the selection frame 110 belongs to the plane corresponding to the calibration plate 100. Fig. 12 is an optional schematic diagram of a re-projection result provided in the embodiment of the present invention, where the re-projection result is the result of re-projecting the point cloud subsets belonging to the planes of the calibration plates back onto the calibration scene. After the point cloud subset belonging to the plane of each calibration plate is determined, the plane equation parameters $(\mathbf{n}_i^L, d_i^L)$ of the plane formed by that point cloud subset are further determined by principal component analysis, wherein $\mathbf{n}_i^L$ is the unit normal vector of the plane equation corresponding to the $i$-th calibration plate in the laser radar coordinate system (the unit normal vector pointing toward the laser radar side), and $d_i^L$ is the constant term of the plane equation corresponding to the $i$-th calibration plate in the laser radar coordinate system.
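A minimal sketch of the principal component analysis step is given below: the plane normal is taken as the eigenvector of the point covariance with the smallest eigenvalue, and the constant term follows from the centroid. Only the orientation convention (normal pointing toward the lidar at the origin) is an added assumption.

```python
import numpy as np

def fit_plane_pca(points):
    """points: (N, 3) lidar points belonging to one calibration board.
    Returns (n, d) with n a unit normal and d the constant of n . x + d = 0."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, eigvecs = np.linalg.eigh(np.cov((pts - centroid).T))
    n = eigvecs[:, 0]                 # direction of least variance = plane normal
    if float(n @ centroid) > 0:       # orient the normal toward the lidar origin
        n = -n
    d = -float(n @ centroid)
    return n, d
```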
Besides determining the unit normal vectors, the plane intersection point of the planes corresponding to the three calibration plates is also determined in the laser radar coordinate system, denoted by $\mathbf{p}^L$, and the plane intersection point of the planes corresponding to the three calibration plates is determined in the camera coordinate system, denoted by $\mathbf{p}^C$. Then, constraints based on the unit normal vectors and the plane intersection points are constructed, and the relative pose is solved by optimization. Specifically, a first matrix $N = [\mathbf{n}_1^L \;\; \mathbf{n}_2^L \;\; \mathbf{n}_3^L]$ and a second matrix $M = [\mathbf{n}_1^C \;\; \mathbf{n}_2^C \;\; \mathbf{n}_3^C]$ are constructed, and the following objective function is constructed:

$$\min_{R_{CL} \in SO(3)} \; \left\| R_{CL} N - M \right\|_F^2$$

where $\min$ denotes minimization, $\|\cdot\|_F$ denotes the Frobenius (Euclidean) norm of a matrix, and the constraint $R_{CL} \in SO(3)$ restricts $R_{CL}$ to the three-degree-of-freedom space of rotation matrices. The objective function is solved by optimization, and the obtained minimizer $R_{CL}^{\ast}$ is determined as the rotation matrix.
Then, the rotation matrix, the plane intersection point in the laser radar coordinate system, and the plane intersection point in the camera coordinate system are substituted into the coordinate transformation equation $\mathbf{p}^C = R_{CL}\,\mathbf{p}^L + \mathbf{t}_{CL}$, from which the displacement vector $\mathbf{t}_{CL} = \mathbf{p}^C - R_{CL}\,\mathbf{p}^L$ can be obtained. Based on the obtained rotation matrix $R_{CL}$ and displacement vector $\mathbf{t}_{CL}$, the relative pose of the laser radar to the camera is established as

$$T_{CL} = \begin{bmatrix} R_{CL} & \mathbf{t}_{CL} \\ \mathbf{0}^{\mathsf T} & 1 \end{bmatrix},$$

which completes the calibration of the relative pose. The relative pose between the two sensors can be calibrated in this way, so that the data of the two sensors can be fused and further used for perception, navigation, planning, control, and the like; the robot platform may be a ground trolley, an unmanned vehicle, a legged robot, an unmanned aerial vehicle, or the like.
It should be noted that the calibration scene shown in fig. 10 is only for convenience of illustration; in an actual application scenario, the scene can be extended to more than three calibration plates. When there are $n$ calibration plates, there are $n$ unit normal vectors for constraining the rotation matrix in the coordinate system of the sensing data, and up to $\binom{n}{3}$ plane intersection points for solving the displacement vector, where $n$ is an integer not less than 3. In addition, the objects for which the relative pose is solved are not limited to a laser radar and a camera; for example, the relative pose between two radars can also be solved in the above manner.
In order to facilitate understanding of the beneficial effects of the embodiment of the invention, the determined relative pose is verified using a laser radar and a binocular camera. The laser radar used has 32 laser beams, a horizontal field of view of 360°, a vertical field of view of 40°, and a vertical angular resolution as fine as 0.33°; the binocular camera used has a baseline of about 12 cm and a resolution of 1280 × 720.
The relative pose from the left-eye camera to the right-eye camera provided by the manufacturer of the binocular camera is used as the reference true value. Using the scene perception method based on artificial intelligence provided by the embodiment of the present invention, the relative pose $T_{C_rL}$ of the laser radar to the right-eye camera and the relative pose $T_{C_lL}$ of the laser radar to the left-eye camera are obtained, from which the relative pose of the left-eye camera to the right-eye camera can be determined as $T_{C_rC_l} = T_{C_rL}\,(T_{C_lL})^{-1}$.
Comparing this relative pose with the reference true value gives the results shown in the following table:

| Parameter type | Reference true value provided by manufacturer | Calibration result of this method | Absolute error |
| Displacement in x direction (m) | -0.12 | -0.1270 | 0.007 |
| Displacement in y direction (m) | 0 | -0.0053 | -0.0053 |
| Displacement in z direction (m) | 0 | 0.0079 | 0.0079 |
| Rotation angle about the x-axis (°) | 0 | -0.0531 | -0.0531 |
| Rotation angle about the y-axis (°) | 0 | -0.2641 | -0.2641 |
| Rotation angle about the z-axis (°) | 0 | 0.1017 | 0.1017 |
Based on the above table, it can be determined that the absolute error obtained by the scene sensing method based on artificial intelligence provided by the embodiment of the invention is small, the accuracy of the calibrated relative pose is high, and the subsequent accurate data fusion is convenient.
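The consistency check behind the table can be reproduced along the lines of the sketch below: the two lidar-to-camera calibration results are chained into a left-to-right pose and compared with the manufacturer's reference. The pose values here are placeholders, and SciPy is used only for the Euler-angle conversion.

```python
import numpy as np
from scipy.spatial.transform import Rotation

T_lidar_to_right = np.eye(4)                   # placeholder calibration result
T_lidar_to_left = np.eye(4)                    # placeholder calibration result
T_lidar_to_left[:3, 3] = [0.12, 0.0, 0.0]      # assumed ~12 cm baseline offset

# Left-to-right pose: left-camera coordinates -> lidar -> right-camera coordinates.
T_left_to_right = T_lidar_to_right @ np.linalg.inv(T_lidar_to_left)

reference_t = np.array([-0.12, 0.0, 0.0])      # manufacturer reference translation
t_error = T_left_to_right[:3, 3] - reference_t
r_error = Rotation.from_matrix(T_left_to_right[:3, :3]).as_euler("xyz", degrees=True)
print("translation error (m):", t_error, "rotation error (deg):", r_error)
```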
Continuing with the exemplary structure in which the artificial intelligence based scene awareness apparatus 255 provided by the embodiments of the present invention is implemented as software modules, in some embodiments, as shown in fig. 5, the software modules of the artificial intelligence based scene awareness apparatus 255 stored in the memory 250 may include: a first parameter determining module 2551, configured to obtain sensing data obtained by sensing a calibration scene by sensing devices, and determine plane equation parameters of the calibration scene according to the sensing data, the sensing devices comprising a first sensing device and a second sensing device, and the sensing data comprising point clouds and/or images; a second parameter determining module 2552, configured to determine a rotation matrix between the first sensing device and the second sensing device according to the plane equation parameters, and determine a displacement vector according to the rotation matrix; a relative pose determination module 2553, configured to determine a relative pose between the first sensing device and the second sensing device according to the rotation matrix and the displacement vector; and a fusion module 2554, configured to fuse the sensing data of the first sensing device and the sensing data of the second sensing device according to the relative pose, and perform modeling processing on the corresponding scene according to the fused sensing data to obtain a scene model.
In some embodiments, the first parameter determination module 2551 is further configured to: determining planes corresponding to at least three calibration plates included in the calibration scene in a coordinate system of the perception data; and determining a plane equation of a plane corresponding to the calibration plate, and determining a unit normal vector of the plane equation.
In some embodiments, the first parameter determination module 2551 is further configured to: when the sensing device is a laser radar, acquire size characteristics of at least three calibration plates included in the calibration scene, and determine, in the coordinate system of the sensing data, the planes corresponding to the calibration plates according to the size characteristics; or, when the sensing device is a laser radar, acquire a set selection frame, and determine a plane covered by the set selection frame in the coordinate system of the sensing data as a plane corresponding to a calibration plate.
In some embodiments, the second parameter determination module 2552 is further configured to: splicing at least three unit normal vectors corresponding to the first sensing equipment into a first matrix, and splicing at least three unit normal vectors corresponding to the second sensing equipment into a second matrix; constructing an objective function according to the first matrix and the second matrix; and optimizing the objective function, and determining a rotation matrix corresponding to the result of the optimization.
In some embodiments, the second parameter determination module 2552 is further configured to: determining plane intersection points of planes corresponding to at least three calibration plates included in the calibration scene in a coordinate system of the perception data; and solving a coordinate transformation equation according to the rotation matrix, the plane intersection point corresponding to the first sensing equipment and the plane intersection point corresponding to the second sensing equipment to obtain a displacement vector between the first sensing equipment and the second sensing equipment.
In some embodiments, the artificial intelligence based scene awareness apparatus 255 further comprises: an extended pose determining module, configured to determine a left-eye relative pose between the first sensing device and a left-eye sensing device, and determine a right-eye relative pose between the first sensing device and a right-eye sensing device, wherein the second sensing device comprises the left-eye sensing device and the right-eye sensing device; and a binocular pose determining module, configured to invert the left-eye relative pose and multiply the right-eye relative pose by the inverted left-eye relative pose to obtain a binocular relative pose between the left-eye sensing device and the right-eye sensing device, fuse the sensing data of the left-eye sensing device and the sensing data of the right-eye sensing device according to the binocular relative pose, and perform modeling on the corresponding scene according to the fused sensing data to obtain a scene model.
In some embodiments, the artificial intelligence based scene awareness apparatus 255 further comprises: the recognition module is used for recognizing the scene model of the scene to be detected to obtain a recognition result; and the feedback module is used for performing feedback processing according to the obstacles in the recognition result when the recognition result indicates that the obstacles exist in the scene model.
In some embodiments, the feedback module is further to: performing path planning processing according to the scene model to obtain a traveling route; and when the recognition result indicates that an obstacle exists in the scene model, updating the travel route according to the coordinates of the obstacle in the recognition result so as to ensure that the updated travel route and the coordinates of the obstacle do not intersect.
In some embodiments, the feedback module is further to: acquiring an environment map, and performing path planning processing according to the environment map to obtain a traveling route; wherein the environment map comprises a plurality of obstacles; comparing the obstacles in the identification result with the obstacles in the environment map to obtain the current position of the sensing equipment in the environment map; and generating a navigation instruction according to the traveling route and the current position.
In some embodiments, the feedback module is further to: when the identification results corresponding to the n continuous scene models indicate that the obstacles exist, carrying out similarity comparison on the obstacles in the n identification results; wherein n is an integer greater than 1; when the similarity comparison of the obstacles in the n recognition results is successful, determining a traveling route of the obstacles according to the coordinates of the obstacles in the n recognition results; acquiring an alarm area of the scene to be detected; and when the travelling route points to the alarm area, carrying out alarm processing according to the travelling route.
In some embodiments, the feedback module is further to: acquiring an alarm area of the scene to be detected; and when the coordinates of the obstacles in the identification result fall into the alarm area, carrying out alarm processing according to the coordinates of the obstacles.
In some embodiments, the artificial intelligence based scene awareness apparatus 255 further comprises: an uplink module, configured to send the identification of the sensing devices and the corresponding relative pose to a blockchain network, so that a node of the blockchain network fills the identification of the sensing devices and the corresponding relative pose into a new block, appends the new block to the tail of the blockchain when consensus on the new block is reached, and responds to relative pose query requests carrying the identification of the sensing devices.
Embodiments of the present invention provide a storage medium having stored therein executable instructions that, when executed by a processor, cause the processor to perform an artificial intelligence based scene awareness method provided by embodiments of the present invention, for example, an artificial intelligence based scene awareness method as shown in fig. 7A, 7B or 7C.
In some embodiments, the storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, for example, in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
In summary, the embodiment of the present invention requires only at least three calibration plates and imposes no special requirement on the coating material of the calibration plates; the calibration plates can simply be printed and pasted onto cardboard to construct a calibration scene, i.e., the hardware requirements on the calibration scene are low. Compared with approaches relying on reflection-intensity assistance or edge fitting, the embodiment of the present invention avoids systematic errors by constructing plane-based geometric constraints: when noise is not considered, the points determined in the point cloud all lie on the planes corresponding to the calibration plates, and more points can be utilized, so the calibrated relative pose is more accurate, which improves the accuracy of subsequent modeling.
The above description is only an example of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present invention are included in the protection scope of the present invention.

Claims (14)

1. A scene perception method based on artificial intelligence is characterized by comprising the following steps:
acquiring sensing data obtained by sensing a calibration scene comprising at least three non-parallel calibration plates by sensing equipment, and determining plane equation parameters of the calibration scene according to the sensing data; the sensing equipment comprises a first sensing equipment and a second sensing equipment, and the sensing data comprises point clouds and/or images;
determining a rotation matrix between the first sensing device and the second sensing device according to the plane equation parameters;
determining plane intersection points of planes corresponding to at least three calibration plates included in the calibration scene in a coordinate system of the perception data;
solving a coordinate transformation equation according to the rotation matrix, the plane intersection point corresponding to the first sensing equipment and the plane intersection point corresponding to the second sensing equipment to obtain a displacement vector between the first sensing equipment and the second sensing equipment;
determining a relative pose between the first sensing device and the second sensing device according to the rotation matrix and the displacement vector;
and fusing the perception data of the first perception device and the perception data of the second perception device according to the relative pose, and modeling a corresponding scene according to the fused perception data to obtain a scene model.
2. The scene awareness method according to claim 1, wherein said determining the plane equation parameters of the calibration scene according to the awareness data comprises:
determining planes corresponding to at least three calibration plates included in the calibration scene in a coordinate system of the perception data;
and determining a plane equation of a plane corresponding to the calibration plate, and determining a unit normal vector of the plane equation.
3. The scene awareness method according to claim 2, wherein determining, in the coordinate system of the awareness data, the planes corresponding to at least three calibration plates included in the calibration scene comprises:
when the sensing equipment is a laser radar, acquiring size characteristics of at least three calibration plates included in the calibration scene, and determining a plane corresponding to the calibration plate according to the size characteristics in a coordinate system of the sensing data; or,
and when the sensing equipment is a laser radar, acquiring a set selecting frame, and determining a plane covered by the set selecting frame in a coordinate system of the sensing data as a plane corresponding to the calibration plate.
4. The method of scene awareness according to claim 2, wherein said determining a rotation matrix between said first perceiving device and said second perceiving device according to said plane equation parameters comprises:
splicing at least three unit normal vectors corresponding to the first sensing equipment into a first matrix, and splicing at least three unit normal vectors corresponding to the second sensing equipment into a second matrix;
constructing an objective function according to the first matrix and the second matrix;
and optimizing the objective function, and determining a rotation matrix corresponding to the result of the optimization.
5. The scene aware method of claim 1, further comprising:
determining a left eye relative pose between the first sensing equipment and left eye sensing equipment, and determining a right eye relative pose between the first sensing equipment and right eye sensing equipment; wherein the second perception device comprises the left eye perception device and the right eye perception device;
carrying out inverse processing on the relative pose of the left eye, and carrying out product processing on the relative pose of the left eye and the relative pose of the right eye after inverse processing to obtain a binocular relative pose between the left eye sensing equipment and the right eye sensing equipment;
and fusing the perception data of the left eye perception device and the perception data of the right eye perception device according to the binocular relative pose, and modeling a corresponding scene according to the fused perception data to obtain a scene model.
6. The scene aware method of any one of claims 1 to 5, further comprising:
identifying a scene model of a scene to be detected to obtain an identification result;
and when the recognition result indicates that the obstacle exists in the scene model, performing feedback processing according to the obstacle in the recognition result.
7. The scene perception method according to claim 6, wherein when the recognition result indicates that an obstacle exists in the scene model, performing feedback processing according to the obstacle in the recognition result includes:
performing path planning processing according to the scene model to obtain a traveling route;
and when the recognition result indicates that an obstacle exists in the scene model, updating the travel route according to the coordinates of the obstacle in the recognition result so as to ensure that the updated travel route and the coordinates of the obstacle do not intersect.
8. The scene awareness method according to claim 6, wherein the performing feedback processing according to the obstacle in the recognition result includes:
acquiring an environment map, and performing path planning processing according to the environment map to obtain a traveling route; wherein the environment map comprises a plurality of obstacles;
comparing the obstacles in the identification result with the obstacles in the environment map to obtain the current position of the sensing equipment in the environment map;
and generating a navigation instruction according to the traveling route and the current position.
9. The scene perception method according to claim 6, wherein when the recognition result indicates that an obstacle exists in the scene model, performing feedback processing according to the obstacle in the recognition result includes:
when the identification results corresponding to the n continuous scene models indicate that the obstacles exist, carrying out similarity comparison on the obstacles in the n identification results; wherein n is an integer greater than 1;
when the similarity comparison of the obstacles in the n recognition results is successful, determining a traveling route of the obstacles according to the coordinates of the obstacles in the n recognition results;
acquiring an alarm area of the scene to be detected;
and when the travelling route points to the alarm area, carrying out alarm processing according to the travelling route.
10. The scene awareness method according to claim 6, wherein the performing feedback processing according to the obstacle in the recognition result includes:
acquiring an alarm area of the scene to be detected;
and when the coordinates of the obstacles in the identification result fall into the alarm area, carrying out alarm processing according to the coordinates of the obstacles.
11. The scene aware method of any one of claims 1 to 5, further comprising:
sending the identification of the sensing equipment and the corresponding relative pose to a blockchain network, so that a node of the blockchain network fills the identification of the sensing equipment and the corresponding relative pose into a new block, appends the new block to the tail of the blockchain when consensus on the new block is reached, and responds to a relative pose query request carrying the identification of the sensing equipment.
12. A scene perception device based on artificial intelligence is characterized by comprising:
the first parameter determining module is used for acquiring sensing data obtained by sensing a calibration scene comprising at least three non-parallel calibration plates by sensing equipment and determining plane equation parameters of the calibration scene according to the sensing data; the sensing equipment comprises a first sensing equipment and a second sensing equipment, and the sensing data comprises point clouds and/or images;
a second parameter determining module, configured to determine a rotation matrix between the first sensing device and the second sensing device according to the plane equation parameter;
the second parameter determining module is further configured to determine, in the coordinate system of the sensing data, plane intersection points of planes corresponding to at least three calibration plates included in the calibration scene;
the second parameter determining module is further configured to solve a coordinate transformation equation according to the rotation matrix, the plane intersection point corresponding to the first sensing device, and the plane intersection point corresponding to the second sensing device, so as to obtain a displacement vector between the first sensing device and the second sensing device;
a relative pose determination module, configured to determine a relative pose between the first sensing device and the second sensing device according to the rotation matrix and the displacement vector;
and a fusion module, configured to fuse the perception data of the first perception device and the perception data of the second perception device according to the relative pose, and perform modeling on a corresponding scene according to the fused perception data to obtain a scene model.
13. An electronic device, comprising:
a memory for storing executable instructions;
a processor for implementing the artificial intelligence based scene awareness method of any one of claims 1 to 11 when executing executable instructions stored in the memory.
14. A storage medium having stored thereon executable instructions for causing a processor to perform the artificial intelligence based scene awareness method of any one of claims 1 to 11 when executed.
CN201911184312.1A 2019-11-27 2019-11-27 Scene perception method and device based on artificial intelligence and electronic equipment Active CN110942485B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911184312.1A CN110942485B (en) 2019-11-27 2019-11-27 Scene perception method and device based on artificial intelligence and electronic equipment

Publications (2)

Publication Number Publication Date
CN110942485A CN110942485A (en) 2020-03-31
CN110942485B true CN110942485B (en) 2021-03-19

Family

ID=69908328

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911184312.1A Active CN110942485B (en) 2019-11-27 2019-11-27 Scene perception method and device based on artificial intelligence and electronic equipment

Country Status (1)

Country Link
CN (1) CN110942485B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111815716A (en) * 2020-07-13 2020-10-23 北京爱笔科技有限公司 Parameter calibration method and related device
CN112016465B (en) * 2020-08-28 2024-06-28 北京至为恒通企业管理有限公司 Scene recognition method, device and system
CN112733877B (en) * 2020-11-27 2023-05-30 北京理工大学 Multi-laser radar three-dimensional imaging artificial intelligent ore identification method and device


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108389232B (en) * 2017-12-04 2021-10-19 长春理工大学 Geometric correction method for irregular surface projection image based on ideal viewpoint

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110264416A (en) * 2019-05-28 2019-09-20 深圳大学 Sparse point cloud segmentation method and device
CN110244282A (en) * 2019-06-10 2019-09-17 于兴虎 A kind of multicamera system and laser radar association system and its combined calibrating method
CN110161485A (en) * 2019-06-13 2019-08-23 同济大学 A kind of outer ginseng caliberating device and scaling method of laser radar and vision camera

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Qilong Zhang. "Extrinsic Calibration of a Camera and Laser Range Finder (improves camera calibration)". ResearchGate. 2015. *
Yao Wentao (姚文韬). "An adaptive joint calibration algorithm for camera and laser radar". Control Engineering of China (《控制工程》). 2017, Vol. 24 (S0). *
Peng Meng (彭梦). "A laser radar and camera calibration method based on two parallel planes". Journal of Central South University (Science and Technology) (《中南大学学报(自然科学版)》). 2012, Vol. 43 (12). *
Xu Zhe (徐喆). "An extrinsic calibration method based on a four-layer laser radar and a camera". Proceedings of the 33rd Chinese Control Conference. 2014. *

Also Published As

Publication number Publication date
CN110942485A (en) 2020-03-31

Similar Documents

Publication Publication Date Title
CN110942485B (en) Scene perception method and device based on artificial intelligence and electronic equipment
CN111694903B (en) Map construction method, device, equipment and readable storage medium
CN112419494B (en) Obstacle detection and marking method and device for automatic driving and storage medium
CN109946680B (en) External parameter calibration method and device of detection system, storage medium and calibration system
CN109690622A (en) Camera registration in multicamera system
CN108351791A (en) The computing device of accessory is inputted with user
CN103793936A (en) Automated frame of reference calibration for augmented reality
CN110501036A (en) The calibration inspection method and device of sensor parameters
CN110470333A (en) Scaling method and device, the storage medium and electronic device of sensor parameters
Adil et al. A novel algorithm for distance measurement using stereo camera
Liu et al. Mobile delivery robots: Mixed reality-based simulation relying on ros and unity 3D
CN114565916B (en) Target detection model training method, target detection method and electronic equipment
US20220301222A1 (en) Indoor positioning system and indoor positioning method
CN108367436A (en) Determination is moved for the voluntary camera of object space and range in three dimensions
KR20210129360A (en) System for providing 3D model augmented reality service using AI and method thereof
CN112053440A (en) Method for determining individualized model and communication device
Yin et al. CoMask: Corresponding mask-based end-to-end extrinsic calibration of the camera and LiDAR
Alaba et al. Multi-sensor fusion 3D object detection for autonomous driving
Zhang et al. Three-dimensional modeling and indoor positioning for urban emergency response
CN115357500A (en) Test method, device, equipment and medium for automatic driving system
CN115100257A (en) Sleeve alignment method and device, computer equipment and storage medium
Buck et al. Unreal engine-based photorealistic aerial data generation and unit testing of artificial intelligence algorithms
Kim et al. One shot extrinsic calibration of a camera and laser range finder using vertical planes
Li et al. Light field SLAM based on ray-space projection model
Fernandez-Cortizas et al. Multi S-Graphs: An Efficient Distributed Semantic-Relational Collaborative SLAM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (ref country code: HK; ref legal event code: DE; ref document number: 40022990)
GR01 Patent grant