CN107145610B - Digital representation and retrieval method of environment or object based on visual perception - Google Patents


Info

Publication number
CN107145610B
CN107145610B (application CN201710447492.2A)
Authority
CN
China
Prior art keywords
environment
visual
structured
visual perception
data
Prior art date
Legal status
Active
Application number
CN201710447492.2A
Other languages
Chinese (zh)
Other versions
CN107145610A (en)
Inventor
Wang Hongjun (王红军)
Current Assignee
Beijing jiruixiang Aviation Technology Co., Ltd
Original Assignee
Beijing Jiruixiang Aviation Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jiruixiang Aviation Technology Co ltd filed Critical Beijing Jiruixiang Aviation Technology Co ltd
Priority to CN201710447492.2A priority Critical patent/CN107145610B/en
Publication of CN107145610A publication Critical patent/CN107145610A/en
Application granted granted Critical
Publication of CN107145610B publication Critical patent/CN107145610B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/80Information retrieval; Database structures therefor; File system structures therefor of semi-structured data, e.g. markup language structured data such as SGML, XML or HTML
    • G06F16/83Querying
    • G06F16/835Query processing

Abstract

The invention relates to a digital representation and retrieval method for an environment or object based on visual perception. The visual mechanisms of animals and humans have so far not been fully explained by science and technology. After a three-dimensional object is seen from different angles, an "open cover" of the real three-dimensional object is obtained. Will this "open cover" be a visual perception uniquely corresponding to the object? Is this perception stored in the brain in two dimensions or in more? Is the stored information curvature or contour? Is the above "unique correspondence" a conformal mapping, an ordinary bijection, or something else? Such questions remain unanswered, but they do not hinder continued observation and research. Taking these principles as its starting point, the invention provides a visual perception model whose theoretical basis lies in differential geometry and manifolds: the probability distribution of the curvature of an object's surface is analyzed and used as the digital representation of visual information. The model thereby supports visual storage and retrieval and provides a basic technology for visual applications.

Description

Digital representation and retrieval method of environment or object based on visual perception
Technical Field
The invention relates to artificial intelligence and machine vision, and in particular to how visual information acquired by a robot is represented, stored, and retrieved after the robot perceives its surrounding environment or objects.
Background
The development of industrial robots in recent years has driven the gradual growth of the service robot industry, and the intelligent hardware field that emerged in 2014 has also become prominent. According to statistics from the International Federation of Robotics, sales of service robots reached 8.5 billion dollars in 2015, maintaining a growth rate of 20%-30%. According to iResearch, global intelligent hardware shipments reached 60 million units in 2014 and were expected to reach 140 million units in 2017.
Behind the market's rapid growth, problems are also evident: on the one hand, much of the market potential remains untapped; on the other hand, robots and intelligent hardware still face technical difficulties when entering the service industry.
Although visual recognition of objects is a major difficulty, some technical attempts have been made to overcome it. For example, the patents "A method for recognizing an entire object based on a three-dimensional grid map" and "A method for assisting in recognizing an object based on color features" describe how to recognize an object by its shape, color, material, and so on. The problem addressed here is different: after visual perception, how is the acquired information represented, stored, and retrieved?
Object of the Invention
The invention mainly aims to solve the problems of representation, storage and retrieval of visual perception information.
Technical scheme
The purpose of the invention is realized as follows: point cloud information of an actual environment or object is acquired through related equipment and algorithms, such as a laser radar, a CCD (charge-coupled device), a CMOS (complementary metal-oxide-semiconductor) sensor, a point cloud registration algorithm, and a point cloud noise reduction algorithm, and the subsequent processing steps are as follows:
(1) calculating the global and local feature tensors and feature spectrum values of the environment or object from its point cloud data;
(2) organizing structured or semi-structured data in combination with other related information, where an XML format may be chosen to carry the data;
(3) storing the structured or semi-structured data of the environment or object;
(4) editing, auditing, or retrieving environments and objects.
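The abstract proposes using the probability distribution of surface curvature as the digital representation. A minimal sketch of that idea follows, assuming per-point curvature values have already been estimated upstream; the function name, bin count, and value range are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def curvature_descriptor(curvatures, bins=16, value_range=(0.0, 1.0)):
    """Normalized histogram of per-point curvature estimates: an
    approximation of the probability distribution of surface curvature,
    used as a fixed-length digital representation of the object."""
    hist, _ = np.histogram(curvatures, bins=bins, range=value_range)
    total = hist.sum()
    return hist / total if total > 0 else hist.astype(float)

# Two synthetic "objects": one nearly flat, one strongly curved.
rng_a, rng_b = np.random.default_rng(0), np.random.default_rng(1)
flat_like = np.abs(rng_a.normal(0.05, 0.02, 500))   # low-curvature samples
curved = np.abs(rng_b.normal(0.60, 0.10, 500))      # high-curvature samples
d_flat = curvature_descriptor(flat_like)
d_curved = curvature_descriptor(curved)
```

Because the descriptor is a fixed-length probability vector, two objects with different surface shapes produce measurably different representations, which is what makes retrieval by distance possible later.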
The system used by the invention consists of five subsystems: an acquisition system, a visual perception system, a storage system, an auditing system, and a retrieval system. These are software systems divided by function; the specific functions of each subsystem are as follows:
Acquisition system: collects the point cloud information of the environment or object through related sensors and outputs the point cloud data in real time or offline.
Visual perception system: receives the point cloud data, calculates the global and local feature tensors and feature spectrum values, and outputs structured or semi-structured data.
Storage system: stores the structured or semi-structured data.
Editing and auditing system: lets background personnel or machines edit, audit, update, and store the data of the environment or object.
Retrieval system: lets a human or machine retrieve the stored environment or object.
Description of the drawings:
FIG. 1 is a diagram of the system configuration used in the method of the present invention
FIG. 2 is a real object, a "door"
FIG. 3 is the feature tensor of the door
FIG. 4 is the feature spectrum of the door
FIG. 5 is a digitized representation of the door
FIG. 6 is a retrieval page
Detailed Description
Embodiments of the present invention will be described below with reference to the drawings.
The overall structure of the system used in the method of the invention is shown in FIG. 1. It comprises five subsystems, and the method proceeds in the following steps:
First step
In the acquisition system, information about the surrounding environment or object is collected through related sensors; the data is optimized by noise reduction, filtering, and similar methods; and the point cloud data of the environment or object is output, either in real time or offline.
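The text does not tie the noise-reduction step to a specific algorithm. As one common choice, a statistical outlier removal pass can be sketched as follows; the function name and thresholds are illustrative, not from the patent:

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=2.0):
    """Statistical outlier removal: drop points whose mean distance to
    their k nearest neighbours exceeds the global mean of that quantity
    by std_ratio standard deviations."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)                 # ignore self-distance
    knn_mean = np.sort(dists, axis=1)[:, :k].mean(axis=1)
    keep = knn_mean <= knn_mean.mean() + std_ratio * knn_mean.std()
    return points[keep]

rng = np.random.default_rng(0)
cloud = rng.normal(0.0, 0.01, (200, 3))             # dense cluster near origin
noisy = np.vstack([cloud, [[5.0, 5.0, 5.0]]])       # one far-away noise point
cleaned = remove_outliers(noisy)
```

The brute-force pairwise distance matrix is fine for a sketch; a production acquisition system would use a spatial index (k-d tree) for large clouds.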
Second step
In the visual perception system, the point cloud data transmitted by the acquisition system is received and visual perception calculation is performed to obtain the global and local feature tensors and feature spectrum values of the environment or object. The calculation is not limited to any particular visual perception algorithm; here, the point cloud data can be processed with the algorithm described in the patent "A method for representing and identifying environmental features based on a three-dimensional grid map". For the actual door shown in FIG. 2, the corresponding local feature tensor is shown in FIG. 3, and FIG. 4 is its feature spectrum.
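The patent defers the actual feature computation to the cited grid-map patent, whose algorithm is not reproduced here. As an illustrative stand-in only, a neighbourhood covariance matrix and its eigenvalues are a common way to obtain a local "tensor" and "spectrum" from point cloud data:

```python
import numpy as np

def local_feature_tensor(points, center_idx, k=10):
    """Covariance (structure) tensor of a point's k-neighbourhood; its
    sorted eigenvalues serve as a simple feature spectrum for that
    neighbourhood. A stand-in for the patent's undisclosed algorithm."""
    center = points[center_idx]
    dists = np.linalg.norm(points - center, axis=1)
    nbr = points[np.argsort(dists)[:k]]             # k nearest neighbours
    centred = nbr - nbr.mean(axis=0)
    tensor = centred.T @ centred / k                # 3x3 covariance tensor
    spectrum = np.sort(np.linalg.eigvalsh(tensor))[::-1]  # descending
    return tensor, spectrum

rng = np.random.default_rng(0)
# Points sampled near a plane: the smallest eigenvalue should be near zero,
# reflecting the flatness of a surface such as a door panel.
plane = rng.uniform(-1, 1, (100, 3))
plane[:, 2] *= 0.001
tensor, spectrum = local_feature_tensor(plane, 0, k=30)
```

The eigenvalue pattern (two large, one near zero for planar patches; three comparable values for volumetric clutter) is what makes such spectra usable as shape signatures.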
Third step
The feature tensor and feature spectrum output by the visual perception system, together with other information such as position, resolution, and description, are organized into XML form, as shown in FIG. 5. Each piece of XML information is called a DOC (document). It should be noted that each DOC has a globally unique id, which facilitates subsequent operations of adding, deleting, and modifying that DOC. One tag stores the feature tensor information shown in FIG. 3, and another tag stores the feature spectrum information shown in FIG. 4; their contents may be, but are not limited to, as many entries as shown in FIG. 5.
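Since FIG. 5 and the original tag names are not reproduced in this text, the XML organization can only be sketched with hypothetical tag and field names; the essential point from the description is the globally unique id per DOC:

```python
import uuid
import xml.etree.ElementTree as ET

def make_doc(name, tensor, spectrum, position="", resolution=""):
    """Build one DOC: an XML record with a globally unique id, the
    feature tensor and spectrum, and auxiliary description fields.
    All tag names are hypothetical stand-ins for those in FIG. 5."""
    doc = ET.Element("doc", id=str(uuid.uuid4()))   # globally unique id
    ET.SubElement(doc, "name").text = name
    ET.SubElement(doc, "position").text = position
    ET.SubElement(doc, "resolution").text = resolution
    ET.SubElement(doc, "tensor").text = " ".join(f"{v:.6f}" for v in tensor)
    ET.SubElement(doc, "spectrum").text = " ".join(f"{v:.6f}" for v in spectrum)
    return doc

door = make_doc("door", tensor=[0.9, 0.1, 0.0], spectrum=[0.95, 0.05],
                position="hallway", resolution="0.01m")
xml_text = ET.tostring(door, encoding="unicode")
```

Keeping the id as an attribute of the root element makes later add/delete/modify operations a simple lookup, which is exactly why the description insists on global uniqueness.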
Fourth step
The information output in the previous step, in the organization form described above, can be stored on a single machine or in a distributed cluster, and in a database, a file system, or another form.
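A minimal single-machine variant of the storage step might look like the following; a database or distributed cluster could replace the file-per-DOC layout, and all names here are illustrative:

```python
import tempfile
from pathlib import Path

class DocStore:
    """Minimal single-machine store: one file per DOC, keyed by its
    globally unique id. A sketch, not the patent's storage system."""
    def __init__(self, root):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, doc_id, xml_text):
        (self.root / f"{doc_id}.xml").write_text(xml_text, encoding="utf-8")

    def get(self, doc_id):
        path = self.root / f"{doc_id}.xml"
        return path.read_text(encoding="utf-8") if path.exists() else None

    def delete(self, doc_id):
        (self.root / f"{doc_id}.xml").unlink(missing_ok=True)

store = DocStore(tempfile.mkdtemp())
store.put("doc-001", "<doc id='doc-001'><name>door</name></doc>")
round_trip = store.get("doc-001")
```

Because every DOC carries its own id, the same put/get/delete interface maps directly onto a key-value database or a distributed object store without changing callers.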
Fifth step
Background personnel or related users can search the DOCs by keyword or by point cloud data. For example, on the retrieval page shown in FIG. 6, retrieval by point cloud information is performed through the "following up a clue" control, and retrieval by keyword is performed through the input box (the keyword "walls" in FIG. 6). The related DOC information can be manually updated and stored if necessary.
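The two retrieval modes described above, by keyword and by point-cloud-derived features, can be sketched as follows; the field names and the L1 distance over feature spectra are assumptions, not taken from the patent:

```python
def search_docs(docs, keyword=None, query_spectrum=None, top_k=3):
    """Retrieve DOCs either by keyword (substring match on name or
    description) or by feature spectrum (nearest by L1 distance)."""
    if keyword is not None:
        kw = keyword.lower()
        return [d for d in docs
                if kw in d["name"].lower()
                or kw in d.get("description", "").lower()]
    scored = sorted(docs, key=lambda d: sum(
        abs(a - b) for a, b in zip(d["spectrum"], query_spectrum)))
    return scored[:top_k]

docs = [
    {"name": "door", "description": "wooden door", "spectrum": [0.9, 0.1, 0.0]},
    {"name": "wall", "description": "brick wall", "spectrum": [0.5, 0.5, 0.0]},
    {"name": "desk", "description": "office desk", "spectrum": [0.4, 0.4, 0.2]},
]
by_keyword = search_docs(docs, keyword="wall")
by_spectrum = search_docs(docs, query_spectrum=[0.88, 0.12, 0.0], top_k=1)
```

Feature-based search is what lets a robot query with a freshly observed point cloud rather than a text label, mirroring the "following up a clue" control on the retrieval page.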
Sixth step
The retrieval operation may be performed by the system operator or a related user, or directly by a machine. For example, when a machine performs a task indoors or outdoors, it may use the visual perception data stored in the system; when the environment or object information it currently collects is inconsistent with the information stored in the system, the machine may update the system storage directly.
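The machine-driven update in this step can be sketched as a tolerance check between the stored and newly observed feature spectra; the tolerance value and dictionary-backed store are illustrative assumptions:

```python
def maybe_update(store, doc_id, stored_spectrum, observed_spectrum, tol=0.1):
    """Compare a newly observed feature spectrum with the stored one and
    overwrite the stored record when they disagree beyond a tolerance.
    Returns True when an update was written."""
    diff = sum(abs(a - b) for a, b in zip(stored_spectrum, observed_spectrum))
    if diff > tol:
        store[doc_id] = list(observed_spectrum)
        return True
    return False

store = {"door-01": [0.9, 0.1, 0.0]}
# First observation disagrees with storage, so the record is rewritten;
# the second observation then matches and no update occurs.
changed = maybe_update(store, "door-01", store["door-01"], [0.5, 0.4, 0.1])
unchanged = maybe_update(store, "door-01", store["door-01"], [0.5, 0.4, 0.1])
```

The tolerance keeps sensor noise from triggering constant rewrites while still letting genuine environmental change (a moved or replaced object) propagate into storage.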
In conclusion, the method solves the problems of representation, storage, and retrieval of visual perception information, can provide high-precision three-dimensional map services for related personnel or robots, and lays a solid foundation for new directions in visual perception applications.

Claims (3)

1. A digital representation and retrieval method of an environment or object based on visual perception, which analyzes the surface shape and color attribute information of a spatial environment or object to obtain the feature tensors and feature spectrum values of these attributes, and organizes them into structured or semi-structured data convenient to store and retrieve, realized by the following steps:
(1) acquiring two-dimensional or three-dimensional point cloud data of an actual object and environment through a laser radar, a CCD, a CMOS, or another visual sensor, together with a point cloud registration algorithm or a point cloud noise reduction algorithm;
(2) performing visual perception calculation on the point cloud data to obtain the global and local feature tensors and feature spectrum values of the environment or object;
(3) organizing structured or semi-structured data in combination with the position, description, time, resolution, or unique identification ID of the environment or object;
(4) storing the organized structured or semi-structured environment or object data;
(5) editing, auditing, and retrieving the environment or object data.
2. The digital representation and retrieval method for environment or object based on visual perception as claimed in claim 1, wherein the point cloud data of environment or object includes spatial position information, color information, and material information.
3. The visual perception-based digital representation and retrieval method of an environment or object as claimed in claim 1, wherein the structured or semi-structured data of the environment or object includes the global and local feature tensors and feature spectrum values of the environment or object.
CN201710447492.2A 2017-06-15 2017-06-15 Digital representation and retrieval method of environment or object based on visual perception Active CN107145610B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710447492.2A CN107145610B (en) 2017-06-15 2017-06-15 Digital representation and retrieval method of environment or object based on visual perception

Publications (2)

Publication Number Publication Date
CN107145610A CN107145610A (en) 2017-09-08
CN107145610B true CN107145610B (en) 2021-11-26

Family

ID=59781513

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710447492.2A Active CN107145610B (en) 2017-06-15 2017-06-15 Digital representation and retrieval method of environment or object based on visual perception

Country Status (1)

Country Link
CN (1) CN107145610B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105184243A (en) * 2015-08-24 2015-12-23 王红军 Environment characteristic expression and identification method based on three dimensional grid map


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on Key Technologies of Mine Rescue Robots; Liu Jian; China Doctoral Dissertations Full-text Database, Information Science and Technology; 20150415; pp. I140-34 *
On Robot Environment Perception and Autonomous Motion, with a Discussion Based on Differential Geometry; hjwang1; Zuoyebuluo; 20160926; pp. 1-12 *

Also Published As

Publication number Publication date
CN107145610A (en) 2017-09-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210114
Address after: 130022 J7 1702, Zhonghai international community, Nanguan District, Changchun, Jilin
Applicant after: Li Na
Address before: Room A502, wankecheng qiutongju, Bantian street, Longgang District, Shenzhen City, Guangdong Province
Applicant before: Wang Hongjun

TA01 Transfer of patent application right

Effective date of registration: 20211027
Address after: 400000 14-5, No. 2, beichengsan Road, Jiangbei District, Chongqing
Applicant after: Chongqing Huazhi Tianxia Technology Co., Ltd
Address before: 130022 J7 1702, Zhonghai international community, Nanguan District, Changchun, Jilin
Applicant before: Li Na

TA01 Transfer of patent application right

Effective date of registration: 20211110
Address after: 100000 room 803-2, floor 8, building 2, yard 1, Hangfeng Road, Fengtai District, Beijing
Applicant after: Beijing jiruixiang Aviation Technology Co., Ltd
Address before: 400000 14-5, No. 2, Beicheng Third Road, Jiangbei District, Chongqing
Applicant before: Chongqing Huazhi Tianxia Technology Co., Ltd

GR01 Patent grant