CN111640129B - Visual mortar recognition system applied to indoor wall construction robot - Google Patents

Info

Publication number
CN111640129B
CN111640129B
Authority
CN
China
Prior art keywords
color
algorithm
acquisition module
module
mortar
Prior art date
Legal status
Active
Application number
CN202010447459.1A
Other languages
Chinese (zh)
Other versions
CN111640129A (en)
Inventor
于鸿洋
阎一鸣
王昭婧
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202010447459.1A
Publication of CN111640129A
Application granted
Publication of CN111640129B

Classifications

    • G06T 7/136 — Image analysis; segmentation or edge detection involving thresholding
    • G06N 3/04 — Neural networks; architecture, e.g. interconnection topology
    • G06N 3/084 — Neural network learning methods; backpropagation, e.g. using gradient descent
    • G06T 7/90 — Image analysis; determination of colour characteristics
    • G06T 2207/10016 — Image acquisition modality: video; image sequence
    • G06T 2207/10024 — Image acquisition modality: color image
    • Y02P 90/02 — Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The invention discloses a visual mortar recognition system applied to an indoor wall construction robot, belonging to the technical field of visual recognition. The invention reduces manual intervention while the robot works, realizes automatic calculation of the robot's work tasks, and improves the robot's working efficiency.

Description

Visual mortar identification system applied to indoor wall construction robot
Technical Field
The invention belongs to the technical field of visual identification, and particularly relates to a visual mortar identification technology applied to an indoor wall construction robot.
Background
With the progress of industrial intelligence, computer vision systems are increasingly replacing humans in tasks such as recognition and classification, owing to their high recognition rate, high speed, low cost, and ability to operate continuously. Common techniques include edge detection, texture recognition, and R-CNN object recognition, and they are widely used in fields such as high-voltage line inspection and face recognition. In the field of indoor construction robots, however, these techniques struggle to identify mortar, the raw material of robot construction, for the following main reasons:
1. The indoor construction environment is complex, the wall surfaces to be finished come in many types, and the lighting is uncontrollable. A recognition system that relies only on cues such as edges and colors generalizes very poorly; for example, edge detection yields a very low mortar recognition rate against a red-brick wall;
2. The wall surface carries weak texture information. Mortar is a semifluid, so its surface texture changes under tension and gravity; moreover, during construction the robot is less than 30 cm from the wall, beyond the reasonable working range of most depth cameras;
3. The robot's power consumption, cost, and hence computing power are limited. To preserve the robot's working efficiency, the vision system must achieve high performance on this constrained hardware: under the existing construction conditions, each input frame must be processed within 100 ms. A visual recognition pipeline composed of Gaussian filtering, Canny edge detection, binarization, and morphological transformation takes 1.1-1.2 s per 1920×1080 frame, which falls far short of that requirement;
4. The robot system is complex and has strict real-time requirements. Normal operation involves a motion control system, a navigation and positioning system, a human-computer interaction system, and a visual recognition system, and a single control system can hardly satisfy all of these workloads at once.
Disclosure of Invention
The purpose of the invention: to solve the above technical problems, the invention provides a visual mortar identification system applied to an indoor wall construction robot. The system introduces a distributed architecture to resolve contention for computing resources, the visual saliency principle to overcome the poor generalization of visual recognition systems, and OpenCL heterogeneous computing to work around the robot's limited computing power; it also introduces a visual recognition parameter optimization scheme that, according to the robot's working characteristics, makes recognition more purposive in the designated working environment.
The invention relates to a visual mortar identification system applied to an indoor wall construction robot, which comprises:
the indoor wall construction robot comprises at least two industrial personal computers connected by a wired or wireless network;
the industrial personal computer connected to the camera transmits the video stream in real time to the industrial personal computer without a camera for display;
a visual saliency value is calculated from color and position information to segment the mortar: if a pixel's saliency value exceeds a preset threshold, the pixel is considered to belong to the mortar region, and the corresponding mortar region is thereby segmented;
the industrial personal computer connected to the camera performs image memory copying and image decoding in two separate threads; the decoding thread decodes on demand and is rate-differentially matched with the copying thread;
read-write mutual exclusion between the video stream transmission module and the video acquisition module is realized with a double buffer and atomic exchange of the read/write pointers.
Furthermore, the video acquisition module holds a semaphore that the connection error processing module monitors; when an error occurs in the video acquisition module, the semaphore is triggered, which in turn triggers the connection error processing module.
Further, the mortar identification processing specifically comprises:
converting the image from the RGB color space to the Lab color space;
performing Mean-Shift clustering on the five-dimensional vector (L, a, b, x, y) of each pixel to obtain a superpixel image;
calculating the Euclidean distances between the color and position information of each superpixel block and those of every other superpixel block, and normalizing them to obtain a saliency value;
with a salient region specified manually, optimizing the system parameters using backpropagation and a partial derivative matrix; converting the mortar identification problem into a saliency problem and segmenting the mortar region according to the saliency value of each pixel.
In summary, thanks to the adoption of the above technical scheme, the invention has the following beneficial effects:
1. The visual saliency algorithm is applied to semifluid identification on an indoor wall construction robot; compared with traditional edge and color detection, it has higher accuracy and robustness.
2. The vision algorithm adopts OpenCL heterogeneous computing, using the GPU's parallel computing capability to accelerate the computation and guarantee the real-time validity of the results.
3. The method borrows the idea of the backpropagation algorithm from neural networks: with a salient region specified, parameter updates are distributed according to partial derivatives and iterated until the result stabilizes.
4. The video stream acquisition module realizes rate-differential matching of memory copying and image decoding, with low delay and low system resource usage; it also provides error checking and recovery, and can automatically handle hardware disconnection errors.
Drawings
FIG. 1 is a diagram illustrating the hardware architecture of the present invention in an exemplary embodiment.
FIG. 2 is a diagram of the overall system architecture of the present invention, in an exemplary embodiment.
FIG. 3 is a flowchart illustrating a multithreading module according to the present invention, in accordance with an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the embodiments and the accompanying drawings.
Referring to FIG. 1, the hardware architecture of the system comprises a camera, two industrial personal computers, communication equipment, and a display; one industrial personal computer is connected to the camera and connected to the other through the communication equipment. The choice of hardware is not restricted, but at a minimum the GPU must be able to run the OpenCL algorithm and the network must support a transmission rate of 100 Mbps.
The overall structure of the system is shown in FIG. 2: the industrial personal computer connected to the camera comprises a video acquisition module, a connection error processing module, a video stream transmission module, a visual algorithm module, and a module for interaction with the control machine. The video acquisition module, connection error processing module, and video stream transmission module are all implemented with a multi-thread architecture; a detailed flowchart is given in FIG. 3.
The image memory copy and image decoding functions of the video acquisition module run in different threads. The memory copy is implemented with the mmap() system call and completes quickly; the decoding function must traverse every pixel of the image and completes slowly, so it adopts a decode-on-demand strategy, rate-differentially matched with the copy thread. The connection error processing module implements a daemon thread for the video acquisition module.
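As a minimal C++ sketch of this split (the RawFrame type, the decode_requested flag, and the single-slot hand-off are illustrative assumptions, not the patented implementation), a fast thread only copies the mapped buffer while a slower thread decodes solely on demand:

    #include <atomic>
    #include <condition_variable>
    #include <cstddef>
    #include <mutex>
    #include <vector>

    // Hypothetical raw frame as delivered by the driver into an mmap()ed buffer.
    struct RawFrame { std::vector<unsigned char> bytes; };

    static std::mutex mtx;
    static std::condition_variable cv;
    static RawFrame latest;                            // most recently copied frame
    static bool frame_ready = false;
    static std::atomic<bool> decode_requested{false};  // set while the control machine wants video
    static std::atomic<bool> running{true};

    // Fast path: copy the mapped camera buffer; completes quickly, runs every frame.
    // In a real system this loop would block on the camera driver for the next frame.
    void copy_thread(const unsigned char* mapped, std::size_t len) {
        while (running) {
            {
                std::lock_guard<std::mutex> lk(mtx);
                latest.bytes.assign(mapped, mapped + len);   // memory copy only, no decoding
                frame_ready = true;
            }
            cv.notify_one();
        }
    }

    // Slow path: decodes on demand only, and simply drops frames it cannot keep
    // up with -- the rate-differential matching between the two threads.
    void decode_thread() {
        while (running) {
            std::unique_lock<std::mutex> lk(mtx);
            cv.wait(lk, [] { return frame_ready || !running; });
            if (!running) break;
            frame_ready = false;
            if (!decode_requested) continue;   // nobody asked: skip the expensive work
            RawFrame work = latest;            // private copy; release the lock quickly
            lk.unlock();
            // ... per-pixel decoding of `work` would go here ...
        }
    }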
When a hardware interrupt error occurs in the video acquisition module, the monitoring semaphore of the connection error processing module is triggered; the connection error processing module then requests the threads of the video acquisition module to exit, reconfigures the camera, and restarts the video acquisition module.
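A minimal sketch of this daemon using C++20's std::binary_semaphore (the stop/reconfigure/start hooks below are hypothetical stand-ins for the module's real routines):

    #include <atomic>
    #include <semaphore>

    std::binary_semaphore error_sem{0};     // held by the video acquisition module
    std::atomic<bool> daemon_running{true};

    // Hypothetical hooks standing in for the acquisition module's real routines.
    static void stop_acquisition_threads()  { /* ask acquisition threads to exit */ }
    static void reconfigure_camera()        { /* re-open and re-configure the device */ }
    static void start_acquisition_threads() { /* relaunch the acquisition module */ }

    // Acquisition side: trigger the semaphore from the hardware error path.
    void on_hardware_error() { error_sem.release(); }

    // Connection error processing module: a daemon thread blocked on the semaphore.
    void error_daemon() {
        while (daemon_running) {
            error_sem.acquire();            // sleeps until an error is signaled
            if (!daemon_running) break;
            stop_acquisition_threads();
            reconfigure_camera();
            start_acquisition_threads();
        }
    }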
The video stream transmission module consists of a request-listening thread and several service threads: the request-listening thread dispatches each video stream request to a service thread and then returns to wait for the next client request, and the service thread sends images processed by the video acquisition module to the requesting end over TCP or UDP. In this process, reads and writes by the video acquisition module and the video stream transmission module must be mutually exclusive; this can be solved with locks or read-write mutexes, but performance is poor. The invention instead uses a double buffer with atomic exchange of the read/write pointers, which greatly shortens the critical section and therefore performs excellently.
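The double buffer with an atomic pointer exchange can be sketched as follows (frame size and memory orders are assumptions; note that with exactly two buffers the reader must finish copying before the writer publishes twice, which a production version would guard with a third buffer or a sequence counter):

    #include <atomic>
    #include <cstddef>
    #include <cstring>

    constexpr std::size_t kFrameBytes = 1920 * 1080 * 3;   // assumed frame size

    struct Frame { unsigned char data[kFrameBytes]; };

    static Frame buffers[2];
    static std::atomic<Frame*> front{&buffers[0]};  // readable, most recently published
    static Frame* back = &buffers[1];               // owned exclusively by the writer

    // Writer (video acquisition): fill the back buffer outside any critical
    // section, then publish it with a single atomic pointer exchange.
    void publish(const unsigned char* src) {
        std::memcpy(back->data, src, kFrameBytes);
        back = front.exchange(back, std::memory_order_acq_rel);  // swap roles
    }

    // Reader (video stream transmission): grab the front pointer and copy out.
    void consume(unsigned char* dst) {
        Frame* f = front.load(std::memory_order_acquire);
        std::memcpy(dst, f->data, kFrameBytes);
    }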
The visual mortar identification system applied to the indoor wall construction robot is specifically realized as follows:
S1, building a hardware system comprising the construction robot, a camera, two industrial personal computers (industrial control computers), a router (or switch), and a master control terminal.
S2, building a distributed system in the local area network environment to transmit real-time video and to spread the computing load.
Still further, the S2 method further includes:
S2.1, connecting the two industrial personal computers through a network formed by a router (or switch), and connecting the camera to the industrial personal computer in charge of the visual recognition algorithm (hereinafter the algorithm machine).
S2.2, the two industrial personal computers may be connected either by wire through a router or switch, or wirelessly, for example via Wi-Fi or Bluetooth.
S2.3, the algorithm machine initializes the visual recognition algorithm module and the video acquisition and transmission modules;
S3, the algorithm machine sends the video stream captured by the camera through the connecting medium to the other industrial personal computer (hereinafter the control machine), which is responsible for robot motion and workflow control.
Still further, the S3 method further includes:
S3.1, in acquiring and transmitting the video stream, the essential memory copy flow is separated from the image decoding flow. Decoding is carried out only when the control machine requests the video stream, which avoids unnecessary resource consumption and reduces delay so that the information remains valid.
S3.2, the algorithm machine receives commands sent by the control machine, such as shutdown and restart.
S3.3, because of the high-intensity vibration while the robot works, the connections between the camera and the algorithm machine and between the algorithm machine and the control machine may drop; the algorithm machine should actively recover its work and, when necessary, send error information to the control end.
S4, after the local visual recognition algorithm module of the algorithm machine receives the video stream, it detects the position and height of the mortar in the picture with the recognition algorithm and sends them to the control machine.
Further, the S4 method further includes:
S4.1, the visual recognition algorithm module applies the visual saliency principle, combining the color features and the texture and position distribution information in the image to calculate the position and height of the mortar.
S4.2, the algorithm is implemented with OpenCL heterogeneous computing, exploiting the parallel computing capability of the GPU in the algorithm machine.
S5, identifying the mortar based on visual saliency. Because the color and position of the object to be identified are relatively fixed, mortar identification is converted into the computation of a saliency value defined over both color and position information, avoiding the unreliability of single-dimension cues.
Vectorization and parallelization are adopted: under the OpenCL heterogeneous computing framework, the visual saliency computation is parallelized through image tiling, loop unrolling, and vectorization, yielding a GPU version of the visual saliency algorithm that executes 20 times faster than the original CPU implementation.
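By way of illustration, the O(n²) distance accumulation at the heart of the saliency computation maps naturally onto an OpenCL kernel of roughly the following shape (the float8 packing of (L, a, b, x, y) and all names here are invented for this sketch, not taken from the patent):

    // OpenCL C kernel source held in a C++ raw string. One work-item per
    // superpixel accumulates its color distance (Wc) and position distance (Wl)
    // to all others -- the inner loop that benefits most from the GPU.
    static const char* kSaliencySrc = R"CLC(
    __kernel void pairwise_distances(__global const float8* sp,  // (L,a,b,x,y,0,0,0)
                                     __global float* Wc,
                                     __global float* Wl,
                                     const int n) {
        int i = (int)get_global_id(0);
        if (i >= n) return;
        float8 p = sp[i];
        float wc = 0.0f, wl = 0.0f;
        for (int j = 0; j < n; ++j) {
            float8 q = sp[j];
            float dc = (p.s0 - q.s0) * (p.s0 - q.s0)
                     + (p.s1 - q.s1) * (p.s1 - q.s1)
                     + (p.s2 - q.s2) * (p.s2 - q.s2);
            float dp = (p.s3 - q.s3) * (p.s3 - q.s3)
                     + (p.s4 - q.s4) * (p.s4 - q.s4);
            wc += sqrt(dc);   // Euclidean color distance
            wl += sqrt(dp);   // Euclidean position distance
        }
        Wc[i] = wc;
        Wl[i] = wl;
    }
    )CLC";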
The S5 method is further described as follows:
S5.1, converting the color space of each input frame from the RGB gamut to the Lab gamut, separating the gray expression from the chrominance information (hereinafter the two are collectively called color information).
S5.2, for each pixel point P in the Lab image, taking the gray and chromaticity information L, a, b together with the position information x, y to form a five-tuple, denoted P(L, a, b, x, y). Given a segmentation threshold O and a color-position normalization coefficient R, a clustering algorithm is iterated to obtain a superpixel image SP.
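A flat-kernel mean-shift over the five-tuples could be sketched as below (the bandwidth radius stands in for the normalization coefficient R; the threshold O, used to merge converged modes, is omitted, and a real implementation would restrict the neighbor search spatially for speed):

    #include <array>
    #include <vector>

    using Vec5 = std::array<float, 5>;   // (L, a, b, x, y)

    static float dist2(const Vec5& p, const Vec5& q) {
        float s = 0.0f;
        for (int k = 0; k < 5; ++k) { float d = p[k] - q[k]; s += d * d; }
        return s;
    }

    // Flat-kernel mean shift over the five-dimensional pixel vectors; points
    // whose modes coincide belong to the same superpixel.
    std::vector<Vec5> mean_shift(std::vector<Vec5> pts, float radius, int iters = 10) {
        const float r2 = radius * radius;
        for (int it = 0; it < iters; ++it) {
            std::vector<Vec5> next = pts;
            for (std::size_t i = 0; i < pts.size(); ++i) {
                Vec5 mean{}; int cnt = 0;
                for (std::size_t j = 0; j < pts.size(); ++j) {
                    if (dist2(pts[i], pts[j]) <= r2) {        // inside the window
                        for (int k = 0; k < 5; ++k) mean[k] += pts[j][k];
                        ++cnt;
                    }
                }
                for (int k = 0; k < 5; ++k) next[i][k] = mean[k] / cnt;  // shift to the mean
            }
            pts.swap(next);
        }
        return pts;   // near-identical rows converge to one mode = one superpixel
    }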
S5.3, for the five-tuple SP(L, a, b, x, y) of each superpixel, computing its color independence U and distribution D respectively, and normalizing them to obtain the superpixel's saliency value S.
The specific calculation steps of the S5.3 method are:
S5.3.1, for each superpixel SP, computing the Euclidean distance Wc between its gray and chromaticity information (L, a, b) and that of every other superpixel, and the Euclidean distance Wl between its position information (x, y) and that of every other superpixel; with a suitable position-information harmonic function Fu and position-information normalization parameter Ru, the color independence of the point is U = Wc + Wl × Fu × Ru.
S5.3.2, with Wc and Wl computed in the same way, and with the set color-information harmonic function Fd and color-information normalization parameter Rd, the color distribution of the point is D = Wc × Fd × Rd + Wl.
S5.3.3, since the color information and the position information occupy different value spaces, a normalization function H is selected, giving the saliency value S = H(U, D), where U corresponds to the color information and D to the position information.
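Putting S5.3.1-S5.3.3 together, a plain CPU sketch of U, D, and a min-max normalized saliency might read as follows; the concrete choice H = 0.5·(U_norm + 1 − D_norm) is our assumption, since the text leaves H open:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Superpixel { float L, a, b, x, y; };

    // Per-superpixel color independence U = Wc + Wl*Fu*Ru and distribution
    // D = Wc*Fd*Rd + Wl, normalized into a saliency value S = H(U, D).
    std::vector<float> saliency(const std::vector<Superpixel>& sp,
                                float Fu, float Ru, float Fd, float Rd) {
        const std::size_t n = sp.size();
        std::vector<float> U(n), D(n), S(n);
        for (std::size_t i = 0; i < n; ++i) {
            float Wc = 0.0f, Wl = 0.0f;
            for (std::size_t j = 0; j < n; ++j) {
                float dc = std::hypot(std::hypot(sp[i].L - sp[j].L, sp[i].a - sp[j].a),
                                      sp[i].b - sp[j].b);
                float dp = std::hypot(sp[i].x - sp[j].x, sp[i].y - sp[j].y);
                Wc += dc; Wl += dp;
            }
            U[i] = Wc + Wl * Fu * Ru;        // color independence
            D[i] = Wc * Fd * Rd + Wl;        // color distribution
        }
        auto norm = [](std::vector<float>& v) {   // min-max normalization to [0,1]
            auto [lo, hi] = std::minmax_element(v.begin(), v.end());
            float span = std::max(*hi - *lo, 1e-6f);
            for (float& x : v) x = (x - *lo) / span;
        };
        norm(U); norm(D);
        for (std::size_t i = 0; i < n; ++i)
            S[i] = 0.5f * (U[i] + (1.0f - D[i]));  // assumed H: unique and compact = salient
        return S;
    }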
S5.4, with the following optimization method for the parameters in S5, the system behaves more purposively in a specified environment.
The S5.4 method is further described as follows:
S5.4.1, two regions A and B are manually delineated according to the robot's task, representing the salient and the non-salient region respectively.
S5.4.2, the saliency values SA and SB of regions A and B are computed with the S5.3 method. Under the precondition that the saliency value of region A should tend to 1 and that of region B should tend to 0, a loss function C is set and its partial derivatives with respect to the parameters Fu, Ru, Fd, and Rd are computed, giving a partial derivative matrix X.
The loss function C may be an absolute-error loss, a mean-squared-error loss, or a binary cross-entropy (logarithmic) loss.
S5.4.3, the error is backpropagated to the parameters Fu, Ru, Fd, and Rd with a gradient descent algorithm, each being optimized so as to minimize the loss function C.
S5.4.4, the above steps are repeated until the saliency values of region A and region B stabilize within a given threshold range.
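Because the text leaves the loss C and the normalization H open, the partial derivative matrix X can be illustrated with central finite differences driving plain gradient descent (the learning rate, step size, and stopping tolerance are assumptions):

    #include <array>
    #include <cmath>
    #include <functional>

    using Params = std::array<float, 4>;     // Fu, Ru, Fd, Rd

    // Fits (Fu, Ru, Fd, Rd) by gradient descent on a caller-supplied loss(params)
    // that embeds regions A and B; partials are estimated numerically.
    Params fit(Params p, const std::function<float(const Params&)>& loss,
               float lr = 0.01f, float eps = 1e-3f, int max_iter = 500,
               float tol = 1e-5f) {
        for (int it = 0; it < max_iter; ++it) {
            Params grad{};
            for (int k = 0; k < 4; ++k) {            // one partial derivative per parameter
                Params hi = p, lo = p;
                hi[k] += eps; lo[k] -= eps;
                grad[k] = (loss(hi) - loss(lo)) / (2.0f * eps);
            }
            float step = 0.0f;
            for (int k = 0; k < 4; ++k) { p[k] -= lr * grad[k]; step += std::fabs(grad[k]); }
            if (step < tol) break;                   // SA and SB have stabilized
        }
        return p;
    }

For instance, loss could average (1 − SA)² over region A plus SB² over region B, with SA and SB computed by the saliency sketch above.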
S5.5, given a suitable saliency-value segmentation threshold, the superpixel image is binarized to obtain a segmentation template, and after the necessary morphological adjustment the final mortar region is obtained.
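A minimal sketch of the final thresholding plus one morphological opening (the 3×3 kernel and the choice of opening are our assumptions):

    #include <vector>

    // Binarize the saliency map at threshold `t`, then erode and dilate once
    // (a morphological opening) to remove speckle; border pixels stay zero.
    std::vector<unsigned char> segment(const std::vector<float>& S,
                                       int w, int h, float t) {
        std::vector<unsigned char> m(S.size());
        for (std::size_t i = 0; i < S.size(); ++i) m[i] = S[i] > t ? 1 : 0;

        auto morph = [&](const std::vector<unsigned char>& in, bool erode) {
            std::vector<unsigned char> out(in.size(), 0);
            for (int y = 1; y < h - 1; ++y)
                for (int x = 1; x < w - 1; ++x) {
                    int hits = 0;
                    for (int dy = -1; dy <= 1; ++dy)
                        for (int dx = -1; dx <= 1; ++dx)
                            hits += in[(y + dy) * w + (x + dx)];
                    out[y * w + x] = erode ? (hits == 9) : (hits > 0);
                }
            return out;
        };
        return morph(morph(m, /*erode=*/true), /*erode=*/false);  // opening
    }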
While the invention has been described with reference to specific embodiments, any feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise; all of the disclosed features, or all of the method or process steps, may be combined in any combination, except mutually exclusive features and/or steps.

Claims (1)

1. A visual mortar identification system applied to an indoor wall construction robot, characterized by comprising:
at least two industrial personal computers, of which one is connected to the camera and defined as the algorithm machine, while the other controls the motion and workflow of the indoor wall construction robot and is defined as the control machine; the algorithm machine is in communication connection with the control machine through communication equipment, and transmits the video stream captured by the camera to the control machine in real time for display;
the algorithm machine comprises a video acquisition module, a connection error processing module, a video stream transmission module, and a visual algorithm module;
the video acquisition module, the connection error processing module, and the video stream transmission module are all implemented with a multi-thread architecture;
the image memory copy and image decoding functions of the video acquisition module are distributed in different threads; the image memory copy is implemented by the mmap() system call; the image decoding function decodes on demand and is rate-differentially matched with the image memory copy; the connection error processing module implements a daemon thread for the video acquisition module; when a hardware interrupt error occurs in the video acquisition module, the monitoring semaphore of the connection error processing module is triggered, the connection error processing module requests the threads of the video acquisition module to exit, and the video acquisition module is restarted after the camera is reconfigured;
the video stream transmission module consists of a request-listening thread and several service threads; the request-listening thread dispatches a video stream request task to a service thread and then returns to wait for the next client request; the service thread sends the images processed by the video acquisition module to the requesting end over TCP or UDP; a double buffer with atomic exchange of the read/write pointers realizes the read-write mutual exclusion between the video stream transmission module and the video acquisition module;
after receiving the video stream, the visual recognition algorithm module of the algorithm machine calculates a visual saliency value from the color and position information and segments the mortar part: if the saliency value of a pixel exceeds a preset threshold, the pixel is considered to belong to the mortar region, so that the corresponding mortar region is segmented and sent to the control machine, specifically:
(1) converting the color space of each input frame from the RGB gamut to the Lab gamut, separating the gray expression from the chrominance information;
(2) for each pixel point P in the Lab image, taking the gray and chromaticity information L, a, b and the position information x, y to form a five-tuple, denoted P(L, a, b, x, y); given a segmentation threshold O and a color-position normalization coefficient R, iterating a clustering algorithm to obtain a superpixel image SP;
(3) computing the color independence U and the distribution D of the five-tuple SP(L, a, b, x, y) of each superpixel, and normalizing them to obtain the saliency value S:
for each superpixel SP, computing the Euclidean distance Wc between its gray and chromaticity information (L, a, b) and that of the other superpixels, and the Euclidean distance Wl between its position information (x, y) and that of the other superpixels; based on the position-information harmonic function Fu and the position-information normalization parameter Ru, the color independence of the point is U = Wc + Wl × Fu × Ru;
based on the set color-information harmonic function Fd and the color-information normalization parameter Rd, the color distribution of the point is D = Wc × Fd × Rd + Wl;
the saliency value S = H(U, D) is obtained based on the selected normalization function H;
the parameters are optimized as follows:
according to the robot's task, two regions A and B are manually delineated, representing the salient and the non-salient region respectively;
the saliency values of regions A and B are calculated as S = H(U, D) and denoted SA and SB respectively;
under the precondition that the saliency value of region A tends to 1 and that of region B tends to 0, the partial derivatives of the set loss function C with respect to Fu, Ru, Fd, and Rd are calculated, giving a partial derivative matrix X; the error is backpropagated to Fu, Ru, Fd, and Rd with a gradient descent algorithm, and each parameter is optimized to minimize the loss function C; the process is repeated until the saliency values of region A and region B stabilize within a given threshold range.
CN202010447459.1A 2020-05-25 2020-05-25 Visual mortar recognition system applied to indoor wall construction robot Active CN111640129B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010447459.1A CN111640129B (en) 2020-05-25 2020-05-25 Visual mortar recognition system applied to indoor wall construction robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010447459.1A CN111640129B (en) 2020-05-25 2020-05-25 Visual mortar recognition system applied to indoor wall construction robot

Publications (2)

Publication Number Publication Date
CN111640129A CN111640129A (en) 2020-09-08
CN111640129B 2023-04-07

Family

ID=72331651

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010447459.1A Active CN111640129B (en) 2020-05-25 2020-05-25 Visual mortar recognition system applied to indoor wall construction robot

Country Status (1)

Country Link
CN (1) CN111640129B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580583B (en) * 2020-12-28 2024-03-15 深圳市普汇智联科技有限公司 Automatic calibration method and system for billiard ball color recognition parameters
CN113664043B (en) * 2021-08-10 2023-10-31 山东钢铁集团日照有限公司 Fan autonomous operation heat dissipation system of hot rolled steel coil warehouse

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108573221A (en) * 2018-03-28 2018-09-25 重庆邮电大学 A kind of robot target part conspicuousness detection method of view-based access control model
CN110135435A (en) * 2019-04-17 2019-08-16 上海师范大学 A kind of conspicuousness detection method and device based on range learning system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9025880B2 (en) * 2012-08-29 2015-05-05 Disney Enterprises, Inc. Visual saliency estimation for images and video
CN103996189B (en) * 2014-05-05 2017-10-03 小米科技有限责任公司 Image partition method and device
CN107256547A (en) * 2017-05-26 2017-10-17 浙江工业大学 A kind of face crack recognition methods detected based on conspicuousness
CN107088028A (en) * 2017-06-29 2017-08-25 武汉洁美雅科技有限公司 A kind of new-type Wet-dry dust collector robot control system of intelligence
CN107871317B (en) * 2017-11-09 2020-04-28 杭州电子科技大学 Mortar plumpness detection method based on image processing technology
CN109859222A (en) * 2018-12-31 2019-06-07 常州轻工职业技术学院 Edge extracting method and system based on cascade neural network
CN113129216A (en) * 2019-12-30 2021-07-16 北京维博信通科技有限公司 Method for displaying image of backing car

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108573221A (en) * 2018-03-28 2018-09-25 重庆邮电大学 A kind of robot target part conspicuousness detection method of view-based access control model
CN110135435A (en) * 2019-04-17 2019-08-16 上海师范大学 A kind of conspicuousness detection method and device based on range learning system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王姮 et al., "Visual saliency detection of rail surface defects based on PCA model and color features," 自动化仪表 (Process Automation Instrumentation), 2017, vol. 38, pp. 73-76. *

Also Published As

Publication number Publication date
CN111640129A (en) 2020-09-08

Similar Documents

Publication Publication Date Title
DE102020100684B4 (en) MARKING OF GRAPHICAL REFERENCE MARKERS
WO2019170164A1 (en) Depth camera-based three-dimensional reconstruction method and apparatus, device, and storage medium
CN111640129B (en) Visual mortar recognition system applied to indoor wall construction robot
WO2019024863A1 (en) Control method, apparatus and system for robot, and applicable robot
US11644841B2 (en) Robot climbing control method and robot
WO2021175180A1 (en) Line of sight determination method and apparatus, and electronic device and computer-readable storage medium
CN110276768B (en) Image segmentation method, image segmentation device, image segmentation apparatus, and medium
US9025868B2 (en) Method and system for image processing to determine a region of interest
US11900676B2 (en) Method and apparatus for detecting target in video, computing device, and storage medium
JP2016095854A (en) Image processing and device
CN111105460B (en) RGB-D camera pose estimation method for three-dimensional reconstruction of indoor scene
CN107087104B (en) The image treatment method of facial area and the electronic device for using the method
US20220262093A1 (en) Object detection method and system, and non-transitory computer-readable medium
CN112652020B (en) Visual SLAM method based on AdaLAM algorithm
CN102324043B (en) Image matching method based on DCT (Discrete Cosine Transformation) through feature description operator and optimization space quantization
CN103093226B (en) A kind of building method of the RATMIC descriptor for characteristics of image process
CN117011660A (en) Dot line feature SLAM method for fusing depth information in low-texture scene
CN117011280A (en) 3D printed concrete wall quality monitoring method and system based on point cloud segmentation
WO2020135187A1 (en) Unmanned aerial vehicle recognition and positioning system and method based on rgb_d and deep convolutional network
WO2023109664A1 (en) Monitoring method and related product
CN110135224B (en) Method and system for extracting foreground target of surveillance video, storage medium and terminal
WO2022257778A1 (en) Method and apparatus for state recognition of photographing device, computer device and storage medium
US20220375134A1 (en) Method, device and system of point cloud compression for intelligent cooperative perception system
Cheng et al. Edge-assisted lightweight region-of-interest extraction and transmission for vehicle perception
Safin et al. Implementation of ROS package for simultaneous video streaming from several different cameras

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant