CN113240739A - Excavator, accessory pose detection method and device, and storage medium

Excavator, accessory pose detection method and device, and storage medium

Info

Publication number
CN113240739A
Authority
CN
China
Prior art keywords
accessory, video image, pose, area, determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110473662.0A
Other languages
Chinese (zh)
Other versions
CN113240739B (en)
Inventor
王威
孙鸿远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sany Heavy Machinery Ltd
Original Assignee
Sany Heavy Machinery Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sany Heavy Machinery Ltd
Priority to CN202110473662.0A (granted as CN113240739B)
Publication of CN113240739A
Application granted
Publication of CN113240739B
Legal status: Active
Anticipated expiration legal status

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74: Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence

Abstract

The invention discloses an excavator, a method and a device for detecting the pose of an accessory, and a storage medium. The detection method comprises: acquiring a video image of the accessory, determining the position of the boundary line between the inner and outer parts of the accessory from the video image, and determining the pose of the accessory from the position of that boundary line. Because the boundary line between the inner and outer parts of the accessory appears at different positions in the video image when the accessory is in different poses, the pose of the accessory can be determined with this scheme. In addition, the device that captures the video image is highly reliable, so the detection method is not limited by the working environment and has wide applicability; compared with the combination of an IMU and tilt sensors, the shooting device is also lower in cost. The invention further provides an alternative for equipment on which a tilt sensor cannot be installed.

Description

Excavator, accessory pose detection method and device, and storage medium
Technical Field
The invention relates to the technical field of mechanical equipment, and in particular to an excavator, an accessory pose detection method and device, and a storage medium.
Background
At present, the pose of an excavator accessory (i.e., the working device at the end of the excavator) is detected by installing tilt sensors on the bucket rocker and the arm and calculating the actual position and attitude angle of the bucket from the detection data of an IMU (inertial measurement unit) mounted on the arm together with the tilt sensors. However, the reliability of such a system is limited in extreme working environments: for example, the tilt sensors are prone to water ingress due to moisture during underwater operation, and they cannot withstand the acceleration and are easily damaged when a high-frequency vibrating mechanism such as a breaking hammer is working.
Disclosure of Invention
In view of this, embodiments of the present invention provide an excavator, an accessory pose detection method, an accessory pose detection device, and a storage medium, so as to solve the problem of the low applicability of current accessory pose detection.
According to a first aspect, an embodiment of the present invention provides an accessory pose detection method, including:
acquiring a video image of an accessory;
determining the position of the boundary line of the inner part and the outer part of the accessory in the video image;
and determining the pose of the accessory according to the position of the boundary line between the inner and outer parts.
According to the accessory pose detection method provided by the embodiment of the invention, a video image of the accessory is acquired, the position of the boundary line between the inner and outer parts of the accessory is determined from the video image, and the pose of the accessory is determined from the position of that boundary line. Because the boundary line appears at different positions in the video image when the accessory is in different poses, the pose of the accessory can be determined with this scheme. In addition, the device that captures the video image is highly reliable, so the detection method is not limited by the working environment and has wide applicability; compared with the combination of an IMU and tilt sensors, the shooting device is also lower in cost. The method also provides an alternative for equipment on which a tilt sensor cannot be installed.
With reference to the first aspect, in a first implementation manner of the first aspect, determining the position of the boundary line between the inner and outer parts of the accessory in the video image includes:
processing the video image to obtain a plurality of accessory candidate areas;
screening the plurality of accessory candidate areas by using a preset standard accessory template to obtain accessory areas;
and carrying out texture analysis on the accessory area to obtain the position of the boundary line of the inner part and the outer part of the accessory.
With reference to the first implementation manner of the first aspect, in a second implementation manner of the first aspect, processing the video image to obtain a plurality of accessory candidate regions includes:
analyzing the video image to obtain a picture sequence comprising a plurality of pictures;
performing edge feature processing on each picture in the picture sequence to obtain a plurality of feature maps;
carrying out frequency domain transformation on the pictures in the picture sequence to obtain a geometric constraint map;
and screening the plurality of feature maps by using the geometric constraint map to obtain a plurality of accessory candidate regions.
With reference to the first implementation manner of the first aspect, in a third implementation manner of the first aspect, screening the plurality of accessory candidate areas by using a preset standard accessory template to obtain the accessory area includes: screening out, from the plurality of accessory candidate areas, the accessory candidate area with the highest similarity to the standard accessory template as the accessory area.
With reference to the first aspect or the first implementation manner of the first aspect, in a fourth implementation manner of the first aspect, performing texture analysis on the accessory area to obtain the position of the boundary line between the inner and outer parts of the accessory includes: acquiring texture values at a plurality of positions in the accessory area; and screening out, from the plurality of positions, the positions whose texture values match a preset value, and taking the screened-out positions as the position of the boundary line between the inner and outer parts of the accessory.
With reference to the first aspect, in a fifth implementation manner of the first aspect, determining the pose of the accessory according to the position of the boundary line between the inner and outer parts includes: dividing the accessory area into an inner part and an outer part according to the position of the boundary line and the outer edge frame of the accessory area, and obtaining the inner length and the outer length of the accessory from the division result; and calculating the ratio of the inner length to the outer length, and determining the pose of the accessory according to the ratio.
With reference to the third implementation manner of the first aspect, in a sixth implementation manner of the first aspect, after screening out the accessory candidate region with the highest similarity to the standard accessory template from the plurality of accessory candidate regions to obtain the accessory region, the method further includes: when the outer edge frame of the accessory region is incomplete, completing the outer edge frame of the accessory region by using the standard accessory template.
According to a second aspect, an embodiment of the present invention provides an accessory pose detection device, including:
the acquisition module is used for acquiring a video image of the accessory;
the first processing module is used for determining the position of the boundary line of the inner part and the outer part of the accessory in the video image;
and the second processing module is used for determining the pose of the accessory according to the position of the boundary line of the inner part and the outer part.
According to a third aspect, an embodiment of the present invention provides an excavator, including a shooting device and a controller that are communicatively connected. The shooting device is configured to acquire image information of the accessory, and the controller is configured to collect the image information from the shooting device and execute computer instructions so as to perform the accessory pose detection method described in the first aspect or any implementation manner of the first aspect.
According to a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing computer instructions for causing a computer to perform the accessory pose detection method described in the first aspect or any implementation manner of the first aspect.
Drawings
The features and advantages of the present invention will be more clearly understood by reference to the accompanying drawings, which are illustrative and not to be construed as limiting the invention in any way, and in which:
fig. 1 is a schematic flow chart of an accessory pose detection method according to embodiment 1 of the present invention;
fig. 2 is a flowchart illustrating a specific example of a bucket pose detection method;
fig. 3 is a schematic structural view of an accessory pose detection device according to embodiment 2 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
Embodiment 1 of the present invention provides an accessory pose detection method, and fig. 1 is a schematic flow chart of this method. As shown in fig. 1, the accessory pose detection method of embodiment 1 includes the following steps:
s101: and acquiring a video image of the accessory.
In embodiment 1 of the present invention, the accessory can be understood as a working device at the end of an excavator, such as a bucket, a breaking hammer, and the like.
In embodiment 1 of the present invention, a video image of the accessory may be acquired by a camera on the excavator; specifically, an image of the front accessory may be acquired by installing a shooting device (e.g., a camera) at a suitable position on the top of the cab.
S102: and determining the position of the boundary line of the inner part and the outer part of the accessory in the video image.
As a specific implementation manner, the following technical solution may be adopted to determine the position of the boundary between the inner part and the outer part of the accessory in the video image: (1) processing the video image to obtain a plurality of accessory candidate areas; (2) screening the plurality of accessory candidate areas by using a preset standard accessory template to obtain accessory areas; (3) and performing texture analysis on the accessory area to obtain the position of the boundary line of the inner part and the outer part of the accessory in the video image.
More specifically, the processing of the video image in step (1) to obtain a plurality of accessory candidate regions may adopt the following scheme: analyzing the video image to obtain a picture sequence comprising a plurality of pictures; performing edge feature processing on each picture in the picture sequence to obtain a plurality of feature maps; carrying out frequency domain transformation on the pictures in the picture sequence to obtain a geometric constraint map; and screening the plurality of feature maps by using the geometric constraint map to obtain a plurality of accessory candidate regions.
For example, each picture in the picture sequence may be subjected to frequency domain transformation to obtain a geometric constraint map; each geometric constraint map is then used to screen the feature map belonging to the same picture, and traversing all the geometric constraint maps yields the plurality of accessory candidate regions.
In an example, the video captured by the camera is analyzed, a plurality of continuous frames in the stream data are processed, and each image is filtered and de-jittered to obtain a stable, continuous picture sequence; after the picture sequence is obtained, feature preprocessing is performed to obtain a feature map corresponding to each frame.
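To make this frame-preprocessing step concrete, the following is a minimal sketch in Python, assuming OpenCV is available. The Gaussian smoothing and Canny edge operator are illustrative stand-ins, since the patent does not name the specific filtering, de-jitter, or edge-feature algorithms.

import cv2

def frames_to_feature_maps(video_path, max_frames=30):
    # Parse the video stream into a sequence of grayscale frames and
    # compute one edge feature map per frame.
    cap = cv2.VideoCapture(video_path)
    frames, feature_maps = [], []
    while len(frames) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        smoothed = cv2.GaussianBlur(gray, (5, 5), 0)   # filtering / de-jitter stand-in
        edges = cv2.Canny(smoothed, 50, 150)           # edge feature map
        frames.append(smoothed)
        feature_maps.append(edges)
    cap.release()
    return frames, feature_maps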
In embodiment 1 of the present invention, the working device (i.e., the accessory) at the end of the excavator has inherent, fixed shape characteristics; a bucket, for example, has a straight, smooth edge and bucket teeth. A geometric constraint map is therefore obtained for each picture in the picture sequence by using a frequency domain transform algorithm such as FFT/DCT.
In embodiment 1 of the present invention, a plurality of bucket candidate regions may be selected from the feature maps under certain image morphological constraints; that is, the feature maps and the geometric constraint maps are combined and screened to obtain the plurality of bucket candidate regions. Furthermore, the generated candidate regions are matched against the edge SIFT/BRIEF features in the feature maps and then screened with a statistical clustering algorithm such as KMEANS, which preliminarily reduces the number of wrong candidate regions, as sketched below.
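The candidate-region generation could be sketched as follows, continuing the previous sketch. The way the FFT spectrum is turned into a geometric constraint map, the morphological screening, and all thresholds are assumptions made for illustration; the SIFT/BRIEF matching and KMEANS screening mentioned above are omitted for brevity.

import cv2
import numpy as np

def candidate_regions(gray_frame, edge_map, min_area=2000, max_aspect=4.0):
    # Frequency-domain transform of the frame; keep only the dominant
    # spectral components (straight edges show up as strong directional
    # structure in the spectrum) and transform back.
    f = np.fft.fftshift(np.fft.fft2(gray_frame.astype(np.float32)))
    spectrum = np.log1p(np.abs(f))
    mask = spectrum > spectrum.mean() + spectrum.std()
    constrained = np.abs(np.fft.ifft2(np.fft.ifftshift(f * mask)))
    geo = (constrained > constrained.mean()).astype(np.uint8) * 255
    # Combine the geometric constraint map with the edge feature map and
    # keep connected regions that satisfy simple shape constraints.
    combined = cv2.dilate(cv2.bitwise_and(geo, edge_map), np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(combined, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h >= min_area and max(w, h) / max(min(w, h), 1) <= max_aspect:
            boxes.append((x, y, w, h))
    return boxes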
More specifically, for step (2), the screening of the plurality of accessory candidate regions with a preset standard accessory template to obtain the accessory region may adopt the following scheme: screening out, from the plurality of accessory candidate regions, the accessory candidate region with the highest similarity to the standard accessory template as the accessory region.
It should be noted that, because the working environment of the working device (i.e., the accessory) at the end of the excavator is harsh, various kinds of noise exist in the acquired video image; for example, the bucket may be caked with mud or its paint may have worn off after long-term use, so the bucket does not necessarily present a smooth, straight edge in a real working environment. Therefore, in embodiment 1 of the present invention, the plurality of candidate regions obtained in step (1) are screened using a standard accessory template, and the single most accurate bucket candidate region, determined by its similarity to the standard accessory template, is taken as the bucket region. In embodiment 1 of the present invention, the standard accessory template may be a picture.
In addition, the outer edge frame of the obtained accessory region may be incomplete because mud is caked on the accessory or its paint has worn off after long-term use; when the outer edge frame of the accessory region is incomplete, it can be completed by using the standard accessory template.
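A sketch of the template-based screening of step (2) might look like the following. cv2.matchTemplate with normalized cross-correlation is an assumed similarity measure, and resizing each candidate to the template size is an implementation choice not stated in the patent.

import cv2

def select_accessory_region(gray_frame, boxes, template_gray):
    # Score every candidate box against the standard accessory template
    # and keep the one with the highest similarity.
    best_box, best_score = None, -1.0
    for (x, y, w, h) in boxes:
        roi = gray_frame[y:y + h, x:x + w]
        roi = cv2.resize(roi, (template_gray.shape[1], template_gray.shape[0]))
        score = cv2.matchTemplate(roi, template_gray, cv2.TM_CCOEFF_NORMED)[0, 0]
        if score > best_score:
            best_score, best_box = score, (x, y, w, h)
    return best_box, best_score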
More specifically, the texture analysis of the accessory region in step (3) to obtain the position of the boundary line between the inner and outer parts of the accessory may adopt the following scheme: acquiring texture values at a plurality of positions in the accessory region; screening out, from these positions, the positions whose texture values match a preset value; and taking the screened-out positions as the position of the boundary line between the inner and outer parts of the accessory.
Since the outside and the inside of the accessory have different texture features, step (3) can determine the position of the boundary line between the inner and outer parts of the bucket region obtained in step (2) from the texture values at a plurality of positions.
For example, the preset value may be determined as follows: a binarization such as OTSU is performed on the original image of the accessory region to obtain a binarized gray-scale image; the texture feature at each position in the gray-scale image can then be calculated with a gray-level texture method, the texture values at different positions being determined from the gray values and distribution of the pixels. The preset value, i.e., the texture value at the boundary between the inner and outer areas of the bucket, can be obtained in advance by collecting a number of images of the inner and outer areas of the accessory.
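Under the assumption that the "texture value" is a per-row gray-level variance and that the preset value has been calibrated offline as described above, the texture analysis of step (3) could be sketched as:

import cv2
import numpy as np

def boundary_row(accessory_roi_gray, preset_value):
    # OTSU binarization of the accessory region.
    _, binary = cv2.threshold(accessory_roi_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Texture value per row: gray-level variance over the foreground pixels.
    texture = [float(np.var(row[binary[i] > 0])) if np.any(binary[i] > 0) else 0.0
               for i, row in enumerate(accessory_roi_gray)]
    # The row whose texture value is closest to the calibrated preset value
    # is taken as the inner/outer boundary (e.g., the tooth-tip line).
    return int(np.argmin([abs(t - preset_value) for t in texture]))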
S103: and determining the pose of the accessory according to the position of the boundary line of the inner part and the outer part.
As a specific embodiment, the pose of the accessory may be determined from the position of the boundary line between its inner and outer parts as follows: divide the accessory region into an inner part and an outer part according to the boundary position and the outer edge frame of the accessory region, and obtain the inner length and the outer length of the accessory from the division result; then calculate the ratio of the inner length to the outer length, and determine the pose of the accessory according to the ratio.
For example, for a bucket, the interior and the visible outer bottom plate of the bucket region may be segmented according to the tooth-tip line of the bucket and the outer edge frame of the bucket region, and the length of the bucket interior and the length of the visible outer bottom are obtained from the segmentation result; the ratio of the interior length to the outer-bottom length is then calculated, and the attitude of the bucket is determined according to the ratio.
According to the accessory pose detection method provided by embodiment 1 of the present invention, a video image of the accessory is acquired, the position of the boundary line between the inner and outer parts of the accessory is determined from the video image, and the pose of the accessory is determined from that boundary position. Compared with the combination of an IMU and tilt sensors, the shooting device is lower in cost, and the method also offers an alternative for equipment on which a tilt sensor cannot be installed.
It should be noted that, in the picture, different ratios of the interior length of the bucket to the outer-bottom length correspond to different bucket attitudes; a correspondence between the ratio and the attitude can therefore be established and used to determine the attitude of the bucket, as sketched below.
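For illustration, the ratio-to-attitude correspondence could be realized as a small calibration table with piecewise-linear interpolation. The table values below are placeholders, not measured data; a real table would be calibrated for the specific bucket geometry and camera placement.

import numpy as np

# Hypothetical calibration: inner-length / outer-length ratio -> bucket angle (degrees).
RATIO_TABLE = np.array([0.2, 0.5, 1.0, 2.0, 4.0])
ANGLE_TABLE = np.array([-40.0, -15.0, 10.0, 45.0, 80.0])

def bucket_angle(region_height, boundary_y):
    inner_len = boundary_y                          # rows above the tooth-tip line
    outer_len = max(region_height - boundary_y, 1)  # rows of the visible outer bottom
    ratio = inner_len / outer_len
    return float(np.interp(ratio, RATIO_TABLE, ANGLE_TABLE))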
To describe the accessory pose detection method of embodiment 1 of the present invention in more detail, a specific example is given below. Fig. 2 is a flowchart of this specific example of the bucket attitude detection method. As shown in fig. 2:
Step 1: the system analyzes the video captured by the camera, processes a plurality of continuous frames in the stream data, and filters and de-jitters each image to obtain a stable, continuous picture sequence; after the picture sequence is obtained, feature preprocessing is performed to obtain a feature map corresponding to each frame;
Step 2: for the preprocessed bucket images, candidate regions can be selected by morphological detection under certain image morphological constraints, for example the inherent shape features of the bucket such as a straight, smooth edge and bucket teeth. On the basis of the feature map, the edge segments of the original image are converted into field-strength data in a frequency-domain coordinate system by a coordinate transformation algorithm such as a frequency domain transform; by setting the relevant detection parameters, the feature map and the field-strength map are combined and screened to obtain a plurality of bucket candidate regions, which are recorded;
Step 3: because the working environment of the bucket is harsh, various kinds of noise exist in the acquired images; for example, the bucket may be caked with soil or its paint may have worn off after long-term use, so the bucket does not necessarily present a smooth, straight edge in a real working environment. The system therefore uses a standard bucket template matching algorithm to screen the candidate regions generated in step 2 and obtains the correct outer edge frame of the bucket from the similarity to the template, thereby eliminating the interference of mud, stones and the like on the bucket;
Step 4: since the outer bottom plate and the interior of the bucket have different texture features, texture features are collected over the candidate region generated in step 3, and relevant thresholds are trained and set to judge whether the generated candidate region is the correct interior area of the bucket; the outer edge part and the tooth-tip line in the current candidate region can be updated according to the result.
Step 5: the bucket candidate region is divided by the tooth-tip line; for each tooth-tip line position, the ratio of the interior length of the bucket to the length of the visible outer bottom plate can be established, and the current rotation angle of the bucket is obtained by querying the angle corresponding to that ratio. In other words, the rotation-angle pose of the bucket on its four-bar linkage is deduced in combination with the position of the working device, and the spatial pose of the bucket is finally obtained.
As can be seen from the above, in embodiment 1 of the present invention, the accessory image is processed with morphological constraints and texture constraints, which improves the accuracy of accessory detection, and the pose of the accessory is obtained from the visual image, so the movement track of the accessory can further be derived. Meanwhile, the shooting device is highly reliable, which avoids the sensor damage caused by the large vibrations generated during excavator operation, reduces the use of precision sensing components such as tilt sensors and IMUs, and lowers the cost.
Example 2
Embodiment 2 of the present invention provides an accessory pose detection device, and fig. 3 is a schematic structural diagram of this device. As shown in fig. 3, the accessory pose detection device of embodiment 2 includes an acquisition module 20, a first processing module 22, and a second processing module 24.
Specifically, the obtaining module 20 is configured to obtain a video image of the accessory.
A first processing module 22, configured to determine the location of the boundary between the inner and outer parts of the accessory in the video image.
And the second processing module 24 is used for determining the pose of the accessory according to the position of the boundary line of the inner part and the outer part.
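For illustration, the three modules could be wired around the earlier sketches roughly as follows; the class and method names are hypothetical and are not the patent's reference implementation.

class AccessoryPoseDetector:
    def __init__(self, template_gray, preset_value):
        self.template_gray = template_gray
        self.preset_value = preset_value

    def acquire(self, video_path):                # acquisition module 20
        return frames_to_feature_maps(video_path)

    def locate_boundary(self, frame, edge_map):   # first processing module 22
        boxes = candidate_regions(frame, edge_map)
        box, _ = select_accessory_region(frame, boxes, self.template_gray)
        if box is None:
            return None, None
        x, y, w, h = box
        return box, boundary_row(frame[y:y + h, x:x + w], self.preset_value)

    def pose(self, box, boundary_y):              # second processing module 24
        return bucket_angle(box[3], boundary_y)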
Details of the above accessory pose detection apparatus can be understood by referring to corresponding related descriptions and effects in the embodiments shown in fig. 1 to fig. 2, which are not described herein again.
Example 3
The embodiment of the invention also provides an excavator, which may include a shooting device and a controller that are communicatively connected. The shooting device is used to acquire image information of the accessory, and the controller is used to collect the image information from the shooting device and execute computer instructions. The controller includes a processor and a memory, which may be connected by a bus or in another manner.
The processor may be a central processing unit (CPU). The processor may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination thereof.
The memory, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the accessory pose detection method in the embodiments of the present invention (e.g., the acquisition module 20, the first processing module 22, and the second processing module 24 shown in fig. 3). By running the non-transitory software programs, instructions, and modules stored in the memory, the processor executes the various functional applications and data processing, that is, implements the accessory pose detection method of the above method embodiments.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor, and the like. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and such remote memory may be coupled to the processor via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory and, when executed by the processor, perform the accessory pose detection method of the embodiments of figs. 1-2.
The details of the excavator can be understood by referring to the corresponding descriptions and effects in the embodiments shown in fig. 1 to fig. 3, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic Disk, an optical Disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory (Flash Memory), a Hard Disk (Hard Disk Drive, abbreviated as HDD), a Solid State Drive (SSD), or the like; the storage medium may also comprise a combination of memories of the kind described above.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

1. An accessory pose detection method is characterized by comprising the following steps:
acquiring a video image of an accessory;
determining the position of the boundary line of the inner part and the outer part of the accessory in the video image;
and determining the pose of the accessory according to the position of the boundary line between the inner and outer parts.
2. The method of claim 1, wherein determining the location of the boundary between the inner and outer portions of the accessory in the video image comprises:
processing the video image to obtain a plurality of accessory candidate areas;
screening the plurality of accessory candidate areas by using a preset standard accessory template to obtain accessory areas;
and performing texture analysis on the accessory area to obtain the position of the boundary line of the inner part and the outer part of the accessory in the video image.
3. The method of claim 2, wherein processing the video image to obtain a plurality of accessory candidate regions comprises:
analyzing the video image to obtain a picture sequence comprising a plurality of pictures;
performing edge feature processing on each picture in the picture sequence to obtain a plurality of feature maps;
carrying out frequency domain transformation on the pictures in the picture sequence to obtain a geometric constraint map;
and screening the plurality of feature maps by using the geometric constraint map to obtain a plurality of accessory candidate regions.
4. The method of claim 2, wherein screening the plurality of accessory candidate areas by using a preset standard accessory template to obtain the accessory area comprises:
and screening out the accessory candidate area with the highest similarity to the standard accessory template from the plurality of accessory candidate areas to obtain the accessory area.
5. The method of claim 2, wherein performing texture analysis on the accessory area to obtain the position of the boundary line between the inner and outer parts of the accessory comprises:
respectively acquiring texture values of a plurality of positions in the accessory area;
and screening out positions with preset texture values from the positions, and taking the screened out positions as the positions of the inner and outer boundary lines of the accessory.
6. The method of claim 1, wherein determining the pose of the accessory according to the position of the boundary line between the inner and outer parts comprises:
dividing the inside and the outside of the accessory area according to the position of the boundary line of the inside and the outside of the accessory and the outer edge frame of the accessory area, and obtaining the length of the inside of the accessory and the length of the outside of the accessory according to the division result;
and calculating the ratio of the internal length of the accessory to the external length of the accessory, and determining the pose of the accessory according to the ratio.
7. The method according to claim 4, wherein, after screening out the accessory candidate region with the highest similarity to the standard accessory template from the plurality of accessory candidate regions to obtain the accessory region, the method further comprises:
when the outer edge frame of the accessory area is incomplete, completing the outer edge frame of the accessory area by using the standard accessory template.
8. An accessory pose detection apparatus, comprising:
the acquisition module is used for acquiring a video image of the accessory;
the first processing module is used for determining the position of the boundary line of the inner part and the outer part of the accessory in the video image;
and the second processing module is used for determining the pose of the accessory according to the position of the boundary line of the inner part and the outer part.
9. An excavator, comprising:
the shooting device is used for acquiring image information of the accessory;
a controller, the shooting device and the controller being communicatively connected, the controller being configured to acquire the image information from the shooting device and execute computer instructions to perform the accessory pose detection method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores computer instructions for causing a computer to execute the accessory pose detection method according to any one of claims 1 to 7.
CN202110473662.0A 2021-04-29 2021-04-29 Pose detection method and device for excavator and accessory and storage medium Active CN113240739B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110473662.0A CN113240739B (en) 2021-04-29 2021-04-29 Pose detection method and device for excavator and accessory and storage medium


Publications (2)

Publication Number Publication Date
CN113240739A (en) 2021-08-10
CN113240739B CN113240739B (en) 2023-08-11

Family

ID=77131481

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110473662.0A Active CN113240739B (en) 2021-04-29 2021-04-29 Pose detection method and device for excavator and accessory and storage medium

Country Status (1)

Country Link
CN (1) CN113240739B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130101170A1 (en) * 2011-10-21 2013-04-25 Industry-University Cooperation Foundation Hanyang University Method of image processing and device therefore
CN105760842A (en) * 2016-02-26 2016-07-13 北京大学 Station caption identification method based on combination of edge and texture features
CN107813313A (en) * 2017-12-11 2018-03-20 南京阿凡达机器人科技有限公司 The bearing calibration of manipulator motion and device
US20180285684A1 (en) * 2017-03-29 2018-10-04 Seiko Epson Corporation Object attitude detection device, control device, and robot system
CN109903337A (en) * 2019-02-28 2019-06-18 北京百度网讯科技有限公司 Method and apparatus for determining the pose of the scraper bowl of excavator
CN110473259A (en) * 2019-07-31 2019-11-19 深圳市商汤科技有限公司 Pose determines method and device, electronic equipment and storage medium
CN110807473A (en) * 2019-10-12 2020-02-18 浙江大华技术股份有限公司 Target detection method, device and computer storage medium
CN110956646A (en) * 2019-10-30 2020-04-03 北京迈格威科技有限公司 Target tracking method, device, equipment and storage medium
US20200250461A1 (en) * 2018-01-30 2020-08-06 Huawei Technologies Co., Ltd. Target detection method, apparatus, and system
CN111639599A (en) * 2020-05-29 2020-09-08 北京百度网讯科技有限公司 Object image mining method, device, equipment and storage medium
CN111951211A (en) * 2019-05-17 2020-11-17 株式会社理光 Target detection method and device and computer readable storage medium

Also Published As

Publication number Publication date
CN113240739B (en) 2023-08-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant