CN111325070B - Collision detection method and device based on image

Collision detection method and device based on image

Info

Publication number
CN111325070B
CN111325070B (application CN201811540190.0A)
Authority
CN
China
Prior art keywords
image
preset
determining
contour
collision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811540190.0A
Other languages
Chinese (zh)
Other versions
CN111325070A (en)
Inventor
董秋伟
黄玉玺
张金凤
哈融厚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd
Priority to CN201811540190.0A
Publication of CN111325070A
Application granted
Publication of CN111325070B
Legal status: Active

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application discloses an image-based collision detection method and device. One embodiment of the method comprises the following steps: acquiring an image to be detected, wherein the image to be detected includes an image of a marker arranged on an elastic member outside a preset object; determining, from the image to be detected, a first position of a preset reference point on the marker; determining first position change data of the marker according to the first position and a pre-acquired initial position of the preset reference point; and determining whether the preset object collides based on the position change amount indicated by the first position change data. The method and the device thus determine whether the preset object collides from the position change, in the image, of the marker on the elastic member outside the preset object. Compared with arranging a contact sensor in the elastic member to detect collisions, the method of this embodiment saves the cost of the contact sensor on the one hand, and on the other hand avoids false detections caused by noise of the contact sensor.

Description

Collision detection method and device based on image
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to the technical field of image processing, and particularly relates to a collision detection method and device based on images.
Background
Collisions are a common phenomenon in everyday life. A collision is the process in which objects moving relative to each other meet and, within a very short time, change their motion states significantly through interaction. Collision detection refers to detecting whether a collision occurs between objects moving relative to each other. Collision detection may be used, for example, to detect whether an automobile or a robot collides with an external object.
In a conventional collision detection method, an impact beam is usually provided on the outer periphery of the detected object, and a collision detection member, such as a touch sensor, is provided on the impact beam. When a collision occurs, the collision detection member deforms, producing a change in the electric signal of the touch sensor, from which it is detected whether the object has collided.
Disclosure of Invention
The embodiment of the application provides a collision detection method and device based on images.
In a first aspect, an embodiment of the present application provides an image-based collision detection method, including: acquiring an image to be detected, wherein the image to be detected comprises an image of a marker arranged on an elastic part outside a preset object; determining a first position of a preset reference point on the marker from an image to be detected; determining first position change data of the marker according to the first position and a pre-acquired initial position of a preset reference point; and determining whether the preset object collides or not based on the position change amount indicated by the first position change data.
In some embodiments, the method further comprises: in response to determining that the preset object collides, extracting from the image to be detected a first contour of the collision object colliding with the preset object; extracting an initial contour of the collision object from at least one frame of first reference image acquired before the acquisition time of the image to be detected; and determining whether the collision object is deformed based on the difference between the first contour and the initial contour.
In some embodiments, determining whether the collision object is deformed based on the difference between the first contour and the initial contour includes: determining a difference between the area of the pattern surrounded by the first contour and the area of the pattern surrounded by the initial contour; determining whether the difference between the areas is greater than a second preset threshold; and determining that the collision object is deformed in response to the difference between the areas being greater than the second preset threshold.
In some embodiments, the method further comprises: acquiring at least one frame of second reference image acquired after a preset time interval from the acquisition time of the image to be detected, wherein the second reference image comprises an image of the collision object; extracting a second contour of the collision object from the second reference image; determining a difference between the second contour and the initial contour; and determining the degree of damage that the collision causes to the preset object and to the collision object according to the difference between the second contour and the initial contour and the difference between the first contour and the initial contour.
In some embodiments, the position change data includes a flag indicating a change in the position of the marker in a direction toward the preset object, or a flag indicating a change in the position of the marker in a direction away from the preset object; the method further comprises: in response to determining that the preset object collides, acquiring at least one frame of third reference image acquired after a preset time interval from the acquisition time of the image to be detected; determining a second position of the preset reference point on the marker in the third reference image; determining second position change data of the marker according to the second position and the initial position; and determining whether the preset object has separated from the collision according to whether the flag in the second position change data differs from the flag in the first position change data.
In a second aspect, an embodiment of the present application provides an image-based collision detection apparatus, including: an acquisition unit configured to acquire an image to be detected, wherein the image to be detected includes an image of a marker provided on an elastic member outside a preset object; the first determining unit is configured to determine a first position of a preset reference point on the marker from the image to be detected; a second determining unit configured to determine first position change data of the marker according to the first position and an initial position of a preset reference point acquired in advance; and a third determining unit configured to determine whether or not a collision of the preset object occurs based on the amount of position change indicated by the first position change data.
In some embodiments, the third determining unit is further configured to: in response to determining that the preset object collides, extract from the image to be detected a first contour of the collision object colliding with the preset object; extract an initial contour of the collision object from at least one frame of first reference image acquired before the acquisition time of the image to be detected; and determine whether the collision object is deformed based on the difference between the first contour and the initial contour.
In some embodiments, the third determining unit is further configured to: determine a difference between the area of the pattern surrounded by the first contour and the area of the pattern surrounded by the initial contour; determine whether the difference between the areas is greater than a second preset threshold; and determine that the collision object is deformed in response to the difference between the areas being greater than the second preset threshold.
In some embodiments, the third determining unit is further configured to: acquire at least one frame of second reference image acquired after a preset time interval from the acquisition time of the image to be detected, wherein the second reference image comprises an image of the collision object; extract a second contour of the collision object from the second reference image; determine a difference between the second contour and the initial contour; and determine the degree of damage that the collision causes to the preset object and to the collision object according to the difference between the second contour and the initial contour and the difference between the first contour and the initial contour.
In some embodiments, the position change data includes a flag indicating a change in the position of the marker in a direction toward the preset object, or a flag indicating a change in the position of the marker in a direction away from the preset object; and the third determining unit is further configured to: in response to determining that the preset object collides, acquire at least one frame of third reference image acquired after a preset time interval from the acquisition time of the image to be detected; determine a second position of the preset reference point on the marker in the third reference image; determine second position change data of the marker in a preset coordinate system according to the second position and the initial position; and determine whether the preset object has separated from the collision according to whether the flag in the second position change data differs from the flag in the first position change data.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a storage device having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement the method as described in any of the implementations of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements a method as described in any of the implementations of the first aspect.
According to the image-based collision detection method and device of the embodiments of the application, an image to be detected is acquired; the current position of the preset reference point on the marker is determined from the image to be detected; first position change data of the marker is determined according to the current position and the pre-acquired initial position of the preset reference point; and finally, whether the preset object collides is determined based on the position change amount indicated by the first position change data. Whether the preset object collides is thus determined from the position change, in the image, of the marker on the elastic member outside the preset object. Compared with arranging a contact sensor in the elastic member to perform collision detection, the method saves the cost of the contact sensor on the one hand, and on the other hand avoids false detections caused by interference signals (such as electromagnetic interference signals) when a contact sensor is used for detection. In addition, missed detections of collisions caused by poor contact, breakage, and the like of the contact sensor can be avoided.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram to which an image-based collision detection method of one embodiment of the present application may be applied;
FIG. 2 is a flow chart of one embodiment of an image-based collision detection method according to the present application;
FIG. 3 is a schematic illustration of an application scenario of an image-based collision detection method according to the present application;
FIG. 4 is a flow chart of yet another embodiment of an image-based collision detection method according to the present application;
FIG. 5 is a flow chart of yet another embodiment of an image-based collision detection method according to the present application;
FIG. 6 is a schematic structural view of one embodiment of an image-based collision detection apparatus according to the present application;
FIG. 7 is a schematic diagram of a computer system suitable for use in implementing an embodiment of the present application.
Detailed Description
The application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be noted that, for convenience of description, only the portions related to the present application are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 in which an image-based collision detection method of one embodiment of the present application may be applied.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The terminal devices 101, 102, 103 may interact with the server 105 via the network 104. Various client applications, such as a camera-type application, can be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various image capturing devices including, but not limited to, cameras, smartphones, laptop and desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they can be installed in the image capturing devices enumerated above. They may be implemented as multiple pieces of software or software modules (e.g., software or software modules for providing distributed services) or as a single piece of software or software module. No specific limitation is imposed here.
The server 105 may provide various services, for example, receiving images of preset objects captured by cameras of the terminal devices 101, 102, 103 transmitted from the terminal devices, analyzing the images of the preset objects, and determining whether the preset objects collide according to the processing results.
It should be noted that, the image-based collision detection method provided by the embodiment of the present application is generally executed by the server 105, and accordingly, the image-based collision detection device is generally disposed in the server 105.
The server may be hardware or software. When the server is hardware, the server may be implemented as a distributed server cluster formed by a plurality of servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules (e.g., software or software modules for providing distributed services), or as a single software or software module. The present application is not particularly limited herein.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of an image-based collision detection method according to the present application is shown. The image-based collision detection method includes the steps of:
in step 201, an image to be detected is acquired, wherein the image to be detected includes an image of a marker disposed on an elastic member outside a preset object.
In the present embodiment, the execution subject of the image-based collision detection method (e.g., the server shown in fig. 1) may acquire images of the preset object from an image acquisition device (e.g., a terminal device shown in fig. 1) through a wired or wireless connection. The execution subject selects an image to be detected from the images of the preset object. The image to be detected here may be, for example, the latest frame among the multiple frames of images of the preset object acquired by the image acquisition device up to the current acquisition time. The image to be detected includes an image of a marker provided on an elastic member outside the preset object.
In this embodiment, the above-mentioned preset object may be any of various objects, such as a movable object or a stationary object. A movable object may include, for example, a robot, an automobile, or the like.
Further, an elastic member may be provided on the outside of the preset object, for example, an elastic member may be provided around the preset object on the outside of the preset object. Here, the outside of the preset object may be a surface of the preset object. In some application scenarios, the outer side of the preset object may be a surface of the preset object related to the movement direction of the preset object, for example, a surface of the preset object facing the movement direction of the preset object, or a surface of the preset object perpendicular to the movement direction of the preset object, etc. The elastic member may be an elastic member made of a material having elasticity. The material having elasticity herein may include, for example, but not limited to, at least one of rubber, plastic, and the like.
The elastic member contracts when subjected to externally applied pressure and moves in a direction toward the preset object. If the deformation caused by the pressure is elastic deformation, the elastic member expands again when the external pressure is removed and moves in a direction away from the preset object.
A marker may be provided on the elastic member. The marker here may be, for example, any object with a flat surface, such as a two-dimensional code. Whether the elastic member has contracted or expanded may be determined from the change in the position of the marker, thereby judging whether the preset object has collided.
On the above-mentioned marker, at least one preset reference point may be set in advance. The preset reference point may be a vertex of the figure formed by the outline of the marker, or may be any point on the marker. Judging the change in the position of the marker can therefore be reduced to judging the change in the position of the preset reference point. Whether the preset object collides may be determined according to the change in the position of the preset reference point on the marker.
The shape formed by the outline of the above-mentioned marker may be a two-dimensional pattern of regular shape, such as a triangle, rectangle, etc. In this embodiment, the image capturing device may be fixed on top of the preset object, and may also be disposed outside the preset object.
The image acquisition device can acquire images of the elastic member at regular time intervals and transmit each acquired image to the execution subject. The images of the elastic member may include an image of the marker provided on the elastic member.
In some application scenarios, the camera of the image acquisition device may form a preset angle with the surface of the marker, so as to facilitate acquisition of the image of the elastic component disposed outside the preset object.
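For illustration only, the acquisition side can be pictured as a simple timed loop; capture_frame and send_to_server below are hypothetical placeholders for the device's camera API and the upload channel, and the sampling period is an assumed value:

```python
# Sketch of the acquisition loop: the image acquisition device captures a
# frame at a fixed interval and sends it to the execution subject (server).
import time

CAPTURE_INTERVAL_S = 0.1  # hypothetical sampling period

def acquisition_loop(capture_frame, send_to_server):
    while True:
        frame = capture_frame()  # image of the elastic member and its marker
        send_to_server(frame)
        time.sleep(CAPTURE_INTERVAL_S)
```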
Step 202, determining a first position of a preset reference point on the marker from an image to be detected.
In this embodiment, the execution body may perform various analysis processing on the image to be detected, so as to determine the first position, in the image to be detected, of the preset reference point on the marker.
In some application scenarios, the first position of the preset reference point may be a relative position of the preset reference point with respect to a preset relative reference point, where the preset relative reference point may be a point on the preset object, for example a point in a two-dimensional code attached to the preset object. In these application scenarios, the image acquisition device may be arranged outside the preset object. For example, if the coordinates of the preset relative reference point on the preset object in a preset coordinate system are (X_10, Y_10, Z_10), and the coordinates of the preset reference point on the marker in the same coordinate system are (x_10, y_10, z_10), then the relative position (first position) of the preset reference point may be (Δx_1, Δy_1, Δz_1), where Δx_1 = x_10 − X_10; Δy_1 = y_10 − Y_10; Δz_1 = z_10 − Z_10.
In other application scenarios, the first position of the preset reference point of the identifier may be a coordinate of the preset reference point under a preset coordinate system. In these application scenarios, the image capturing device may be disposed on the preset object, for example, the image capturing device may be disposed on top of the preset object. The preset coordinate system may be the same as the coordinate system of the image capturing device, and may be a world coordinate system. The preset coordinate system may be a three-dimensional coordinate system.
When the first position corresponds to the coordinate of the preset reference point of the identifier in the world coordinate system, the execution body may convert the coordinate of the preset reference point of the identifier in the coordinate system of the image acquisition device to the coordinate in the world coordinate system through a preset conversion rule.
It should be noted that the method for converting the coordinates of the preset reference point from the coordinate system of the image acquisition device into coordinates in the world coordinate system is a well-known technology that is widely studied and applied at present, and will not be described herein.
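By way of illustration, a minimal sketch of step 202 in Python with OpenCV might look as follows, assuming the marker is a two-dimensional code detectable by cv2.QRCodeDetector and the camera-to-world rotation and translation (R, t) are known from an offline calibration; the helper names are illustrative, not part of the embodiments:

```python
# Sketch of step 202: locate the preset reference point of a 2-D-code marker
# and express a 3-D point in world coordinates. Assumes prior calibration.
import cv2
import numpy as np

def detect_reference_point(image_bgr):
    """Return one outline vertex of the 2-D code as the preset reference
    point, in pixel coordinates, or None if no marker is found."""
    detector = cv2.QRCodeDetector()
    found, points = detector.detect(image_bgr)  # points: quadrangle vertices
    if not found or points is None:
        return None
    return points.reshape(-1, 2)[0]

def camera_to_world(p_cam, R, t):
    """Convert a 3-D point from camera coordinates to world coordinates.
    p_cam would come, e.g., from a depth camera or cv2.solvePnP with the
    known physical size of the marker (both outside this sketch)."""
    return R @ np.asarray(p_cam) + t
```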
Step 203, determining first position change data of the identifier according to the first position and the initial position of the preset reference point acquired in advance.
In this embodiment, the execution body may acquire the initial position of the preset reference point of the marker in advance. The initial position may be analyzed from a predetermined image of the elastic member in which the elastic member is not deformed; such an image includes an image of the marker provided on the elastic member. The execution body may determine the position of the preset reference point of the marker from any such image of the undeformed elastic member.
In some application scenarios, the initial position of the preset reference point of the identifier may be, for example, a relative position of the preset reference point with respect to a preset relative reference point, where the preset relative reference point may be a point on the preset object. The preset relative reference point may be a point set in a two-dimensional code attached to a preset object.
In these application scenarios, the image acquisition device may be arranged outside the preset object. For example, if the coordinates of the preset relative reference point on the preset object in the preset coordinate system are (X_20, Y_20, Z_20), and the coordinates of the preset reference point on the marker in the preset coordinate system are (x_20, y_20, z_20), then the relative position (initial position) of the preset reference point may be (Δx_2, Δy_2, Δz_2), where Δx_2 = x_20 − X_20; Δy_2 = y_20 − Y_20; Δz_2 = z_20 − Z_20.
The execution body may determine the first position change data of the identifier according to the initial position of the preset reference point of the identifier and the first position of the preset reference point of the identifier. The first position change data may include a position change amount of a preset reference point of the marker.
In some application scenarios, the initial position and the first position of the preset reference point of the identifier may be coordinates of the preset reference point in the same preset coordinate system. In these application scenarios, the image capturing device may be disposed on the preset object, for example, disposed on top of the preset object.
In these application scenarios, the first position change data may be, for example, first position change data calculated from coordinates under a preset coordinate system corresponding to a first position of a preset reference point of the identifier and coordinates under the preset coordinate system corresponding to an initial position of the preset reference point of the identifier.
For example, if the initial position of the preset reference point of the marker is (x_0, y_0, z_0) and the first position of the preset reference point of the marker is (x_1, y_1, z_1), the position change amount included in the first position change data of the marker may be, for example, the Euclidean distance √((x_1 − x_0)² + (y_1 − y_0)² + (z_1 − z_0)²).
in other application scenarios, the first position and the initial position of the preset reference point are both positions of the preset reference point on the identifier relative to the preset relative reference point on the preset object. The amount of position change included in the first position change data here may be:
as can be seen from the above description, the first position change data is position change data of a preset reference point on the marker with respect to a preset object.
Step 204, determining whether the preset object collides based on the position change indicated by the first position change data.
In this embodiment, the execution body may determine whether the preset object collides based on the amount of position change indicated by the first position change data.
Specifically, a position change threshold may be set in advance; if the position change amount is greater than this preset threshold, it is determined that the preset object has collided. Otherwise, it is determined that the preset object has not collided.
It should be noted that, the preset position change threshold may be set according to a specific application scenario, which is not limited herein.
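Step 204 then reduces to a simple threshold test; in the sketch below the threshold value is a placeholder, since its value depends on the specific application scenario:

```python
# Sketch of step 204: threshold test on the position change amount.
import numpy as np

POSITION_CHANGE_THRESHOLD = 0.01  # hypothetical value, e.g. in metres

def collision_occurred(first_position, initial_position,
                       threshold=POSITION_CHANGE_THRESHOLD):
    """True when the marker's position change exceeds the preset threshold."""
    change = np.linalg.norm(np.asarray(first_position, dtype=float) -
                            np.asarray(initial_position, dtype=float))
    return change > threshold
```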
With continued reference to FIG. 3, FIG. 3 is a schematic diagram 300 of an application scenario of the image-based collision detection method according to the present embodiment. In the application scenario of FIG. 3, a server 301 acquires an image of a preset object 303 from an image acquisition device 302 and determines an image to be detected 304 from the image of the preset object 303. The image to be detected 304 includes an image of a marker 3032 provided on the elastic member 3031 outside the preset object 303. The server 301 then determines, from the image to be detected 304, a first position 305 of a preset reference point on the marker 3032. Next, the server 301 determines first position change data 306 of the marker 3032 according to the first position and the initial position of the preset reference point acquired in advance. Finally, the server 301 determines whether the preset object collides 307 based on the position change amount indicated by the first position change data.
According to the method provided by the embodiment of the application, the image to be detected is acquired; the current position of the preset reference point on the marker is determined from the image to be detected; the first position change data of the marker is determined according to the current position and the pre-acquired initial position of the preset reference point; and finally, whether the preset object collides is determined based on the position change amount indicated by the first position change data. Whether the preset object collides is thus determined from the position change, in the image, of the marker on the elastic member outside the preset object. Compared with arranging a contact sensor in the elastic member to perform collision detection, the method of this embodiment saves the cost of the contact sensor on the one hand, and on the other hand avoids false detections caused by interference signals (such as electromagnetic interference signals) when a contact sensor is used for detection. In addition, missed detections of collisions caused by poor contact, breakage, and the like of the contact sensor can be avoided.
With further reference to fig. 4, a flow 400 of yet another embodiment of an image-based collision detection method is shown. The flow 400 of the image-based collision detection method includes the steps of:
Step 401, acquiring an image to be detected, wherein the image to be detected comprises an image of a marker arranged on an elastic member outside a preset object.
In this embodiment, step 401 is the same as step 201 shown in fig. 2, and is not described here.
Step 402, determining the current position of a preset reference point on the marker from the image to be detected.
In this embodiment, step 402 is the same as step 202 shown in fig. 2, and is not described here.
Step 403, determining first position change data of the identifier according to the current position and the pre-acquired initial position of the preset reference point.
In this embodiment, step 403 is the same as step 203 shown in fig. 2, and is not repeated here.
Step 404, determining whether the preset object collides based on the position change amount indicated by the first position change data.
In this embodiment, step 404 is the same as step 204 shown in fig. 2, and is not described here.
Step 405, in response to determining that the preset object collides, extracting from the image to be detected a first contour of the collision object colliding with the preset object.
In the present embodiment, in response to determining that a preset object collides, an execution subject of the image-based collision detection method (e.g., a server shown in fig. 1) may extract a first contour of a collision object that collides with the preset object from an image to be detected.
The first contour of the collision object colliding with the preset object may be a contour of the collision object at the time of collision with the preset object.
When the preset object collides with the collision object, the distance between the collision object and the elastic part of the preset object is zero. Therefore, the image of the collision object may be included in the image to be detected.
The execution body may extract from the image to be detected the first contour of the collision object at the time of the collision with the preset object. It will be appreciated that, in the event of a collision, the first contour of the collision object may differ, due to deformation, from the contour of the collision object before it collided with the preset object.
The first contour of the collision object can be extracted from the image to be detected using an existing image contour extraction method. Existing image contour extraction methods include, for example, region-based methods, edge detection-based methods, active contour-based methods, visual feature-based methods, and the like.
It should be noted that the above-mentioned region-based, edge detection-based, active contour-based, and visual feature-based methods are currently widely studied and applied known techniques, and are not described here in detail.
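By way of illustration, a minimal region/edge-based sketch using OpenCV might look as follows, assuming the collision object has already been segmented into a binary mask (the segmentation itself, e.g. by background subtraction, is outside this sketch):

```python
# Sketch of contour extraction for steps 405 and 406 (OpenCV 4 signature).
import cv2

def extract_largest_contour(binary_mask):
    """Return the largest external contour in a binary object mask."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return max(contours, key=cv2.contourArea)
```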
Step 406, extracting an initial contour of the collision object from at least one frame of first reference image acquired before the acquisition time of the image to be detected.
In this embodiment, the execution body may acquire at least one frame of the first reference image acquired before the acquisition time of the image to be detected. The at least one frame of first reference image acquired before the time of acquiring the image to be detected may be a plurality of frames of images acquired in the same sampling time period as the image to be detected.
The execution body may determine whether an image of the collision object is included in the first reference images. If it is determined that the at least one frame of first reference image includes an image of the collision object, the execution body may select from it an image in which the collision object is at a certain distance from the preset object. Further, the execution body may determine the initial contour of the collision object from that image using an existing image contour extraction method.
The initial contour of the collision object is the contour of the collision object before it collides with the preset object.
Step 407, determining whether the collision object is deformed according to the difference between the first contour and the initial contour.
In this embodiment, the execution body may determine whether the collision object is deformed according to the difference between the first contour and the initial contour.
The difference between the first contour and the initial contour here may include, for example, a difference between the contour line corresponding to the initial contour of the collision object and the contour line corresponding to the first contour.
Specifically, the execution body may compare the shape of the pattern surrounded by the first contour with the shape of the pattern surrounded by the initial contour. If the two shapes are the same, the first contour is unchanged from the initial contour; if they differ, the first contour has changed from the initial contour. When the first contour has changed from the initial contour, it can be determined that the collision object is deformed.
In some optional implementations of the present embodiment, determining whether the collision object is deformed according to the difference between the first contour and the initial contour in step 407 may include: determining a difference between the area of the pattern surrounded by the first contour and the area of the pattern surrounded by the initial contour; determining whether the difference between the areas is greater than a second preset threshold; and determining that the collision object is deformed in response to the difference between the areas being greater than the second preset threshold.
In these alternative implementations, the execution body may determine the area of the pattern surrounded by the first contour of the collision object and the area of the pattern surrounded by the initial contour of the collision object, and then calculate the difference between the two areas. The execution body may determine whether this difference is greater than the second preset threshold set in advance, and determine that the collision object is deformed in response to the difference being greater than that threshold.
If the collision object is deformed, its material can be judged to be elastic or relatively soft. This provides a reference for judging, according to the material of the collision object, the damage that the collision causes to the preset object.
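A sketch of this optional area-difference test follows; the second preset threshold shown is a placeholder, since the embodiments do not fix its value:

```python
# Sketch of step 407 via the area-difference test.
import cv2

SECOND_PRESET_THRESHOLD = 100.0  # hypothetical value, in pixel^2

def is_deformed(first_contour, initial_contour,
                threshold=SECOND_PRESET_THRESHOLD):
    """True when the areas enclosed by the two contours differ by more
    than the second preset threshold."""
    area_diff = abs(cv2.contourArea(first_contour) -
                    cv2.contourArea(initial_contour))
    return area_diff > threshold
```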
As can be seen from FIG. 4, compared with the embodiment corresponding to FIG. 2, the flow 400 of the image-based collision detection method in this embodiment highlights the step of determining whether the collision object is deformed according to the difference between the first contour of the collision object extracted from the image to be detected and the initial contour of the collision object. The deformation of the collision object can thus be determined from images, and the material of the collision object can be preliminarily judged, providing a reference for judging the damage that the collision causes to the objects involved.
In some optional implementations of this embodiment, the image-based collision detection method may further include:
and acquiring at least one frame of second reference image acquired after a preset time interval of the acquisition time of the image to be detected in response to determining that the collision object is deformed. The second reference image may comprise an image of the collision object.
A second contour of the collision object is extracted from the second reference image.
A difference between the second profile and the initial profile is determined.
And determining the degree of damage that the collision causes to the preset object and to the collision object according to the difference between the second contour and the initial contour and the difference between the first contour and the initial contour.
In these alternative implementations, if the area surrounded by the second contour is larger than the area surrounded by the first contour, it indicates that the material of the collision object is softer, and the damage degree of the collision to the preset object is smaller and the damage degree to the collision object is larger. If the area surrounded by the second contour is smaller than the area surrounded by the first contour, the material of the collision object is harder, and the damage degree of the collision to the preset object is larger and the damage degree to the collision object is smaller.
In these alternative implementations, these rules may be used to predict the degree of damage that the collision causes to the preset object and to the collision object.
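A sketch of this rule follows; the qualitative labels returned are illustrative only, not terminology from the embodiments:

```python
# Sketch of the damage-degree rule: compare the contour areas at the moment
# of collision (first contour) and after the preset time interval (second
# contour) to infer the material and the likely distribution of damage.
import cv2

def assess_damage(first_contour, second_contour):
    a1 = cv2.contourArea(first_contour)   # at the moment of collision
    a2 = cv2.contourArea(second_contour)  # after the preset time interval
    if a2 > a1:
        # Softer material: less damage to the preset object, more to the
        # collision object.
        return "preset object: lighter damage; collision object: heavier"
    # Harder material: the opposite inference.
    return "preset object: heavier damage; collision object: lighter"
```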
With further reference to fig. 5, a flow 500 of yet another embodiment of an image-based collision detection method is illustrated. The flow 500 of the image-based collision detection method includes the steps of:
step 501, an image to be detected is acquired, wherein the image to be detected comprises an image of a marker arranged on an elastic member outside a preset object.
In this embodiment, step 501 is the same as step 201 in the embodiment shown in fig. 2, and is not repeated here.
Step 502, determining the current position of a preset reference point on the marker from the image to be detected.
In this embodiment, step 502 is the same as step 202 in the embodiment shown in fig. 2, and is not repeated here.
Step 503, determining first position change data of the identifier according to the current position and the pre-acquired initial position of the preset reference point.
In this embodiment, step 503 is the same as step 203 in the embodiment shown in fig. 2, and is not repeated here.
Step 504, determining whether the preset object collides based on the position change amount indicated by the first position change data.
In this embodiment, step 504 is the same as step 204 in the embodiment shown in fig. 2, and is not repeated here.
In step 505, at least one frame of a third reference image acquired after a preset time interval of the acquisition time of the image to be detected is acquired in response to determining that the preset object collides.
In the present embodiment, in response to determining that a preset object collides, an execution subject of the image-based collision detection method (e.g., a server shown in fig. 1) may acquire at least one frame of a third reference image acquired after a preset time interval at which an image to be detected is acquired.
The third reference image is an image acquired after a preset time interval from the acquisition time of the image to be detected. The preset time interval here may be, for example, 10 s, 20 s, 30 s, or the like. The value of the preset time interval may be set according to the specific application scenario, and is not limited herein.
In general, if the elastic member of the preset object has undergone elastic deformation, then during this preset time interval the elastic member rebounds under the elastic force and moves in a direction away from the preset object. Therefore, the position of the preset reference point of the marker provided on the elastic member also changes.
Step 506, determining a second position of a preset reference point on the marker in the third reference image.
The executing body may determine a second position of a preset reference point on the marker in the third reference image.
The second position of the preset reference point on the identifier may be a relative position or an absolute position.
In some application scenarios, the second position of the preset reference point on the marker here may be, for example, the coordinates of the preset reference point on the marker in the third reference image under the preset coordinate system. The preset coordinate system here may be the same as the preset coordinate system corresponding to the first position described above.
Step 507, determining second position change data of the marker in a preset coordinate system according to the second position and the initial position.
In this embodiment, the execution body may determine the second position change data of the identifier according to the second position and the initial position.
In this embodiment, the position change data may include a flag indicating a change in the position of the marker in a direction toward the preset object, or a flag indicating a change in the position of the marker in a direction away from the preset object. That is, either the first position change data or the second position change data may include one of these two flags.
Step 508, determining whether the preset object has separated from the collision according to whether the flag in the first position change data and the flag in the second position change data have changed.
In this embodiment, if the flag in the second position change data differs from the flag in the first position change data, it may be determined that the preset object has separated from the collision. Otherwise, it may be determined that the preset object has not separated from the collision.
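A sketch of this flag comparison (steps 505-508) follows, with the flag values as illustrative placeholders:

```python
# Sketch of step 508: a reversal of the direction flag between the first
# and second position change data is read as separation from the collision.
TOWARD_PRESET_OBJECT = "toward"
AWAY_FROM_PRESET_OBJECT = "away"

def separated_from_collision(first_flag, second_flag):
    """True when the marker moved toward the preset object at collision
    time and away from it afterwards (elastic rebound)."""
    return (first_flag == TOWARD_PRESET_OBJECT and
            second_flag == AWAY_FROM_PRESET_OBJECT)
```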
As can be seen from FIG. 5, compared with the embodiment corresponding to FIG. 2, the flow 500 of the image-based collision detection method in this embodiment highlights: determining the second position of the preset reference point of the marker in at least one frame of third reference image acquired after a preset time interval from the acquisition time of the image to be detected; determining the second position change data of the preset reference point according to the second position and the initial position; and determining whether the preset object has separated from the collision according to the flags in the first and second position change data. Detection of whether the preset object has separated from the collision is thereby realized from images.
With further reference to fig. 6, as an implementation of the method shown in the above figures, the present application provides an embodiment of an image-based collision detection apparatus, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 6, the image-based collision detection apparatus 600 of the present embodiment includes: an acquisition unit 601, a first determination unit 602, a second determination unit 603, and a third determination unit 604. Wherein the acquiring unit 601 is configured to acquire an image to be detected, wherein the image to be detected includes an image of a marker provided on an elastic member outside a preset object; a first determining unit 602 configured to determine a first position of a preset reference point on the identifier from the image to be detected; a second determining unit 603 configured to determine first position change data of the identifier according to the first position and an initial position of a preset reference point acquired in advance; the third determining unit 604 is configured to determine whether the preset object collides based on the amount of position change indicated by the first position change data.
In this embodiment, the specific processes and the technical effects of the acquiring unit 601, the first determining unit 602, the second determining unit 603, and the third determining unit 604 of the image-based collision detecting apparatus 600 may refer to the relevant descriptions of the steps 201, 202, 203, and 204 in the corresponding embodiment of fig. 2, and are not repeated herein.
In some optional implementations of the present embodiment, the third determining unit 604 is further configured to: in response to determining that the preset object collides, extract from the image to be detected a first contour of the collision object colliding with the preset object; extract an initial contour of the collision object from at least one frame of first reference image acquired before the acquisition time of the image to be detected; and determine whether the collision object is deformed based on the difference between the first contour and the initial contour.
In some optional implementations of the present embodiment, the third determining unit 604 is further configured to: determine a difference between the area of the pattern surrounded by the first contour and the area of the pattern surrounded by the initial contour; determine whether the difference between the areas is greater than a second preset threshold; and determine that the collision object is deformed in response to the difference between the areas being greater than the second preset threshold.
In some optional implementations of the present embodiment, the third determining unit 604 is further configured to: acquire at least one frame of second reference image acquired after a preset time interval from the acquisition time of the image to be detected, wherein the second reference image comprises an image of the collision object; extract a second contour of the collision object from the second reference image; determine a difference between the second contour and the initial contour; and determine the degree of damage that the collision causes to the preset object and to the collision object according to the difference between the second contour and the initial contour and the difference between the first contour and the initial contour.
In some optional implementations of this embodiment, the position change data includes a flag indicating a change in the position of the marker in a direction toward the preset object, or a flag indicating a change in the position of the marker in a direction away from the preset object; and the third determining unit is further configured to: in response to determining that the preset object collides, acquire at least one frame of third reference image acquired after a preset time interval from the acquisition time of the image to be detected; determine a second position of the preset reference point on the marker in the third reference image; determine second position change data of the marker in a preset coordinate system according to the second position and the initial position; and determine whether the preset object has separated from the collision according to whether the flag in the second position change data differs from the flag in the first position change data.
Referring now to FIG. 7, there is illustrated a schematic diagram of a computer system 700 suitable for use in implementing an electronic device of an embodiment of the present application. The electronic device shown in fig. 7 is only an example and should not be construed as limiting the functionality and scope of use of the embodiments of the application.
As shown in fig. 7, the computer system 700 includes a central processing unit (CPU, central Processing Unit) 701, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a random access Memory (RAM, random Access Memory) 703. In the RAM 703, various programs and data required for the operation of the system 700 are also stored. The CPU 701, ROM 702, and RAM 703 are connected to each other through a bus 704. An Input/Output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 708 including a hard disk or the like; and a communication section 709 including a network interface card such as a LAN (local area network) card, a modem, or the like. The communication section 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as necessary, so that a computer program read therefrom is installed into the storage section 708 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 709, and/or installed from the removable medium 711. The above-described functions defined in the method of the present application are performed when the computer program is executed by a Central Processing Unit (CPU) 701. The computer readable medium according to the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It should further be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented in software or in hardware. The described units may also be provided in a processor, for example, described as: a processor comprising an acquisition unit, a first determination unit, a second determination unit, and a third determination unit. In some cases, the names of these units do not constitute a limitation on the units themselves; for example, the acquisition unit may also be described as "a unit that acquires an image to be detected".
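By way of a non-authoritative illustration, the unit decomposition described above could be organized in software roughly as follows. This is a minimal Python sketch; every class name is an assumption introduced here for readability, not a structure disclosed in the application.

import numpy as np

# Minimal sketch of the four units named above; all class names are hypothetical.
class AcquisitionUnit:
    def run(self, camera):
        # Acquire the image to be detected from a camera-like source
        ok, frame = camera.read()
        return frame if ok else None

class FirstDeterminationUnit:
    def run(self, frame, locate_reference_point):
        # Determine the first position of the preset reference point on the
        # marker; locate_reference_point is a caller-supplied detector
        # returning an (x, y) pair
        return locate_reference_point(frame)

class SecondDeterminationUnit:
    def run(self, first_position, initial_position):
        # First position change data: displacement of the reference point
        # relative to its pre-acquired initial position
        return np.subtract(first_position, initial_position)

class ThirdDeterminationUnit:
    def __init__(self, threshold):
        self.threshold = threshold  # preset displacement threshold, in pixels
    def run(self, position_change):
        # A collision is reported when the indicated amount of position
        # change exceeds the preset threshold
        return float(np.linalg.norm(position_change)) > self.threshold

A caller would chain the four units in order: acquire a frame, locate the reference point, compute the position change, and test it against the threshold.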
As another aspect, the present application also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire an image to be detected, wherein the image to be detected includes an image of a marker arranged on an elastic part outside a preset object; determine a first position of a preset reference point on the marker from the image to be detected; determine first position change data of the marker according to the first position and a pre-acquired initial position of the preset reference point; and determine whether the preset object has collided based on the position change amount indicated by the first position change data.
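The program steps enumerated above can be made concrete with a short sketch. The version below assumes, purely for illustration, that the marker is a saliently colored patch that can be segmented by an HSV color threshold; the color bounds and the displacement threshold are invented values, not parameters from the application.

import cv2
import numpy as np

# Illustrative HSV bounds for a red marker patch; real values depend on the marker.
LOWER = np.array([0, 120, 120])
UPPER = np.array([10, 255, 255])

def reference_point(image_bgr):
    # Locate the preset reference point as the centroid of the marker blob
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None  # marker not visible in this frame
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

def has_collided(image_bgr, initial_position, threshold=5.0):
    # Steps from the paragraph above: determine the first position, derive
    # the first position change data, and compare the amount of change
    # against a preset threshold
    first_position = reference_point(image_bgr)
    if first_position is None:
        return False
    change = first_position - initial_position  # first position change data
    return float(np.linalg.norm(change)) > threshold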
The above description is merely an illustration of the preferred embodiments of the present application and of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the application is not limited to the specific combinations of the technical features described above, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions in which the above features are replaced with technical features having similar functions disclosed in the present application (but not limited thereto).

Claims (10)

1. An image-based collision detection method, comprising:
acquiring an image to be detected, wherein the image to be detected includes an image of a marker arranged on an elastic part outside a preset object;
determining, from the image to be detected, a first position of a preset reference point on the marker;
determining first position change data of the marker according to the first position and a pre-acquired initial position of the preset reference point; and
determining whether the preset object has collided based on the position change amount indicated by the first position change data;
wherein the position change data comprises a flag indicating that the position of the marker has changed in a direction toward the preset object, or a flag indicating that the position of the marker has changed in a direction away from the preset object; and
the method further comprises: in response to determining that the preset object has collided, acquiring at least one frame of a third reference image captured a preset time interval after the capture time of the image to be detected; determining a second position of the preset reference point on the marker in the third reference image; determining second position change data of the marker according to the second position and the initial position; and determining whether the preset object has disengaged from the collision according to whether the flag in the second position change data differs from the flag in the first position change data.
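A minimal sketch of the direction-flag logic recited in claim 1, assuming the direction toward the preset object can be judged by comparing the marker's distance to a known object center; object_center is a hypothetical input, since the claim itself does not specify how the direction is measured.

import numpy as np

TOWARD, AWAY = "toward", "away"  # the two flag values recited in the claim

def direction_flag(position, initial_position, object_center):
    # Flag whether the marker has moved toward or away from the preset object
    d_initial = np.linalg.norm(np.subtract(initial_position, object_center))
    d_current = np.linalg.norm(np.subtract(position, object_center))
    return TOWARD if d_current < d_initial else AWAY

def disengaged_from_collision(first_flag, second_flag):
    # The preset object is deemed to have left the collision when the flag
    # flips, e.g. the elastic part springs back away from the preset object
    return first_flag != second_flag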
2. The method of claim 1, wherein the method further comprises:
in response to determining that the preset object has collided, extracting, from the image to be detected, a first contour of a collision object that collided with the preset object;
extracting an initial contour of the collision object from at least one frame of a first reference image captured before the capture time of the image to be detected; and
determining whether the collision object is deformed according to the difference between the first contour and the initial contour.
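One way to obtain the contours compared in claim 2 is sketched below; the Otsu-threshold segmentation is an assumption made for illustration, since any method that isolates the collision object in the frame would serve.

import cv2

def object_contour(image_bgr):
    # Extract the dominant outer contour of the collision object
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None

The first contour would then come from the image to be detected and the initial contour from the earlier first reference image, with the two passed to the deformation test of claim 3.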
3. The method of claim 2, wherein the determining whether the collision object is deformed according to the difference between the first contour and the initial contour comprises:
determining a difference between the area of the region enclosed by the first contour and the area of the region enclosed by the initial contour;
determining whether the difference in area is greater than a second preset threshold; and
determining that the collision object is deformed in response to the difference in area being greater than the second preset threshold.
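Claim 3 translates almost directly into code; the only assumption below is that the enclosed area is computed with OpenCV's contour-area routine.

import cv2

def is_deformed(first_contour, initial_contour, second_preset_threshold):
    # Deformation is declared when the difference between the enclosed
    # areas exceeds the second preset threshold
    area_difference = abs(cv2.contourArea(first_contour)
                          - cv2.contourArea(initial_contour))
    return area_difference > second_preset_threshold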
4. The method of claim 2, wherein the method further comprises:
acquiring at least one frame of a second reference image captured a preset time interval after the capture time of the image to be detected, wherein the second reference image includes an image of the collision object;
extracting a second contour of the collision object from the second reference image;
determining a difference between the second contour and the initial contour; and
determining a degree of damage caused by the collision to the preset object and the collision object according to the difference between the second contour and the initial contour and the difference between the first contour and the initial contour.
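Claim 4 leaves the damage measure unspecified, so the sketch below invents one: normalized area change at impact (first contour) and after the preset interval (second contour), with the residual change weighted more heavily. Both the normalization and the weights are assumptions, not the application's formula.

import cv2

def damage_degree(initial_contour, first_contour, second_contour):
    # Normalized transient and residual deformation of the collision object
    a0 = cv2.contourArea(initial_contour)
    denom = max(a0, 1.0)  # guard against division by zero
    transient = abs(cv2.contourArea(first_contour) - a0) / denom
    residual = abs(cv2.contourArea(second_contour) - a0) / denom
    # Hypothetical weighting: deformation that persists after the collision
    # suggests more damage than deformation seen only at impact
    return 0.4 * transient + 0.6 * residual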
5. An image-based collision detection apparatus, comprising:
an acquisition unit configured to acquire an image to be detected, wherein the image to be detected includes an image of a marker arranged on an elastic part outside a preset object;
a first determination unit configured to determine, from the image to be detected, a first position of a preset reference point on the marker;
a second determination unit configured to determine first position change data of the marker according to the first position and a pre-acquired initial position of the preset reference point; and
a third determination unit configured to determine whether the preset object has collided based on the position change amount indicated by the first position change data;
wherein the position change data comprises a flag indicating that the position of the marker has changed in a direction toward the preset object, or a flag indicating that the position of the marker has changed in a direction away from the preset object; and
the third determination unit is further configured to: in response to determining that the preset object has collided, acquire at least one frame of a third reference image captured a preset time interval after the capture time of the image to be detected; determine a second position of the preset reference point on the marker in the third reference image; determine second position change data of the marker in a preset coordinate system according to the second position and the initial position; and determine whether the preset object has disengaged from the collision according to whether the flag in the second position change data differs from the flag in the first position change data.
6. The apparatus of claim 5, wherein the third determination unit is further configured to:
in response to determining that the preset object has collided, extract, from the image to be detected, a first contour of a collision object that collided with the preset object;
extract an initial contour of the collision object from at least one frame of a first reference image captured before the capture time of the image to be detected; and
determine whether the collision object is deformed according to the difference between the first contour and the initial contour.
7. The apparatus of claim 6, wherein the third determination unit is further configured to:
determine a difference between the area of the region enclosed by the first contour and the area of the region enclosed by the initial contour;
determine whether the difference in area is greater than a second preset threshold; and
determine that the collision object is deformed in response to the difference in area being greater than the second preset threshold.
8. The apparatus of claim 6, wherein the third determination unit is further configured to:
acquire at least one frame of a second reference image captured a preset time interval after the capture time of the image to be detected, wherein the second reference image includes an image of the collision object;
extract a second contour of the collision object from the second reference image;
determine a difference between the second contour and the initial contour; and
determine a degree of damage caused by the collision to the preset object and the collision object according to the difference between the second contour and the initial contour and the difference between the first contour and the initial contour.
9. An electronic device, comprising:
one or more processors; and
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-4.
10. A computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method of any one of claims 1-4.
CN201811540190.0A 2018-12-17 2018-12-17 Collision detection method and device based on image Active CN111325070B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811540190.0A CN111325070B (en) 2018-12-17 2018-12-17 Collision detection method and device based on image

Publications (2)

Publication Number Publication Date
CN111325070A CN111325070A (en) 2020-06-23
CN111325070B (en) 2023-12-08

Family

ID=71172582

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811540190.0A Active CN111325070B (en) 2018-12-17 2018-12-17 Collision detection method and device based on image

Country Status (1)

Country Link
CN (1) CN111325070B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201306518D0 (en) * 2013-04-10 2013-05-22 Ford Global Tech Llc A method for reducing the risk of motor vehicle collision damage
CN103568022A (en) * 2012-07-20 2014-02-12 精工爱普生株式会社 Collision detection system, collision detection data generator, and robot
CN108706520A (en) * 2018-08-21 2018-10-26 杭叉集团股份有限公司 A kind of collision detection control system, method and fork truck
CN108714303A (en) * 2018-05-16 2018-10-30 深圳市腾讯网络信息技术有限公司 Collision checking method, equipment and computer readable storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Robot Collisions: A Survey on Detection, Isolation, and Identification; Sami Haddadin et al.; IEEE Transactions on Robotics; Vol. 33, No. 6; pp. 1292-1312 *
A fast collision detection algorithm based on image space; 于海军 et al.; Journal of Computer Applications (计算机应用); Vol. 33, No. 2; pp. 530-533 *
A survey of collision detection algorithms; 王嘉, 李孔清; Computer Knowledge and Technology (电脑知识与技术); No. 20; pp. 202-205 *

Similar Documents

Publication Publication Date Title
US11321593B2 (en) Method and apparatus for detecting object, method and apparatus for training neural network, and electronic device
EP2638452B1 (en) Resolving merged touch contacts
US10943141B2 (en) Object detection device and object detection method
CN107845113B (en) Target element positioning method and device and user interface testing method and device
US11398049B2 (en) Object tracking device, object tracking method, and object tracking program
KR102347248B1 (en) Method and apparatus for recognizing touch gesture
US8965051B2 (en) Method and apparatus for providing hand detection
CN109711508B (en) Image processing method and device
KR102474837B1 (en) Foreground area extracting method and apparatus
US10521915B2 (en) Distance measurement device and distance measurement method
KR20120044484A (en) Apparatus and method for tracking object in image processing system
CN109960959B (en) Method and apparatus for processing image
CN111788533A (en) Method and system for stereo vision based vehicle pose estimation
US20180158203A1 (en) Object detection device and object detection method
Guerra-Segura et al. Study of the variability of the Leap Motion’s measures for its use to characterize air strokes
CN108509876B (en) Object detection method, device, apparatus, storage medium, and program for video
Luna et al. Fast heuristic method to detect people in frontal depth images
CN111325070B (en) Collision detection method and device based on image
KR101967858B1 (en) Apparatus and method for separating objects based on 3D depth image
CN115222653B (en) Test method and device
CN113642493B (en) Gesture recognition method, device, equipment and medium
CN109839645B (en) Speed detection method, system, electronic device and computer readable medium
CN113469087B (en) Picture frame detection method, device, equipment and medium in building drawing
KR20140103021A (en) Object recognition device
JP2013250604A (en) Object detecting device and object detecting method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant