CN111178257A - Regional safety protection system and method based on depth camera - Google Patents
Regional safety protection system and method based on depth camera
- Publication number
- CN111178257A CN111178257A CN201911385289.2A CN201911385289A CN111178257A CN 111178257 A CN111178257 A CN 111178257A CN 201911385289 A CN201911385289 A CN 201911385289A CN 111178257 A CN111178257 A CN 111178257A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
- G08B13/19608—Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and or velocity to predict its new position
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B5/00—Visible signalling systems, e.g. personal calling systems, remote indication of seats occupied
- G08B5/22—Visible signalling systems, e.g. personal calling systems, remote indication of seats occupied using electric transmission; using electromagnetic transmission
- G08B5/36—Visible signalling systems, e.g. personal calling systems, remote indication of seats occupied using electric transmission; using electromagnetic transmission using visible light sources
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
Abstract
The invention discloses a regional safety protection system based on a depth camera, which comprises acquisition equipment, an industrial personal computer and a controller. The acquisition equipment is used for acquiring a depth image and a color image of a safety protection area and transmitting them to the industrial personal computer. The industrial personal computer is connected with the acquisition equipment and the controller, and is used for receiving the depth image and the color image, performing human body detection, judging the intrusion relationship between the human body and the safety protection area according to the detection result, and sending different decision signals to the controller. The controller is used for receiving and executing the decision signals so as to control the operation of the robot. The invention can adapt to environments of different brightness, and has the advantages of high hardware integration, flexible installation position and high accuracy.
Description
Technical Field
The invention relates to the technical field of human-computer safety protection, in particular to a regional safety protection system and method based on a depth camera.
Background
With the development of industrial technology, industrial robots are widely used. Human-robot cooperation is an important direction of robot development, yet the various types of robots working cooperatively with people in industrial workshops pose significant safety hazards.
At present, to address these safety hazards, multiple 2D industrial cameras are typically installed above a machine to monitor the industrial robot's working area; a three-dimensional reconstruction algorithm combined with a human body recognition algorithm locates the environment contour and the human body, achieving three-dimensional monitoring, so that when an operator enters the monitored area the robot can issue a corresponding warning according to the region the operator occupies. This approach has several disadvantages: the installation position of the 2D cameras is restricted (they work only when mounted overhead), several cameras are needed for three-dimensional reconstruction, accuracy in the depth direction is low, and the method depends on ambient brightness, failing in environments that are too dark or too bright.
The above background disclosure is only for the purpose of assisting understanding of the inventive concept and technical solutions of the present invention, and does not necessarily belong to the prior art of the present patent application, and should not be used for evaluating the novelty and inventive step of the present application in the case that there is no clear evidence that the above content is disclosed at the filing date of the present patent application.
Disclosure of Invention
The present invention is directed to a system and method for area security based on a depth camera, so as to solve at least one of the above-mentioned problems of the related art.
In order to achieve the above purpose, the technical solution of the embodiment of the present invention is realized as follows:
a regional safety protection system based on a depth camera comprises acquisition equipment, an industrial personal computer and a controller; the acquisition equipment is used for acquiring a depth image and a color image of a safety protection area and transmitting them to the industrial personal computer; the industrial personal computer is connected with the acquisition equipment and the controller, and is used for receiving the depth image and the color image, performing human body detection, judging the intrusion relationship between the human body and the safety protection area according to the detection result, and sending different decision signals to the controller; the controller is used for receiving and executing the decision signals so as to control the operation of the robot.
In some embodiments, the capture device includes a depth camera and a color camera for capturing the depth image and the color image, respectively.
In some embodiments, the system further comprises a display, wherein the display is connected with the industrial personal computer and used for displaying a detection result detected by the industrial personal computer and providing a human-computer interaction interface.
In some embodiments, the system further comprises a safety alarm device for issuing prompts for the different situations in which a human body intrudes into the safety protection area.
In some embodiments, the security alarm device is a warning light configured as a tri-color light that displays different colors for different situations of intrusion of a human into the secured area.
The other technical scheme of the invention is as follows:
a regional safety protection method based on a depth camera comprises the following steps:
s1: controlling acquisition equipment to acquire image information of a safety protection area and transmitting the acquired image information to an industrial personal computer;
s2: the industrial personal computer is controlled to receive image information acquired by the acquisition equipment, human body detection is carried out on the received image to obtain a detection result, the intrusion relationship between a human body and a safety protection area is judged according to the detection result, and a decision signal is sent to the controller;
s3: and the controller receives the decision signal and executes the decision signal to control the robot to execute relevant operations.
In some embodiments, step S1 sets the safety zone by:
s11, setting the vision system to a calibration mode, collecting continuous multi-frame images of the robot during working through collection equipment and transmitting the continuous multi-frame images to an industrial personal computer;
s12, the industrial personal computer executes a detection algorithm, detects the motion track of the robot and generates a motion space;
s13, setting a safety margin space through an industrial personal computer, superposing the safety margin space to the motion space, and generating an amplification space to serve as a safety protection area.
In some embodiments, the vision system in step S11 is an integrated system including a collection device, an industrial personal computer, a display, and a safety protection alarm device.
in some embodiments, in step S2, the industrial personal computer performs human body detection on the received image by executing a detection algorithm, where the detection algorithm includes the following steps:
s21, identifying a human body on the color image by using a skeleton identification algorithm according to the received image information acquired by the acquisition equipment, and if the human body is identified, calculating the coordinates of each joint point of the human body on the color image;
s22, registering the received color image and depth image, and calculating the 3D coordinates of each joint point of the human body according to the coordinates of each joint point on the color image;
and S23, judging whether each joint point of the human body falls in the protective area.
In some embodiments, step S2 includes the following steps:
s210, carrying out human body detection on the received color image through an industrial personal computer, and identifying whether a human body exists;
s220, directly positioning the human body in the depth image according to the pixel corresponding relation between the color image and the depth image, and then calculating the three-dimensional coordinates of the human body joint points.
The technical scheme of the invention has the beneficial effects that:
compared with the prior art, the regional safety protection system based on the depth camera can adapt to environments of different brightness, has high hardware integration and a flexible installation position, and adopts a human body skeleton recognition algorithm based on the color image while mapping the two-dimensional skeleton points into the depth field, thereby obtaining the three-dimensional coordinates of the skeleton points and achieving centimeter-level positioning accuracy. In addition, automatic configuration of the protection area is realized by calibrating the motion space of the robot and superimposing a certain spatial margin.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic diagram of a depth camera based zonal security system in accordance with one embodiment of the present invention.
FIG. 2 is a flowchart illustration of a depth camera-based zonal security method, according to one embodiment of the present invention.
Fig. 3 is a flowchart illustrating a method for zone security based on a depth camera according to an embodiment of the present invention to set a security zone.
FIG. 4 is a flowchart illustration of a human detection algorithm in a depth camera based regional security method, according to one embodiment of the invention.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects to be solved by the embodiments of the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and the embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It will be understood that when an element is referred to as being "secured to" or "disposed on" another element, it can be directly on the other element or be indirectly on the other element. When an element is referred to as being "connected to" another element, it can be directly connected to the other element or be indirectly connected to the other element. The connection may be for fixation or for circuit connection.
It is to be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like are used in an orientation or positional relationship indicated in the drawings for convenience in describing the embodiments of the present invention and to simplify the description, and are not intended to indicate or imply that the referenced device or element must have a particular orientation, be constructed in a particular orientation, and be in any way limiting of the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present invention, "a plurality" means two or more unless specifically limited otherwise.
Fig. 1 is a schematic diagram of a depth camera-based regional security protection system according to an embodiment of the present invention, and a system 100 includes an acquisition device 101, an industrial personal computer 102, and a controller 105. The acquisition equipment 101 is used for acquiring a depth image and a color image of the safety protection area 106 and transmitting the depth image and the color image to the industrial personal computer 102; the industrial personal computer 102 is connected with the acquisition equipment 101 and the controller 105, and is used for receiving the depth image and the color image acquired by the acquisition equipment 101, performing human body detection on the received depth image and the color image to obtain a detection result, judging the intrusion relationship between a human body and a safety protection area according to the detection result, and sending different decision signals to the controller 105 according to the judgment result; the controller 105 is used for receiving and executing decision signals to control the robot to work. In some embodiments, the controller 105 is a robot controller, primarily for controlling the robot operation.
In one embodiment, the capture device 101 includes a depth camera and a color camera for capturing a depth image and a color image, respectively; the depth camera can be based on structured light, binocular stereo, or TOF (time-of-flight) technology, with a working range covering a space of 8 m × 5 m × 2 m, a frame rate of 10–20 FPS, and a measurement accuracy of 1 cm. In some embodiments, the acquisition frequencies of the depth image and the color image may be the same or different, set according to specific functional requirements, which is not limited herein.
In one embodiment, the system 100 further includes a display 103, and the display 103 is connected to the industrial personal computer 102 and is configured to display a detection result detected by the industrial personal computer 102 and provide a human-computer interaction interface for an operator to monitor. It should be understood that the industrial personal computer 102 and the display 103 may be independent devices or may be an integrated machine, and are not limited herein.
In one embodiment, the display 103 may be a touch screen incorporating a capacitive touch sensor or other touch-sensing component, or may be a non-touch display screen. The display 103 may include display pixels formed from light-emitting diodes (LEDs), organic light-emitting diodes (OLEDs), plasma cells, electrophoretic display elements, electrowetting display elements, liquid crystal display (LCD) components, or other suitable display pixel structures. Any suitable display technology may be used to form the display 103, which is not limited herein.
In some embodiments, the system 100 includes a security alarm device for alerting different instances of human intrusion into the secured area. The security alarm device comprises a warning lamp 104, the warning lamp 104 being configured as a three-color lamp with three color displays of red, yellow and green, displaying different colors for different situations of intrusion of a human body into the security area. According to the intrusion relationship between the human body and the safety protection area obtained from the detection result, the alarm lamp 104 is configured to display "green" when no one invades the safety protection area, the alarm lamp 104 is configured to display "yellow" when the human body slightly invades the safety protection area, and the alarm lamp 104 is configured to display "red" when the human body seriously invades the safety protection area. It should be understood that the safety alarm device is not limited to the alarm lamp 104, but may be one or more of a display panel, a buzzer, a microphone, or a speaker, for sending out related reminders, and is not limited herein.
In one embodiment, the industrial personal computer 102 is configured to judge the intrusion relationship between a human body and the safety protection area of the robot production area according to the received depth image and color image, and to send decision signals of different levels to the robot controller 105 according to the judgment result. Specifically, when no one intrudes into the safety protection area, the industrial personal computer 102 sends a signal to the robot controller 105 to keep the robot working normally; when a human body slightly intrudes into the safety protection area, the industrial personal computer 102 sends a decision signal to the robot controller 105 to reduce the robot's production speed; and when a human body severely intrudes into the safety protection area, the industrial personal computer 102 sends a decision signal to the robot controller 105 to stop production. It should be understood that the intrusion relationship may be stored in the industrial personal computer 102 in advance: a slight intrusion is defined as the human body penetrating less than 0.5 m past the boundary of the safety protection area, and a severe intrusion as penetrating more than 0.5 m past that boundary; these distance thresholds can be adjusted for specific situations and are not limited here.
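The three-level decision logic described above (no intrusion → normal operation and a green lamp, penetration under 0.5 m → slow down and yellow, over 0.5 m → stop and red) can be sketched in a few lines. This is an illustrative sketch, not part of the claimed system; all function and variable names are hypothetical.

```python
def classify_intrusion(penetration_m):
    """Map the depth of penetration past the protection-zone boundary
    (in metres) to an intrusion level; None or <= 0 means no intrusion."""
    if penetration_m is None or penetration_m <= 0:
        return "none"    # no one inside the safety protection area
    if penetration_m < 0.5:
        return "slight"  # less than 0.5 m past the boundary
    return "severe"      # more than 0.5 m past the boundary

# Decision signal sent to the robot controller, and lamp color shown,
# for each intrusion level (hypothetical encodings).
DECISIONS = {"none": "continue", "slight": "slow_down", "severe": "stop"}
LAMP = {"none": "green", "slight": "yellow", "severe": "red"}
```

The 0.5 m threshold mirrors the embodiment above and would be a configurable parameter in practice.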
Fig. 2 shows a flow of a regional security protection method based on a depth camera according to an embodiment of the present invention, which includes the following steps:
s1: controlling acquisition equipment to acquire image information of a safety protection area and transmitting the acquired image information to an industrial personal computer;
the acquisition equipment comprises a depth camera for acquiring a depth image of the safety protection area; in some embodiments, the depth camera may be a depth camera based on structured light, binocular, TOF (time of flight algorithm) technology, with a working range that covers 8m 5m 2m space, a frame rate of 10-20 FPS, and a measurement accuracy of 1 cm. It is understood that the acquisition device may further comprise a color camera for acquiring color images of the safety protected area. In some embodiments, the collection frequency of the depth camera and the color camera for collecting the depth image and the color image may be the same or different, and the corresponding setting is performed according to the specific functional requirement, which is not limited herein.
The safety protection area can be a preset fixed area or a variable area that changes in real time. In some embodiments, the safety protection area is the safety protection area of a robot production area, which may be a fixed area or an area that changes over time with the robot's movement.
S2: the industrial personal computer is controlled to receive image information acquired by the acquisition equipment, carry out human body detection on the received image to obtain a detection result, judge the invasion relation between a human body and the safety protection area according to the detection result and send a decision signal to the controller; the image information includes a color image and a depth image.
S3: the controller receives the decision signal and executes the decision signal to control the robot to execute the relevant operation.
Referring to fig. 3, taking the robot production safety zone as an example for explanation, in some embodiments, the safety zone may be set in step S1 by the following steps:
s11, setting the vision system to a calibration mode, collecting continuous multi-frame images of the robot during working through collection equipment and transmitting the continuous multi-frame images to an industrial personal computer;
s12, the industrial personal computer executes a detection algorithm, detects the motion track of the robot and generates a motion space;
and S13, setting a safety margin space through the industrial personal computer, superposing the safety margin space to the motion space, and generating an amplification space as a safety protection area.
Specifically, in step S11, the vision system may be an integrated system including a collection device, an industrial personal computer, a display, and a safety protection alarm device. The vision system is set to be in a calibration mode, a monitoring area space is set, the vision system monitors the area, and the acquisition equipment acquires continuous multi-frame images during the working of the robot and transmits the continuous multi-frame images to the industrial personal computer for processing. In some embodiments, the alarm device may be an alarm lamp, and it is understood that in some embodiments, the alarm device may also be a voice alarm device, which is not particularly limited in the present invention;
in step S12, the industrial personal computer executes a detection algorithm, detects the motion trajectory of the robot according to the received image acquired by the acquisition device, and generates a motion space of the robot according to the motion trajectory of the robot.
In step S13, a safety margin distance is set and superimposed on the motion space of the robot to obtain an expanded space as the safety protection area of the robot, so that the safety protection area of the robot production area is expanded, and automatic configuration of the protection area is realized.
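The zone construction of steps S11–S13 amounts to sweeping the robot's observed trajectory into a motion space and padding it with a safety margin. A minimal sketch, under the assumption that the motion space is approximated by an axis-aligned bounding box (the patent does not specify how the space is represented, and all names here are hypothetical):

```python
import numpy as np

def protection_zone(trajectory_points, margin=0.3):
    """Steps S12-S13: derive the amplified protection zone.

    trajectory_points -- (N, 3) array of 3-D points swept by the robot
                         over the calibration frames (its motion space).
    margin            -- safety margin in metres added on every side.
    Returns the (lower, upper) corners of the expanded axis-aligned box.
    """
    pts = np.asarray(trajectory_points, dtype=float)
    lo = pts.min(axis=0) - margin  # motion-space minimum, padded outward
    hi = pts.max(axis=0) + margin  # motion-space maximum, padded outward
    return lo, hi
```

Convex hulls or voxel occupancy grids would bound the motion space more tightly; the bounding box simply keeps the illustration short.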
In some embodiments, when the vision system is suspended from use or part of its equipment is inactive, the industrial personal computer sends the robot controller a command signal indicating that the vision monitoring protection function is not enabled; the robot controller then slows or stops production according to the received command signal, and the display shows the relevant information about the vision system's state of use.
In some embodiments, the industrial personal computer performs human body detection on the received image by executing a detection algorithm in step S2. Referring to fig. 4, the detection algorithm specifically includes the following steps:
s21, identifying a human body on the color image by using a skeleton identification algorithm according to the received image information acquired by the acquisition equipment, if the human body is not identified, the robot works normally, and if the human body is identified, the coordinates of each joint point of the human body on the color image are calculated;
s22, registering the received color image and the depth image, and calculating the 3D coordinates of the joint points according to the coordinates of the joint points of the human body on the color image;
and S23, judging whether each joint point of the human body falls in the protective area.
Specifically, in step S21, the industrial personal computer identifies a human body on the color image by using a skeleton identification algorithm (e.g., openpos algorithm) according to the received image information acquired by the acquisition device; if the human body is not identified, the robot works normally, and if the human body is identified, the coordinates of each joint point of the human body on the two-dimensional color image are calculated;
in step S22, the color image and the depth image are registered: a registration algorithm finds the correspondence between the pixels of the two images and eliminates the parallax caused by the spatial offset between the two cameras, yielding the projection of the human body's joint points onto the depth image; the three-dimensional coordinates of each joint point of the human body are then calculated from the coordinates of the joint points on the two-dimensional color image obtained in step S21.
In step S23, it is determined whether or not each joint point of the human body falls within the safety protection area of the robot production area based on the three-dimensional coordinates of each joint point of the human body obtained in step S22.
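Assuming a pinhole camera model with known intrinsics (fx, fy, cx, cy) and a depth map already registered to the color image, steps S22–S23 reduce to back-projecting each joint pixel into 3-D and testing it against the protection volume. A hypothetical sketch (the axis-aligned zone and all names are illustrative assumptions, not the patent's implementation):

```python
def joint_to_3d(u, v, depth_m, fx, fy, cx, cy):
    """Step S22: back-project a registered pixel (u, v) with metric
    depth into camera-frame 3-D coordinates via the pinhole model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

def any_joint_in_zone(joints_3d, lo, hi):
    """Step S23: True if any joint point lies inside the axis-aligned
    protection zone spanned by corner points lo and hi."""
    return any(all(lo[i] <= p[i] <= hi[i] for i in range(3))
               for p in joints_3d)
```

The zone corners would come from the calibration of steps S11–S13, and the intrinsics from the depth camera's factory calibration.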
In some embodiments, step S2 includes the following steps:
s210, carrying out human body detection on the received color image by using an industrial personal computer, and identifying whether a human body exists;
s220, directly positioning the human body in the depth image according to the pixel corresponding relation between the color image and the depth image, and then calculating the three-dimensional coordinates of the human body joint points.
This approach avoids running a separate object detection pass over the depth image. Of course, it is understood that in step S210 object detection or recognition may instead be performed on the depth image first, with human body detection or recognition then performed on the color image by using the pixel correspondence between the color image and the depth image.
In some embodiments, step S2 includes the following steps:
First, human body detection is performed on the color image of the previous frame; the depth image is then acquired for the next frame, and depth values are obtained only for the pixels at the human body's position. Outputting only this partial depth image reduces both the computation of the depth extraction algorithm and the data transmission bandwidth.
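The partial-depth idea above amounts to cropping the depth frame to the human body's bounding box before any further processing. A minimal sketch, assuming the detector supplies an `(x, y, w, h)` box in depth-image pixel coordinates (the function name and box format are illustrative):

```python
import numpy as np

def depth_roi(depth_frame, bbox):
    """Keep depth values only inside the human body's bounding box
    (x, y, w, h); the rest of the frame is neither extracted nor
    transmitted, saving computation and bandwidth."""
    x, y, w, h = bbox
    return depth_frame[y:y + h, x:x + w]

# A synthetic 480x640 depth frame (uint16 millimetres) and a detected box.
full = np.random.randint(500, 4000, size=(480, 640), dtype=np.uint16)
roi = depth_roi(full, (100, 50, 64, 128))   # 64 px wide, 128 px tall
```

Only the cropped region then needs joint-point depth lookups.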
In step S2, the intrusion relationship is defined in advance and stored in the memory of the industrial personal computer. In some embodiments, three intrusion states are defined: no intrusion, light intrusion, and severe intrusion. When no person has entered the safety protection area, the industrial personal computer signals the robot controller to continue normal operation; when a human body lightly intrudes into the safety protection area, the industrial personal computer sends a decision signal that reduces the robot's production speed; when a human body severely intrudes, the industrial personal computer sends a decision signal that stops the robot's production.
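The three-state decision above can be sketched with two nested axis-aligned boxes: an outer safety protection area and an inner core region near the robot. This box-based severity model is one plausible reading of the patent's light/severe distinction, not its stated definition; the names `zone` and `core` are hypothetical.

```python
def classify_intrusion(joints_3d, zone, core):
    """Map 3D joint positions to a decision signal. zone and core are
    (min_corner, max_corner) pairs of axis-aligned boxes; core lies
    inside zone, immediately around the robot (hypothetical model)."""
    def inside(p, box):
        lo, hi = box
        return all(lo[i] <= p[i] <= hi[i] for i in range(3))
    if any(inside(p, core) for p in joints_3d):
        return "STOP"    # severe intrusion: halt production
    if any(inside(p, zone) for p in joints_3d):
        return "SLOW"    # light intrusion: reduce production speed
    return "NORMAL"      # no intrusion: continue normal work

zone = ((0.0, 0.0, 0.0), (3.0, 3.0, 3.0))
core = ((1.0, 1.0, 1.0), (2.0, 2.0, 2.0))
signal = classify_intrusion([(2.5, 2.5, 2.5)], zone, core)
```

The industrial personal computer would forward the returned signal to the robot controller as the decision signal.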
By adopting a 3D sensing scheme based on a depth camera, the invention adapts to environments of different brightness, offers high hardware integration, and allows flexible installation. A human skeleton recognition algorithm based on the color image maps two-dimensional skeleton points into the depth field to obtain their three-dimensional coordinates, achieving centimetre-level positioning accuracy. In addition, the protection area is configured automatically by calibrating the robot's motion space and superimposing a spatial safety margin. Those skilled in the art will understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk, or an optical disk.
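The automatic protection-area configuration (steps S11-S13) can be illustrated as taking the bounding box of the robot's recorded motion trajectory and expanding it by a safety margin. An axis-aligned box is an assumption here; the patent only requires that a margin space be superimposed on the detected motion space.

```python
import numpy as np

def protection_zone(trajectory_points, margin=0.2):
    """Return (min_corner, max_corner) of an axis-aligned protection
    zone: the bounding box of the robot's calibrated motion space,
    expanded on every side by a safety margin (metres)."""
    pts = np.asarray(trajectory_points, dtype=float)
    lo = pts.min(axis=0) - margin   # tightest box around the trajectory,
    hi = pts.max(axis=0) + margin   # grown by the margin in all axes
    return lo, hi

# Hypothetical trajectory samples gathered during the calibration mode.
traj = [[0.0, 0.0, 0.0], [1.0, 0.5, 0.8], [0.5, 1.0, 0.3]]
lo, hi = protection_zone(traj, margin=0.2)
```

The resulting corners would be stored on the industrial personal computer as the safety protection area used in step S23.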
The foregoing is a more detailed description of the invention in connection with specific/preferred embodiments and is not intended to limit the practice of the invention to those descriptions. It will be apparent to those skilled in the art that various substitutions and modifications can be made to the described embodiments without departing from the spirit of the invention, and these substitutions and modifications should be considered to fall within the scope of the invention. In the description herein, references to the description of the term "one embodiment," "some embodiments," "preferred embodiments," "an example," "a specific example," or "some examples" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention.
In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction. Although embodiments of the present invention and their advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the scope of the invention as defined by the appended claims.
Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. One of ordinary skill in the art will readily appreciate that the above-disclosed, presently existing or later to be developed, processes, machines, manufacture, compositions of matter, means, methods, or steps, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
Claims (10)
1. A regional safety protection system based on a depth camera, characterized by comprising an acquisition device, an industrial personal computer, and a controller; wherein:
the acquisition device is used for acquiring a depth image and a color image of a safety protection area and transmitting them to the industrial personal computer;
the industrial personal computer is connected with the acquisition equipment and the controller and is used for receiving the depth image and the color image acquired by the acquisition equipment, detecting the human body, judging the invasion relation between the human body and the safety protection area according to the detection result and sending different decision signals to the controller;
the controller is used for receiving and executing the decision signal so as to control the robot to work.
2. The depth camera-based zonal security system of claim 1, wherein: the acquisition device comprises a depth camera and a color camera for acquiring the depth image and the color image, respectively.
3. The depth camera-based zonal security system of claim 1, wherein: the device also comprises a display, wherein the display is connected with the industrial personal computer and used for displaying a detection result detected by the industrial personal computer and providing a human-computer interaction interface.
4. The depth camera-based zonal security system of claim 1, characterized by further comprising a safety alarm device for signalling the different conditions of a human body intruding into the safety protection area.
5. The depth camera-based zonal security system of claim 4, wherein: the safety alarm device is an alarm lamp which is configured to be a three-color lamp and displays different colors according to different conditions of human body invading a safety protection area.
6. A regional safety protection method based on a depth camera is characterized by comprising the following steps:
s1: controlling acquisition equipment to acquire image information of a safety protection area and transmitting the acquired image information to an industrial personal computer;
s2: the industrial personal computer is controlled to receive image information acquired by the acquisition equipment, human body detection is carried out on the received image to obtain a detection result, the intrusion relationship between a human body and a safety protection area is judged according to the detection result, and a decision signal is sent to the controller;
s3: and the controller receives the decision signal and executes the decision signal to control the robot to execute relevant operations.
7. The depth camera-based zonal security method of claim 6, wherein: in step S1, the safety protection area is set by:
s11, setting the vision system to a calibration mode, collecting continuous multi-frame images of the robot during working through collection equipment and transmitting the continuous multi-frame images to an industrial personal computer;
s12, the industrial personal computer executes a detection algorithm, detects the motion track of the robot and generates a motion space;
s13, setting a safety margin space through an industrial personal computer, superposing the safety margin space to the motion space, and generating an amplification space to serve as a safety protection area.
8. The depth camera-based zonal security method of claim 7, wherein: in S11, the vision system comprises, as a whole, the acquisition device, the industrial personal computer, the display, and the safety protection alarm device.
9. The depth camera-based zonal security method of claim 6, wherein: in step S2, the industrial personal computer performs human body detection on the received image by executing a detection algorithm, where the detection algorithm includes the following steps:
s21, identifying a human body on the color image by using a skeleton identification algorithm according to the received image information acquired by the acquisition equipment, and if the human body is identified, calculating the coordinates of each joint point of the human body on the color image;
s22, registering the received color image and depth image, and calculating the 3D coordinates of each joint point of the human body according to the coordinates of each joint point on the color image;
S23, judging whether each joint point of the human body falls within the safety protection area.
10. The depth camera-based zonal security method of claim 6, wherein: in step S2, the method includes the steps of:
s210, carrying out human body detection on the received color image through an industrial personal computer, and identifying whether a human body exists;
s220, directly positioning the human body in the depth image according to the pixel corresponding relation between the color image and the depth image, and then calculating the three-dimensional coordinates of the human body joint points.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911385289.2A CN111178257A (en) | 2019-12-28 | 2019-12-28 | Regional safety protection system and method based on depth camera |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111178257A true CN111178257A (en) | 2020-05-19 |
Family
ID=70652219
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911385289.2A Pending CN111178257A (en) | 2019-12-28 | 2019-12-28 | Regional safety protection system and method based on depth camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111178257A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102323822A (en) * | 2011-05-09 | 2012-01-18 | 无锡引域智能机器人有限公司 | Method for preventing industrial robot from colliding with worker |
CN103198605A (en) * | 2013-03-11 | 2013-07-10 | 成都百威讯科技有限责任公司 | Indoor emergent abnormal event alarm system |
CN103310589A (en) * | 2013-07-05 | 2013-09-18 | 国家电网公司 | Alarm information generating method and device |
CN104778676A (en) * | 2014-01-09 | 2015-07-15 | 中国科学院大学 | Depth ranging-based moving target detection method and system |
CN106781165A (en) * | 2016-11-30 | 2017-05-31 | 华中科技大学 | A kind of indoor multi-cam intelligent linkage supervising device based on depth sensing |
CN106960535A (en) * | 2017-05-19 | 2017-07-18 | 龙岩学院 | Scope biotic intrusion early warning system based on infrared sensor |
CN109341689A (en) * | 2018-09-12 | 2019-02-15 | 北京工业大学 | Vision navigation method of mobile robot based on deep learning |
CN110430399A (en) * | 2019-08-07 | 2019-11-08 | 上海节卡机器人科技有限公司 | The means of defence of monitoring area, device and system |
CN110561432A (en) * | 2019-08-30 | 2019-12-13 | 广东省智能制造研究所 | safety cooperation method and device based on man-machine co-fusion |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112422818A (en) * | 2020-10-30 | 2021-02-26 | 上海大学 | Intelligent screen dropping remote detection method based on multivariate image fusion |
CN112422818B (en) * | 2020-10-30 | 2022-01-07 | 上海大学 | Intelligent screen dropping remote detection method based on multivariate image fusion |
CN112757300A (en) * | 2020-12-31 | 2021-05-07 | 广东美的白色家电技术创新中心有限公司 | Robot protection system and method |
WO2022142973A1 (en) * | 2020-12-31 | 2022-07-07 | 广东美的白色家电技术创新中心有限公司 | Robot protection system and method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102449834B1 (en) | Perimeter monitoring system for working machines | |
KR101073076B1 (en) | Fire monitoring system and method using compound camera | |
EP1622107B1 (en) | System, device and method for object tracing | |
EP2541525B1 (en) | Vehicle surroundings monitoring device | |
CN110371140B (en) | Vehicle door opening early warning system and method based on panoramic all-around image processing | |
US11216664B2 (en) | Method and device for augmenting a person's view of a mining vehicle on a mining worksite in real-time | |
JPWO2011108198A1 (en) | Vehicle periphery monitoring device | |
US20080036790A1 (en) | Image Display System, Image Display Method and Image Display Program | |
US10518701B2 (en) | Image monitoring apparatus, movable object, program and failure determining method | |
AU2014213529A1 (en) | Image display system | |
CN111178257A (en) | Regional safety protection system and method based on depth camera | |
CN105600693A (en) | Monitoring system of tower crane | |
CN103167276A (en) | Vehicle monitoring system and vehicle monitoring method | |
KR20140108035A (en) | System and method for managing car parking | |
KR102003998B1 (en) | Apparatus and method for sensing obstacles of construction equipment | |
CN112362077A (en) | Head-mounted display device, obstacle avoidance method thereof and computer-readable storage medium | |
JP5370009B2 (en) | Monitoring system | |
CN107323114A (en) | Intrusion detection method, system and the print control instrument of print control instrument | |
CN112484743B (en) | Vehicle-mounted HUD fusion live-action navigation display method and system thereof | |
JP2018013386A (en) | Display control unit for vehicle, display system for vehicle, display control method for vehicle, and program | |
CN117197779A (en) | Track traffic foreign matter detection method, device and system based on binocular vision | |
CN208087074U (en) | A kind of humanoid anti-collision alarm system of harbour equipment based on monocular vision | |
US11498484B2 (en) | Overhead image generation device, overhead image generation method, and program | |
KR101438921B1 (en) | Apparatus and method for alerting moving-object of surrounding of vehicle | |
CN109859438A (en) | Safe early warning method, system, vehicle and terminal device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Address after: 11-13 / F, joint headquarters building, high tech Zone, 63 Xuefu Road, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000 Applicant after: Obi Zhongguang Technology Group Co., Ltd Address before: 11-13 / F, joint headquarters building, high tech Zone, 63 Xuefu Road, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000 Applicant before: SHENZHEN ORBBEC Co.,Ltd. |