CN111583336B - Robot and inspection method and device thereof - Google Patents
- Publication number
- Publication number: CN111583336B; Application number: CN202010322489.XA
- Authority
- CN
- China
- Prior art keywords
- pedestrian
- robot
- area
- image
- detected
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
A method of inspection for a robot, comprising: collecting, through a camera, images to be detected of the robot's surroundings; identifying the positions of pedestrians included in the images to be detected; and generating prompt information when an identified pedestrian is located within a pre-calibrated safety area. When the robot patrols a scene with many pedestrians, dangerous situations can be discovered in time and prompt information generated, which effectively reduces the occurrence of dangerous accidents and improves the safety of inspection.
Description
Technical Field
The present application belongs to the field of robots, and in particular relates to a robot and an inspection method and device thereof.
Background
Inspection robots are mainly applied in very harsh environments, such as substations, petroleum pipelines, and photovoltaic power plants in deserts. An inspection robot can replace manual identification of instrument and component faults. With the increase in campus emergencies in recent years, inspection robots also have application value in campus security. Because campus terrain is mostly flat ground, wheeled robots are well suited to it.
When a campus inspection robot operates on campus, several factors combine: the population density on campus is high; many young children are very curious about the robot and approach it closely to observe it; the travel speed of a wheeled inspection robot is generally high; and obstacle detection with lidar requires substantial computing resources and runs slowly. As a result, collisions between the robot and people, or other accidents, occur easily, which is not conducive to improving the safety of robot inspection.
Disclosure of Invention
In view of the above, embodiments of the present application provide a robot and an inspection method and device thereof, to solve the problem in the prior art that, when a wheeled inspection robot patrols a campus or similar scene, its high speed easily leads to human-robot collisions, which is detrimental to the safety of inspection.
A first aspect of an embodiment of the present application provides a method for inspecting a robot, including:
collecting images to be detected around the robot through a camera;
identifying the position of the pedestrian included in the image to be detected;
and generating prompt information when an identified pedestrian is located in a pre-calibrated safety area.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the step of identifying a position of a pedestrian included in the image to be detected includes:
acquiring an acquired image, and inputting the acquired image into a trained pedestrian detection network model;
and calculating the area where the pedestrian is positioned in the image to be detected according to the trained pedestrian detection network model, and determining the position of the pedestrian according to the area where the pedestrian is positioned.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, before the step of acquiring the acquired image and inputting the acquired image into the trained pedestrian detection network model, the method further includes:
acquiring a sample image comprising pedestrians and a calibration area where the pedestrians in the sample image are located;
convolving the sample image through a first convolution kernel to obtain a first feature map;
the first feature map is subjected to second convolution kernel convolution to obtain a second feature map, and the second feature map is subjected to third convolution kernel convolution to obtain a third feature map;
pooling the first feature map to obtain a fourth feature map;
and fusing the third feature map and the fourth feature map, performing convolution through a fourth convolution kernel, and performing full connection to obtain an identification area of the pedestrian in the sample image, and optimizing parameters of the pedestrian detection network model according to the difference between the calibration area and the identification area until the difference meets a preset requirement.
With reference to the second possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, the first convolution kernel and the third convolution kernel have a size of 3×3, and the second convolution kernel and the fourth convolution kernel have a size of 1×1.
With reference to the first aspect, in a fourth possible implementation manner of the first aspect, before the step of generating the prompt information when the identified pedestrian is located in the pre-calibrated safety area, the method further includes:
obtaining a calibration image comprising a safety line, wherein the distance between the safety line in the calibration image and the robot is a preset safety distance;
and calibrating a safety area corresponding to the image acquired by the camera according to the position of the safety line in the calibration image.
With reference to the first aspect, in a fifth possible implementation manner of the first aspect, the step of generating the prompt information when the identified pedestrian is located in the pre-calibrated safety area includes:
if the pedestrian is detected to enter the safety area, generating a pedestrian entering prompt;
and/or if the duration of the pedestrian entering the safety area is detected to be longer than a predetermined duration, generating a warning reminder.
With reference to the first aspect, in a sixth possible implementation manner of the first aspect, the step of acquiring, by a camera, an image to be measured around the robot includes:
collecting a plurality of groups of video streams through a camera group, wherein the camera group comprises cameras arranged at the front, rear, left and right parts of the robot;
and parsing the collected multiple groups of video streams to obtain the images to be detected.
A second aspect of an embodiment of the present application provides a robot inspection apparatus, including:
the to-be-detected image acquisition unit, used for acquiring images to be detected around the robot through the camera;
the area identifying unit is used for identifying the position of the pedestrian included in the image to be detected;
and the prompting unit is used for generating prompting information when the identified pedestrian is positioned in the pre-calibrated safety area.
A third aspect of an embodiment of the present application provides a robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the inspection method of the robot according to any one of the first aspects when the computer program is executed.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the inspection method of a robot according to any one of the first aspects.
Compared with the prior art, the embodiments of the present application have the following beneficial effects: by collecting images of the robot's surroundings, identifying the positions of pedestrians in those images, and generating prompt information when an identified pedestrian enters a pre-calibrated safety area, the robot can promptly discover dangerous situations and generate prompts when patrolling scenes with many pedestrians. Moreover, vision-based processing is faster and requires fewer computing resources, enabling quicker detection and warning, effectively reducing the occurrence of dangerous accidents and improving the safety of inspection.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for a person skilled in the art, other drawings may be obtained from these drawings without inventive effort.
Fig. 1 is a schematic implementation flow diagram of a method for inspecting a robot according to an embodiment of the present application;
fig. 2 is a schematic diagram of a robot camera according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an implementation flow of a training method of a pedestrian detection network model according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a pedestrian detection network model according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an implementation flow of a method for calibrating a safety line according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a calibration frame according to an embodiment of the present application;
fig. 7 is a schematic diagram of an inspection device of a robot according to an embodiment of the present application;
fig. 8 is a schematic view of a robot according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to illustrate the technical scheme of the application, the following description is made by specific examples.
Fig. 1 is a schematic implementation flow chart of a method for inspecting a robot according to an embodiment of the present application, which is described in detail below:
in step S101, an image to be measured around the robot is acquired by the camera.
Specifically, the robot in the embodiment of the present application may be a wheeled robot or a biped robot.
When collecting images to be detected, they may be collected through a camera set in the robot's direction of travel, or through cameras around the robot, for example cameras arranged on the four sides (front, rear, left and right), enabling 360-degree monitoring of the surroundings. For example, in the schematic diagram of the positional relationship between the robot and the cameras shown in fig. 2, the robot body includes four side surfaces, namely a front side, a rear side, a left side and a right side, with adjacent sides perpendicular to each other. The viewing angle of each camera can be greater than or equal to 90 degrees, so that images around the robot are collected through the four cameras arranged on the robot body and blind spots around the robot are reduced.
In one implementation, the robot is a wheeled robot that can move rapidly forward and backward. Among the cameras arranged around the robot, the image acquisition frequency of the front and rear cameras can be set greater than that of the cameras on the left and right sides, so that views in the robot's direction of motion are captured more promptly, improving the efficiency of safety pre-warning.
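Such a frequency-weighted acquisition schedule can be sketched as follows; the camera names and relative weights are illustrative assumptions, not values specified by the patent:

```python
def build_schedule(weights):
    """Expand per-camera relative frequencies into one acquisition cycle.

    weights: dict mapping camera name -> relative capture frequency.
    Returns a list in which each camera appears `weight` times per cycle,
    interleaved so higher-frequency cameras are sampled more often.
    """
    max_w = max(weights.values())
    schedule = []
    for tick in range(max_w):
        for cam, w in weights.items():
            # a camera with weight w fires on w of the max_w ticks
            if (tick * w) % max_w < w:
                schedule.append(cam)
    return schedule

# Front/rear sampled twice per cycle, left/right once (assumed ratio)
cycle = build_schedule({"front": 2, "rear": 2, "left": 1, "right": 1})
```

One cycle here yields `["front", "rear", "left", "right", "front", "rear"]`, i.e. the forward and backward views are refreshed twice as often as the side views.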
In one implementation, to further reduce blind spots in the robot's image acquisition, multiple cameras may be arranged around a common center, for example at the head of the robot, and the tilt angle of the cameras may be adjusted according to the robot's height, so that the captured images cover the entire area around the robot.
In step S102, the position of the pedestrian included in the image to be detected is identified.
When identifying the position of a pedestrian in the image to be detected, whether a pedestrian is present can be judged by matching against preset pedestrian feature images, after which the area where the pedestrian is located is determined.
Alternatively, a trained pedestrian detection network model can be obtained by training the model on sample images in which the areas where pedestrians are located have been calibrated. The trained model then computes the area where a pedestrian is located in the image to be detected, and the pedestrian's position is determined from that area.
The position of a pedestrian can be represented by the coordinates of the two vertices of the upper-left and lower-right corners of the box bounding the area where the pedestrian is located. For example, if the two vertices are (X1, Y1) and (X2, Y2), then (X1, Y1) is the upper-left corner coordinate of the pedestrian's box, (X2, Y2) is the lower-right corner coordinate, and the pedestrian's position can be expressed by the pair ((X1, Y1), (X2, Y2)).
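A minimal sketch of this corner-pair representation; the class and method names are illustrative, and the foot-point helper is a common convention assumed here, not something the patent specifies:

```python
from dataclasses import dataclass

@dataclass
class PedestrianBox:
    """Axis-aligned box: upper-left (x1, y1), lower-right (x2, y2)."""
    x1: float
    y1: float
    x2: float
    y2: float

    def position(self):
        """The corner-pair representation described above."""
        return (self.x1, self.y1), (self.x2, self.y2)

    def foot_point(self):
        """Bottom-center of the box -- a common proxy for where the
        pedestrian stands on the ground (an assumption, not the patent's)."""
        return ((self.x1 + self.x2) / 2, self.y2)
```

The foot point, rather than the box center, is what a safety-area test would typically compare against a ground-plane boundary.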
In determining the pedestrian detection network model for detecting the area where the pedestrian is located, as shown in fig. 3, it may include:
in step S301, a sample image including a pedestrian is acquired, and a calibration area in which the pedestrian is located in the sample image.
When training the pedestrian detection network model, sample images from the robot's task scene must first be determined. For example, when the robot is used in a campus scene, images acquired in a campus scene can be used as training sample images. In addition, to improve the effectiveness of the pedestrian detection network model, the sample images may be collected at different positions in the scene to be patrolled, with different people, in different weather, and at different times. The area where a pedestrian is located in a sample image can be determined by manual annotation; to simplify the description, this area is referred to as the calibration area.
In one possible implementation, before the acquired sample images are used, they may be preprocessed, including color-space conversion and/or scale conversion. For example, the collected sample images may be converted to a target color space, and compressed to a predetermined size such as 224×224. The pedestrian detection network model can then train on images of uniform size, which simplifies training.
Of course, after training is complete, when an image to be detected is identified, it can likewise be compressed to the predetermined size before detecting and identifying the area where a pedestrian is located.
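As a rough sketch of the size-normalization step, here is a dependency-free nearest-neighbor resize; a real pipeline would typically use bilinear interpolation from an image library, which is an assumption on our part since the patent does not name a method:

```python
def resize_nearest(image, out_h, out_w):
    """Resize a 2-D image (list of rows) to (out_h, out_w) by
    nearest-neighbor sampling, e.g. toward the 224x224 network input."""
    in_h, in_w = len(image), len(image[0])
    return [
        [image[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
        for r in range(out_h)
    ]
```

For example, upscaling a 2×2 image to 4×4 simply replicates each pixel into a 2×2 block.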
In step S302, the sample image is convolved by a first convolution kernel to obtain a first feature map;
After the training sample image is obtained, features can first be extracted by convolving the training sample with the first convolution kernel, yielding a first feature map corresponding to the first convolution kernel. When the sample image, or the preprocessed sample image, has a size of 224×224, the size of the first convolution kernel may be 3×3. The first feature map extracted by this convolution is then processed along two branches.
In step S303, the first feature map is convolved with a second convolution kernel to obtain a second feature map, and the second feature map is convolved with a third convolution kernel to obtain a third feature map.
As the structural schematic diagram of the pedestrian detection network model in fig. 4 shows, the first feature map obtained after convolution with the first convolution kernel is processed along two branches. The first branch convolves the first feature map with a second convolution kernel to obtain a second feature map, then convolves further with a third convolution kernel to obtain a third feature map. The second convolution kernel size may be 1×1 and the third convolution kernel size may be 3×3.
In step S304, the first feature map is pooled to obtain a fourth feature map.
The second processing branch pools the first feature map. The area covered by each pooling operation may be 2×2; after the pooling process, a fourth feature map is obtained.
In step S305, the third feature map and the fourth feature map are fused, convolved by a fourth convolution kernel, and fully connected to obtain an identification area of the pedestrian in the sample image, and parameters of the pedestrian detection network model are optimized according to differences between the calibration area and the identification area until the differences meet preset requirements.
The feature maps obtained from the two branches, namely the third feature map produced by the two convolutions of the first branch and the fourth feature map produced by pooling, are fused, then processed by convolution with the fourth convolution kernel, full connection, and so on, to obtain the identification area of the pedestrian in the sample image.
The identification area computed by the pedestrian detection network model is compared with the pre-calibrated area of the pedestrian to determine the difference between them. If the difference does not meet the preset requirement, the parameters of the pedestrian detection network model, including the parameters of the first, second, third and fourth convolution kernels, can be further adjusted according to the difference, until the difference between the identification area output by the model and the preset calibration area meets the requirement, yielding the trained pedestrian detection network model.
Through this fusion of the feature maps obtained from the two branches, the training of the pedestrian detection model and the recognition of pedestrian areas in images to be detected can be completed more efficiently.
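The patent does not specify the strides or padding of these layers. The spatial size of each convolution or pooling stage follows the standard output-size formula, sketched below; under the assumption of "same" padding for the 3×3 convolutions, the first branch preserves the spatial size, which element-wise fusion with the pooled branch would then require matching (e.g. via stride-1 pooling or upsampling, a detail the patent leaves open):

```python
def out_size(size, kernel, stride=1, padding=0):
    """Spatial output size of a conv/pool layer (standard formula)."""
    return (size + 2 * padding - kernel) // stride + 1

# First feature map: 224x224 input, 3x3 kernel, 'same' padding (assumed)
first = out_size(224, 3, padding=1)       # 224
# Branch 1: 1x1 conv, then 3x3 conv with 'same' padding (assumed)
second = out_size(first, 1)               # 1x1 preserves size: 224
third = out_size(second, 3, padding=1)    # 224
# Branch 2: 2x2 pooling; stride 1 nearly preserves size, stride 2 halves it
fourth_s1 = out_size(first, 2, stride=1)  # 223
fourth_s2 = out_size(first, 2, stride=2)  # 112
```

If the pooling halves the spatial size, the third and fourth feature maps would need resizing before fusion; with stride-1 pooling the sizes already nearly match.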
In step S103, when the identified pedestrian is located in the safety area calibrated in advance, a prompt message is generated.
The safety area can be determined by the size of the safety distance: the greater the safety distance, the larger the range of the safety area.
The safety area of the robot can be calibrated as shown in fig. 5, including:
in step S501, a calibration image including a safety line is acquired, where a distance between the safety line and the robot in the calibration image is a preset safety distance.
When calibrating the robot's safety area, the robot can first be placed at a calibration position, and a safety line drawn around that position at the preset safety distance; the safety line can be drawn with a recognizable line.
For example, when the robot's safety distance is 1 meter, a circle with a radius of 1 meter can be drawn centered on the robot's calibration position. The robot then captures an image at the calibration position, obtaining a calibration image that includes the safety line. Of course, the safety distance in the direction of travel may be set greater than in other directions; for example, the safety distance in the robot's forward direction may be set to 2 meters and the safety distance in other directions to 1 meter.
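A minimal sketch of such a direction-dependent safety check in the robot's ground frame; the axis convention (positive `dx` is forward) and the simple forward/other split are assumptions for illustration, since the patent only gives the example distances:

```python
import math

def within_safety_zone(dx, dy, forward_dist=2.0, other_dist=1.0):
    """Return True if a point at ground offset (dx, dy) from the robot
    lies inside the safety zone.

    dx is the offset along the robot's forward axis (assumed convention).
    The zone is a circle of radius `other_dist`, extended to radius
    `forward_dist` in the forward half-plane.
    """
    limit = forward_dist if dx > 0 else other_dist
    return math.hypot(dx, dy) <= limit
```

So a pedestrian 1.5 m ahead is inside the zone, while one 1.5 m behind is not.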
In step S502, according to the position of the safety line in the calibration image, the safety area corresponding to the image acquired by the camera is calibrated.
When the camera is fixed on the wheeled robot, the safety area within the safety line is determined from the safety line included in the calibration image. The area where a pedestrian is located in the image to be detected can then be compared directly against the determined safety line. As shown in fig. 6, if the pedestrian's position enters the preset safety area, i.e. touches the safety line of the safety area, the pedestrian is close to the robot, and prompt information can be issued, such as a reminder for the pedestrian to mind their safety. The prompt information includes, but is not limited to, an audible prompt, an indicator-light prompt, and the like. In addition, when a pedestrian enters the preset safety area and stays longer than a predetermined duration, for example longer than 10 seconds, an alarm such as a whistle warning can be sounded, and the facial information of the pedestrian who has stayed in the safety area beyond the predetermined duration can be captured and stored. Furthermore, when a pedestrian is detected entering the safety area, the images acquired by the robot can be transmitted to a monitoring center, with the areas where pedestrians are located and the safety-area boundary line marked in red or the like, so that monitoring personnel can discover problems in time.
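The entry prompt and the dwell-time escalation described above can be sketched as a small per-pedestrian state machine; the threshold, event names and state layout are illustrative assumptions:

```python
def update_alerts(state, pedestrian_inside, now, alarm_after=10.0):
    """Track one pedestrian's presence in the safety zone.

    state: dict holding 'entered_at' (entry timestamp or None) and
    'alarmed' (whether the escalated alarm already fired).
    Returns (state, event), where event is None, 'entry_prompt',
    or 'alarm'.
    """
    if not pedestrian_inside:
        # Pedestrian left the zone: reset for the next entry.
        state["entered_at"] = None
        state["alarmed"] = False
        return state, None
    if state.get("entered_at") is None:
        # First frame inside the zone: issue the entry prompt.
        state["entered_at"] = now
        state["alarmed"] = False
        return state, "entry_prompt"
    if not state.get("alarmed") and now - state["entered_at"] > alarm_after:
        # Dwell time exceeded (e.g. > 10 s): escalate once to an alarm.
        state["alarmed"] = True
        return state, "alarm"
    return state, None
```

Feeding the detector's per-frame result and a timestamp into this function yields at most one entry prompt and one escalated alarm per visit.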
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Fig. 7 is a schematic diagram of an inspection device for a robot according to an embodiment of the present application, which is described in detail below:
the inspection device of the robot comprises:
the to-be-detected image acquisition unit 701, used for acquiring images to be detected around the robot through the camera;
a region identifying unit 702, configured to identify a position of a pedestrian included in the image to be detected;
and the prompting unit 703 is used for generating prompting information when the identified pedestrian is located in the pre-calibrated safety area.
The inspection device of the robot corresponds to the inspection method of the robot described above; details are not repeated here.
Fig. 8 is a schematic view of a robot according to an embodiment of the present application. As shown in fig. 8, the robot 8 of this embodiment includes: a processor 80, a memory 81 and a computer program 82 stored in the memory 81 and executable on the processor 80, such as a robot inspection program. The processor 80, when executing the computer program 82, implements the steps of the inspection method embodiments of the respective robots described above. Alternatively, the processor 80, when executing the computer program 82, performs the functions of the modules/units of the apparatus embodiments described above.
By way of example, the computer program 82 may be partitioned into one or more modules/units that are stored in the memory 81 and executed by the processor 80 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing a specific function for describing the execution of the computer program 82 in the robot 8. For example, the computer program 82 may be partitioned into:
the to-be-detected image acquisition unit, used for acquiring images to be detected around the robot through the camera;
the area identifying unit is used for identifying the position of the pedestrian included in the image to be detected;
and the prompting unit is used for generating prompting information when the identified pedestrian is positioned in the pre-calibrated safety area.
The robot may include, but is not limited to, a processor 80, a memory 81. It will be appreciated by those skilled in the art that fig. 8 is merely an example of a robot 8 and is not meant to be limiting of the robot 8, and may include more or fewer components than shown, or may combine certain components, or different components, e.g., the robot may also include input and output devices, network access devices, buses, etc.
The processor 80 may be a central processing unit (Central Processing Unit, CPU), other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 81 may be an internal storage unit of the robot 8, such as a hard disk or a memory of the robot 8. The memory 81 may also be an external storage device of the robot 8, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, or a Flash Card provided on the robot 8. Further, the memory 81 may include both an internal storage unit and an external storage device of the robot 8. The memory 81 is used to store the computer program and other programs and data required by the robot, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of the functional units and modules is illustrated. In practical application, the above functions may be distributed among different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiment may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
In the foregoing embodiments, each embodiment is described with its own emphasis. For parts not detailed or illustrated in one embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program. The computer program may be stored in a computer readable storage medium and, when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer readable media do not include electrical carrier signals and telecommunication signals.
The above embodiments are only intended to illustrate, not to limit, the technical solution of the present application. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application and are intended to be included within the scope of the present application.
Claims (9)
1. An inspection method of a robot, characterized by comprising the following steps:
collecting images to be detected around the robot through a camera;
identifying the position of the pedestrian included in the image to be detected;
generating prompt information when the identified pedestrians are located in a pre-calibrated safety area;
the step of identifying the position of the pedestrian included in the image to be detected includes:
acquiring an acquired image, and inputting the acquired image into a trained pedestrian detection network model;
calculating, according to the trained pedestrian detection network model, the area where the pedestrian is located in the image to be detected; determining two vertices (X1, Y1) and (X2, Y2) of the area where the pedestrian is located, wherein (X1, Y1) represents the upper-left corner coordinates of the area and (X2, Y2) represents the lower-right corner coordinates of the area; and determining the position of the pedestrian according to the lower-right corner coordinates (X2, Y2) of the area where the pedestrian is located.
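The position rule of claim 1, which takes the lower-right corner (X2, Y2) of each detected box as the pedestrian's position, can be sketched as follows; the function name is illustrative, and the boxes stand in for the output of the trained model:

```python
def pedestrian_positions(boxes):
    """Given detection boxes [(x1, y1, x2, y2), ...] produced by the
    trained pedestrian detection network model, return each pedestrian's
    position as the box's lower-right corner (x2, y2)."""
    positions = []
    for x1, y1, x2, y2 in boxes:
        # The upper-left corner must precede the lower-right corner.
        assert x1 < x2 and y1 < y2, "malformed detection box"
        positions.append((x2, y2))
    return positions
```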
2. The inspection method of a robot of claim 1, wherein prior to the step of acquiring the acquired image and inputting the acquired image into the trained pedestrian detection network model, the method further comprises:
acquiring a sample image comprising pedestrians and a calibration area where the pedestrians in the sample image are located;
convolving the sample image through a first convolution kernel to obtain a first feature map;
convolving the first feature map with a second convolution kernel to obtain a second feature map, and convolving the second feature map with a third convolution kernel to obtain a third feature map;
pooling the first feature map to obtain a fourth feature map;
and fusing the third feature map and the fourth feature map, convolving the fused result with a fourth convolution kernel, and performing full connection to obtain an identified area of the pedestrian in the sample image; and optimizing parameters of the pedestrian detection network model according to the difference between the calibration area and the identified area until the difference meets a preset requirement.
3. The inspection method of the robot according to claim 2, wherein the first convolution kernel and the third convolution kernel have a size of 3×3, and the second convolution kernel and the fourth convolution kernel have a size of 1×1.
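With these kernel sizes, the spatial bookkeeping of claims 2 and 3 can be checked with the standard convolution size formula; the input size, padding, and strides below are assumptions (the claims do not specify them), chosen so that the third and fourth feature maps come out the same size and can be fused:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a conv/pool layer: floor((n + 2p - k) / s) + 1."""
    return (size + 2 * pad - kernel) // stride + 1


H = 64  # hypothetical sample-image size; the claims fix only the kernel sizes

f1 = conv_out(H, 3, pad=1)             # first map: 3x3 conv, "same" padding -> 64
f2 = conv_out(f1, 1)                   # second map: 1x1 conv preserves size -> 64
f3 = conv_out(f2, 3, stride=2, pad=1)  # third map: 3x3 conv, stride 2 assumed -> 32
f4 = conv_out(f1, 2, stride=2)         # fourth map: 2x2 pooling of the first -> 32

# Equal spatial size is what allows the third and fourth maps to be
# fused before the final 1x1 convolution and full connection.
assert f3 == f4
```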
4. The method of claim 1, wherein prior to the step of generating a prompt when the identified pedestrian is located in a pre-calibrated safe area, the method further comprises:
obtaining a calibration image comprising a safety line, wherein the distance between the safety line in the calibration image and the robot is a preset safety distance;
and calibrating a safety area corresponding to the image acquired by the camera according to the position of the safety line in the calibration image.
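A minimal sketch of this calibration, assuming a fixed camera and a horizontal safety line: the pixel row of the line in the calibration image marks off the part of every subsequent frame that lies within the safety distance. Function names are illustrative:

```python
def calibrate_safety_area(line_y, image_height, image_width):
    """Given the pixel row of the safety line in the calibration image,
    mark everything from that row down to the bottom of the frame
    (i.e. closer to the robot than the safety distance) as the safety
    area, returned as (x_min, y_min, x_max, y_max)."""
    return (0, line_y, image_width - 1, image_height - 1)


def in_safety_area(position, area):
    """Check whether a pedestrian position (x, y) falls inside the area."""
    x, y = position
    x_min, y_min, x_max, y_max = area
    return x_min <= x <= x_max and y_min <= y <= y_max
```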
5. The inspection method of a robot according to claim 1, wherein the step of generating the prompt message when the identified pedestrian is located in the pre-calibrated safe area comprises:
if the pedestrian is detected to enter the safety area, generating a pedestrian entering prompt;
and/or if the duration for which the pedestrian has been in the safety area is detected to be longer than a predetermined duration, generating a warning reminder.
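The two prompts of claim 5 can be sketched as a small state machine; the class name, message strings, and threshold are illustrative, and the injectable clock merely makes the dwell-time check testable:

```python
import time


class PromptUnit:
    """Entry prompt when a pedestrian first enters the safety area,
    escalating to a warning once the dwell time exceeds a threshold."""

    def __init__(self, warn_after_s=5.0, clock=time.monotonic):
        self.warn_after_s = warn_after_s
        self.clock = clock
        self.entered_at = None  # None while no pedestrian is inside

    def update(self, pedestrian_inside):
        if not pedestrian_inside:
            self.entered_at = None  # area cleared; reset the timer
            return None
        if self.entered_at is None:
            self.entered_at = self.clock()  # first detection inside
            return "pedestrian entered safety area"
        if self.clock() - self.entered_at > self.warn_after_s:
            return "warning: pedestrian lingering in safety area"
        return None
```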
6. The inspection method of a robot according to claim 1, wherein the step of capturing the image to be inspected around the robot by the camera includes:
collecting a plurality of groups of video streams through a camera group, wherein the camera group comprises cameras arranged at the front, rear, left and right parts of the robot;
and analyzing to obtain an image to be detected according to the collected multiple groups of video streams.
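A minimal sketch of claim 6, with each of the four camera streams (front, rear, left, right) mocked as an iterable of frames; in a real device these would be decoded video streams. Frames are interleaved into a single sequence of images to be detected:

```python
def frames_to_detect(streams):
    """Interleave frames from the camera group's streams into one
    sequence of images to be detected. Each stream is any iterable of
    frames; exhausted streams are skipped."""
    iters = [iter(s) for s in streams]
    while iters:
        alive = []
        for it in iters:
            frame = next(it, None)  # sketch: None marks an exhausted stream
            if frame is not None:
                yield frame
                alive.append(it)
        iters = alive
```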
7. An inspection device of a robot, characterized in that the inspection device comprises:
the to-be-detected image acquisition unit is used for acquiring, through the camera, images to be detected around the robot;
the area identification unit is used for identifying the position of a pedestrian included in the image to be detected;
the prompting unit is used for generating prompt information when the identified pedestrian is located in a pre-calibrated safety area;
the area identifying unit includes:
the input subunit is used for acquiring the acquired image and inputting the acquired image into the trained pedestrian detection network model;
the position determining subunit is configured to calculate, according to the trained pedestrian detection network model, the area where a pedestrian is located in the image to be detected; to determine two vertices (X1, Y1) and (X2, Y2) of the area where the pedestrian is located, wherein (X1, Y1) represents the upper-left corner coordinates of the area and (X2, Y2) represents the lower-right corner coordinates of the area; and to determine the position of the pedestrian according to the lower-right corner coordinates (X2, Y2).
8. A robot comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the inspection method of the robot according to any one of claims 1 to 6.
9. A computer-readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the inspection method of a robot according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010322489.XA CN111583336B (en) | 2020-04-22 | 2020-04-22 | Robot and inspection method and device thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111583336A CN111583336A (en) | 2020-08-25 |
CN111583336B true CN111583336B (en) | 2023-12-01 |
Family
ID=72112519
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010322489.XA Active CN111583336B (en) | 2020-04-22 | 2020-04-22 | Robot and inspection method and device thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111583336B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114821987B (en) * | 2021-01-18 | 2024-04-30 | 漳州立达信光电子科技有限公司 | Reminding method and device and terminal equipment |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107636679A (en) * | 2016-12-30 | 2018-01-26 | 深圳前海达闼云端智能科技有限公司 | A kind of obstacle detection method and device |
CN108780319A (en) * | 2018-06-08 | 2018-11-09 | 珊口(深圳)智能科技有限公司 | Software updating method, system, mobile robot and server |
CN109176513A (en) * | 2018-09-04 | 2019-01-11 | 北京华开领航科技有限责任公司 | A kind of method for inspecting and cruising inspection system of intelligent inspection robot |
CN109571468A (en) * | 2018-11-27 | 2019-04-05 | 深圳市优必选科技有限公司 | Security protection crusing robot and security protection method for inspecting |
CN109664301A (en) * | 2019-01-17 | 2019-04-23 | 中国石油大学(北京) | Method for inspecting, device, equipment and computer readable storage medium |
WO2019083291A1 (en) * | 2017-10-25 | 2019-05-02 | 엘지전자 주식회사 | Artificial intelligence moving robot which learns obstacles, and control method therefor |
CN110228413A (en) * | 2019-06-10 | 2019-09-13 | 吉林大学 | Oversize vehicle avoids pedestrian from being involved in the safety pre-warning system under vehicle when turning |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11009887B2 (en) * | 2018-07-26 | 2021-05-18 | Toyota Research Institute, Inc. | Systems and methods for remote visual inspection of a closed space |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103279949B (en) | Based on the multi-camera parameter automatic calibration system operation method of self-align robot | |
JPH07250319A (en) | Supervisory equipment around vehicle | |
CN109213138B (en) | Obstacle avoidance method, device and system | |
CN112651359A (en) | Obstacle detection method, obstacle detection device, electronic apparatus, and storage medium | |
CN112528771A (en) | Obstacle detection method, obstacle detection device, electronic device, and storage medium | |
CN116310679A (en) | Multi-sensor fusion target detection method, system, medium, equipment and terminal | |
CN114255405A (en) | Hidden danger target identification method and device | |
CN115240148A (en) | Vehicle behavior detection method and device, storage medium and electronic device | |
Kim et al. | System and method for detecting potholes based on video data | |
CN113255444A (en) | Training method of image recognition model, image recognition method and device | |
CN111583336B (en) | Robot and inspection method and device thereof | |
CN113658427A (en) | Road condition monitoring method, system and equipment based on vision and radar | |
CN115909092A (en) | Light-weight power transmission channel hidden danger distance measuring method and hidden danger early warning device | |
CN115171361A (en) | Dangerous behavior intelligent detection and early warning method based on computer vision | |
CN113408454A (en) | Traffic target detection method and device, electronic equipment and detection system | |
CN113945219A (en) | Dynamic map generation method, system, readable storage medium and terminal equipment | |
CN113777622A (en) | Method and device for identifying rail obstacle | |
CN113569812A (en) | Unknown obstacle identification method and device and electronic equipment | |
CN114724119B (en) | Lane line extraction method, lane line detection device, and storage medium | |
CN107194923B (en) | Ultraviolet image diagnosis method for defect inspection of contact network power equipment | |
CN113077455B (en) | Tree obstacle detection method and device for protecting overhead transmission line, electronic equipment and medium | |
CN117523914A (en) | Collision early warning method, device, equipment, readable storage medium and program product | |
CN112364693B (en) | Binocular vision-based obstacle recognition method, device, equipment and storage medium | |
CN117897737A (en) | Unmanned aerial vehicle monitoring method and device, unmanned aerial vehicle and monitoring equipment | |
CN114092857A (en) | Gateway-based collection card image acquisition method, system, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||