CN113099116A - Equipment information collection method and device, robot and computer equipment - Google Patents

Equipment information collection method and device, robot and computer equipment

Info

Publication number
CN113099116A
CN113099116A
Authority
CN
China
Prior art keywords
target
cabinet
equipment
robot
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110373967.4A
Other languages
Chinese (zh)
Inventor
许哲涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Shuke Haiyi Information Technology Co Ltd
Original Assignee
Jingdong Shuke Haiyi Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Shuke Haiyi Information Technology Co Ltd filed Critical Jingdong Shuke Haiyi Information Technology Co Ltd
Priority to CN202110373967.4A
Publication of CN113099116A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1439Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K7/1443Methods for optical code recognition including a method step for retrieval of the optical code locating of the code in an image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Manipulator (AREA)

Abstract

The application provides an equipment information collection method and device, a robot, and computer equipment. The method includes: obtaining the identification number of a cabinet in a target scene; moving to the corresponding cabinet according to the identification number; performing image acquisition on the machine positions loaded with target equipment to obtain a target image; and obtaining the equipment information of the target equipment according to the barcode in the target image. In this technical scheme, the robot moves to the cabinet and photographs the target equipment inside it to obtain a target image, extracts the barcode from that image, and uses the barcode to quickly determine the equipment information of the target equipment, improving the efficiency of equipment inventory.

Description

Equipment information collection method and device, robot and computer equipment
Technical Field
The present application relates to the field of asset management technologies, and in particular, to a method and an apparatus for collecting device information, a robot, and a computer device.
Background
A machine room of a data center is generally deployed with a number of cabinets of the same height, each divided into multiple accommodating positions according to the standard rack size. A piece of computer equipment typically occupies one or more of these positions depending on its size. Because each cabinet can hold many computers and a machine room usually contains many cabinets, the density of computers is high, and asset inventory of these computers is required in subsequent operation and maintenance management.
In the prior art, asset inventory of the computers in a machine room is mainly performed by attaching a label to each computer and having a worker scan the label with a handheld code-scanning device to acquire the computer's device information and complete the inventory.
However, because the number of computers in a machine room is very large, this manual method consumes a great deal of time and cost. Fatigue during the process also makes inventory errors likely, requiring rework, so the efficiency of asset inventory in the machine room is low.
Disclosure of Invention
The application provides a method and a device for collecting equipment information, a robot and computer equipment, which are used for solving the problem of low checking efficiency of a computer in the existing machine room.
In a first aspect, an embodiment of the present application provides a method for collecting device information, including:
acquiring an identification number of a cabinet in a target scene, and moving to the corresponding cabinet according to the identification number, wherein the cabinet comprises a plurality of machine positions loaded with target equipment;
acquiring an image of the machine position loaded with the target equipment to obtain a target image, wherein the target image comprises a serial number used for indicating the target equipment, an identification code of a cabinet and a bar code of the machine position where the target equipment is located;
and acquiring the equipment information of the target equipment according to the bar code in the target image.
In a second aspect, an embodiment of the present application provides an apparatus for collecting device information, including:
the identification module is used for acquiring an identification number of a cabinet in a target scene and moving to the corresponding cabinet according to the identification number, wherein the cabinet comprises a plurality of machine positions loaded with target equipment;
the acquisition module is used for acquiring an image of the machine position loaded with the target equipment to acquire a target image, wherein the target image comprises a serial number used for indicating the target equipment, an identification code of the cabinet and a bar code of the machine position where the target equipment is located;
and the acquisition module is used for acquiring the equipment information of the target equipment according to the bar code in the target image.
In a third aspect, an embodiment of the present application provides a robot, including: the device comprises a controller, a navigation device, a power device, a lifting device and a camera device, wherein the camera device is arranged on the lifting device, and the controller is connected with the navigation device, the power device, the lifting device and the camera device;
the navigation device is used for determining a scene map according to the coordinate position of the cabinet;
the controller is used for controlling the power device to drive the robot to move to the corresponding cabinet according to the scene map, and the controller is also used for controlling the lifting device and the camera device to ascend or descend and controlling the camera device to shoot when the robot moves to the corresponding cabinet.
In a fourth aspect, an embodiment of the present application provides a computer device, including a memory and at least one processor;
the memory stores computer-executable instructions;
the at least one processor executes computer-executable instructions stored by the memory, causing the at least one processor to perform the method as described above.
In a fifth aspect, the present application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer instructions stored in the medium implement the method described above.
In a sixth aspect, the present application provides a computer program product, which includes a computer program/instructions, and when executed by a processor, the computer program/instructions implement the method described above.
According to the equipment information collection method and device, the robot, and the computer equipment of the present application, the robot moves to the cabinet and photographs the target equipment inside it to obtain a target image, then extracts the barcode from that image. Using the barcode, the serial number of the target equipment, the identification number of the cabinet where it sits, and the machine position it occupies can be determined quickly; no one needs to scan each device by hand, the equipment information is obtained more accurately, and the efficiency of equipment inventory is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application;
fig. 1 is a schematic view of a scene of a method for collecting device information according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a first embodiment of a method for collecting device information according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a cabinet provided in an embodiment of the present application;
fig. 4 is a schematic flowchart of barcode generation provided in the embodiment of the present application;
fig. 5 is a schematic structural diagram of a robot provided in an embodiment of the present application;
fig. 6 is a schematic view illustrating a process of extracting and identifying a barcode according to an embodiment of the present application;
fig. 7 is a schematic flowchart of a second method for collecting device information according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an apparatus for collecting device information according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a first robot according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of a second embodiment of a robot provided in the embodiments of the present application;
fig. 11 is a schematic structural diagram of a third embodiment of a robot provided in the embodiments of the present application;
with the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms referred to in this application are explained first:
the equipment cabinet:
the cabinet is deployed in a data room and used for loading computers and other equipment, the cabinet has a plurality of U bits (i.e. machine positions), each of the U bits has a uniform standard size, the computers loaded in the cabinet generally occupy one or more U bits according to their own size, and the cabinet deployed in the data room is generally about 2.2 meters high and has 42U bits.
Fig. 1 is a scene schematic diagram of the device information collection method provided in an embodiment of the present application. As shown in Fig. 1, the application scene of this embodiment is a data room 10 in which a robot 12 can move freely. A plurality of cabinet arrays 11 are deployed in the data room 10, each formed by several cabinets placed side by side. According to its position, each cabinet can be assigned a coordinate: for example, the coordinates of cabinet 1 may be (X1, Y1), those of cabinet 2 may be (X2, Y1), those of cabinet 3 may be (X3, Y1), and those of cabinet n may be (Xn, Y1). Computers and other equipment can be loaded into the machine positions of each cabinet.
In practice, each cabinet holds many computers and a data room contains many cabinets, so the total number of computers is very large. To help staff manage and maintain them, each computer carries a label recording the cabinet it sits in, its U position, and its attribute information, which makes it convenient to inventory the computers in the data room.
In the prior art, when a computer in a data room is checked, the following two methods are mainly adopted:
1. Manual inventory: when a worker installs a computer into a cabinet, the worker records and backs up the computer's information, and the computers in the cabinet are inventoried once per time period to determine whether they are consistent with the backed-up information. This manual checking and comparison is time-consuming and labor-intensive.
2. An identification strip is mounted on the inside of the cabinet and an identification label is attached to each computer. The label and the strip are connected by a wired or wireless communication link, so the strip can read the labels and quickly collect the computers' tag information. In practice, however, the identification strip of each cabinet must be connected by network cable to a data gateway at the end of the row, which forwards the collected tag information to a client where the asset inventory is completed. Installing the strips requires modifying every cabinet and providing a separate power supply and network cable, which complicates the cabling in the machine room.
In order to solve the above problems, embodiments of the present application provide a method and an apparatus for collecting device information, a robot and a computer device, where the robot acquires an image of a machine location where a target device is loaded, extracts a barcode from the image, and identifies the barcode, so that a serial number of the target device, an identification number of a cabinet where the target device is located, and a number of the machine location where the target device is located can be obtained, thereby achieving a purpose of quickly and accurately collecting device information of the target device.
The technical solution of the present application will be described in detail below with reference to specific examples. It should be noted that the following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
Fig. 2 is a schematic flowchart of a first embodiment of a method for collecting device information according to an embodiment of the present application, where the method may be applied to a robot, and as shown in fig. 2, the method for collecting device information includes the following steps:
s201, obtaining an identification number of the cabinet in the target scene, and moving to the corresponding cabinet according to the identification number.
The cabinet comprises a plurality of machine positions loaded with target equipment. Illustratively, the target equipment may be computers, servers, or similar devices; one machine position in the cabinet may be loaded with one computer, or several machine positions may jointly hold one computer.
In this embodiment, the target scenario takes a data room as an example, a plurality of cabinets are deployed in the data room, each cabinet has a unique identification number, so that the cabinets can be distinguished from one another, and for example, the identification number may be a digital code.
The robot can freely move in the data machine room, when the robot moves to a certain cabinet, the robot can collect the identification number of the cabinet and identify the cabinet according to the identification number so as to determine the current position.
For example, a two-dimensional plane coordinate system may be established for the data room and the coordinates of each deployed cabinet determined; a corresponding identification number is then automatically allocated to each cabinet according to its coordinate position. While moving through the data room, the robot can check its current coordinates against this correspondence to determine which cabinet it has reached.
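The coordinate-to-identification-number scheme described above can be sketched as follows. This is a hypothetical illustration: the ID format, the row-major numbering order, and the distance tolerance are assumptions, not details from the patent text.

```python
# Hypothetical sketch: assign cabinet identification numbers from grid
# coordinates and resolve which cabinet the robot has reached.

def assign_cabinet_ids(coords):
    """Assign sequential identification numbers to cabinets,
    ordered row-major by their (x, y) coordinates (assumed scheme)."""
    ordered = sorted(coords, key=lambda c: (c[1], c[0]))  # by row, then column
    return {xy: f"CAB-{i + 1:03d}" for i, xy in enumerate(ordered)}

def locate_cabinet(cabinet_ids, robot_xy, tolerance=0.5):
    """Return the ID of the cabinet closest to the robot's current
    position, or None if no cabinet is within the tolerance (metres)."""
    best = min(cabinet_ids,
               key=lambda xy: (xy[0] - robot_xy[0]) ** 2 + (xy[1] - robot_xy[1]) ** 2)
    dist2 = (best[0] - robot_xy[0]) ** 2 + (best[1] - robot_xy[1]) ** 2
    return cabinet_ids[best] if dist2 <= tolerance ** 2 else None
```

A real deployment would derive the coordinates from the navigation device's scene map rather than a hand-built list.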
Optionally, the staff may also assign a corresponding identification number to each cabinet, and then input the identification number corresponding to each cabinet into the robot through a display interface of the robot.
S202, image acquisition is carried out on the machine position loaded with the target equipment, and a target image is obtained.
The target image comprises a serial number indicating the target device, the identification code of the cabinet, and a barcode of the machine position where the target device is located. For example, the serial number of the target device may be the computer's Serial Number (SN code), the machine position may be identified by the number of the position into which the device is loaded, and the barcode may be attached to the target device itself.
For example, fig. 3 is a schematic structural diagram of a cabinet provided in an embodiment of the present application, as shown in fig. 3, the cabinet has a plurality of machine positions, each machine position has a corresponding number from low to high, the size of each machine position is the same, and a computer may occupy one or more machine positions according to its own size.
In this embodiment, the robot can photograph the machine position in the cabinet through the camera device to obtain the target image, and when photographing the machine position, the target device loaded in the machine position can be photographed into the image.
Illustratively, when photographing the machine positions in the cabinet, the robot may photograph each position once, producing multiple images that each contain a single machine position as target images, or it may photograph several machine positions in the cabinet at the same time, producing images that each contain multiple machine positions as target images.
In this embodiment of the present application, the barcode is generated in advance and attached to the corresponding computer by a worker. Fig. 4 is a schematic flowchart of barcode generation provided in this embodiment; the barcode may be a two-dimensional code. As shown in Fig. 4, the two-dimensional code is generated from the cabinet number ID, the U position where the computer sits (i.e., the machine position into which it is installed), and the computer's SN code (i.e., its serial number).
The cabinet number ID can establish a mapping relation with the cabinet coordinates, different cabinets have different cabinet number IDs, and the coordinates are different.
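A minimal sketch of the label payload described in Fig. 4: the cabinet number ID, the U position, and the computer's SN code combined into one string to be encoded as a two-dimensional code. The delimiter and field order are illustrative assumptions; actually rendering the code would additionally need a QR library (e.g. the third-party `qrcode` package).

```python
# Hypothetical payload format: "<cabinet ID>|U<position>|<SN code>".

def build_label_payload(cabinet_id, u_position, sn_code):
    """Pack the three fields of Fig. 4 into a single delimited payload."""
    return f"{cabinet_id}|U{u_position:02d}|{sn_code}"

def parse_label_payload(payload):
    """Recover (cabinet_id, u_position, sn_code) from a scanned payload."""
    cabinet_id, u_field, sn_code = payload.split("|")
    return cabinet_id, int(u_field[1:]), sn_code
```

Round-tripping the payload through `parse_label_payload` recovers the three fields the robot needs for inventory.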
And S203, acquiring equipment information of the target equipment according to the bar code in the target image.
In this embodiment, the robot may identify the barcode in the target image to obtain the device information corresponding to the barcode attached to the target device. For example, the barcode may be a two-dimensional code or a barcode, different barcodes are attached to different computers, a mapping relationship exists between the barcode and the device information of the computer, the mapping relationship may be stored in a data storage device of the robot in advance, and after the robot extracts the barcode attached to the computer from the target image, the robot searches a mapping database to obtain the device information of the computer.
Illustratively, the device information includes the SN code of the target device, the identification code of the cabinet where the target device is located, and the number of the machine place where the target device is loaded.
According to the embodiment of the application, the robot moves through the target scene; when it reaches a cabinet, its camera device photographs the machine positions loaded with target equipment, and the barcode attached to each target device is extracted from the resulting target image. The device information of the target equipment is then obtained from the barcode, avoiding manual inventory and improving the efficiency of information collection.
For example, on the basis of the above embodiment, if the robot includes a lifting device and an image capturing device, and the image capturing device is disposed on the lifting device, the step S202 may be specifically implemented by:
controlling the lifting device to drive the camera device to ascend or descend;
and when the camera device ascends or descends, controlling the camera device to photograph each machine position with target equipment to obtain a target image.
Wherein the target device has a barcode affixed thereto.
In this embodiment, the robot can control the lifting device to rise and fall, driving the camera device mounted on it up or down. Illustratively, a cabinet contains multiple machine positions at different heights, each of which may hold target equipment. Because the positions are at different heights, the camera device's height is adjusted by raising or lowering it, so that it is level with each machine position in turn and can photograph the target equipment loaded there to obtain target images.
Illustratively, the target image at least comprises a bar code attached to the target device, and the bar code is used for indicating a serial number of the target device, an identification code of the cabinet and a machine position where the target device is located.
Fig. 5 is a schematic structural diagram of the robot provided in the embodiment of the present application, and as shown in fig. 5, a lifting device 51 and a camera device 52 arranged on the lifting device 51 are arranged on a robot 50, when the robot moves to a cabinet and needs to take a picture of a machine position where target equipment is loaded in the cabinet, the lifting device 51 may be controlled to be lifted, so that the camera device 52 may be at the same level as each machine position, thereby facilitating taking the picture.
For example, the height of the robot may be less than the height of the cabinet.
In this embodiment of the application, the height of the camera device is adjusted by controlling the lifting device, so that the camera can be level with each machine position in the cabinet and conveniently photograph the target equipment loaded there. This ensures that the captured target image contains both the target equipment and the barcode attached to it, avoids omissions, and improves the accuracy of information collection.
Further, in some embodiments, when the image capturing device ascends or descends, if the height value of each ascending or descending of the image capturing device reaches a preset value, the image capturing device is controlled to photograph a preset number of machine positions to obtain a target image.
For example, a cabinet may contain 42 machine positions from bottom to top, each of a fixed, identical height. The preset value may be the combined height of five machine positions (for example, if each position is 5 cm high, five positions total 25 cm). Each time the camera device rises or falls 25 cm, it photographs the target devices loaded in the corresponding five machine positions to obtain a set of images, so that the full ascent or descent yields multiple sets of images as the target images.
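The stepping logic in the example above can be sketched as a simple calculation of the camera stop heights. The numbers mirror the example in the text (42 positions, 5 cm each, groups of five); the function name and the zero-based starting height are assumptions.

```python
# Illustrative sketch: compute the heights (in cm) at which the camera
# should stop and photograph so every group of machine positions is covered.

def capture_heights(num_positions=42, position_height_cm=5, group_size=5):
    """Return the camera stop heights, one per group of `group_size`
    machine positions, stepping by group_size * position_height_cm."""
    step = position_height_cm * group_size       # 25 cm per stop in the example
    num_stops = -(-num_positions // group_size)  # ceiling division
    return [i * step for i in range(num_stops)]
```

With the defaults, the last (ninth) stop covers the partial group of two positions left over from 42 / 5.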
By adjusting the preset number, the embodiment of the application controls how many machine positions the camera device photographs each time it rises or falls by the preset height value. This in turn adjusts the number of shots the robot takes at each cabinet and improves the efficiency of information collection.
For example, on the basis of the above embodiments, in some embodiments, the step S203 may specifically be implemented by the following steps:
extracting a bar code of the target equipment from the target image;
decoding the bar code of the target equipment to obtain a decoding result;
searching a preset database according to the decoding result to acquire the equipment information of the target equipment,
the device information comprises a serial number of the target device, an identification code of a cabinet where the target device is located and a machine position where the target device is located.
In this embodiment, the robot may obtain one or more target images when photographing the target devices loaded in the machine positions. For example, each machine position may hold one target device, or several positions may jointly hold one device, and a target image may contain multiple machine positions, the target devices loaded in them, and the barcode attached to each device.
For example, the target image may include barcodes of a plurality of target devices, and the robot may perform image recognition on the target image, extract barcodes of the plurality of target devices from the target image, and scan the barcodes in the target image by using a scanner or the like to obtain a decoding result.
Optionally, the preset database may be pre-stored in the robot. It records the mapping between each barcode and the SN code of the target device, the serial number of the target device, and the device's location information, where the location information includes the identification code of the cabinet where the target device sits and the number of the machine position into which it is loaded.
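The preset-database lookup can be sketched with an in-memory mapping from the decoded barcode value to the device information. The dictionary stands in for whatever storage the robot actually uses; the keys, field names, and sample values are all illustrative assumptions.

```python
# Hypothetical preset database: decoded barcode value -> device information.
PRESET_DB = {
    "0b1010": {"sn": "SN12345", "cabinet": "CAB-001", "u_position": 7},
    "0b1011": {"sn": "SN67890", "cabinet": "CAB-001", "u_position": 12},
}

def lookup_device_info(decoding_result, db=PRESET_DB):
    """Return the device information for a decoded barcode,
    or None if the barcode is not registered in the database."""
    return db.get(decoding_result)
```

An unregistered barcode returning `None` would be the natural trigger for flagging an inventory discrepancy.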
According to the embodiment of the application, the bar code of the target equipment is directly extracted from the target image, the bar code is decoded and identified, the equipment information of the target equipment is obtained, the equipment information can be quickly obtained, manual intervention is not needed, and the collection efficiency of the equipment information is improved.
Further, in some embodiments, the step of "extracting the barcode of the target device from the target image" may be specifically implemented by the following steps:
carrying out binarization processing on the target image to obtain a binarized image;
performing expansion operation on the binarized image to obtain an expanded image;
performing edge detection on the expanded image, and determining the region boundary of the bar code in the target image;
and according to the region boundary, obtaining the bar code by segmentation from the target image.
In this embodiment, the binarization processing refers to setting the gray value of a pixel point on an image to be 0 or 255, so that the image exhibits a black-and-white effect. The dilation operation is to merge all background points in the image that are in contact with the object into the object, so that the boundary of the image is expanded to the outside, and the dilation operation can be used to fill up the hole in the object. Edge detection refers to identifying pixels in an image with obvious brightness change.
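The three preprocessing steps defined above can be sketched with plain NumPy so the operations are visible; a real pipeline would more likely use OpenCV's threshold, dilate, and edge-detection routines. The threshold value and the 3x3 structuring element are illustrative assumptions, and the region boundary is simplified here to the foreground bounding box.

```python
import numpy as np

def binarize(gray, threshold=128):
    """Set each pixel to 0 or 255 depending on the threshold,
    giving the image a black-and-white effect."""
    return np.where(gray >= threshold, 255, 0).astype(np.uint8)

def dilate(binary):
    """3x3 dilation: a pixel becomes foreground (255) if any pixel in its
    3x3 neighbourhood is foreground, expanding the boundary outward and
    filling small holes in the code pattern."""
    padded = np.pad(binary, 1, mode="constant")
    out = np.zeros_like(binary)
    h, w = binary.shape
    for dy in range(3):
        for dx in range(3):
            out = np.maximum(out, padded[dy:dy + h, dx:dx + w])
    return out

def bounding_box(binary):
    """Simplified region boundary of the foreground:
    (top, bottom, left, right) row/column indices."""
    ys, xs = np.nonzero(binary)
    return ys.min(), ys.max(), xs.min(), xs.max()
```

Segmenting the barcode then amounts to cropping the target image to the recovered boundary.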
By applying binarization, dilation, and edge detection to the target image, the embodiment of the application ensures that the barcode segmented from the image is clear and complete, so that the robot can recognize it accurately, obtain the device information of the target equipment to which it is attached, and improve the accuracy of information collection.
Optionally, in some embodiments, if the barcode is a two-dimensional code, the step "decode the barcode of the target device to obtain a decoding result" may specifically include the following steps:
grid sampling is carried out on the two-dimensional code to obtain a black and white grid;
and decoding the black and white grid into a binary sequence value as a decoding result by using a preset coding rule.
In this embodiment, the two-dimensional code includes black and white patterns, and the black and white grid obtained by grid sampling the two-dimensional code includes a black pattern and a white pattern, where the black pattern may represent a binary code "1" and the white pattern may represent a binary code "0" according to a preset coding rule, and a combination of a plurality of black patterns and white patterns is regarded as a segment of binary sequence value.
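The decoding rule described above — black module means the bit "1", white module means "0", and a run of modules is read off as a binary sequence value — can be sketched as follows. Real QR decoding also involves masking, format information, and error correction, all omitted here; the 'B'/'W' grid representation is an assumption for illustration.

```python
# Minimal sketch of the preset coding rule: black -> "1", white -> "0".

def grid_to_bits(grid):
    """grid: list of rows, each a string of 'B' (black) and 'W' (white)
    modules. Returns the concatenated binary sequence as a string."""
    return "".join("1" if cell == "B" else "0" for row in grid for cell in row)
```

The resulting bit string is the "binary sequence value" used as the decoding result, e.g. for the database lookup.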
According to the embodiment of the application, a mapping relationship is established between the two-dimensional code and the SN code of the target device, the identification code of the cabinet where the target device is located, and the number of the machine position where the target device is located, so that the robot can decode the two-dimensional code quickly, which raises the decoding speed and finally allows the device information of the target device to be obtained rapidly.
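The grid-sampling decode above amounts to reading each sampled module as one bit. A minimal sketch under that coding rule — black module → "1", white module → "0" — omitting the masking and error correction that a full two-dimensional-code decoder would also need:

```python
def grid_to_bits(grid):
    """grid: rows of 'B'/'W' modules sampled from the two-dimensional code.
    Returns the binary sequence value as a bit string."""
    return "".join("1" if cell == "B" else "0" for row in grid for cell in row)

# the bit string can also be interpreted as an integer key if desired
def bits_to_int(bits):
    return int(bits, 2)
```

The resulting bit string is the decoding result that keys into the preset database.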
Illustratively, on the basis of the foregoing embodiments, in some embodiments, the step "obtaining the barcode by dividing from the target image according to the region boundary" may be specifically implemented by the following steps:
and correcting the region boundary according to a preset correction algorithm to obtain a corrected region boundary, and segmenting the target image to obtain the bar code by using the corrected region boundary.
In this embodiment, the preset correction algorithm may be a region-growing algorithm, which merges pixels with similar properties into one region. The region boundary is corrected with the region-growing algorithm, after which the bar code is segmented out by a convex-hull computation.
According to the embodiment of the application, the region boundary is corrected by the preset correction algorithm, which avoids defects in the region boundary of the two-dimensional code and finally yields a complete and clear two-dimensional code, so that the robot can decode the two-dimensional code accurately to obtain the decoding result, improving the accuracy of device information collection.
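A toy version of the region-growing step mentioned above, assuming 4-connectivity and a gray-value tolerance (both are illustrative choices, not specified in this application): starting from a seed pixel, neighbours with similar gray values are absorbed, yielding the corrected region whose boundary is then used for segmentation.

```python
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region of pixels whose gray values are within `tol`
    of the seed pixel's value; seed is (row, col)."""
    h, w = len(img), len(img[0])
    sy, sx = seed
    base = img[sy][sx]
    seen = {(sy, sx)}
    q = deque([(sy, sx)])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connected
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen
                    and abs(img[ny][nx] - base) <= tol):
                seen.add((ny, nx))
                q.append((ny, nx))
    return seen
```

The convex hull of the grown region would then give the corrected, defect-free boundary of the code.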
On the basis of the foregoing embodiments, in some embodiments, the step S201 may be specifically implemented by the following steps:
acquiring an identification number of a cabinet in a target scene;
searching to obtain the coordinate position of the cabinet according to the identification number by using a preset corresponding relation;
establishing a scene map of a target scene according to the coordinate position of the cabinet;
and moving to the cabinet by using the scene map.
In this embodiment, the preset correspondence refers to a mapping between the identification number of a cabinet and the coordinate position of that cabinet, and may be stored in the robot's data storage device in advance. The coordinate position of a cabinet may be determined from where the cabinet is deployed in the target scene; for example, taking a machine room as the target scene, a two-dimensional plane coordinate system of the machine room is established, from which the coordinate position of each cabinet deployed in it can be determined.
For example, the robot may be provided with a radar scanning device, which scans the target scene to obtain radar point cloud data; the coordinate position of each cabinet is determined and the scene map of the target scene is established from the radar point cloud data.
For example, the robot may be provided with a navigation device, and move automatically to the corresponding cabinet using the navigation device and the scene map.
According to the embodiment of the application, each cabinet is identified by an identification number, so that the robot can distinguish the computers loaded at the machine positions of each cabinet; at the same time, the robot can determine the coordinate position of each cabinet from its identification number and establish the scene map, so that it can move to each cabinet autonomously according to the scene map and complete the collection of device information, improving working efficiency and reducing manpower input.
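The preset correspondence above can be sketched as a simple lookup table from cabinet identification number to coordinate position; the IDs and coordinates below are invented for illustration only.

```python
# preset correspondence: cabinet identification number -> coordinate position,
# stored in the robot's data storage device in advance (values are made up)
CABINET_POSITIONS = {
    "CAB-001": (0.0, 0.0),
    "CAB-002": (0.0, 1.2),
    "CAB-003": (1.5, 0.0),
}

def locate_cabinet(cabinet_id):
    """Look up a cabinet's coordinate position from its identification number."""
    return CABINET_POSITIONS[cabinet_id]

def scene_map():
    """The scene map here is simply the known cabinet coordinates."""
    return sorted(CABINET_POSITIONS.values())
```

The navigation device would then plan a route to `locate_cabinet(...)` within the map returned by `scene_map()`.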
Further, in some embodiments, if a plurality of cabinets are arranged in the target scene, the "moving to the cabinet using the scene map" may be specifically implemented by the following steps:
determining the arrangement sequence of each cabinet according to the coordinate position of each cabinet;
determining a moving track according to the scene map and the arrangement sequence of each cabinet;
and moving to the cabinet according to the moving track.
For example, the cabinets in the target scene may be deployed in an array of n rows and n columns, n × n cabinets in total, with (Xn, Yn) denoting the coordinate position of the cabinet in the nth row and nth column. The row and column of each cabinet can be determined from its coordinate position, which determines the arrangement order of the cabinets, and the moving track of the robot is finally determined from that order.
The moving track indicates the route along which the robot moves from the cabinet at its current position to the next cabinet.
According to the embodiment of the application, the moving track of the robot is determined so that the robot can move along it to each cabinet and capture images of the machine positions loaded with target devices in each cabinet to obtain image data, realizing automatic collection of device information, reducing manpower input and improving information collection efficiency.
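The ordering step above might be realized as a row-major sort of the cabinets by their grid coordinates. (A serpentine order that reverses direction on alternate rows would shorten travel; plain row-major is shown for simplicity, and the data is illustrative.)

```python
def visit_order(cabinets):
    """cabinets: list of (cabinet_id, (row, col)).
    Returns cabinet ids in row-major arrangement order,
    which defines the robot's moving track."""
    return [cid for cid, _pos in sorted(cabinets, key=lambda e: e[1])]
```

The moving track is then the sequence of coordinate positions visited in this order.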
On the basis of the foregoing embodiment, in some embodiments, after the foregoing step S203, the method for collecting device information further includes the following steps:
comparing the equipment information of the target equipment with a preset database to obtain a comparison result;
and uploading the comparison result to the terminal equipment.
In this embodiment, the preset database stores history information, i.e., the device information of the device historically loaded at each machine position. If the device information of the target device currently loaded at a machine position differs from the history information, the historical device at that position has been replaced by the current target device; the robot may then mark the target device and upload a "device changed" comparison result to the terminal device. If the device information of the target device matches the history information, the historical device at that position has not been replaced, and the robot may upload a "device unchanged" comparison result to the terminal device.
Illustratively, the terminal device may be an upper computer, which establishes a wired or wireless connection with the robot to obtain the comparison result.
According to the embodiment of the application, the device information of the target device is compared with the history information, so that whether the computer loaded at a machine position has changed within a period of time can be determined quickly and the change information collected, improving the effect of information collection.
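The comparison step above reduces to looking up the device previously recorded at a machine position and reporting whether it changed. The field names and slot/SN formats below are illustrative assumptions, not from this application.

```python
def compare_with_history(slot_id, scanned_sn, history):
    """history: preset database mapping machine-position id -> recorded SN.
    Returns the comparison result to be uploaded to the terminal device."""
    expected = history.get(slot_id)
    if expected is None:
        return "new"          # no historical device recorded at this position
    return "unchanged" if expected == scanned_sn else "changed"
```

The robot would upload the returned result string (together with the slot id) to the upper computer.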
Fig. 6 is a schematic view of a process of extracting and identifying a bar code provided in an embodiment of the present application, taking a two-dimensional code as the example of the bar code. As shown in Fig. 6, the process specifically includes the following steps:
S601, taking a picture.
Specifically, the photo may be captured by the robot's camera device, and it contains an image of the target device, an image of the two-dimensional code attached to the target device, an image of the machine position, and the like.
And S602, positioning the two-dimensional code.
Specifically, the captured photo is first binarized to obtain an image with a black-and-white effect, the image is then dilated, and finally the region boundary of the two-dimensional code in the photo is determined by edge detection.
And S603, dividing the two-dimensional code.
Specifically, after the region boundary of the two-dimensional code in the photo is determined, the two-dimensional code can be extracted directly from the photo.
And S604, grid sampling decoding.
Specifically, after the two-dimensional code is extracted, black and white patterns in the two-dimensional code are determined by using grid sampling, and a binary sequence value is obtained by decoding according to a preset coding rule.
S605, searching a preset database to obtain the equipment information of the target equipment.
Specifically, the preset database stores device information of the target device corresponding to the two-dimensional code, and the device information includes, for example, an SN code of the target device, an identification number ID of a cabinet where the target device is located, and a number of a machine position where the target device is located.
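The five steps of Fig. 6 compose into one pipeline. The sketch below uses trivial stand-ins for the image-processing stages (a real implementation would plug in the binarize/dilate/edge-detect and grid-sampling routines described earlier, and all names and data here are invented); only the data flow — photo → located region → segmented code → bit string → device record — is the point.

```python
def locate_code(photo):
    # S602: would be binarization + dilation + edge detection on the photo
    return photo["region"]

def segment(photo, region):
    # S603: would crop the two-dimensional code out of the photo by its boundary
    return photo["grid"]

def sample_and_decode(grid):
    # S604: grid sampling under the coding rule black -> "1", white -> "0"
    return "".join("1" if c == "B" else "0" for row in grid for c in row)

def collect_device_info(photo, database):
    region = locate_code(photo)           # S602: locate the two-dimensional code
    code_img = segment(photo, region)     # S603: segment the code
    bits = sample_and_decode(code_img)    # S604: decode to a binary sequence
    return database.get(bits)             # S605: look up the preset database
```

Here `database` maps decoded binary sequence values to device-information records (SN code, cabinet identification number, machine-position number).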
Fig. 7 is a schematic flowchart of a second embodiment of a method for collecting device information according to an embodiment of the present application. As shown in Fig. 7, the method may be applied to a robot and specifically includes the following steps:
S701, establishing a mapping relationship between the coordinate points of the inspection map and the cabinet codes.
Specifically, the robot may scan a target scene (e.g., a machine room), establish an inspection map, and determine the coordinates of each cabinet in the inspection map together with the identification code of each cabinet, so as to establish the mapping relationship between coordinate points and codes.
S702, acquiring the bar code of each device in the cabinet and storing the bar code into the data storage device.
Specifically, each device stored in the cabinet carries a bar code, and a mapping relationship exists between the bar code and the identification code of the cabinet where the device is located, the number of the machine position where it is located, and the production serial number of the device. A mapping database is stored in the data storage device, so that the robot can later use the bar code to query the cabinet identification code, the machine-position number, and the production serial number of the device.
And S703, moving to a coordinate point, and acquiring a target image of the machine position loaded with the target equipment in the cabinet.
Specifically, the robot can move to the corresponding coordinate point (i.e., to the corresponding cabinet) according to the inspection map and the inspection order, and then photograph the machine positions in the cabinet that are loaded with computers to obtain a target image, which contains at least the bar codes attached to those computers.
S704, recognizing and decoding the bar code in the target image, and comparing the bar code with a mapping database stored in a data storage device to obtain a comparison result.
Specifically, the computers loaded at the machine positions may change; for example, during maintenance a worker may swap the computers in two machine positions and fail to restore them in time, or may replace the computer loaded at a machine position. By identifying and decoding the bar codes on the loaded computers and comparing them with the information in the mapping database, a comparison result is obtained that determines whether the computer loaded at each machine position has changed.
In summary, in the data-room scenario of this embodiment, the robot can check the computers loaded in the racks of the cabinets deployed in the room: by taking photos it obtains images of the bar codes attached to the computers and determines each computer's device information from its bar code. Information collection can thus be completed quickly, collection efficiency is improved, and manpower input is reduced.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Fig. 8 is a schematic structural diagram of an apparatus for collecting device information according to an embodiment of the present disclosure, where the apparatus for collecting device information may be integrated in a robot, and as shown in fig. 8, the apparatus for collecting device information 80 includes an identification module 81, a collection module 82, and an acquisition module 83.
The identification module 81 is configured to obtain an identification number of a cabinet in a target scene, and move to the corresponding cabinet according to the identification number. The acquisition module 82 is used for acquiring images of the machine positions loaded with the target equipment and acquiring target images. The obtaining module 83 is configured to obtain device information of the target device according to the barcode in the target image.
The equipment cabinet comprises a plurality of machine positions loaded with target equipment, and the target image comprises a serial number used for indicating the target equipment, an identification code of the equipment cabinet and a bar code of the machine position where the target equipment is located.
For example, in some embodiments, if the robot includes a lifting device and a camera device, and the camera device is disposed on the lifting device, the acquisition module 82 may be specifically configured to:
controlling the lifting device to drive the camera device to ascend or descend;
when the camera device ascends or descends, the camera device is controlled to photograph the target equipment loaded in each machine position to obtain a target image, and the target equipment is attached with a bar code.
Optionally, in some embodiments, the acquisition module 82 may be specifically configured to:
when the camera device ascends or descends, if the height value of the camera device every time the camera device ascends or descends reaches a preset value, the camera device is controlled to photograph target equipment loaded on a preset number of machine positions in the cabinet to obtain a target image.
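The height-triggered photographing described above might be sketched as follows: while the lift moves, the camera fires each time the accumulated height change reaches the preset step, so that each shot covers the next group of machine positions. The start, stop, and step values are invented for illustration.

```python
def photo_heights(start, stop, step):
    """Heights (e.g., in metres) at which the camera is triggered while
    the lifting device moves from `start` to `stop` in increments of `step`."""
    n = int(round((stop - start) / step)) + 1
    # rounding keeps the floating-point accumulation tidy for display
    return [round(start + i * step, 3) for i in range(n)]
```

Each returned height corresponds to one target image covering a preset number of machine positions in the cabinet.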
In some embodiments, the obtaining module 83 may be specifically configured to:
extracting a bar code of the target equipment from the target image;
decoding the bar code of the target equipment to obtain a decoding result;
and searching a preset database according to the decoding result, and acquiring the equipment information of the target equipment, wherein the equipment information comprises the serial number of the target equipment, the identification code of the cabinet where the target equipment is located and the machine position where the target equipment is located.
In some embodiments, the obtaining module 83 may be specifically configured to:
carrying out binarization processing on the target image to obtain a binarized image;
performing expansion operation on the binarized image to obtain an expanded image;
performing edge detection on the expanded image, and determining the region boundary of the bar code in the target image;
and according to the region boundary, obtaining the bar code by segmentation from the target image.
In some embodiments, if the barcode is a two-dimensional code, the obtaining module 83 may be specifically configured to:
grid sampling is carried out on the two-dimensional code to obtain a black and white grid;
and decoding the black and white grid into a binary sequence value as a decoding result by using a preset coding rule.
In some embodiments, the obtaining module 83 may be specifically configured to:
and correcting the region boundary according to a preset correction algorithm to obtain a corrected region boundary, and segmenting the target image to obtain the bar code by using the corrected region boundary.
In some embodiments, the identification module 81 may be specifically configured to:
acquiring an identification number of a cabinet in a target scene;
searching to obtain the coordinate position of the cabinet according to the identification number by using a preset corresponding relation;
establishing a scene map of a target scene according to the coordinate position of the cabinet;
and moving to the cabinet by using the scene map.
In some embodiments, the identification module 81 may be specifically configured to:
determining the arrangement sequence of each cabinet according to the coordinate position of each cabinet;
determining a moving track according to the scene map and the arrangement sequence of each cabinet;
and moving to the cabinet according to the moving track.
In some embodiments, the device information collecting apparatus 80 may further include an uploading module, configured to compare the device information of the target device with a preset database, and obtain a comparison result;
and uploading the comparison result to the terminal equipment.
It should be noted that the division of the above apparatus into modules is only a logical division; in an actual implementation the modules may be wholly or partially integrated into one physical entity, or physically separated. These modules may be implemented entirely as software invoked by a processing element, entirely as hardware, or partly as software invoked by a processing element and partly as hardware. For example, the obtaining module may be a separately established processing element, may be integrated into a chip of the apparatus, or may be stored in a memory of the apparatus as program code that a processing element of the apparatus calls to execute the module's functions. The other modules are implemented similarly. In addition, all or some of the modules may be integrated together or implemented independently. The processing element here may be an integrated circuit with signal-processing capability. In implementation, each step of the above method, or each module above, may be completed by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), or one or more field-programmable gate arrays (FPGAs). For another example, when one of the above modules is implemented as program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor that can call program code. For yet another example, these modules may be integrated together and implemented as a system-on-a-chip (SoC).
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions according to the embodiments of the present application are all or partially generated when the computer program instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
Fig. 9 is a schematic structural diagram of a first embodiment of a robot according to an embodiment of the present application, and as shown in Fig. 9, the robot includes a camera device 91, a lifting device 92, a controller 93, a navigation device 94, and a power device 95.
The camera device 91 is arranged on the lifting device 92, and the controller 93 is connected with the navigation device 94, the power device 95, the lifting device 92 and the camera device 91;
the navigation device 94 is used for determining a scene map according to the coordinate position of the cabinet;
the controller 93 is used for controlling the power device 95 to drive the robot to move to the corresponding cabinet according to the scene map, and the controller 93 is also used for controlling the lifting device 92 and the camera device 91 to ascend or descend and controlling the camera device 91 to take pictures when the robot moves to the corresponding cabinet.
Fig. 10 is a schematic structural diagram of a second embodiment of the robot according to the embodiment of the present application, and as shown in fig. 10, the robot includes an image pickup device 101, a lifting device 102, a controller 103, a navigation device 104, and a power device 105.
The lifting device 102 comprises a lifting mechanism 1021, a motor 1022 and a driving device 1023, wherein the driving device 1023 is connected with the motor 1022, and the motor 1022 is connected with the lifting mechanism 1021;
the driving device 1023 is used for driving the motor 1022 to raise or lower the lifting mechanism 1021 and the image pickup apparatus 101 provided in the lifting mechanism 1021.
For example, on the basis of the above embodiments, fig. 11 is a schematic structural diagram of a third embodiment of the robot provided in the embodiments of the present application, and the robot further includes a data storage device 116, where the data storage device 116 is connected to the controller 113.
The data storage device can store a preset database, and the preset database is used for storing the mapping relation between the bar code attached to the computer and the SN code of the computer, the identification number of the cabinet where the computer is located and the number of the machine position where the computer is located.
For example, in some embodiments, the robot further comprises an information uploading device, wherein the information uploading device is connected with the controller; the information uploading device is used for data interaction between the robot and the terminal equipment.
In this embodiment, the robot may transmit the comparison result to a terminal device through an information uploading device, and the terminal device may be an upper computer or a server.
For example, in some embodiments, the camera may be an industrial camera whose focal length may be a fixed value that is set in advance.
In this embodiment, fixing the focal length in advance ensures that the camera captures a clearer target image while photographing, improving the accuracy of information collection.
Illustratively, in some embodiments, the robot further comprises a display device, and the display device is connected with the controller.
The display device can be a liquid crystal display screen or a touch display screen, and a user can interact information with the robot through the display device.
Optionally, an embodiment of the present application further provides a computer-readable storage medium, on which computer instructions are stored, and when the computer instructions are executed by a processor, the computer-readable storage medium is used for implementing the above method.
Optionally, an embodiment of the present application further provides a computer program product, which includes a computer program/instruction, and when executed by a processor, the computer program/instruction implements the method described above.
In the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship; in the formula, the character "/" indicates that the preceding and following related objects are in a relationship of "division". "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
It is to be understood that the various numerical references referred to in the embodiments of the present application are merely for convenience of description and distinction and are not intended to limit the scope of the embodiments of the present application. In the embodiment of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiment of the present application.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (19)

1. A method for collecting device information, which is applied to a robot, includes:
acquiring an identification number of a cabinet in a target scene, and moving to a corresponding cabinet according to the identification number, wherein the cabinet comprises a plurality of machine positions loaded with target equipment;
acquiring an image of the machine position loaded with the target equipment to obtain a target image, wherein the target image comprises a serial number used for indicating the target equipment, an identification code of a cabinet and a bar code of the machine position where the target equipment is located;
and acquiring the equipment information of the target equipment according to the bar code in the target image.
2. The method according to claim 1, wherein the robot comprises a lifting device and a camera device, the camera device is arranged on the lifting device, and the step of acquiring the image of the machine position loaded with the target equipment to obtain the target image comprises the following steps:
controlling the lifting device to drive the camera device to ascend or descend;
and when the camera device ascends or descends, controlling the camera device to photograph each machine position loaded with target equipment to obtain the target image, wherein the bar code is attached to the target equipment.
3. The method according to claim 2, wherein the controlling the camera to take a picture of the target equipment loaded in each station when the camera is raised or lowered to obtain the target image comprises:
when the camera device ascends or descends, if the height value of the camera device every time the camera device ascends or descends reaches a preset value, the camera device is controlled to photograph a preset number of machine positions, and the target image is obtained.
4. The method of claim 1, wherein the obtaining device information of the target device according to the barcode in the target image comprises:
extracting a bar code of the target equipment from the target image;
decoding the bar code of the target equipment to obtain a decoding result;
and searching a preset database according to the decoding result, and acquiring the equipment information of the target equipment, wherein the equipment information comprises the serial number of the target equipment, the identification code of the cabinet where the target equipment is located and the machine position where the target equipment is located.
5. The method of claim 4, wherein extracting the barcode of the target device from the target image comprises:
carrying out binarization processing on the target image to obtain a binarized image;
performing expansion operation on the binarized image to obtain an expanded image;
performing edge detection on the expanded image, and determining the region boundary of the bar code in a target image;
and segmenting the bar code from the target image according to the region boundary.
6. The method of claim 4, wherein the barcode is a two-dimensional code, and decoding the barcode of the target device to obtain a decoding result comprises:
carrying out grid sampling on the two-dimensional code to obtain a black and white grid;
and decoding the black and white grid into a binary sequence value by using a preset encoding rule as the decoding result.
7. The method of claim 5, wherein the segmenting the barcode from the target image according to the region boundary comprises:
correcting the region boundary according to a preset correction algorithm to obtain a corrected region boundary,
and segmenting the bar code from the target image by using the corrected region boundary.
8. The method according to claim 1, wherein the obtaining the identification number of the cabinet in the target scene and moving to the corresponding cabinet according to the identification number comprises:
acquiring an identification number of a cabinet in a target scene;
searching to obtain the coordinate position of the cabinet according to the identification number by using a preset corresponding relation;
establishing a scene map of the target scene according to the coordinate position of the cabinet;
and moving to the cabinet by using the scene map.
9. The method of claim 8, wherein a plurality of cabinets are disposed in the target scene, and the moving to the cabinets using the scene map comprises:
determining the arrangement sequence of each cabinet according to the coordinate position of each cabinet;
determining a moving track according to the scene map and the arrangement sequence of each cabinet;
and moving to the cabinet according to the moving track.
10. The method according to any one of claims 1-9, wherein after acquiring the device information of the target device according to the barcode in the target image, the method further comprises:
comparing the equipment information of the target equipment with a preset database to obtain a comparison result;
and uploading the comparison result to the terminal equipment.
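The comparison step of claim 10 might look like the following sketch, where the record shape and the four result buckets are assumptions for illustration:

```python
def compare_with_database(collected, database):
    """Claim 10 sketch: compare collected device information against a preset
    database and report matches, mismatches, and devices missing on either
    side. Both arguments map a device serial number to its info record."""
    result = {"matched": [], "mismatched": [], "missing_in_db": [], "missing_on_site": []}
    for serial, info in collected.items():
        if serial not in database:
            result["missing_in_db"].append(serial)       # seen on site, not registered
        elif database[serial] == info:
            result["matched"].append(serial)             # record agrees
        else:
            result["mismatched"].append(serial)          # record disagrees (e.g. moved slot)
    result["missing_on_site"] = [s for s in database if s not in collected]
    return result
```

The comparison result is then what gets uploaded to the terminal equipment.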
11. An apparatus for collecting device information, comprising:
the identification module is used for acquiring an identification number of a cabinet in a target scene, and moving to the corresponding cabinet according to the identification number, wherein the cabinet comprises a plurality of machine positions loaded with target equipment;
the acquisition module is used for acquiring an image of the machine position loaded with the target equipment to acquire a target image, wherein the target image comprises a serial number used for indicating the target equipment, an identification code of the cabinet and a bar code of the machine position where the target equipment is located;
and the obtaining module is used for obtaining the equipment information of the target equipment according to the bar code in the target image.
12. A robot, comprising: a controller, a navigation device, a power device, a lifting device and a camera device, wherein the camera device is arranged on the lifting device, and the controller is connected with the navigation device, the power device, the lifting device and the camera device;
the navigation device is used for determining a scene map according to the coordinate position of the cabinet;
the controller is used for controlling the power device to drive the robot to move to the corresponding cabinet according to the scene map, and the controller is also used for controlling the lifting device and the camera device to ascend or descend and controlling the camera device to shoot when the robot moves to the corresponding cabinet.
13. The robot of claim 12, wherein the lifting device comprises: a lifting mechanism, a motor and a driving device, wherein the driving device is connected with the motor, and the motor is connected with the lifting mechanism;
the driving device is used for driving the motor, so that the lifting mechanism and the camera device arranged on the lifting mechanism ascend or descend.
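The cooperation of the lifting mechanism and the camera device in claims 12-13 implies stepping the camera to the height of each machine position in the cabinet. A minimal sketch, with all slot-geometry values assumed for illustration:

```python
def lift_heights(slot_count, slot_height_m, base_height_m):
    """Enumerate camera heights (metres) so the camera device mounted on the
    lifting mechanism stops level with each machine position in the cabinet.
    slot_height_m and base_height_m are illustrative geometry, not from the
    patent (0.0445 m approximates one rack unit)."""
    return [base_height_m + i * slot_height_m for i in range(slot_count)]
```

The controller would command the driving device to each height in turn and trigger the camera device at every stop.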
14. The robot of claim 12, further comprising: a data storage device connected with the controller.
15. The robot of claim 12, further comprising: the information uploading device is connected with the controller;
the information uploading device is used for data interaction between the robot and the terminal equipment.
16. The robot according to claim 12, wherein the image pickup device is an industrial camera, and a focal length of the industrial camera is a fixed value set in advance.
17. The robot of claim 12, further comprising: a display device connected with the controller.
18. A computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the method of any one of claims 1-10.
19. A computer program product comprising computer programs/instructions, characterized in that the computer programs/instructions, when executed by a processor, implement the method of any of claims 1-10.
CN202110373967.4A 2021-04-07 2021-04-07 Equipment information collection method and device, robot and computer equipment Pending CN113099116A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110373967.4A CN113099116A (en) 2021-04-07 2021-04-07 Equipment information collection method and device, robot and computer equipment


Publications (1)

Publication Number Publication Date
CN113099116A true CN113099116A (en) 2021-07-09

Family

ID=76674764

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110373967.4A Pending CN113099116A (en) 2021-04-07 2021-04-07 Equipment information collection method and device, robot and computer equipment

Country Status (1)

Country Link
CN (1) CN113099116A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114025084A (en) * 2021-10-28 2022-02-08 杭州海康威视系统技术有限公司 Method for acquiring image of equipment component and electronic equipment
CN114025084B (en) * 2021-10-28 2024-02-09 杭州海康威视系统技术有限公司 Method for acquiring images of equipment parts and electronic equipment
CN114445446A (en) * 2021-12-21 2022-05-06 福建新大陆软件工程有限公司 Equipment information statistical method based on computer vision

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103984346A (en) * 2014-05-21 2014-08-13 上海第二工业大学 System and method for intelligent warehousing checking
JP2016222371A (en) * 2015-05-27 2016-12-28 ワム・システム・デザイン株式会社 Information processor, information processing method, program, and forklift
CN106379681A (en) * 2016-07-11 2017-02-08 黄金刚 Intelligent warehousing robot, system and system control method
CN207939602U (en) * 2018-01-05 2018-10-02 广东数相智能科技有限公司 A kind of filming apparatus
CN109299670A (en) * 2018-09-04 2019-02-01 浙江梧斯源通信科技股份有限公司 Calculator room equipment management method based on image recognition



Similar Documents

Publication Publication Date Title
CN108416403B (en) Method, system, equipment and storage medium for automatically associating commodity with label
CN110336691B (en) Management method, device, equipment and communication system of equipment in data center
US9934444B2 (en) Image processing apparatus, image processing method and computer-readable storage medium
CN113099116A (en) Equipment information collection method and device, robot and computer equipment
CN110675425B (en) Video frame identification method, device, equipment and medium
KR102375325B1 (en) Method for detection and recognition of distant high-density visual markers
CN112329495B (en) Bar code identification method, device and system
CN116758006B (en) Scaffold quality detection method and device
CN113486739A (en) Screw detection method and device, electronic equipment and storage medium
CN117095275A (en) Asset inventory method, system, device and storage medium for data center
CN114445498A (en) Depth camera calibration method, system, device and medium
CN113763466A (en) Loop detection method and device, electronic equipment and storage medium
CN114065336B (en) Revit-based high formwork region inspection method, device, medium and equipment
CN114004891A (en) Distribution network line inspection method based on target tracking and related device
JP4796535B2 (en) Multi-conductor electric wire tracking method, apparatus and program by image processing, and multi-conductor electric wire abnormality detection method, apparatus and program using the same
CN114299307A (en) Power transmission line image annotation method and related device
CN111768384A (en) Cell counting method and system based on three-dimensional scanning imaging
CN110556172A (en) Slide filing method, device terminal, slide filing system, and readable storage medium
CN113298271B (en) Digital inspection method and system for optical fiber network resources
CN113988240B (en) Decoding method of reader-writer, reader-writer and storage medium
CN117218162B (en) Panoramic tracking vision control system based on ai
CN113989200A (en) Sample interference recognition device, method, electronic device and computer storage medium
WO2023231625A1 (en) Data processing method and apparatus, and device and system
CN118247356A (en) Camera external parameter calibration method, device, computer equipment and storage medium
CN116246260A (en) Component recognition method and recognition device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 601, 6/F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Technology Information Technology Co.,Ltd.

Address before: 601, 6/F, building 2, No. 18, Kechuang 11th Street, Beijing Economic and Technological Development Zone, Beijing 100176

Applicant before: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20210709