CN117268474A - Device and method for estimating the volume, number and weight of objects in a scene

Info

Publication number
CN117268474A
CN117268474A
Authority
CN
China
Prior art keywords
scene
sensor
objects
component
volume
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311548617.2A
Other languages
Chinese (zh)
Inventor
李伯东
嵇绪
邹之浩
黄翔煊
唐志玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Huidao Information Technology Co ltd
Jiangxi Zhonghui Cloud Chain Supply Chain Management Co ltd
Original Assignee
Shanghai Huidao Information Technology Co ltd
Jiangxi Zhonghui Cloud Chain Supply Chain Management Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Huidao Information Technology Co ltd and Jiangxi Zhonghui Cloud Chain Supply Chain Management Co ltd
Priority to CN202311548617.2A
Publication of CN117268474A
Current legal status: Pending

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01D: MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D21/00: Measuring or testing not otherwise provided for
    • G01D21/02: Measuring two or more variables by means not covered by a single other subclass
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/803: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of input or preprocessed data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07: Target detection

Abstract

The invention discloses a device and a method for estimating the volume, number and weight of objects in a scene, and relates to the technical field of volume and weight estimation. The device comprises a main control component, a driving component, a driven component, a carrier platform, a sensor component and an image acquisition component. The main control component, the sensor component and the image acquisition component are all mounted on the carrier platform; the carrier platform is mounted on the driven component; the driven component is mechanically connected with the driving component. The main control component is connected with the driving component, the sensor component and the image acquisition component respectively, and is used for: controlling the start and stop of the driving component, the sensor component and the image acquisition component; receiving the distance between objects in the scene and the sensor, together with the number, positions and categories of the objects; calculating the volume of the objects in the scene; and determining the corresponding object density based on the object categories, then calculating the weight of the objects in combination with their volume. The invention improves the efficiency and accuracy of object-counting work in a scene.

Description

Device and method for estimating the volume, number and weight of objects in a scene
Technical Field
The invention relates to the technical field of volume and weight estimation, and in particular to a device and a method for estimating the volume, number and weight of objects in a scene.
Background
Currently, in some scenes, conventional methods for estimating the volume, number and weight of objects are inconvenient, time-consuming and inefficient.
Disclosure of Invention
The invention aims to provide a device and a method for estimating the volume, number and weight of objects in a scene, which improve the efficiency and accuracy of object-counting work in the scene.
In order to achieve the above object, the present invention provides the following solutions:
in a first aspect, the present invention provides a device for estimating the volume, number and weight of objects in a scene, comprising a main control component, a driving component, a driven component, a carrier platform, a sensor component and an image acquisition component;
the main control component, the sensor component and the image acquisition component are all mounted on the carrier platform; the carrier platform is mounted on the driven component; the driven component is mechanically connected with the driving component;
the driving component is used for driving the carrier platform to move through the driven component; the sensor component is used for moving with the carrier platform and collecting, in real time, the distance between objects in the scene and the sensor; the image acquisition component is used for moving with the carrier platform and collecting images of objects in the scene in real time, and then determining the number, positions and categories of the objects in the scene from the images based on a preset image recognition algorithm;
the main control component is connected with the driving component, the sensor component and the image acquisition component respectively; the main control component is used for: controlling the start and stop of the driving component, the sensor component and the image acquisition component; receiving the distance between objects in the scene and the sensor, and the number, positions and categories of the objects in the scene; calculating the volume of the objects in the scene based on that distance and those positions; and determining the corresponding object density based on the object categories in the scene, then calculating the weight of the objects in the scene in combination with their volume.
In a second aspect, the present invention provides a method for estimating the volume, number and weight of objects in a scene, applied to the above device, the method comprising:
starting the driving component, the sensor component and the image acquisition component;
collecting the distance between objects in the scene and the sensor;
collecting images of objects in the scene, and then determining the number, positions and categories of the objects in the scene based on a preset image recognition algorithm;
calculating the volume of the objects in the scene based on the distance between the objects and the sensor and the positions of the objects in the scene;
and determining the corresponding object density based on the object categories in the scene, then calculating the weight of the objects in the scene in combination with their volume.
According to the specific embodiments provided herein, the invention discloses the following technical effects:
the invention discloses a device and a method for estimating the volume, the number and the weight of objects in a scene. Compared with the prior art, the method has the advantages that manual operation participation is not needed, estimation efficiency of the volume, the number and the weight of objects in the scene is greatly improved, and further efficiency and accuracy of work such as object counting and safe production in the scene are improved.
Drawings
To describe the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings required in the embodiments are briefly introduced below. The drawings described below are evidently only some embodiments of the present invention; those of ordinary skill in the art may derive other drawings from them without inventive effort.
FIG. 1 is a first schematic diagram of the device for estimating the volume, number and weight of objects in a scene according to the present invention;
FIG. 2 is a second schematic diagram of the device for estimating the volume, number and weight of objects in a scene according to the present invention;
FIG. 3 is a first schematic installation view of the device of the present invention;
FIG. 4 is a second schematic installation view of the device of the present invention;
FIG. 5 is a third schematic installation view of the device of the present invention;
FIG. 6 is a first electrical connection block diagram of the device of the present invention;
FIG. 7 is a second electrical connection block diagram of the device of the present invention;
FIG. 8 is a third electrical connection block diagram of the device of the present invention;
FIG. 9 is a fourth electrical connection block diagram of the device of the present invention;
FIG. 10 is a schematic diagram of the method for estimating the volume, number and weight of objects in a scene according to the present invention;
FIG. 11 is a logic flow diagram of an example of the method for estimating the volume, number and weight of objects in a scene according to the present invention;
FIG. 12 is a first schematic diagram of the operation of the device of the present invention;
FIG. 13 is a second schematic diagram of the operation of the device of the present invention;
FIG. 14 is a third schematic diagram of the operation of the device of the present invention;
FIG. 15 is a fourth schematic diagram of the operation of the device of the present invention.
Symbol description:
the device comprises a 1-motor, a 2-pulley, a 3-track, a 4-sliding table, a 5-screw rod, a 6-casing, a 7-sensor assembly, an 8-image acquisition assembly, a 9-first sensor and a 10-second sensor.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. The embodiments described are evidently only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the present invention.
The invention provides a device and a method for estimating the volume, number and weight of objects in a scene. Through an integrated control scheme, the device measures multiple objects and physical quantities in the scene, including but not limited to information on goods, personnel, vehicles and the environment, so as to quickly and accurately determine the number, volume, weight and other information of objects in the scene.
To make the above objects, features and advantages of the present invention more readily apparent, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
Example 1
As shown in FIG. 1 or FIG. 2, the present invention provides a device for estimating the volume, number and weight of objects in a scene, comprising a main control component, a driving component, a driven component, a carrier platform, a sensor assembly 7 and an image acquisition assembly 8.
The main control component, the sensor assembly 7 and the image acquisition assembly 8 are all mounted on the carrier platform; the carrier platform is mounted on the driven component, which is mechanically connected to the driving component. The driving component drives the carrier platform through the driven component; the sensor assembly 7 moves with the carrier platform and collects, in real time, the distance between objects in the scene and the sensor; the image acquisition assembly 8 moves with the carrier platform, collects images of objects in the scene in real time, and then determines the number, positions and categories of the objects in the scene from the images based on a preset image recognition algorithm.
Specifically, as shown in FIG. 1, the driving component is a motor 1; the driven component comprises a pulley 2, a linkage sub-component and a track 3; the carrier platform is mounted on the pulley 2; the motor 1 drives the pulley 2, through the linkage sub-component, to move linearly or along a curve on the track 3; the linkage sub-component is a belt or a chain arranged inside the track 3. The track 3 may be a single straight track, as shown in FIG. 3, suited to a relatively small indoor scene. In a large indoor scene, the track length can be increased according to the mounting conditions on the ceiling, several devices can be installed, or the track can be curved and mounted on the ceiling: FIG. 4 shows several straight tracks arranged in parallel, and FIG. 5 shows a curved track formed of straight and arc-shaped segments connected end to end.
In another example, as shown in FIG. 2, the driving component is a motor 1, and the driven component comprises a screw rod 5 and a sliding table 4; the carrier platform is mounted on the sliding table 4; the motor 1 drives the sliding table 4, through the screw rod 5, to move linearly.
The motor 1 in FIG. 1 and the motor 1 in FIG. 2 may each be brushless or brushed, and a reduction gear set may be added to increase the motor torque so as to drive the belt, chain or screw rod 5, thereby driving the pulley 2 fixed to the belt or chain, or the sliding table 4 fixed to the screw rod 5.
The sensor assembly 7 comprises an ultrasonic sensor, which is used for: transmitting ultrasonic waves toward objects in the scene; receiving the waves reflected back by the objects and determining the ultrasonic time of flight; and calculating, in real time, the distance between the objects in the scene and the sensor from that time of flight.
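As a minimal Python sketch of this time-of-flight calculation (assuming sound travels at about 343 m/s in air at room temperature; the sensor interface and calibration are not specified by this disclosure):

SPEED_OF_SOUND_M_S = 343.0  # assumed: speed of sound in air at about 20 degrees C

def ultrasonic_distance_m(time_of_flight_s: float) -> float:
    # The echo travels to the object and back, so the one-way distance
    # is half of the round-trip path.
    return SPEED_OF_SOUND_M_S * time_of_flight_s / 2.0

# Example: a 5.8 ms round trip corresponds to roughly 1 m.
print(ultrasonic_distance_m(0.0058))  # ~0.99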
In another example, the sensor assembly 7 comprises a lidar, which is used for: emitting a laser beam onto objects in the scene; receiving the beam reflected back by the objects and determining its time of flight; and calculating, in real time, the distance between the objects in the scene and the sensor from the time of flight based on a laser triangulation ranging method. Specifically, in laser triangulation ranging, a set of time measurements is obtained by measuring the interval between emission and reception of the laser beam, and the distance is then calculated from the angular relationship between the lidar and the target object using the sine rule or cosine rule of the triangle.
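For illustration only, a Python sketch of the sine-rule triangulation the description invokes; the baseline and base angles are hypothetical parameters, and production lidar units perform this computation in firmware:

import math

def triangulation_range_m(baseline_m: float, alpha_rad: float, beta_rad: float) -> float:
    # Law of sines: with base angles alpha and beta at the two ends of the
    # emitter-detector baseline b, the perpendicular distance from the
    # baseline to the laser spot is b * sin(alpha) * sin(beta) / sin(alpha + beta).
    return baseline_m * math.sin(alpha_rad) * math.sin(beta_rad) / math.sin(alpha_rad + beta_rad)

# Example: a 10 cm baseline with base angles of 60 and 80 degrees.
print(triangulation_range_m(0.10, math.radians(60), math.radians(80)))  # ~0.13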
Whether an ultrasonic sensor or a lidar is used, each distance measurement completes in an extremely short time, enabling high-speed scanning of the surrounding environment and of objects in the scene together with object ranging. The outline of a scanned object can then be drawn from these data, and its dimensions measured to calculate its volume.
The image acquisition assembly 8 comprises a camera and an image processing sub-component; the preset image recognition algorithm comprises a preset target detection algorithm and a preset object category recognition algorithm. The camera moves with the carrier platform and collects images of objects in the scene in real time. The image processing sub-component is arranged on the camera and is used for:
1) Determining the positions of objects in the scene from their images based on the preset target detection algorithm, which is RCNN or YOLO. Specifically, the images are processed by the RCNN or YOLO algorithm to find the regions of the image in which objects exist and output their position information. In practical applications, the images may also be preprocessed first, for example by cropping.
2) Determining the number and categories of objects in the scene from their images based on the preset object category recognition algorithm, which is a convolutional neural network or a support vector machine. Before practical use, the object regions extracted above can be labeled to distinguish measured objects from non-measured objects in the scene, and then used to train the convolutional neural network or support vector machine, yielding a model that can be used directly thereafter. An illustrative sketch of the detection step is given after this list.
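For illustration only, a minimal Python sketch of the detection step, using a pretrained Faster R-CNN from torchvision as a stand-in for the "RCNN or YOLO" detector named above; the model choice, weights and score threshold are assumptions of this sketch, not specifics of the disclosure:

import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Pretrained Faster R-CNN standing in for the preset target detection algorithm.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_objects(image, score_threshold=0.5):
    # Returns bounding boxes (positions), class labels (categories) and the
    # object count for one camera frame.
    with torch.no_grad():
        prediction = model([to_tensor(image)])[0]
    keep = prediction["scores"] > score_threshold
    return prediction["boxes"][keep], prediction["labels"][keep], int(keep.sum())

The returned boxes give the object positions, the labels the categories, and the count the number of objects, matching the three outputs the image acquisition assembly 8 is required to produce.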
The main control component is connected with the driving component, the sensor assembly 7 and the image acquisition assembly 8 respectively, and is used for: controlling their start and stop; receiving the distance between objects in the scene and the sensor, together with the number, positions and categories of the objects; calculating the volume of the objects in the scene from that distance and those positions; and determining the corresponding object density from the object categories, then calculating the weight of the objects in combination with their volume.
It should be noted that, once the object category is identified, the density of that category of object can be obtained from a preset database or from the network, and the weight is then calculated in combination with the volume information. Throughout the process, the camera also serves to optimize the algorithm: the target-object data identified from the images are sent to the main control so that the sensor scans only the target objects, shortening each scan and increasing the scanning frequency.
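The disclosure states these two calculations without formulas; the following Python sketch makes one plausible reading explicit, assuming a downward-looking sensor on a rail of known height, a fixed footprint per range sample, and an illustrative density table standing in for the preset database (all values are assumptions of this sketch):

import numpy as np

MOUNT_HEIGHT_M = 4.0   # assumed height of the sensor rail above the floor
CELL_AREA_M2 = 0.01    # assumed footprint of one range sample (10 cm grid)
DENSITY_KG_M3 = {"carton": 150.0, "rice_sack": 800.0}  # illustrative densities

def estimate_volume_m3(range_grid_m):
    # Object height at each cell is the mount height minus the measured range;
    # the volume integrates height over the scanned footprint.
    heights = np.clip(MOUNT_HEIGHT_M - np.asarray(range_grid_m), 0.0, None)
    return float(heights.sum() * CELL_AREA_M2)

def estimate_weight_kg(volume_m3, category):
    # Weight follows from the category-specific density times the volume.
    return DENSITY_KG_M3[category] * volume_m3

ranges = np.full((10, 10), 3.0)            # a 1 m tall pile over a 1 m^2 footprint
v = estimate_volume_m3(ranges)             # ~1.0 m^3
print(v, estimate_weight_kg(v, "carton"))  # ~1.0, ~150.0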
Specifically, the main control component is a Linux system board, an X86 platform system board, an ARM platform system board, a 51 single-chip microcomputer, an esp8266, an esp32 or an STM32 single-chip microcomputer. In practical applications, the main control component is also connected to a network terminal to receive control instructions, through which it controls the start and stop of the driving component, the sensor assembly 7 and the image acquisition assembly 8. It uploads the collected data (the distance between objects in the scene and the sensor, and the number, positions, categories and other attributes of the objects) to the cloud for storage, along with the calculated volume, mass, number and other data of the objects. To protect the main control component, it is generally placed inside the casing 6, and it may be positioned near the motor 1 as needed.
In another specific example, the sensor assembly 7 further comprises a temperature sensor, which acquires analog surface-temperature-distribution data of objects in the scene in a non-contact manner and then derives digital temperature data from it. Specifically, the temperature sensor is a device such as a thermal infrared imager, which acquires the temperature distribution of an object's surface without contact, via infrared radiation, and converts it into a digital signal for subsequent processing.
The main control component is also connected with the temperature sensor and is further used for associating the digital temperature data with a point cloud image or with images of objects in the scene; the point cloud is determined from the laser beam reflected back by objects in the scene. The point cloud output by the lidar sensor captures the environment of the scene approximately and the heights of objects in the scene (including human bodies) accurately.
That is, the image acquired by the camera is associated with the temperature data acquired by the thermal infrared imager. Alternatively, a dedicated infrared temperature sensor can be mounted on the camera to measure the temperature of the target area; it directly acquires the surface temperature of objects and associates it with the image. The temperature data need not be associated only with the camera images; they may also be associated with the point cloud image generated from the lidar data. Adding a temperature sensor to the device thus provides scene-wide temperature acquisition, so that, for example, an abnormal temperature or a fire somewhere in the scene, or information fed back by personnel in the scene through the device, can be handled promptly.
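As a minimal sketch of such an association, assuming the thermal sensor and the camera share (or have been calibrated to) the same field of view; a real system would use a calibrated mapping rather than the plain resize shown here:

import cv2
import numpy as np

def associate_temperature(image_bgr, thermal_grid_c):
    # Upsample the low-resolution thermal grid to the camera image size so
    # each image pixel carries an approximate surface temperature in degrees C.
    h, w = image_bgr.shape[:2]
    return cv2.resize(thermal_grid_c.astype(np.float32), (w, h),
                      interpolation=cv2.INTER_LINEAR)

frame = np.zeros((480, 640, 3), np.uint8)      # stand-in camera frame
thermal = np.random.uniform(18, 25, (32, 32))  # stand-in IR readings
temp_per_pixel = associate_temperature(frame, thermal)
print(temp_per_pixel.shape)  # (480, 640)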
In another embodiment, the device further comprises a first sensor 9 and a second sensor 10. As shown in FIG. 1, the first sensor 9 and the second sensor 10 are arranged at the two ends of the track 3 and cooperatively determine a first position of the pulley 2 on the track 3. Alternatively, as shown in FIG. 2, they are arranged at the two ends of the screw rod 5 and cooperatively determine a second position of the sliding table 4 on the screw rod 5.
In both FIG. 1 and FIG. 2, the first sensor 9 and the second sensor 10 may be micro-switch sensors or distance sensors, and both are connected to the main control component, which is further configured to stop the motor 1 or reverse it based on the first or second position. With micro-switch sensors, the motor 1 stops or reverses as soon as the pulley 2 or sliding table 4 touches a switch, indicating that it has reached the end or initial position of the track 3 or screw rod 5 (by naming the first sensor 9 and second sensor 10 and noting which one was triggered, it is known whether the pulley 2 or sliding table 4 is at the initial or the end position). With distance sensors, a sensor at the motor 1 end measures the distance to the pulley 2 or sliding table 4 in real time, for instance by ultrasound or laser beam, so that its position is known and the motor 1 stops or reverses automatically when the set position is reached. Two distance sensors determine this distance just as well, but a single one suffices and saves resources.
In another embodiment, the device further comprises a Hall sensor mounted on the motor 1, which is used for: recording the number of turns run by the motor 1; and determining, from that turn count, the position of the pulley 2 on the track 3 or the position of the sliding table 4 on the screw rod 5. Before the device is used, the number of motor-shaft turns needed for the sliding table 4 or pulley 2 to complete one full pass should be recorded in advance; during operation, the position of the sliding table 4 or pulley 2 can then be determined from the real-time turn count.
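A small Python sketch of this turn-counting approach; the screw lead and the pre-recorded full-pass turn count are assumed calibration values, not figures from the disclosure:

SCREW_LEAD_M = 0.01        # assumed linear travel per motor revolution
TURNS_FULL_PASS = 250.0    # recorded in advance over one full pass, as above

def carriage_position_m(turn_count):
    # Position of the sliding table 4 along the screw rod 5 (for the pulley 2,
    # substitute the drive-pulley circumference for the screw lead).
    return turn_count * SCREW_LEAD_M

def pass_fraction(turn_count):
    # Fraction of one full pass completed, using the pre-recorded total.
    return turn_count / TURNS_FULL_PASS

print(carriage_position_m(125), pass_fraction(125))  # 1.25 m, 0.5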
The device needs power while working. FIG. 6 shows a wired power-and-network supply, connected as follows: the mains supply and the Internet are both connected to the router, with the mains powering it; the router is connected to the POE switch and supplies it with the network, while the mains also powers the POE switch; the POE switch is connected to the main control component and supplies it with power and network; the main control is connected to the camera, the sensor assembly 7, the first sensor 9, the second sensor 10, the Hall sensor, the temperature sensor and so on, powering and controlling each; the main control is also connected to the motor 1 to control it, while the mains powers the motor 1. The first, second and third sensors in FIG. 6 are only schematic and stand for some of the several sensors, such as the sensor assembly 7, the first sensor 9, the second sensor 10, the Hall sensor and the temperature sensor.
FIG. 7 shows a wired power supply with a wireless network, differing from FIG. 6 in that the mains powers a power supply module, which powers the main control, and the router and the main control communicate over a wireless network.
FIG. 8 shows a brush-contact power-and-network supply, differing from FIG. 6 in that the POE switch is connected to one brush, that brush contacts a second brush, and the second brush supplies the main control with power and network.
FIG. 9 shows brush-contact power with a wireless network, differing from FIG. 8 in that the mains powers a power supply module, which powers one brush, and the router and the main control communicate over a wireless network.
In summary, this intelligent hardware device scans objects in a scene for volume information and estimates their number and weight. The motor drives the pulley along the track, linearly or along a curve, via a belt or chain, or rotates the screw rod so that the sliding table moves linearly. The main control component, camera, sensor assembly and so on are carried on the pulley or sliding table and scan the objects in the scene as it moves. With the camera and an artificial intelligence algorithm for image recognition, information on goods, personnel, vehicles and the environment is collected; the number and volume of objects in the scene are determined; the object categories are identified from the images; the density of each category is obtained from a preset database or from the network; and the weight is estimated in combination with the volume information. The device provides accurate and rapid object volume and estimated weight information and has broad application prospects and economic benefits.
Example two
As shown in FIG. 10, to realize the technical solution of the first embodiment and achieve its functions and technical effects, this embodiment further provides a method for estimating the volume, number and weight of objects in a scene, applied to the device of the first embodiment. The method comprises:
step 100, turning on the active component, the sensor assembly and the image acquisition assembly.
Step 200, the distance between an object and a sensor within a scene is acquired.
Step 300, acquiring images of objects in a scene, and then determining the number, the positions and the types of the objects in the scene according to the images based on a preset image recognition algorithm.
Step 400, calculating the volume of the object in the scene based on the distance between the object in the scene and the sensor and the position of the object in the scene.
Step 500, determining the corresponding object density based on the object category in the scene, and then calculating the weight of the object in the scene by combining the volume of the object in the scene.
In a specific practical application, as shown in FIG. 11, after the device is powered on, the main control component starts automatically and the motor performs a self-check, making one back-and-forth pass and returning to the initial position to stand by. When the main control component receives a start instruction from the network terminal, or a local timed scanning task fires, it sends start signals to all sensors and to the motor. As shown in FIGS. 12-15, the motor and the sensors begin working: the motor drives the pulley/sliding table from the initial position (one side of the track/screw rod), carrying the sensor assembly and the image acquisition component, and the objects in the scene are scanned during the motion. On reaching the end point (the other side of the track/screw rod), the motor and the sensors stop, and the main control uploads the data scanned by each sensor to the cloud. After a predetermined time (typically a few seconds), the motor reverses, returns to the initial position, and stops to await the next instruction.
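The cycle just described can be summarized as a simple control loop in Python; every interface below (motor, sensors, camera, uploader) is a hypothetical stand-in, since the disclosure defines hardware rather than a software API:

import time

def run_scan_cycle(motor, sensors, camera, uploader, dwell_s=3.0):
    # One pass: scan while moving, upload, dwell, then return to standby.
    motor.start_forward()
    distances, frames = [], []
    while not motor.at_end():             # limit switch or distance sensor
        distances.append(sensors.read_distance())
        frames.append(camera.capture())
    motor.stop()
    uploader.upload(distances, frames)    # raw scan data to the cloud
    time.sleep(dwell_s)                   # predetermined pause before reversing
    motor.start_reverse()
    while not motor.at_start():
        time.sleep(0.01)
    motor.stop()                          # back at the initial position, standby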
Compared with the prior art, the invention has the following advantages:
1) Using intelligent hardware and an artificial intelligence algorithm, the invention rapidly and accurately scans the volume and estimates the weight of objects in a scene, greatly improving work efficiency.
2) Through the combined motion of the motor and the pulley or sliding table, the device operates automatically, reducing manual intervention and operating costs.
3) The invention is multifunctional and can be used in many fields, such as goods counting and production safety, with broad application prospects.
4) By collecting information on goods, personnel, vehicles and the environment and analyzing the data with an artificial intelligence algorithm, the invention provides an important basis for decision-making and management.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and for identical or similar parts, reference may be made between the embodiments. Since the system disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is relatively brief, and the relevant points can be found in the description of the method.
The principles and embodiments of the present invention are described herein with specific examples, which are intended only to help in understanding the method of the present invention and its core ideas. Modifications made by those of ordinary skill in the art in light of these teachings remain within the scope of the invention. In view of the foregoing, this description should not be construed as limiting the invention.

Claims (9)

1. A device for estimating the volume, number and weight of objects in a scene, characterized by comprising a main control component, a driving component, a driven component, a carrier platform, a sensor component and an image acquisition component;
the main control component, the sensor component and the image acquisition component are all arranged on the carrier platform; the carrier platform is arranged on the driven component; the driven component is mechanically connected with the driving component;
the driving component is used for driving the carrier platform to move through the driven component; the sensor component is used for moving with the carrier platform and collecting, in real time, the distance between objects in the scene and the sensor; the image acquisition component is used for moving with the carrier platform and collecting images of objects in the scene in real time, and then determining the number, positions and categories of the objects in the scene from the images based on a preset image recognition algorithm;
the driving component is a motor; the driven part comprises a pulley, a linkage sub-part and a track; the carrier platform is arranged on the pulley; the motor is used for driving the pulley to move linearly or in a curve on the track through the linkage sub-component; the linkage sub-component is a belt or a chain; the sensor assembly includes a lidar; the laser radar is used for: emitting a laser beam onto an object within the scene; receiving a laser beam reflected back by an object in a scene, and determining the flight time of the laser beam; calculating the distance between an object in a scene and a sensor in real time by adopting the laser beam flight time based on a laser triangulation ranging method;
the main control component is connected with the driving component, the sensor component and the image acquisition component respectively; the main control component is used for: controlling the start and stop of the driving component, the sensor component and the image acquisition component; receiving the distance between objects in the scene and the sensor, and the number, positions and categories of the objects in the scene; calculating the volume of the objects in the scene based on the distance between the objects and the sensor and the positions of the objects in the scene; and determining the corresponding object density based on the object categories in the scene, then calculating the weight of the objects in the scene in combination with their volume.
2. The apparatus for estimating a volume, a number and a weight of objects within a scene according to claim 1, wherein the driving member is a motor, and the driven member comprises a screw and a sliding table; the carrier platform is arranged on the sliding table; the motor is used for driving the sliding table to conduct linear motion through the screw rod.
3. The apparatus for estimating the volume, the number and the weight of objects in a scene according to claim 1, wherein the main control unit is a Linux system board, an X86 platform system board, an ARM platform system board, a 51 single-chip microcomputer, an esp8266, an esp32 or an STM32 single-chip microcomputer.
4. The apparatus for estimating the volume, number and weight of objects within a scene according to claim 1 wherein said sensor assembly comprises an ultrasonic sensor;
the ultrasonic sensor is used for: transmitting ultrasonic waves to objects in the scene; receiving ultrasonic waves reflected back by objects in a scene, and determining the flight time of the ultrasonic waves; and calculating the distance between the object and the sensor in the scene in real time based on the ultrasonic flight time.
5. The apparatus for estimating a volume, number and weight of an object within a scene as recited in claim 4, wherein said sensor assembly further comprises a temperature sensor;
the temperature sensor is used for acquiring surface temperature distribution simulation data of objects in a scene in a non-contact mode, and then determining digital temperature data based on the surface temperature distribution simulation data;
the main control unit is also connected with the temperature sensor, and is also used for: correlating the digital temperature data with a point cloud image or an image of an object within the scene; the point cloud is determined from the laser beam reflected back by objects within the scene.
6. The apparatus for estimating the volume, number and weight of objects within a scene as recited in claim 1, wherein said image acquisition assembly comprises a camera and an image processing sub-assembly; the preset image recognition algorithm comprises a preset target detection algorithm and a preset object category recognition algorithm;
the camera is used for moving along with the carrier platform and collecting images of objects in a scene in real time;
the image processing sub-component is arranged on the camera and is used for:
determining the position of an object in the scene according to the image of the object in the scene based on the preset target detection algorithm; the preset target detection algorithm is RCNN or YOLO;
determining the number and the categories of the objects in the scene according to the images of the objects in the scene based on the preset object category recognition algorithm; the preset object category recognition algorithm is a convolutional neural network or a support vector machine.
7. The apparatus for estimating the volume, number and weight of objects within a scene according to claim 2, further comprising a first sensor and a second sensor;
the first sensor and the second sensor are respectively arranged at two ends of the track; the first sensor and the second sensor are used for cooperatively determining a first position of the pulley on the track;
or the first sensor and the second sensor are respectively arranged at two ends of the screw rod; the first sensor and the second sensor are used for cooperatively determining a second position of the sliding table on the screw rod;
the main control unit is respectively connected with the first sensor and the second sensor, and is also used for controlling the motor to stop running or perform reversing operation based on the first position or the second position.
8. The apparatus for estimating the volume, number and weight of objects within a scene according to claim 2, further comprising a hall sensor;
the Hall sensor is arranged on the motor and is used for:
recording the running turns of the motor;
determining the position of the pulley on the track according to the running turns; or determining the position of the sliding table on the screw rod according to the running turns.
9. A method for estimating the volume, number and weight of objects in a scene, applied to the device for estimating the volume, number and weight of objects in a scene according to any one of claims 1 to 8, characterized in that the method comprises:
starting the driving component, the sensor component and the image acquisition component;
collecting the distance between an object and a sensor in a scene;
acquiring images of objects in a scene, and then determining the number, the positions and the categories of the objects in the scene based on a preset image recognition algorithm;
calculating the volume of the object in the scene based on the distance between the object and the sensor in the scene and the position of the object in the scene;
and determining the corresponding object density based on the object category in the scene, and then calculating the weight of the object in the scene by combining the volume of the object in the scene.
CN202311548617.2A 2023-11-21 2023-11-21 Device and method for estimating volume, number and weight of objects in scene Pending CN117268474A (en)

Priority Applications (1)

Application number: CN202311548617.2A · Priority date: 2023-11-21 · Filing date: 2023-11-21 · Title: Device and method for estimating volume, number and weight of objects in scene (CN117268474A, en)

Applications Claiming Priority (1)

Application number: CN202311548617.2A · Priority date: 2023-11-21 · Filing date: 2023-11-21 · Title: Device and method for estimating volume, number and weight of objects in scene (CN117268474A, en)

Publications (1)

Publication Number Publication Date
CN117268474A (en), published 2023-12-22

Family

ID=89208395

Family Applications (1)

Application number: CN202311548617.2A · Status: Pending (CN117268474A, en) · Priority date: 2023-11-21 · Filing date: 2023-11-21 · Title: Device and method for estimating volume, number and weight of objects in scene

Country Status (1)

Country Link
CN (1) CN117268474A (en)

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101614606A (en) * 2009-07-30 2009-12-30 中国科学院力学研究所 A kind of measurement mechanism and method that detects the space plasma thruster thrust vectoring
CN103347111A (en) * 2013-07-27 2013-10-09 青岛歌尔声学科技有限公司 Intelligent mobile electronic equipment with size and weight estimation function
CN103913116A (en) * 2014-03-10 2014-07-09 上海大学 Large-scale piled material volume two-side parallel measuring device and method
EP2863176A2 (en) * 2013-10-21 2015-04-22 Sick Ag Sensor with scanning unit that can be moved around a rotating axis
CN104977072A (en) * 2015-06-03 2015-10-14 上海飞翼农业科技有限公司 Fruit weight remote measurement device and method
CN105674908A (en) * 2015-12-29 2016-06-15 中国科学院遥感与数字地球研究所 Measuring device, and volume measuring and monitoring system
CN109931869A (en) * 2019-03-21 2019-06-25 北京理工大学 Volume of material high-precision detecting method based on laser scanning imaging
CN211234299U (en) * 2019-10-12 2020-08-11 中国科学院东北地理与农业生态研究所 Portable remote plant size measuring instrument
CN111553914A (en) * 2020-05-08 2020-08-18 深圳前海微众银行股份有限公司 Vision-based goods detection method and device, terminal and readable storage medium
CN112101389A (en) * 2020-11-17 2020-12-18 支付宝(杭州)信息技术有限公司 Method and device for measuring warehoused goods
CN112970026A (en) * 2018-11-20 2021-06-15 华为技术有限公司 Method for estimating object parameters and electronic equipment
CN113297408A (en) * 2021-06-09 2021-08-24 上海电机学院 Image matching and scene recognition system and method based on Sift algorithm
CN114511611A (en) * 2022-01-25 2022-05-17 普洛斯科技(重庆)有限公司 Image recognition-based goods heap statistical method and device
CN114612786A (en) * 2022-03-18 2022-06-10 杭州萤石软件有限公司 Obstacle detection method, mobile robot and machine-readable storage medium
CN114966733A (en) * 2022-04-21 2022-08-30 北京福通互联科技集团有限公司 Beef cattle three-dimensional depth image acquisition system based on laser array and monocular camera
CN115205654A (en) * 2022-07-06 2022-10-18 舵敏智能科技(苏州)有限公司 Novel monocular vision 3D target detection method based on key point constraint
CN115389212A (en) * 2022-08-31 2022-11-25 国科大杭州高等研究院 System and method for detecting starting response time of cold air thruster
CN115661230A (en) * 2022-10-24 2023-01-31 浙江天垂科技有限公司 Estimation method for warehouse material volume
CN219347645U (en) * 2023-01-13 2023-07-14 福建宏泰智能工业互联网有限公司 Laser pallet volume measuring and calculating machine
CN116935192A (en) * 2023-07-28 2023-10-24 北京元境数字科技有限公司 Data acquisition method and system based on computer vision technology


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination