CN115209237B - Data acquisition box and control method for same - Google Patents

Data acquisition box and control method for same

Info

Publication number: CN115209237B (application number CN202210639631.2A)
Authority: CN (China)
Prior art keywords: assembly; camera; box body; component; information
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN115209237A (en)
Inventor: 陈刚
Current assignee: Heduo Technology (Guangzhou) Co., Ltd.
Original assignee: HoloMatic Technology (Beijing) Co., Ltd.
Filing date / priority date: 2022-06-08
Publication of CN115209237A: 2022-10-18
Grant and publication of CN115209237B: 2023-05-26

Classifications

    • H05K5/0217: Casings, cabinets or drawers for electric apparatus; mechanical details of casings
    • G01C21/1652: Dead reckoning by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • G01C21/1656: Dead reckoning by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • H04N7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • H04Q9/00: Arrangements in telecontrol or telemetry systems for selectively calling a substation from a main station
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The embodiments of the present disclosure disclose a data acquisition box and a control method for the data acquisition box. One embodiment of the data acquisition box comprises a box body, a base, an antenna assembly, a laser radar assembly, a camera assembly, an inertial navigation assembly, and a control assembly. The antenna assembly is arranged at the top of the box body and is used for receiving positioning information; the laser radar assembly is arranged at the top of the box body at an adjustable detection angle and is used for detecting characteristic information of a target object; the camera assembly is arranged inside the box body at an adjustable acquisition angle and is used for acquiring image information through the box body; the inertial navigation assembly is arranged inside the box body and is used for acquiring instantaneous information; the base is connected to the bottom of the box body; and the control assembly is arranged inside the box body and is communicatively connected with the antenna assembly, the laser radar assembly, the camera assembly, and the inertial navigation assembly. In this embodiment, the mounting angles of the camera assembly and the laser radar assembly can be adjusted to suit the corresponding scene, which improves the flexibility of the data acquisition box.

Description

Data acquisition box and control method for same
Technical Field
The embodiment of the disclosure relates to the technical field of data acquisition, in particular to a data acquisition box and a control method for the data acquisition box.
Background
A data acquisition box collects corresponding information, such as temperature and distance, through sensors, and has been applied in various fields, such as electronic map data collection.
In the process of collecting map data, a data acquisition box is usually mounted on a mobile device such as a vehicle, and data is collected in real time through the sensors in the data acquisition box.
However, when such a data acquisition box is used to collect data in real time, the following technical problems often arise:
First, the relevant sensors are usually fixedly mounted, so their acquisition angles are fixed and can only meet the acquisition requirements of a single scene, which limits the data acquisition box.
Second, because the acquisition angles of the relevant sensors are fixed, the sensors cannot automatically adjust their acquisition angles when entering different scenes, making the data acquisition box inflexible.
Disclosure of Invention
This summary is provided to introduce concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a data acquisition box and a control method for the data acquisition box to solve one or more of the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a data acquisition box, comprising: a box body, a base, an antenna assembly, a laser radar assembly, a camera assembly, an inertial navigation assembly, and a control assembly, wherein the antenna assembly is arranged at the top of the box body and is used for receiving positioning information; the laser radar assembly is arranged at the top of the box body at an adjustable detection angle and is used for detecting characteristic information of a target object; the camera assembly is arranged inside the box body at an adjustable acquisition angle and is used for acquiring image information through the box body; the inertial navigation assembly is arranged inside the box body and is used for acquiring instantaneous information; the base is connected to the bottom of the box body; and the control assembly is arranged inside the box body and is communicatively connected with the antenna assembly, the laser radar assembly, the camera assembly, and the inertial navigation assembly.
Optionally, the camera assembly includes two cameras, a first connecting plate connected to an inner wall of the box body, and two first pivoting assemblies arranged at the two ends of the first connecting plate; the two cameras are connected to the first pivoting assemblies in one-to-one correspondence, and the first pivoting assemblies can rotate the cameras relative to the box body.
Optionally, the first pivot assembly includes a camera fixing base fixedly connected to the first connecting plate and a camera rotating base rotatably connected to the camera fixing base, where the camera rotating base is fixedly connected to the camera.
Optionally, the laser radar assembly includes a laser radar and a second pivot assembly, the second pivot assembly connecting the top of the box body and the laser radar, and the second pivot assembly can rotate the laser radar relative to the box body.
Optionally, the second pivot assembly includes a radar fixing base and a radar rotating base rotatably connected to the radar fixing base, where the radar rotating base is fixedly connected to the laser radar.
Optionally, the inertial navigation assembly includes a second connecting plate and an inertial navigation sensor fixed on the second connecting plate, and the two ends of the second connecting plate are fixedly connected with the inside of the box body.
Optionally, the data acquisition box further comprises a handle arranged on the top of the box body.
In a second aspect, some embodiments of the present disclosure provide a control method for the data acquisition box described in any one of the embodiments of the first aspect, the method comprising: receiving the positioning information transmitted by the antenna assembly; receiving the characteristic information transmitted by the laser radar assembly; receiving the image information transmitted by the camera assembly; receiving the instantaneous information transmitted by the inertial navigation assembly; and processing and storing the positioning information, the characteristic information, the image information, and the instantaneous information.
The above embodiments of the present disclosure have the following beneficial effects: the data acquisition box of some embodiments of the present disclosure can adapt to the acquisition requirements of various scenes. Specifically, the reason a related data acquisition box has difficulty accommodating a variety of scenes is that its sensors are typically fixedly mounted. Based on this, the data acquisition box of some embodiments of the present disclosure includes not only a box body, a base, an antenna assembly, an inertial navigation assembly, and a control assembly, but also a laser radar assembly arranged at the top of the box body at an adjustable detection angle and a camera assembly arranged inside the box body at an adjustable acquisition angle. Because the laser radar assembly and the camera assembly are mounted at adjustable angles, their mounting angles can be adjusted to suit the corresponding scene when entering different scenes, so the data acquisition box can collect data in a variety of scenes. This improves the accuracy of the collected data and also widens the range of application.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a cross-sectional view of some embodiments of a data collection box according to the present disclosure;
FIG. 2 is a schematic structural view of some embodiments of a camera assembly according to the present disclosure;
FIG. 3 is a schematic structural view of some embodiments of a lidar assembly according to the present disclosure;
FIG. 4 is a flow chart of some embodiments of a control method for a data acquisition box according to the present disclosure.
The reference numerals in the figures are as follows:
1. box body; 2. base; 3. antenna assembly;
4. laser radar assembly; 5. camera assembly; 6. inertial navigation assembly;
11. opening; 41. laser radar; 42. radar rotating base;
43. radar fixing base; 51. camera; 52. camera rotating base;
53. camera fixing base; 54. first connecting plate; 61. inertial navigation sensor;
62. second connecting plate.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one" and "a plurality" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
First, referring to fig. 1, fig. 1 is a cross-sectional view of some embodiments of a data collection box according to the present disclosure. As shown in fig. 1, the data acquisition box comprises a box body 1, a base 2, an antenna assembly 3, a laser radar assembly 4, a camera assembly 5, an inertial navigation assembly 6 and a control assembly.
The box body 1 can house the camera assembly 5 and the inertial navigation assembly 6. The box body 1 may be assembled from six panels, one or more of which can be mounted and removed so as to facilitate replacement and maintenance of the camera assembly 5, the inertial navigation assembly 6, and the control assembly. The base 2 may be arranged at the bottom of the box body 1 and accommodates the control assembly. Further, the base 2 may be provided with a connecting member so that it can be detachably connected to a mobile device such as a vehicle.
The control component may be a processor, chip, electronic device, etc. that carries a control program, receives and processes the relevant signals.
The antenna assembly 3 is arranged at the top of the box body 1 and is used for receiving positioning information. The antenna assembly 3 may be a GPS (Global Positioning System) antenna assembly, a BeiDou satellite navigation system antenna assembly, or the like. The data acquisition box is positioned by receiving satellite signals.
The camera assembly 5 is arranged inside the box body 1 at an adjustable acquisition angle to acquire image information through the box body 1. Referring next to fig. 2 with continued reference to fig. 1, fig. 2 is a schematic structural view of some embodiments of a camera assembly according to the present disclosure. The camera assembly 5 may include two cameras 51, a first connecting plate 54, and two first pivoting assemblies. The two ends of the first connecting plate 54 are connected to the inner walls of the left and right panels (directions in fig. 2) of the box body 1, and the two first pivoting assemblies are arranged at the two ends of the first connecting plate 54. The two cameras 51 are connected to the first pivoting assemblies in one-to-one correspondence, and each first pivoting assembly can rotate its camera 51 relative to the box body 1. Openings 11 may be formed in the left and right panels so that the cameras 51 can acquire image information outside the box body 1. A person skilled in the art can determine a suitable first pivoting assembly according to common general knowledge to realize the rotation of the camera 51.
In some alternative implementations of some embodiments, the first pivot assembly described above may include a camera mount 53 and a camera swivel mount 52. The camera fixing base 53 and the camera rotating base 52 may be rotatably connected by a pivot shaft passing through the camera fixing base 53 and the camera rotating base 52. The camera fixing base 53 may be fixedly connected to the first connecting plate 54. The camera rotation base 52 may be connected to the camera 51. In the working state, the worker can adjust the camera angle by rotating the camera rotating base 52.
The laser radar assembly 4 is arranged at the top of the box body 1 at an adjustable detection angle to detect characteristic information of a target object. The characteristic information may be, for example, the distance, height, or outline of the target object.
Next, a description is given with reference to fig. 3, which is a schematic structural view of some embodiments of a lidar assembly according to the present disclosure. As shown in fig. 3, the lidar assembly 4 may include a lidar 41 and a second pivot assembly. In the working state, the lidar 41 emits laser light toward a target object and receives the reflected laser light to determine information such as the distance or height of the target object. The target object can be a wall, a tree, a telegraph pole, a sign board, or the like. The second pivot assembly is connected to the top of the box body 1, and the lidar 41 is connected to the second pivot assembly. A person skilled in the art can determine a suitable second pivot assembly according to common general knowledge to realize the rotation of the lidar 41.
In some alternative implementations of some embodiments, the second pivot assembly includes a radar mount 43 and a radar swivel mount 42. The radar fixing base 43 and the radar rotating base 42 may be rotatably connected by a pivot shaft passing through the radar fixing base 43 and the radar rotating base 42. In the working state, the worker can adjust the detection angle of the laser radar 41 by rotating the radar rotating base 42.
In this way, the camera rotation base 52 and the radar rotation base 42 can be rotated to adapt the data acquisition box to various acquisition scenes. Meanwhile, the accuracy of acquisition is improved, and the reliability of the data acquisition box is improved.
Further, the first pivot assembly and the second pivot assembly can also automatically adjust the rotation angle. Since the first pivot assembly and the second pivot assembly are identical in structure, the first pivot assembly will be described as an example.
A driving motor may be mounted on the camera fixing base 53 or the box body 1, and the driving shaft of the driving motor may be connected to the camera rotating base 52 through a gear. When the driving motor rotates, it drives the camera rotating base 52 to rotate. Further, a start-stop button may be provided to control the rotation of the driving motor. Compared with manual adjustment, this automatic adjustment is more convenient and more efficient.
Further, the driving motor may be communicatively connected to the control assembly. When the control assembly receives positioning information, the current scene information, such as a street, a parking lot, or a highway, can be determined from the positioning information. A correspondence table between scene information and rotation angles may be preset in the control assembly; after determining the scene information, the control assembly controls the camera rotating base 52 to rotate by the corresponding rotation angle.
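By way of illustration only (the disclosure does not prescribe any particular implementation), the table-driven adjustment described above can be sketched in Python as follows; the scene labels, angle values, and the motor interface are assumptions made for the example:

```python
# Minimal sketch of the preset scene-to-rotation-angle correspondence table
# described above. The scene labels, angle values, and the motor interface
# are illustrative assumptions, not part of the disclosure.
SCENE_TO_ANGLE_DEG = {
    "street": 15.0,
    "parking_lot": 30.0,
    "highway": 0.0,
}

def adjust_camera_angle(scene: str, motor) -> float:
    """Look up the preset rotation angle for a scene and command the motor."""
    angle = SCENE_TO_ANGLE_DEG.get(scene, 0.0)  # unknown scene: no rotation
    motor.rotate_to(angle)  # hypothetical drive-motor interface
    return angle
```

In practice the table would be populated with the correspondences preset by those skilled in the art, as described above.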
Still further, the rotation angle may be obtained by analyzing the scene information with an artificial intelligence chip included in the control assembly, where the machine learning model carried by the artificial intelligence chip is obtained by training on a training sample set.
Optionally, the training sample set includes sample scene information and a sample rotation angle, and the machine learning model is trained with the sample scene information as input and the sample rotation angle as a desired output.
As an example, the machine learning model may be obtained by performing the following training steps on a training sample set: inputting the sample scene information of at least one training sample in the training sample set into an initial machine learning model to obtain a corresponding rotation angle for each sample; comparing the rotation angle obtained for each piece of sample scene information with the corresponding sample rotation angle; determining the prediction accuracy of the initial machine learning model from the comparison results; determining whether the prediction accuracy is greater than a preset accuracy threshold; in response to determining that the accuracy is greater than the preset accuracy threshold, taking the initial machine learning model as the trained machine learning model; and in response to determining that the accuracy is not greater than the preset accuracy threshold, adjusting the parameters of the initial machine learning model, forming a training sample set from the unused training samples, taking the adjusted initial machine learning model as the initial machine learning model, and performing the training steps again. It will be appreciated that, after the training described above, the machine learning model can be used to characterize the correspondence between scene information and rotation angle. The machine learning model mentioned above may be a convolutional neural network model.
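The accuracy-thresholded training loop described above may be sketched as follows; the model object with predict and adjust_parameters methods, the batch size, and the angle tolerance used to judge a prediction correct are assumptions made for the example:

```python
# Sketch of the training procedure described above: evaluate prediction
# accuracy on a batch of samples, then either accept the model or adjust its
# parameters and retry on the remaining unused samples. All names are
# illustrative assumptions.
from typing import List, Tuple

def train(model, samples: List[Tuple[str, float]],
          accuracy_threshold: float = 0.95,
          angle_tolerance_deg: float = 1.0):
    while samples:
        batch, samples = samples[:32], samples[32:]  # batch of unused samples
        # Compare each predicted rotation angle with the sample rotation angle.
        correct = sum(
            abs(model.predict(scene) - target_angle) <= angle_tolerance_deg
            for scene, target_angle in batch
        )
        if correct / len(batch) > accuracy_threshold:
            return model  # training complete
        model.adjust_parameters(batch)  # hypothetical parameter update
    return model
```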
As an example, the machine learning model described above may include scene information and a correspondence table. The correspondence table may be one established by those skilled in the art based on the correspondences between a large amount of scene information and rotation angles. In this way, incoming scene information is compared in turn with the pieces of scene information in the correspondence table; if a piece of scene information in the table is identical or similar to the incoming scene information, the rotation angle corresponding to that table entry is taken as the rotation angle indicated by the scene information. The control assembly is thus able to determine a rotation angle for the scene information, so that different rotation angles are configured for different scene information.
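A minimal sketch of this identical-or-similar lookup follows, assuming string-valued scene information, a difflib string-similarity measure, and a 0.8 similarity threshold (none of which are prescribed by the disclosure):

```python
# Sketch of the identical-or-similar table lookup described above. The
# similarity measure and the 0.8 threshold are illustrative assumptions.
import difflib

CORRESPONDENCE_TABLE = {
    "urban street": 15.0,
    "parking lot": 30.0,
    "highway": 0.0,
}

def lookup_rotation_angle(scene: str, threshold: float = 0.8):
    """Return the angle of the first table entry identical or similar to scene."""
    for known_scene, angle in CORRESPONDENCE_TABLE.items():
        similarity = difflib.SequenceMatcher(None, scene, known_scene).ratio()
        if scene == known_scene or similarity >= threshold:
            return angle
    return None  # no identical or similar entry found
```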
As another example, the initial machine learning model may be an untrained or incompletely trained deep learning model, and the layers of the initial deep learning model may be provided with initial parameters that are continuously adjusted during training. The initial deep learning model may be any type of untrained or incompletely trained artificial neural network, or a model obtained by combining several such networks; for example, it may be an untrained convolutional neural network, an untrained recurrent neural network, or a model combining an untrained convolutional neural network, an untrained recurrent neural network, and an untrained fully connected layer. In this way, scene information can be fed in at the input side of the deep learning model, processed in turn by the parameters of each layer, and output at the output side, where the output information is the rotation angle.
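For illustration, a minimal deep learning model of this kind is sketched below in PyTorch; the fully connected layer sizes and the 16-dimensional encoding of the scene information are assumptions, and the disclosure equally contemplates convolutional or recurrent variants:

```python
# Minimal sketch of a deep learning model mapping an encoded scene-information
# vector to a rotation angle. The layer sizes and the 16-dimensional scene
# encoding are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32),  # input side: encoded scene-information features
    nn.ReLU(),
    nn.Linear(32, 1),   # output side: the predicted rotation angle
)

scene_features = torch.randn(1, 16)     # placeholder encoded scene information
rotation_angle = model(scene_features)  # processed layer by layer, angle out
```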
The above embodiment is also applicable to the second pivoting assembly, and the rotation angle of the lidar can be controlled.
The above technical solution is an inventive point of the embodiments of the present disclosure and solves the second technical problem mentioned in the background: because the acquisition angle of the relevant sensor is fixed, the sensor cannot automatically adjust its acquisition angle when entering different scenes, so the data acquisition box lacks flexibility. The factors that make the camera assembly and the laser radar assembly of a data acquisition box inflexible are often as follows: the acquisition angles of the relevant sensors are fixed, and the sensors cannot automatically adjust their acquisition angles when entering different scenes. To address this, the present disclosure introduces a driving motor. Specifically, the driving motor is connected to the camera rotating base or the radar rotating base. The scene information is determined from the positioning information received by the control assembly, and the camera acquisition angle or the lidar detection angle is then adjusted according to that scene information: the control assembly controls the driving motor to drive the camera rotating base or the radar rotating base to rotate by the corresponding rotation angle. In this way, the acquisition angle of the camera or the detection angle of the lidar is adapted to the current scene. Therefore, even when moving from one scene into another, the control assembly can promptly control the driving motor to rotate, which improves the flexibility and the degree of automation of the camera.
Still further, the rotation angle can also be obtained by having the artificial intelligence chip included in the control assembly analyze the scene information, which makes the data acquisition box more intelligent and makes automatic adjustment of the rotation angle more efficient.
Referring back to fig. 1, the inertial navigation assembly 6 may include a second connecting plate 62 and an inertial navigation sensor 61 fixed on the second connecting plate 62. The two ends of the second connecting plate 62 are fixedly connected to the inside of the box body 1. The inertial navigation sensor 61 is used to acquire instantaneous information. As an example, the data acquisition box can be mounted on a mobile device such as a vehicle, and the inertial navigation sensor 61 can then detect instantaneous information such as the instantaneous speed or instantaneous position of the vehicle.
Optionally, a handle can be further arranged on the top of the box body 1, so that the box body is convenient for workers to carry.
The present disclosure also provides a control method for a data acquisition box, which can be used with the data acquisition box of any of the above embodiments. Fig. 4 shows a flow chart 400 of some embodiments of the control method for a data acquisition box provided by the present disclosure. The method may comprise the following steps:
step 401: and receiving the positioning information transmitted by the antenna assembly.
In some embodiments, the execution subject of the control method may be a control component. The antenna assembly is in communication with the control assembly and is capable of transmitting positioning information to the control assembly.
Step 402: and receiving the characteristic information transmitted by the laser radar component.
In some embodiments, a lidar component is communicatively coupled to the control component, the lidar component being capable of transmitting the collected characteristic information to the control component.
Step 403: image information transmitted by the camera assembly is received.
In some embodiments, a camera assembly is communicatively coupled to the control assembly, the camera assembly being capable of transmitting the acquired image information to the control assembly.
Step 404: and receiving the instant message transmitted by the inertial navigation component.
In some embodiments, an inertial navigation component is communicatively coupled to the control component, the inertial navigation component being capable of transmitting the collected transient information to the control component.
Step 405: and processing and storing the positioning information, the characteristic information, the image information and the instant information.
In some embodiments, the control component performs analysis and operation of correlation algorithm on the received positioning information, characteristic information, image information and instantaneous information, and stores the processed correlation information.
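As a minimal sketch (the queue-based transport, record layout, and file-based storage are assumptions, not part of the disclosure), steps 401 to 405 can be organized as a receive-process-store loop:

```python
# Sketch of the control assembly's receive-process-store loop (steps 401-405).
# The queue transport, record layout, and storage backend are illustrative
# assumptions.
import json
import time
from queue import Queue

def control_loop(antenna_q: Queue, lidar_q: Queue, camera_q: Queue,
                 imu_q: Queue, storage_path: str = "record.jsonl") -> None:
    """Receive one message from each sensor assembly, then process and store."""
    with open(storage_path, "a") as storage:
        while True:
            record = {
                "timestamp": time.time(),
                "positioning": antenna_q.get(),    # step 401
                "characteristic": lidar_q.get(),   # step 402
                "image_ref": camera_q.get(),       # step 403, e.g. a frame path
                "instantaneous": imu_q.get(),      # step 404
            }
            # Step 405: process (here, simply serialize) and store the record.
            storage.write(json.dumps(record) + "\n")
```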
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.

Claims (6)

1. A data acquisition box, comprising: a box body, a base, an antenna assembly, a laser radar assembly, a camera assembly, an inertial navigation assembly, and a control assembly, wherein,
the antenna assembly is arranged at the top of the box body and is used for receiving positioning information;
the laser radar assembly is arranged at the top of the box body at an adjustable detection angle and is used for detecting characteristic information of a target object;
the camera assembly is arranged inside the box body at an adjustable acquisition angle and is used for acquiring image information through the box body;
the inertial navigation assembly is arranged inside the box body and is used for acquiring instantaneous information;
the base is connected to the bottom of the box body, and the control assembly is arranged inside the box body and is communicatively connected with the antenna assembly, the laser radar assembly, the camera assembly, and the inertial navigation assembly;
the camera assembly comprises two cameras, a first connecting plate connected to the inner wall of the box body, and two first pivoting assemblies, wherein the two first pivoting assemblies are arranged at the two ends of the first connecting plate, the two cameras are connected to the first pivoting assemblies in one-to-one correspondence, and the first pivoting assemblies enable the cameras to rotate relative to the box body;
the first pivoting assembly comprises a camera fixing seat fixedly connected to the first connecting plate and a camera rotating seat rotatably connected with the camera fixing seat, and the camera rotating seat is fixedly connected with the camera;
a driving motor is arranged on the camera fixing seat or the box body, a driving shaft of the driving motor is connected with the camera rotating seat through a gear, and when the driving motor rotates, the driving motor drives the camera rotating seat to rotate;
the driving motor is in communication connection with the control assembly; when the control assembly receives positioning information, current scene information is determined according to the positioning information; a correspondence table between scene information and rotation angles is preset in the control assembly, and after the scene information is determined, the control assembly controls the camera rotating seat to rotate by the corresponding rotation angle;
the rotation angle is obtained by analyzing the scene information through an artificial intelligence chip included in the control assembly, wherein a machine learning model borne by the artificial intelligence chip is obtained by training with a training sample set;
the training sample set comprises sample scene information and sample rotation angles, and the machine learning model is obtained by training with the sample scene information as input and the sample rotation angles as expected output;
the machine learning model is obtained by performing the following training steps on the training sample set: inputting the sample scene information of at least one training sample in the training sample set into an initial machine learning model to obtain a corresponding rotation angle for each sample; comparing the rotation angle obtained for each piece of sample scene information with the corresponding sample rotation angle; determining the prediction accuracy of the initial machine learning model according to the comparison results; determining whether the prediction accuracy is greater than a preset accuracy threshold; in response to determining that the accuracy is greater than the preset accuracy threshold, taking the initial machine learning model as the trained machine learning model; and in response to determining that the accuracy is not greater than the preset accuracy threshold, adjusting the parameters of the initial machine learning model, forming a training sample set from the unused training samples, taking the adjusted initial machine learning model as the initial machine learning model, and performing the training steps again.
2. The data acquisition box of claim 1, wherein the laser radar assembly comprises a laser radar and a second pivot assembly, the second pivot assembly connecting the top of the box body and the laser radar, the laser radar being rotatable relative to the box body.
3. The data acquisition box of claim 2, wherein the second pivot assembly comprises a radar fixing seat and a radar rotating seat rotatably connected to the radar fixing seat, the radar rotating seat being fixedly connected to the laser radar.
4. The data acquisition box according to claim 1, wherein the inertial navigation assembly comprises a second connecting plate and an inertial navigation sensor fixed on the second connecting plate, and two ends of the second connecting plate are fixedly connected with the inside of the box body.
5. The data acquisition box of claim 1, further comprising a handle arranged on the top of the box body.
6. A control method for the data acquisition box according to any one of claims 1 to 5, comprising:
receiving the positioning information transmitted by the antenna assembly;
receiving the characteristic information transmitted by the laser radar assembly;
receiving the image information transmitted by the camera assembly;
receiving the instantaneous information transmitted by the inertial navigation assembly; and
processing and storing the positioning information, the characteristic information, the image information, and the instantaneous information.
CN202210639631.2A, filed 2022-06-08 (priority date 2022-06-08): Data acquisition box and control method for same. Status: Active.

Publications (2)

CN115209237A (en), published 2022-10-18
CN115209237B (en), granted and published 2023-05-26

Family ID: 83575405

Country status: CN (CN115209237B)

Citations (1)

* Cited by examiner, † Cited by third party

CN109977813A * (priority 2019-03-13, published 2019-07-05, 山东沐点智能科技有限公司): Inspection robot target positioning method based on a deep learning framework

Family Cites Families (6)

* Cited by examiner, † Cited by third party

CN109597095A * (priority 2018-11-12, published 2019-04-09, 北京大学): Backpack-type 3D laser scanning and three-dimensional imaging combined system and data acquisition method
CN209069210U * (priority 2019-04-02, published 2019-07-05, 成都信息工程大学): Data acquisition device for a three-dimensional reconstruction system
CN210954736U * (priority 2019-12-06, published 2020-07-07, 深圳市千乘机器人有限公司): Outdoor automatic inspection robot
CN212008943U * (priority 2020-03-26, published 2020-11-24, 北京农业信息技术研究中心): High-throughput three-dimensional scanning spectral imaging measurement device
CN114071008A * (priority 2020-07-31, published 2022-02-18, 华为技术有限公司): Image acquisition device and image acquisition method
CN113902813A * (priority 2021-10-21, published 2022-01-07, 西安电子科技大学): Intelligent scene perception defense device and method based on multi-source information fusion


Similar Documents

Publication Title
CN108279428B (en) Map data evaluating device and system, data acquisition system, acquisition vehicle and acquisition base station
CN109977813A (en) A kind of crusing robot object localization method based on deep learning frame
CN111427320A (en) Intelligent industrial robot distributed unified scheduling management platform
CN108279023A (en) Field data collecting device precision check method and device, collecting vehicle and field data acquisition system
CN108303508A (en) Ecology language system and method based on laser radar and deep learning optimum path search
CN111721809B (en) Glass curtain wall structural adhesive detection method and device, unmanned aerial vehicle and storage medium
CN113038074B (en) Indoor environment intelligent inspection method and system based on self-moving data acquisition equipment
CN114020002A (en) Method, device and equipment for inspecting fan blade by unmanned aerial vehicle, unmanned aerial vehicle and medium
CN110493524A (en) A kind of survey light method of adjustment, device, equipment and storage medium
CN115209237B (en) Data acquisition box and control method for same
CN114035606A (en) Pole tower inspection system, pole tower inspection method, control device and storage medium
CN217766851U (en) Data acquisition box
US11729372B2 (en) Drone-assisted sensor mapping
CN112560751A (en) Balcony high-altitude falling risk detection method and system
CN104904415A (en) Remote measurement and control machine-mounted device and method for combine harvester
CN115542951B (en) Unmanned aerial vehicle centralized management and control method, system, equipment and medium based on 5G network
CN114659499B (en) Smart city 3D map model photography establishment method based on unmanned aerial vehicle technology
CN112601021B (en) Method and system for processing monitoring video of network camera
CN115052337A (en) Novel intelligent head-mounted terminal inspection system based on AI-Beacon
CN109961644A (en) Idle parking stall recognition methods, autonomous parking method and device
CN113074955B (en) Method, apparatus, electronic device, and medium for controlling data acquisition
CN112672134A (en) Three-dimensional information acquisition control equipment and method based on mobile terminal
CN110766929A (en) Intelligent data acquisition unit for assisting tobacco field management and use method thereof
CN111150402A (en) Method, device, storage medium and electronic device for determining livestock form parameters
CN117213499A (en) Satellite navigation image acquisition method

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address
Address after: 201, 202, 301, No. 56-4 Fenghuang South Road, Huadu District, Guangzhou City, Guangdong Province, 510806
Patentee after: Heduo Technology (Guangzhou) Co., Ltd.
Address before: 100099 101-15, 3rd floor, building 9, yard 55, zique Road, Haidian District, Beijing
Patentee before: HoloMatic Technology (Beijing) Co., Ltd.