CN114264361A - Object identification method and device combining radar and camera and intelligent electronic scale - Google Patents
- Publication number
- CN114264361A (application CN202111488335.9A)
- Authority
- CN
- China
- Prior art keywords
- radar
- image
- camera
- tray
- electronic scale
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention discloses an object identification method and device combining a radar and a camera, and an intelligent electronic scale, wherein the electronic scale comprises: a tray for supporting an object, a control machine for identifying and analyzing the object, and a touch screen for information interaction with a user; the control machine is arranged on the side edge of the tray, the touch screen is arranged on the side edge of the control machine, and the front surface of the touch screen faces the top surface of the tray; a first radar assembly is arranged on the bottom edge of the touch screen and faces the top surface of the tray, and a camera is arranged on the top edge of the touch screen and faces the top surface of the tray.
Description
Technical Field
The invention relates to the technical field of object weighing, in particular to an object identification method and device combining a radar and a camera and an intelligent electronic scale.
Background
In daily life, when purchasing daily necessities such as vegetables, fruits and meat, a scale is used to weigh the commodity to determine its weight, and the buyer finally pays the corresponding amount of money in exchange. Because the traditional weighing method using weights and a balance beam is prone to error, intelligent electronic scales are now used instead to improve weighing accuracy.
One of the commonly used intelligent electronic scales is an intelligent image recognition electronic scale, which is an electronic scale that adds a camera to collect image information of a commodity on the original electronic scale, and then a background processor recognizes and classifies the image information to determine information such as the type, weight, price and the like of the commodity, so as to improve the efficiency of weighing and recognition.
However, currently common intelligent image recognition electronic scales have the following technical problems: at each weighing, the user still needs to select the type and unit price of the object to be weighed on the touch screen of the electronic scale before placing the object on the scale, which makes operation cumbersome; moreover, different goods are packed in transparent plastic bags of different colors before weighing, and bags of different colors refract light differently in different environments and under different light intensities, so that deviations easily occur when the camera collects the object image, leading to inaccurate recognition and reducing the recognition accuracy.
Disclosure of Invention
The invention provides an object identification method and device combining a radar and a camera and an intelligent electronic scale.
A first aspect of an embodiment of the present invention provides an electronic scale, including: the device comprises a tray for supporting an object, a control machine for identifying and analyzing the object and a touch screen for information interaction with a user;
the controller is arranged on the side edge of the tray, the touch screen is arranged on the side edge of the controller, and the front surface of the touch screen faces the top surface of the tray;
the bottom edge of the touch screen is provided with a first radar assembly, the first radar assembly faces the top surface of the tray, the top edge of the touch screen is provided with a camera, and the camera faces the top surface of the tray.
In a possible implementation manner of the first aspect, a second radar component is disposed inside the control machine, and the second radar component faces the top surface of the tray.
In a possible implementation manner of the first aspect, an extension rod is disposed on the top edge of the touch screen, and the camera is disposed on the extension rod, so that the camera is located above and faces the top surface of the tray.
In a possible implementation manner of the first aspect, the device further comprises a scale body for supporting the tray;
the scale body is arranged at the bottom of the tray.
A second aspect of an embodiment of the present invention provides an object identification method combining a radar and a camera, the method being applied to an electronic scale as described above, the method including:
respectively acquiring radar detection data acquired by a radar and object image data acquired by a camera;
determining a radar recognition result corresponding to the radar detection data by using a preset machine learning classification algorithm, and determining an image recognition result corresponding to the object image data by using a preset image recognition algorithm;
and fusing the radar identification result and the image identification result in a Kalman filtering mode to obtain an object identification result.
In a possible implementation manner of the second aspect, the determining, by using a preset machine learning classification algorithm, a radar recognition result corresponding to the radar detection data includes:
converting the radar detection data to generate a radar image;
and inputting the radar image into a machine learning algorithm based on a convolutional neural network to obtain a radar identification result.
In one possible implementation manner of the second aspect, the radar detection data includes radar intermediate frequency raw data of a plurality of channels;
the converting the radar detection data to generate a radar image includes:
arranging the radar intermediate frequency raw data of each channel within one frequency modulation period longitudinally, with the channels arranged horizontally, to obtain a two-dimensional radar sampling data matrix;
and respectively performing a Fourier transform on each row and each column of the radar sampling data matrix to obtain a two-dimensional range-angle radar image.
In one possible implementation manner of the second aspect, the method further includes:
labeling the image identification result to obtain a label image;
and performing model optimization training on the machine learning algorithm based on the convolutional neural network by using the label image.
In a possible implementation manner of the second aspect, after the step of fusing the radar recognition result and the image recognition result by using the kalman filtering method to obtain the object recognition result, the method further includes:
displaying the object recognition result, and acquiring correction data input by a user when the object recognition result is inconsistent with an actual object;
and secondarily identifying the radar detection data by adopting the correction data to obtain and store a secondary radar identification result.
A third aspect of embodiments of the present invention provides an object recognition apparatus combining a radar and a camera, the apparatus being adapted for use with an electronic scale as described above, the apparatus including:
the acquisition module is used for respectively acquiring radar detection data acquired by a radar and object image data acquired by a camera;
the respective identification module is used for determining a radar identification result corresponding to the radar detection data by using a preset machine learning classification algorithm and determining an image identification result corresponding to the object image data by using a preset image identification algorithm;
and the fusion identification module is used for fusing the radar identification result and the image identification result in a Kalman filtering mode to obtain an object identification result.
Compared with the prior art, the object identification method and device combining the radar and the camera and the intelligent electronic scale have the advantages that: the invention can simultaneously collect the radar wave signal reflected by the object and the object image, and performs double recognition by using the radar wave signal and the object image so as to reduce the influence of environmental factors during recognition and improve the recognition accuracy.
Drawings
Fig. 1 is a schematic structural diagram of an intelligent electronic scale according to an embodiment of the present invention;
fig. 2 is a rear view of an intelligent electronic scale according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a control machine according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of an object recognition method combining a radar and a camera according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a two-dimensional range-angle radar image provided by an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an object recognition device incorporating a radar and a camera according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Currently common intelligent image recognition electronic scales have the following technical problems: at each weighing, the user still needs to select the type and unit price of the object to be weighed on the touch screen of the electronic scale before placing the object on the scale, which makes operation cumbersome; moreover, different goods are packed in transparent plastic bags of different colors before weighing, and bags of different colors refract light differently in different environments and under different light intensities, so that deviations easily occur when the camera collects the object image, leading to inaccurate recognition and reducing the recognition accuracy.
In order to solve the above problems, the intelligent electronic scale provided in the embodiments of the present application is described and illustrated in detail through the following specific embodiments.
Referring to fig. 1-2, a schematic structural diagram of an intelligent electronic scale according to an embodiment of the present invention and a rear view of the intelligent electronic scale according to an embodiment of the present invention are respectively shown.
Wherein, as an example, the intelligent electronic scale may include: the device comprises a tray 1 for supporting an object, a control machine 2 for identifying and analyzing the object and a touch screen 3 for information interaction with a user;
the controller 2 is arranged on the side edge of the tray 1, the touch screen 3 is arranged on the side edge of the controller, and the front surface of the touch screen 3 faces the top surface of the tray 1;
the bottom edge of the touch screen 3 is provided with a first radar component 4, the first radar component 4 faces the top surface of the tray 1, the top edge of the touch screen 3 is provided with a camera 5, and the camera 5 faces the top surface of the tray 1.
Specifically, the controller 2 may be disposed at a top corner of the tray 1, and the front surface of the touch screen 3 may be perpendicular to the top surface of the tray 1.
When the device is used, a user places an object to be recognized on the tray 1. The first radar assembly 4 on the touch screen 3 transmits radar waves to the object and collects the radar waves reflected by it, while the camera 5 collects images of the object. The reflected radar waves and the object images are returned to the control machine 2, which performs a first recognition using the reflected radar waves and a second recognition using the object images, and finally fuses the two recognition results to determine the type of the object placed by the user.
Through the recognition of the two modes, the two recognition results are fused, so that the influence of environmental factors can be reduced, and the recognition accuracy is improved.
Since there may be a plurality of objects to be weighed and the weight of different objects may vary, in order to increase the weighing capacity of the tray 1, with reference to fig. 1-2, in one embodiment, a scale body 6 for holding the tray 1 is further included;
the scale body 6 is arranged at the bottom of the tray 1.
In particular, the scale body 6 may be made of a hard material, such as metal.
In order to enlarge the shooting range of the camera 5 so that it can capture the object on the tray 1 more comprehensively, in an embodiment, an extension rod 7 is disposed on the top edge of the touch screen 3, and the camera 5 is disposed on the extension rod 7 so that the camera 5 is located above the top surface of the tray 1 and faces it.
In order to further improve the accuracy of identification, referring to fig. 3, a schematic structural diagram of a control machine according to an embodiment of the present invention is shown.
In one embodiment, a second radar component 8 is disposed inside the controller 2, and the second radar component 8 faces the top surface of the tray 1.
Optionally, a roll of printing paper 9 and a motor 10 for controlling the rotation of the printing paper 9 may also be provided in the control machine 2. After the type and weight of the object are identified, the unit price of the object can be determined according to its type, the price is then calculated from the unit price and the weight, and finally the price is printed on the printing paper 9, which the motor 10 rotates so that the user can take the receipt away.
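The pricing step described above is a simple multiplication; a minimal sketch (the function name, rounding rule, and example values are illustrative assumptions, not part of the patent):

```python
def compute_price(unit_price_per_kg: float, weight_kg: float) -> float:
    """Price = unit price x weight, rounded to the cent for the printed receipt."""
    return round(unit_price_per_kg * weight_kg, 2)

# e.g. a commodity priced at 3.50 per kg, with 1.24 kg on the tray
print(compute_price(3.50, 1.24))  # 4.34
```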
In an alternative embodiment, the first radar assembly 4 and the second radar assembly 8 may each employ a radar sensor.
Specifically, the radar sensor can collect intermediate frequency time-domain radar raw data. In one application, a 60 GHz MIMO radar may be adopted; specifically, the MIMO radar may include 3 transmitting channels and 4 receiving channels, so radar intermediate frequency raw data of 12 channels can be collected in total, and radar identification is then performed on these 12 channels of data.
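The channel count follows from the MIMO configuration: each transmit/receive pair forms one virtual receive channel. A minimal sketch (the helper name is illustrative):

```python
def virtual_channel_count(n_tx: int, n_rx: int) -> int:
    """In a MIMO radar, each TX/RX antenna pair forms one virtual receive channel."""
    return n_tx * n_rx

# The 60 GHz radar described above: 3 transmitting and 4 receiving channels.
print(virtual_channel_count(3, 4))  # 12 channels of IF raw data
```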
Since radar sensors are sensitive to vibration, in order to reduce the effect of the motor 10 on the radar when rotating, referring to fig. 3, in one embodiment, the second radar component 8 may be located away from the motor 10 and close to the housing of the controller 2. In a further alternative embodiment, in order to further avoid that vibrations caused by the rotation of the motor 10 affect the radar measurement, a shock absorbing pad or the like may be installed between the radar and the motor 10.
Considering intelligent electronic scales of different appearances, in order to improve the penetration of the radar waves emitted by the radar, the part of the housing of the control machine 2 that covers the radar sensor can be made of a non-metallic material, so that the radar waves can pass through the housing to detect and identify the commodity material. For example, plastics or other organic materials may be used.
In this embodiment, the embodiment of the present invention provides an intelligent electronic scale, which has the following beneficial effects: the invention can simultaneously collect the radar wave signal reflected by the object and the object image, and performs double recognition by using the radar wave signal and the object image so as to reduce the influence of environmental factors during recognition and improve the recognition accuracy.
The embodiment of the invention also provides an object identification method combining the radar and the camera.
The object recognition method combining radar and camera may be applied to the intelligent electronic scale as described in the above embodiments.
Referring to fig. 4, a flowchart of an object recognition method combining a radar and a camera according to an embodiment of the present invention is shown.
As an example, the object recognition method with a radar and a camera combined may include:
and S11, respectively acquiring radar detection data acquired by a radar and object image data acquired by a camera.
In particular, radar detection data may be collected by a radar and image data of an object may be collected by a camera at the same time.
The radar detection data may be radar waves reflected by the object after the radar transmits the radar waves to the object.
And S12, determining a radar recognition result corresponding to the radar detection data by using a preset machine learning classification algorithm, and determining an image recognition result corresponding to the object image data by using a preset image recognition algorithm.
The radar detection data can be classified and identified by using a preset machine learning classification algorithm to obtain the corresponding radar recognition result, and the corresponding image recognition result is obtained from the object image data by using an image recognition algorithm.
In order to improve the accuracy of radar identification, in an alternative embodiment, step S12 may include the following sub-steps:
and a substep S121 of converting the radar detection data to generate a radar image.
Wherein the radar image may be an image generated from the radar data matrix.
In one embodiment, the radar detection data includes radar intermediate frequency raw data of a plurality of channels.
As disclosed in the above embodiments, the radar used may include a plurality of transmitting channels and a plurality of receiving channels; each channel receives the corresponding mixed-down signal at a certain intermediate frequency, and these samples constitute the radar intermediate frequency raw data.
Therein, as an example, the substep S121 may comprise the substeps of:
and a substep S1211, arranging the radar intermediate frequency raw data of each channel in a frequency modulation period according to the longitudinal direction and the horizontal direction to obtain a two-dimensional radar sampling data matrix.
And a substep S1212 of respectively performing a Fourier transform on each row and each column of the radar sampling data matrix to obtain a two-dimensional range-angle radar image.
Referring to fig. 5, a schematic diagram of a two-dimensional range-angle radar image provided by an embodiment of the present invention is shown.
The radar data of different channels can be arranged in the rows or columns of a matrix to obtain the corresponding radar matrix, and finally a Fourier transform (a two-dimensional FFT, 2D-FFT) is performed on each row and each column of data in the matrix, thereby obtaining a two-dimensional range-angle radar image.
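Assuming complex IF samples arranged with fast-time samples down the columns and one (virtual) channel per column, the row/column transforms above can be sketched with NumPy (a sketch of the described 2D-FFT, not the patent's exact implementation):

```python
import numpy as np

def range_angle_image(if_data: np.ndarray) -> np.ndarray:
    """Convert a 2-D radar sampling matrix into a range-angle magnitude image.

    if_data: complex IF samples, shape (n_samples_per_chirp, n_channels);
    fast-time samples down the columns, one column per (virtual) channel.
    """
    range_fft = np.fft.fft(if_data, axis=0)           # FFT along fast time -> range bins
    angle_fft = np.fft.fftshift(                      # FFT across channels -> angle bins
        np.fft.fft(range_fft, axis=1), axes=1)
    return np.abs(angle_fft)                          # magnitude image fed to the CNN

# Toy example: 64 samples per chirp, 12 virtual channels (3 TX x 4 RX).
rng = np.random.default_rng(0)
data = rng.standard_normal((64, 12)) + 1j * rng.standard_normal((64, 12))
img = range_angle_image(data)
print(img.shape)  # (64, 12): range bins x angle bins
```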
And a substep S122, inputting the radar image into a machine learning algorithm based on a convolutional neural network to obtain a radar identification result.
And finally, inputting the radar image into a machine learning algorithm based on a convolutional neural network, and identifying by the machine learning algorithm to obtain a radar identification result.
The machine learning algorithm based on the convolutional neural network may be a pre-trained neural network machine learning algorithm.
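As an illustration of the data flow only, here is a toy, untrained stand-in for such a classifier (one convolution layer, ReLU, global average pooling, and a softmax head; the class labels and all weights are hypothetical random values, so the predicted label is meaningless):

```python
import numpy as np

rng = np.random.default_rng(42)

def conv2d_valid(x, k):
    """Naive single-channel 2-D 'valid' convolution (cross-correlation)."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def classify_radar_image(img, kernels, weights, classes):
    """One conv layer -> ReLU -> global average pool -> linear -> softmax."""
    feats = np.array([np.maximum(conv2d_valid(img, k), 0).mean() for k in kernels])
    logits = weights @ feats
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return classes[int(np.argmax(probs))], probs

classes = ["apple", "tomato", "pork"]     # hypothetical commodity labels
kernels = rng.standard_normal((8, 3, 3))  # 8 random (untrained) 3x3 filters
weights = rng.standard_normal((3, 8))     # random linear classifier head
img = rng.standard_normal((64, 12))       # stand-in range-angle radar image
label, probs = classify_radar_image(img, kernels, weights, classes)
print(label, probs)
```

A real implementation would train these weights on labeled radar images rather than drawing them at random.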
For the object image data, the corresponding object type may be identified using a conventional image recognition algorithm.
And S13, fusing the radar recognition result and the image recognition result in a Kalman filtering mode to obtain an object recognition result.
After the two recognition results are obtained, a radar recognition result and an image recognition result can be fused in a Kalman filtering mode, so that an object recognition result is obtained.
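The patent does not give the fusion equations; one common reading of a Kalman-style fusion of two classifier outputs is a per-class scalar update in which each source is weighted by an assumed variance. A hedged sketch (the variances and class labels are illustrative assumptions):

```python
import numpy as np

def kalman_fuse(p_radar, var_radar, p_image, var_image):
    """Fuse two class-probability estimates with a per-class scalar Kalman update.

    The radar result is treated as the prior and the image result as the
    measurement; the Kalman gain weights each source by its assumed variance.
    The fused vector is renormalized so it is again a distribution.
    """
    p_radar, p_image = np.asarray(p_radar), np.asarray(p_image)
    gain = var_radar / (var_radar + var_image)    # Kalman gain K
    fused = p_radar + gain * (p_image - p_radar)  # posterior estimate
    return fused / fused.sum()

p_radar = [0.6, 0.3, 0.1]  # radar recognition result (e.g. apple/tomato/pork)
p_image = [0.8, 0.1, 0.1]  # image recognition result
fused = kalman_fuse(p_radar, 0.2, p_image, 0.1)
print(fused)  # pulled toward the lower-variance image measurement
```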
To improve the accuracy of radar identification, in an embodiment, the method may further include:
and S21, labeling the image recognition result to obtain a label image.
And S22, performing model optimization training on the machine learning algorithm based on the convolutional neural network by using the label image.
Specifically, the object image data can be utilized to carry out optimization training on the neural network for recognizing the radar detection data, so that two recognition results are closer, and meanwhile, the recognition accuracy can be improved.
In application, in order to allow the user to correct the result after a recognition error, in an embodiment, the method may further include:
and S31, displaying the object recognition result, and acquiring correction data input by the user when the object recognition result is inconsistent with the actual object.
And S32, secondarily identifying the radar detection data by adopting the correction data to obtain and store a secondary radar identification result.
After the object recognition result is displayed, the user can judge whether it is consistent with the actual object; if not, the user can input correction data, where the correction data is the specific object type.
After the object type is obtained, the machine learning algorithm based on the convolutional neural network can be optimally trained based on the object type, then the trained machine learning algorithm based on the convolutional neural network is used for carrying out secondary identification on radar detection data, and finally the identification result of the secondary identification is stored.
In this embodiment, an embodiment of the present invention provides an object identification method combining a radar and a camera, which has the following beneficial effects: the invention can respectively acquire radar detection data acquired by a radar and object image data acquired by a camera, respectively identify objects of the two data, and finally fuse the two identification results into an identification result, thereby realizing the effect of double identification and improving the identification accuracy.
An embodiment of the present invention further provides an object recognition device combining a radar and a camera, and referring to fig. 6, a schematic structural diagram of an object recognition device combining a radar and a camera according to an embodiment of the present invention is shown.
The device is suitable for the electronic scale in the embodiment.
As an example, the object recognition apparatus combining a radar and a camera may include:
an obtaining module 601, configured to obtain radar detection data collected by a radar and object image data collected by a camera respectively;
a respective identification module 602, configured to determine a radar identification result corresponding to the radar detection data by using a preset machine learning classification algorithm, and determine an image identification result corresponding to the object image data by using a preset image identification algorithm;
and the fusion identification module 603 is configured to fuse the radar identification result and the image identification result in a kalman filtering manner to obtain an object identification result.
Optionally, the individual identification module is further configured to:
converting the radar detection data to generate a radar image;
and inputting the radar image into a machine learning algorithm based on a convolutional neural network to obtain a radar identification result.
Optionally, the radar detection data includes radar intermediate frequency raw data of a plurality of channels;
the respective identification module is further configured to:
arranging the radar intermediate frequency raw data of each channel within one frequency modulation period longitudinally, with the channels arranged horizontally, to obtain a two-dimensional radar sampling data matrix;
and respectively performing a Fourier transform on each row and each column of the radar sampling data matrix to obtain a two-dimensional range-angle radar image.
Optionally, the apparatus further comprises:
the label module is used for labeling the image identification result to obtain a label image;
and the optimization training module is used for performing model optimization training on the machine learning algorithm based on the convolutional neural network by using the label image.
Optionally, after the step of obtaining the object recognition result by fusing the radar recognition result and the image recognition result in the kalman filtering manner, the apparatus further includes:
the display module is used for displaying the object recognition result and acquiring correction data input by a user when the object recognition result is inconsistent with an actual object;
and the secondary identification module is used for carrying out secondary identification on the radar detection data by adopting the correction data to obtain and store a secondary radar identification result.
Further, an embodiment of the present application further provides an electronic device, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor when executing the program implementing the object recognition method combining radar and camera as described in the above embodiments.
Further, the present application also provides a computer-readable storage medium storing computer-executable instructions for causing a computer to execute the object identification method combining a radar and a camera according to the foregoing embodiment.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.
Claims (10)
1. An electronic scale, characterized in that the electronic scale comprises: the device comprises a tray for supporting an object, a control machine for identifying and analyzing the object and a touch screen for information interaction with a user;
the controller is arranged on the side edge of the tray, the touch screen is arranged on the side edge of the controller, and the front surface of the touch screen faces the top surface of the tray;
the bottom edge of the touch screen is provided with a first radar assembly, the first radar assembly faces the top surface of the tray, the top edge of the touch screen is provided with a camera, and the camera faces the top surface of the tray.
2. The electronic scale according to claim 1, wherein a second radar unit is provided inside the controller, and the second radar unit faces the top surface of the tray.
3. The electronic scale according to claim 1, wherein an extension rod is provided on the top edge of the touch screen, and the camera is disposed on the extension rod so that the camera is located above and faces the top surface of the tray.
4. The electronic scale according to claim 1, further comprising a scale body for holding the tray;
the scale body is arranged at the bottom of the tray.
5. A method for object recognition in combination with a radar and a camera, wherein the method is applied to an electronic scale according to any one of claims 1-4, the method comprising:
respectively acquiring radar detection data acquired by a radar and object image data acquired by a camera;
determining a radar recognition result corresponding to the radar detection data by using a preset machine learning classification algorithm, and determining an image recognition result corresponding to the object image data by using a preset image recognition algorithm;
and fusing the radar identification result and the image identification result in a Kalman filtering mode to obtain an object identification result.
6. The object recognition method combining radar and a camera according to claim 5, wherein the determining the radar recognition result corresponding to the radar detection data by using a preset machine learning classification algorithm includes:
converting the radar detection data to generate a radar image;
and inputting the radar image into a machine learning algorithm based on a convolutional neural network to obtain a radar identification result.
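Outside the claim language, the idea of claim 6 — feed the converted radar image to a convolutional classifier — can be sketched in plain NumPy. The kernels and weights below are random placeholders standing in for a trained convolutional-neural-network model; only the data flow (convolution, pooling, softmax) is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D convolution (cross-correlation) of one channel."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def classify(radar_image, kernels, weights):
    """Conv layer -> ReLU -> global average pool -> linear -> softmax."""
    feats = np.array([conv2d_valid(radar_image, k).clip(min=0).mean()
                      for k in kernels])          # one pooled value per kernel
    logits = weights @ feats
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                        # class probabilities

radar_image = rng.standard_normal((8, 16))        # stand-in range-angle map
kernels = rng.standard_normal((4, 3, 3))          # 4 random 3x3 filters
weights = rng.standard_normal((5, 4))             # 5 hypothetical classes
probs = classify(radar_image, kernels, weights)
print(probs.argmax())                             # predicted class index
```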
7. The object recognition method combining a radar and a camera according to claim 6, wherein the radar detection data includes radar intermediate frequency raw data of a plurality of channels;
the converting the radar detection data to generate a radar image includes:
arranging the radar intermediate-frequency raw data of each channel within one frequency-modulation period along the horizontal direction, and stacking the channels along the vertical direction, to obtain a two-dimensional radar sampling data matrix;
and performing a Fourier transform on each row and each column of the radar sampling data matrix, respectively, to obtain a two-dimensional range-angle radar image.
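A minimal NumPy sketch of the conversion in claim 7, under the assumption that rows hold the intermediate-frequency samples of one frequency-modulation period and the receive channels are stacked vertically; the channel and sample counts (and the random data) are invented for illustration:

```python
import numpy as np

# Hypothetical dimensions: 8 receive channels, 256 IF samples per chirp.
num_channels, num_samples = 8, 256

# Rows: one frequency-modulation period of IF samples per channel;
# stacking the channels vertically gives the 2-D sampling data matrix.
rng = np.random.default_rng(0)
data = rng.standard_normal((num_channels, num_samples))

# FFT along each row resolves range; FFT along each column resolves angle.
range_fft = np.fft.fft(data, axis=1)          # per-row (fast-time) FFT
range_angle = np.fft.fftshift(                # per-column (channel) FFT,
    np.fft.fft(range_fft, axis=0), axes=0)    # shifted to center zero angle

radar_image = np.abs(range_angle)             # magnitude image for the CNN
print(radar_image.shape)  # prints (8, 256): angle bins x range bins
```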
8. The object recognition method combining a radar and a camera according to claim 6, further comprising:
labeling the image identification result to obtain a label image;
and performing model optimization training on the machine learning algorithm based on the convolutional neural network by using the label image.
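Illustrative note: the model-optimization step of claim 8 needs a full training framework for the convolutional network, but its core — a camera-derived label supervising the radar branch — can be sketched as cross-entropy gradient steps on a softmax layer standing in for the network's final layer. All dimensions, data, and the learning rate are placeholders:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def finetune_step(weights, features, label_index, lr=0.1):
    """One cross-entropy gradient step on a softmax classifier: the
    camera-derived label supervises the radar branch's final-layer weights."""
    probs = softmax(weights @ features)
    target = np.zeros_like(probs)
    target[label_index] = 1.0
    grad = np.outer(probs - target, features)      # dL/dW for softmax + CE
    return weights - lr * grad

rng = np.random.default_rng(2)
weights = rng.standard_normal((3, 4))              # 3 classes, 4 pooled features
features = rng.standard_normal(4)                  # features from one label image
before = softmax(weights @ features)[2]
for _ in range(200):                               # repeated supervision by label 2
    weights = finetune_step(weights, features, label_index=2)
after = softmax(weights @ features)[2]
print(after > before)  # prints True: confidence in the labeled class grows
```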
9. The object recognition method combining a radar and a camera according to any one of claims 6 to 8, wherein, after the step of fusing the radar recognition result and the image recognition result by means of Kalman filtering to obtain the object recognition result, the method further comprises:
displaying the object recognition result, and acquiring correction data input by a user when the object recognition result is inconsistent with an actual object;
and re-identifying the radar detection data by using the correction data, to obtain and store a secondary radar identification result.
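Outside the claim language, one hypothetical way to realize claim 9's secondary identification is to store the user's correction together with the radar data and reuse it for later detections that lie close to a stored sample. The class name, distance threshold, and nearest-neighbour rule below are assumptions, not the patent's method:

```python
import numpy as np

class CorrectionStore:
    """Hypothetical store of user-corrected radar samples.

    When the fused recognition result is wrong, the user's corrected label
    is paired with the radar detection data; later detections close enough
    to a stored sample (nearest neighbour on the flattened radar image)
    reuse the corrected label as the secondary identification result.
    """
    def __init__(self):
        self.samples, self.labels = [], []

    def add_correction(self, radar_image, corrected_label):
        self.samples.append(np.asarray(radar_image, dtype=float).ravel())
        self.labels.append(corrected_label)

    def secondary_identify(self, radar_image, max_dist=1.0):
        if not self.samples:
            return None
        query = np.asarray(radar_image, dtype=float).ravel()
        dists = [np.linalg.norm(query - s) for s in self.samples]
        best = int(np.argmin(dists))
        return self.labels[best] if dists[best] <= max_dist else None

store = CorrectionStore()
store.add_correction([[0.9, 0.1], [0.2, 0.8]], "apple")
print(store.secondary_identify([[0.9, 0.1], [0.2, 0.8]]))  # prints apple
```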
10. An object recognition apparatus combining a radar and a camera, wherein the apparatus is applied to the electronic scale according to any one of claims 1 to 4, the apparatus comprising:
the acquisition module is used for acquiring radar detection data collected by a radar and object image data collected by a camera, respectively;
the separate identification module is used for determining a radar identification result corresponding to the radar detection data by using a preset machine learning classification algorithm, and determining an image identification result corresponding to the object image data by using a preset image identification algorithm;
and the fusion identification module is used for fusing the radar identification result and the image identification result in a Kalman filtering mode to obtain an object identification result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111488335.9A CN114264361B (en) | 2021-12-07 | 2021-12-07 | Object identification method and device combining radar and camera and intelligent electronic scale |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114264361A true CN114264361A (en) | 2022-04-01 |
CN114264361B CN114264361B (en) | 2024-07-23 |
Family
ID=80826450
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111488335.9A Active CN114264361B (en) | 2021-12-07 | 2021-12-07 | Object identification method and device combining radar and camera and intelligent electronic scale |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114264361B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118209190A (en) * | 2024-05-21 | 2024-06-18 | 常州检验检测标准认证研究院 | Method for processing weighing signals of portable intelligent electronic scale |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
RU2099743C1 (en) * | 1995-12-07 | 1997-12-20 | Военная академия противовоздушной обороны сухопутных войск Российской Федерации | Method of construction of 2d radar image of rectilinear flying target with multifrequency narrow-band probing |
CN101697006A (en) * | 2009-09-18 | 2010-04-21 | 北京航空航天大学 | Target identification method based on data fusion of airborne radar and infrared imaging sensor |
DE102015222043A1 (en) * | 2015-11-10 | 2017-05-11 | Robert Bosch Gmbh | Method for operating an OFDM radar device |
WO2018077121A1 (en) * | 2016-10-24 | 2018-05-03 | 合肥美的智能科技有限公司 | Method for recognizing target object in image, method for recognizing food article in refrigerator and system |
CN109283497A (en) * | 2018-10-19 | 2019-01-29 | 西安电子科技大学 | Bistatic FDA-MIMO distance by radar cheating interference recognition methods |
RU2018140361A (en) * | 2018-11-15 | 2020-05-15 | Федеральное государственное казенное военное образовательное учреждение высшего образования "Военный учебно-научный центр Военно-воздушных сил "Военно-воздушная академия имени профессора Н.Е. Жуковского и Ю.А. Гагарина" (г. Воронеж) Министерства обороны Российской Федерации | METHOD FOR FORMING RADAR IMAGING |
US20200240829A1 (en) * | 2019-01-25 | 2020-07-30 | Panasonic Intellectual Property Management Co., Ltd. | Smart weighing scale and methods related thereto |
WO2020155939A1 (en) * | 2019-01-31 | 2020-08-06 | 广州视源电子科技股份有限公司 | Image recognition method and device, storage medium and processor |
CN111753757A (en) * | 2020-06-28 | 2020-10-09 | 浙江大华技术股份有限公司 | Image recognition processing method and device |
CN112183463A (en) * | 2020-10-23 | 2021-01-05 | 珠海大横琴科技发展有限公司 | Ship identification model verification method and device based on radar image |
CN112307890A (en) * | 2020-09-22 | 2021-02-02 | 西人马帝言(北京)科技有限公司 | Object identification method and device, object identification equipment and storage medium |
CN112946596A (en) * | 2019-12-11 | 2021-06-11 | 三星电子株式会社 | Method and apparatus for identifying radar data |
KR102272411B1 (en) * | 2020-08-12 | 2021-07-02 | 국방과학연구소 | Method and apparatus for learning artificial neural network to improve the target recognition of simulation-image database in sar image |
US20210302569A1 (en) * | 2018-07-23 | 2021-09-30 | Acconeer Ab | Autonomous moving object |
Also Published As
Publication number | Publication date |
---|---|
CN114264361B (en) | 2024-07-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11151427B2 (en) | Method and apparatus for checkout based on image identification technique of convolutional neural network | |
CN108875664B (en) | Method and device for identifying purchased goods and vending machine | |
US7142124B2 (en) | Packaging incorporating volume-measurement capability using RFID tags | |
US10948338B2 (en) | Digital product label generation using modular scale device | |
CN109556695B (en) | Weighing method, weighing device, unmanned sales counter and unmanned sales method | |
JP7374453B2 (en) | Trained model generation method, trained model generation device, product discrimination method, product discrimination device, product discrimination system, and weighing device | |
US11468400B1 (en) | System to determine item weight using data from weight sensors | |
CN114264361A (en) | Object identification method and device combining radar and camera and intelligent electronic scale | |
US11436557B1 (en) | Interaction determination using data from weight sensors | |
WO2023280124A1 (en) | Weighing processing method and apparatus, and weighing devices | |
CN110909698A (en) | Electronic scale recognition result output method, system, device and readable storage medium | |
CN111126990A (en) | Automatic article identification method, settlement method, device, terminal and storage medium | |
US11353322B2 (en) | Surface characteristic inspection apparatus and surface characteristic inspection program | |
AU2018201512A1 (en) | Methods, systems, and computer readable media for tracking consumer interactions with products using electromagnetic beam sensors | |
CN109615361A (en) | Payment control method, device, equipment and the storage medium of automatically vending system | |
US20110259959A1 (en) | Checkout container and checkout operation therefor | |
WO2021048813A1 (en) | Scale and method for the automatic recognition of a product | |
CN209486782U (en) | From mobile vending machine people | |
US11544933B2 (en) | Smart reader system | |
CN112508659B (en) | Commodity settlement processing method and device, computing equipment and computer storage medium | |
US20170091703A1 (en) | Tracking merchandise using watermarked bags | |
CN114264360A (en) | Object identification method and device based on radar and intelligent electronic scale | |
US11263583B1 (en) | Inferred determination of interaction using data from weight sensors | |
CN112163582A (en) | Weighing method and device based on kitchen multi-sensor integrated scale and intelligent terminal | |
CN110455390A (en) | A kind of part detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||