CN115730236A - Drug identification and acquisition method, device, and storage medium based on human-computer interaction - Google Patents

Drug identification and acquisition method, device, and storage medium based on human-computer interaction

Info

Publication number
CN115730236A
Authority
CN
China
Prior art keywords
medicine
identification
information
robot
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211486821.1A
Other languages
Chinese (zh)
Other versions
CN115730236B (en)
Inventor
黄向荣
王坚
余佳珂
夏梓源
涂昱坦
杨名
樊谨
张波涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University
Priority to CN202211486821.1A
Publication of CN115730236A
Application granted
Publication of CN115730236B
Legal status: Active
Anticipated expiration

Abstract

The invention discloses a medicine identification and acquisition method based on human-computer interaction, which comprises the following steps: determining the kind of medicine required; the robot moves to a fixed medicine storage position using SLAM, and acquires and stores field information using a visual servo method; the image obtained by the depth camera is processed by histogram equalization to enhance its overall contrast; the target object is identified by a multi-feature fusion object identification method, and the position information of the target medicine is determined; the multi-degree-of-freedom soft-gripper robotic arm first grasps the medicine ranked highest by matching degree, brings it into the elderly person's field of view, and asks whether it is the required medicine; if it meets the requirement, the robot moves in front of the elderly person. The method provides a scheme for identifying occluded objects and planning a globally optimal path, greatly reduces the influence of complex environments on the working efficiency of the robot, and further improves the accuracy of identifying and grasping the specified medicine.

Description

Drug identification and acquisition method, device, and storage medium based on human-computer interaction
Technical Field
The invention belongs to the technical field of robot control, and relates to a method, a device, and a storage medium for identifying and acquiring medicine based on human-computer interaction.
Background
A nursing robot is a semi-autonomous or fully autonomous working robot that can provide necessary life assistance to people with disabilities; the robot is therefore required to have good human-computer interaction capability and high processing efficiency in the face of abnormal conditions. Current nursing robots still suffer from low target identification accuracy and strong sensitivity to complex environments.
The working spaces in which nursing robots operate, such as home environments, hospitals, and nursing homes, are usually unstructured or semi-structured; target medicines are often placed out of order, which makes manipulating them with a robotic arm challenging. The robotic arm is usually mounted on a mobile platform, and the additional degrees of freedom and self-positioning disturbances introduced by the platform further increase the difficulty of identifying and manipulating the target. Achieving accurate positioning and grasping of targets through multi-sensor fusion in complex indoor environments is therefore of great significance. To this end, the invention provides a medicine identification and acquisition method for a nursing robot based on human-computer interaction.
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides a method, a device, and a storage medium for identifying and acquiring medicine based on human-computer interaction. It provides a scheme for identifying occluded objects and planning a globally optimal path, greatly reduces the influence of complex environments on the working efficiency of the robot, and further improves the accuracy of identifying and grasping the specified medicine.
The invention provides a medicine identification and acquisition method based on human-computer interaction, applied to an intelligent robot with a robotic arm mounted on a mobile platform. The medicine identification and acquisition method for an elderly-care nursing robot comprises the following steps:
step (1), obtaining the characteristic parameters of the voice using MFCC parameter extraction, performing DTW matching against voice templates prepared in advance, mapping the voice information to a medicine image template, and determining the kind of medicine required;
the robot moves to a fixed medicine storage position through an SLAM technology, acquires and stores field information by adopting a visual servo method, and sends information reminding to family members through a network module, wherein the information comprises an identification result and a robot pose;
step (3), processing the image obtained by the depth camera using histogram equalization to enhance its overall contrast;
step (4), identifying the target object using a multi-feature fusion object identification method, and determining the position information of the target medicine;
step (5), the multi-degree-of-freedom soft-gripper robotic arm first grasping the medicine ranked highest by matching degree, bringing it into the elderly person's field of view, and asking whether it is the required medicine; if it meets the requirement, moving in front of the elderly person;
step (6), if the selected medicine does not meet the requirement, repeating steps (2) to (5), selecting the remaining medicine with the highest matching degree in turn, until the requirement is met or an instruction to stop searching is received;
and step (7), after the elderly person issues a finished-using instruction, taking the medicine from the elderly person's hand, returning it to its original placement position, and finally moving to the initial position to await further instructions.
The invention has the beneficial effects that:
the invention fully considers the man-machine interaction with the disabled, increases the convenience and intelligence of the nursing robot in the use process and provides more comprehensive and thorough service;
the invention provides a scheme for identifying the shielding object and planning the global optimal path, thereby greatly reducing the influence of the complex environment on the working efficiency of the robot and further improving the identification and grabbing accuracy of the specified medicine;
the invention can upload the pose and environment information of the robot in real time, help family members to know the situation in time and issue further instructions, and reduce the probability of accidents.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention.
In the drawings:
Fig. 1 is a flowchart of the task implementation method according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. In addition, numerous specific details are set forth below in order to provide a better understanding of the present invention. It will be understood by those skilled in the art that the present invention may be practiced without some of these specific details. In some instances, methods, procedures, components, and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present invention.
The embodiment is as follows:
As shown in Fig. 1, the medicine identification and acquisition method for the elderly-care nursing robot is based on an elderly-care nursing robot configured to move and grasp articles: the robot carries a mobile platform at the bottom, a robotic arm and a depth camera at the top, and is equipped with at least a lidar;
The medicine identification and acquisition method for the elderly-care nursing robot according to the invention comprises the following steps:
Step (1): extract the characteristic parameters of the voice using MFCC parameter extraction, perform DTW matching against voice templates prepared in advance, map the voice information to a medicine image template, and determine the kind of medicine required. The specific steps are as follows:
1-1 Pre-emphasis: pass the voice signal through a high-pass filter to boost the high-frequency part, obtaining the signal function y(t):

$$y(t) = x(t) - \mu x(t-1) \qquad \text{(Formula 1)}$$

where x(t) denotes the original speech signal and $\mu$ denotes the pre-emphasis parameter;
1-2 Framing: group every N sampling points into one frame, and add an overlap region of $M_m$ sampling points between two adjacent frames, where $M_m$ is typically about half of N;
1-3 Windowing: apply a Hamming window to the framed signal, using the window function W(n):

$$W(n) = a_0 - (1 - a_0)\cos\left(\frac{2\pi n}{N-1}\right), \quad 0 \le n \le N-1 \qquad \text{(Formula 2)}$$

where $a_0$ denotes the Hamming parameter and N denotes the frame size;
Applying the window function to each frame yields a new signal function S'(n):

$$S'(n) = S(n) \times W(n) \qquad \text{(Formula 3)}$$
Perform a fast Fourier transform on each frame signal to obtain the spectrum $X_a(k)$ of each frame:

$$X_a(k) = \sum_{n=0}^{N'-1} S'(n)\, e^{-j 2\pi k n / N'}, \quad 0 \le k \le N'-1 \qquad \text{(Formula 4)}$$

where N' denotes the number of points of the Fourier transform;
1-4 Pass the energy spectrum through a set of Mel-scale triangular filters, and compute the logarithmic energy s(m) output by each filter:

$$s(m) = \ln\left(\sum_{k=0}^{N'-1} |X_a(k)|^2 H_m(k)\right), \quad 0 \le m \le M_t \qquad \text{(Formula 5)}$$

where $H_m(k)$ denotes the frequency response of the m-th triangular filter;
Obtain the MFCC coefficients C(n) through a discrete cosine transform (DCT):

$$C(n) = \sum_{m=0}^{M_t-1} s(m)\cos\left(\frac{\pi n (m + 0.5)}{M_t}\right), \quad n = 1, 2, \ldots, L \qquad \text{(Formula 6)}$$

where $M_t$ denotes the number of triangular filters and L denotes the MFCC coefficient order;
1-5 Compare the obtained MFCC coefficients with the medicine templates one by one, and compute the matching degree $M_d$:

$$M_d = P_C + P_O \qquad \text{(Formula 7)}$$

where $P_C$ denotes the probability of a color match and $P_O$ denotes the probability of a contour match;
Then arrange the matching results in descending order of matching degree;
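As an illustration of sub-steps 1-1 to 1-5, the following is a minimal Python sketch of MFCC extraction and DTW template matching using the librosa library. The file paths, sample rate, number of coefficients, and template names are hypothetical placeholders, not values fixed by the patent, and librosa's internal processing stands in for Formulas (1)-(6).

```python
# Minimal sketch of MFCC extraction and DTW template matching (illustrative only).
# Assumptions: librosa is installed; "command.wav" and the template paths are
# hypothetical; sr=16000 and n_mfcc=13 are typical, not patent-mandated, values.
import librosa

def extract_mfcc(path, sr=16000, n_mfcc=13):
    """Load audio and return an (n_mfcc, frames) MFCC matrix."""
    y, _ = librosa.load(path, sr=sr)
    y = librosa.effects.preemphasis(y)   # Formula (1): y(t) = x(t) - mu*x(t-1)
    # Framing, windowing, FFT, Mel filter bank, and DCT (Formulas 2-6)
    # are performed inside librosa.feature.mfcc.
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

def dtw_cost(query, template):
    """Normalized accumulated DTW alignment cost between two MFCC sequences."""
    D, wp = librosa.sequence.dtw(X=query, Y=template, metric='euclidean')
    return D[-1, -1] / len(wp)

# Rank the prepared voice templates by similarity to the spoken command.
query = extract_mfcc("command.wav")
templates = {"aspirin": "tpl_aspirin.wav", "ibuprofen": "tpl_ibuprofen.wav"}
costs = {name: dtw_cost(query, extract_mfcc(path))
         for name, path in templates.items()}
ranking = sorted(costs, key=costs.get)   # lowest alignment cost = best match
print("Best-matching medicine template:", ranking[0])
```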
Step (2): the robot moves to the fixed medicine storage position using SLAM, acquires and stores field information using a visual servo method, and sends information reminders to family members through a network module, where the reminders include the identification result and the robot pose. The specific steps are as follows:
2-1 Construct a grid map with the GMapping method, combining the information acquired by the lidar, the inertial measurement unit, and the odometer through coordinate transformation (a coordinate-transformation sketch follows sub-step 2-4);
2-2 Acquire semantic information of the environment through the recognition and positioning functions of the depth camera, load it into storage, and build a semantic map model;
2-3 Construct a semantic-information update model so that the robot can update the corresponding semantic information as the environment changes;
2-4 Move to the specified position using a path-planning method, uploading the current pose of the robot in real time;
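To illustrate the coordinate transformation in sub-step 2-1, the sketch below maps a point observed in the robot frame into world coordinates and then into grid-map cell indices. The pose, map origin, and 5 cm resolution are hypothetical example values, not parameters specified by the patent.

```python
# Illustrative sketch of the coordinate transformation used when fusing lidar,
# IMU, and odometer data into a grid map (sub-step 2-1). All numeric values
# below are hypothetical examples.
import numpy as np

def robot_to_world(point_robot, robot_pose):
    """Transform a 2D point from the robot frame into the world frame.

    robot_pose = (x, y, theta): position and heading from odometry/IMU fusion."""
    x, y, theta = robot_pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])          # 2D rotation matrix
    return R @ np.asarray(point_robot) + np.array([x, y])

def world_to_grid(point_world, origin, resolution):
    """Convert world coordinates into integer grid-map cell indices."""
    return tuple(np.floor((np.asarray(point_world) - origin) / resolution).astype(int))

# Example: a lidar return 2 m ahead of a robot at (1.0, 2.0) heading +90 degrees.
pose = (1.0, 2.0, np.pi / 2)
hit_world = robot_to_world([2.0, 0.0], pose)     # approximately (1.0, 4.0)
cell = world_to_grid(hit_world, origin=np.array([0.0, 0.0]), resolution=0.05)
print("Occupied cell indices:", cell)
```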
Step (3): process the image obtained by the depth camera using histogram equalization to enhance its overall contrast. The specific steps are as follows:
3-1 Normalize the gray levels and compute the cumulative distribution function (CDF) of the gray values of the original image;
3-2 Determine the mapping transformation function T(r):

$$T(r) = \int_0^r p_r(w)\, dw \qquad \text{(Formula 8)}$$

where r denotes the normalized gray level of the original image and $p_r$ denotes the probability density function of the original image's gray levels;
For digital images with discrete gray levels, the transformation function $T(r_k)$ can be expressed as:

$$T(r_k) = \sum_{j=0}^{k} p_r(r_j) = \sum_{j=0}^{k} \frac{n_j}{n} \qquad \text{(Formula 9)}$$

where $n_j$ is the number of pixels with gray level $r_j$ and n is the total number of pixels;
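A minimal NumPy sketch of the discrete mapping in Formula (9) follows; it assumes an 8-bit grayscale image and is equivalent in effect to OpenCV's cv2.equalizeHist, which could be used instead.

```python
# Minimal sketch of histogram equalization per Formula (9): each gray level r_k
# is remapped through the cumulative distribution of gray values. Assumes an
# 8-bit grayscale image.
import numpy as np

def equalize_histogram(image):
    """Return a contrast-enhanced copy of an 8-bit grayscale image."""
    hist = np.bincount(image.ravel(), minlength=256)  # n_j: pixel count per level
    cdf = np.cumsum(hist) / image.size                # T(r_k) = sum_j n_j / n
    lut = np.round(cdf * 255).astype(np.uint8)        # rescale CDF to [0, 255]
    return lut[image]                                 # apply the mapping per pixel

# Example with a synthetic low-contrast image (values clustered in [100, 140]).
rng = np.random.default_rng(0)
flat = rng.integers(100, 141, size=(480, 640), dtype=np.uint8)
enhanced = equalize_histogram(flat)
print("before:", flat.min(), flat.max(), "-> after:", enhanced.min(), enhanced.max())
```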
Step (4): identify the target object using a multi-feature fusion object identification method and determine the position information of the target medicine. The specific steps are as follows:
4-1 Target object feature extraction scheme (illustrated by the sketch following sub-step 4-2-3):
4-1-1 Based on color features: when the object has an obvious color contrast with its surroundings, threshold segmentation can be applied; color histogram statistics or color moments are used to separate the key part from the surroundings, yielding an image of the target object;
4-1-2 Based on contour features: treat the edge pixels of the target object as a complete contour and extract the object's contour information, so that fast connected-region analysis can be performed on the image; at the same time, apply polygon approximation to the complete contour and compute the area inside the contour with a rectangular bounding box, or fit the contour and then match it using Hu moments;
4-2 Matching and recognition algorithm:
4-2-1 Use an article fusion identification algorithm based on DSmT (Dezert-Smarandache theory) reasoning, applying the idea of data fusion to fuse the identification information provided by deep learning models of different depths, such as the pre-trained models AlexNet, CaffeNet, and GoogLeNet under the Caffe framework;
4-2-2 Fine-tune the existing pre-trained deep learning models for the specific classification and recognition task; to address the difficulty of constructing belief assignments in DSmT theory, use the deep learning networks' discriminative outputs on the image as evidence-source belief assignments;
4-2-3 Fuse the belief assignments at the decision level using the DSmT combination rule, and match against the corresponding model of the training set, thereby achieving accurate identification of the article;
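To illustrate the color and contour features of sub-step 4-1 (the DSmT decision-level fusion of sub-step 4-2 is beyond a short sketch), the following OpenCV fragment segments a target by color threshold, extracts its largest contour, and matches it against a template contour via Hu moments. The HSV threshold range and the image paths are hypothetical placeholders.

```python
# Illustrative sketch of the color + contour feature pipeline of sub-step 4-1.
# The HSV range, "scene.png", and "template.png" are hypothetical. cv2.matchShapes
# compares contours via Hu moments (lower score = better match).
import cv2
import numpy as np

def find_target_contour(bgr_image, hsv_lo=(0, 120, 70), hsv_hi=(10, 255, 255)):
    """Color-threshold the image (4-1-1) and return the largest contour, or None."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None

scene = cv2.imread("scene.png")        # hypothetical depth-camera color frame
template = cv2.imread("template.png")  # hypothetical medicine template image
c_scene, c_tpl = find_target_contour(scene), find_target_contour(template)
if c_scene is not None and c_tpl is not None:
    x, y, w, h = cv2.boundingRect(c_scene)  # 4-1-2: rectangular bounding box
    score = cv2.matchShapes(c_scene, c_tpl, cv2.CONTOURS_MATCH_I1, 0.0)  # Hu moments
    print(f"target at ({x}, {y}, {w}, {h}), Hu-moment distance = {score:.4f}")
```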
Step (5): the multi-degree-of-freedom soft-gripper robotic arm first grasps the medicine ranked highest by matching degree, brings it into the elderly person's field of view, and asks whether it is the required medicine; if it meets the requirement, the robot moves in front of the elderly person. The specific steps are as follows:
5-1 Construct a visual-servo closed-loop control system;
5-2 Use the depth camera to find the object with the highest matching degree determined in step (1), and compute a specific grasping position;
5-3 Generate feasible grasping trajectories based on the rapidly-exploring random tree (RRT); evaluate the prior probability of each feasible trajectory using probability theory combined with Kalman filtering and modern control theory, and take the trajectory with the highest probability of grasping the target as the grasping trajectory of the robotic arm.
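A minimal 2D sketch of the RRT planner of sub-step 5-3 follows. The workspace bounds, obstacle test, step size, and goal tolerance are hypothetical placeholders, and the Kalman-filter-based prior-probability scoring of candidate trajectories described above is omitted.

```python
# Minimal 2D rapidly-exploring random tree (RRT) sketch for sub-step 5-3.
# All numeric parameters and the obstacle are hypothetical; the probabilistic
# trajectory scoring (Kalman filtering) from the patent is not shown.
import math
import random

def rrt(start, goal, collision_free, step=0.2, max_iter=2000, goal_tol=0.3):
    """Grow a tree from start; return a path to goal as a list of (x, y), or None."""
    nodes, parent = [start], {0: None}
    for _ in range(max_iter):
        # Sample the goal 10% of the time to bias growth toward it.
        sample = goal if random.random() < 0.1 else (random.uniform(0.0, 5.0),
                                                     random.uniform(0.0, 5.0))
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        near, d = nodes[i_near], math.dist(nodes[i_near], sample)
        if d == 0.0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not collision_free(near, new):
            continue
        parent[len(nodes)] = i_near
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:        # close enough: backtrack path
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None

def collision_free(a, b):
    """Hypothetical circular obstacle at (2.5, 2.5) with radius 1.0."""
    mid = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
    return all(math.dist(p, (2.5, 2.5)) > 1.0 for p in (a, b, mid))

path = rrt((0.5, 0.5), (4.5, 4.5), collision_free)
print("found path with", len(path) if path else 0, "waypoints")
```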
Step (6): if the selected medicine does not meet the requirement, repeat steps (2) to (5), selecting the remaining medicine with the highest matching degree in turn, until the requirement is met or an instruction to stop searching is received;
Step (7): after the elderly person finishes using the medicine, return it to its original placement position, move to the initial position, and await further instructions. Specifically:
7-1 After the elderly person issues a finished-using instruction, the robot visually recognizes the medicine held by the elderly person and drives the robotic arm to grasp it accurately;
7-2 The robot moves to the fixed medicine storage point and determines the return location using the medicine position information stored during grasping;
7-3 Drive the robotic arm to return the medicine to its original position;
7-4 The robot moves to the initial position and enters the standby state.

Claims (9)

1. A medicine identification and acquisition method based on human-computer interaction is characterized by comprising the following steps:
step (1), determining the required medicine according to the voice signal;
step (2), the robot moves to a fixed medicine storage position using SLAM technology, and acquires and stores field information using a visual servo method, the field information comprising positioning information and image information; meanwhile, an information reminder is sent through a network module, wherein the information comprises an identification result and a robot pose;
step (3), processing the obtained image using histogram equalization to enhance its overall contrast, specifically comprising the following steps:
3-1 normalizing the gray levels and computing the cumulative distribution function of the gray values of the original image;
3-2 determining the mapping transformation function T(r):

$$T(r) = \int_0^r p_r(w)\, dw \qquad \text{(Formula 1)}$$

where r denotes the normalized gray level of the original image and $p_r$ denotes the probability density function of the original image's gray levels;
for digital images with discrete gray levels, the transformation function $T(r_k)$ can be expressed as:

$$T(r_k) = \sum_{j=0}^{k} p_r(r_j) = \sum_{j=0}^{k} \frac{n_j}{n} \qquad \text{(Formula 2)}$$

where $n_j$ is the number of pixels with gray level $r_j$ and n is the total number of pixels;
step (4), identifying the target object using a multi-feature fusion object identification method, and determining the position information of the target medicine;
step (5), the multi-degree-of-freedom soft-gripper robotic arm first grasping the medicine ranked highest by matching degree, bringing it into the patient's field of view, and asking whether it is the required medicine; if it meets the requirement, moving in front of the patient;
step (6), if the selected medicine does not meet the requirement, repeating steps (2) to (5), selecting the remaining medicine with the highest matching degree in turn, until the requirement is met or an instruction to stop searching is received;
and step (7), after the patient finishes using the medicine, taking it from the patient's hand, returning it to its original placement position, moving to the initial position, and awaiting further instructions.
2. The human-computer interaction based medicine identification and acquisition method according to claim 1, wherein step (1) comprises the following sub-steps:
1-1 pre-emphasis: acquiring the voice signal and passing it through a high-pass filter to boost the high-frequency part, obtaining the signal function y(t):

$$y(t) = x(t) - \mu x(t-1) \qquad \text{(Formula 3)}$$

where x(t) denotes the original speech signal and $\mu$ denotes the pre-emphasis parameter;
1-2 framing: grouping every N sampling points into one frame, and adding an overlap region of $M_m$ sampling points between two adjacent frames, where $M_m$ is typically about half of N;
1-3 windowing: applying a Hamming window to the framed signal, using the window function W(n):

$$W(n) = a_0 - (1 - a_0)\cos\left(\frac{2\pi n}{N-1}\right), \quad 0 \le n \le N-1 \qquad \text{(Formula 4)}$$

where $a_0$ denotes the Hamming parameter and N denotes the frame size;
applying the window function to each frame yields a new signal function S'(n):

$$S'(n) = S(n) \times W(n) \qquad \text{(Formula 5)}$$
performing a fast Fourier transform on each frame signal to obtain the spectrum $X_a(k)$ of each frame:

$$X_a(k) = \sum_{n=0}^{N'-1} S'(n)\, e^{-j 2\pi k n / N'}, \quad 0 \le k \le N'-1 \qquad \text{(Formula 6)}$$

where N' denotes the number of points of the Fourier transform;
1-4 passing the energy spectrum through a set of Mel-scale triangular filters, and computing the logarithmic energy s(m) output by each filter:

$$s(m) = \ln\left(\sum_{k=0}^{N'-1} |X_a(k)|^2 H_m(k)\right), \quad 0 \le m \le M_t \qquad \text{(Formula 7)}$$

where $H_m(k)$ denotes the frequency response of the m-th triangular filter;
obtaining the MFCC coefficients C(n) through a discrete cosine transform:

$$C(n) = \sum_{m=0}^{M_t-1} s(m)\cos\left(\frac{\pi n (m + 0.5)}{M_t}\right), \quad n = 1, 2, \ldots, L \qquad \text{(Formula 8)}$$

where $M_t$ denotes the number of triangular filters and L denotes the MFCC coefficient order;
1-5 comparing the obtained MFCC coefficients with the medicine templates one by one, and arranging them in descending order of matching degree.
3. The human-computer interaction based medicine identification and acquisition method according to claim 2, wherein step (2) comprises the following specific steps:
2-1 constructing a grid map with the GMapping method, combining the information acquired by the lidar, the inertial measurement unit, and the odometer through coordinate transformation;
2-2 acquiring semantic information of the environment through the recognition and positioning functions of the depth camera, loading it into storage, and building a semantic map model;
2-3 constructing a semantic-information update model so that the robot can update the corresponding semantic information as the environment changes;
2-4 moving to the specified position using a path-planning method, uploading the current pose of the robot in real time.
4. The human-computer interaction based medicine identification and acquisition method according to claim 2, wherein the matching degree $M_d$ in sub-step (1-5) is computed as:

$$M_d = P_C + P_O \qquad \text{(Formula 9)}$$

where $P_C$ denotes the probability of a color match and $P_O$ denotes the probability of a contour match.
5. The human-computer interaction based medicine identification and acquisition method according to claim 4, wherein the specific contents of step (4) are as follows:
4-1 extracting the features of the target object:
4-1-1 based on color features: when the object has an obvious color contrast with its surroundings, threshold segmentation can be applied; color histogram statistics or color moments are used to separate the key part from the surroundings, yielding an image of the target object;
4-1-2 based on contour features: determining the image edge pixels of the target object as a complete contour and extracting the object's contour information, so that fast connected-region analysis can be performed on the image, with adjacent pixels of equal value forming sets that yield the edge features of different objects; at the same time, applying polygon approximation to the complete contour and computing the area inside the contour with a rectangular bounding box, and matching the fitted contour using Hu moments;
4-2 matching and recognition algorithm:
4-2-1 using an article fusion identification algorithm based on DSmT reasoning to fuse the identification information provided by multiple deep learning models;
4-2-2 fine-tuning the existing pre-trained deep learning models for the specific classification and recognition task, and using the deep learning networks' discriminative outputs on the image as evidence-source belief assignments;
4-2-3 fusing the belief assignments at the decision level using the DSmT combination rule, and matching against the corresponding model of the training set, thereby achieving accurate identification of the article;
4-3 storing the target position information using the depth camera, for comparison when returning the medicine.
6. The human-computer interaction based medicine identification and acquisition method according to claim 5, wherein the specific contents of step (5) are as follows:
5-1 constructing a visual-servo closed-loop control system;
5-2 using the depth camera to find the target with the highest matching degree determined in step (1), and computing a specific grasping position;
5-3 generating feasible grasping trajectories based on the rapidly-exploring random tree (RRT); evaluating the prior probability of each feasible trajectory using probability theory combined with Kalman filtering and modern control theory, and taking the trajectory with the highest probability of grasping the target as the grasping trajectory of the robotic arm.
7. The human-computer interaction based medicine identification and acquisition method according to claim 6, wherein the specific contents of step (7) are as follows:
7-1 after the elderly person issues a finished-using instruction, the robot visually recognizes the medicine held by the elderly person and drives the robotic arm to grasp it accurately;
7-2 the robot moves to the fixed medicine storage point and determines the return location using the medicine position information stored during grasping;
7-3 the robotic arm is driven to return the medicine to its original position;
7-4 the robot moves to the initial position and enters the standby state.
8. A computer-readable storage medium, on which a computer program is stored which, when executed in a computer, causes the computer to carry out the method of any one of claims 1-7.
9. A computing device comprising a memory having executable code stored therein and a processor that, when executing the executable code, implements the method of any of claims 1-7.
CN202211486821.1A 2022-11-25 2022-11-25 Medicine identification and acquisition method, device, and storage medium based on human-computer interaction Active CN115730236B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211486821.1A CN115730236B (en) 2022-11-25 2022-11-25 Medicine identification and acquisition method, device, and storage medium based on human-computer interaction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211486821.1A CN115730236B (en) 2022-11-25 2022-11-25 Medicine identification and acquisition method, device, and storage medium based on human-computer interaction

Publications (2)

Publication Number Publication Date
CN115730236A (en) 2023-03-03
CN115730236B (en) 2023-09-22

Family

ID=85298233

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211486821.1A Active CN115730236B (en) 2022-11-25 2022-11-25 Medicine identification and acquisition method, device, and storage medium based on human-computer interaction

Country Status (1)

Country Link
CN (1) CN115730236B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102323817A (en) * 2011-06-07 2012-01-18 上海大学 Service robot control platform system and multimode intelligent interaction and intelligent behavior realizing method thereof
US20190375103A1 (en) * 2018-06-08 2019-12-12 Ankobot (Shenzhen) Smart Technologies Co., Ltd. Navigation method, navigation system, movement control system and mobile robot
CN109102547A (en) * 2018-07-20 2018-12-28 上海节卡机器人科技有限公司 Robot based on object identification deep learning model grabs position and orientation estimation method
US20220362939A1 (en) * 2019-10-24 2022-11-17 Ecovacs Commercial Robotics Co., Ltd. Robot positioning method and apparatus, intelligent robot, and storage medium
CN112223288A (en) * 2020-10-09 2021-01-15 南开大学 Visual fusion service robot control method
CN114029963A (en) * 2022-01-12 2022-02-11 北京具身智能科技有限公司 Robot operation method based on visual and auditory fusion
CN115081567A (en) * 2022-06-15 2022-09-20 东南大学 Medicine high-reliability identification method based on image, RFID and voice multi-element data fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MAO GUO et al., "Jointly Learning of Visual and Auditory: A New Approach for RS Image and Audio Cross-Modal Retrieval", IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, pp. 4644-4654 *
达瓦里希也喝脉动, "Speech Recognition Lecture 4: Speech Feature Parameters MFCC", pp. 108-113, retrieved from the Internet: https://zhuanlan.zhihu.com/p/88625876 *

Also Published As

Publication number Publication date
CN115730236B (en) 2023-09-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant