CN112295617A - Intelligent beaker based on experimental scene situation perception - Google Patents

Intelligent beaker based on experimental scene situation perception

Info

Publication number
CN112295617A
Authority
CN
China
Prior art keywords
beaker
intention
voice
scene
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010984196.8A
Other languages
Chinese (zh)
Other versions
CN112295617B (en)
Inventor
冯志全
董頔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Jinan
Original Assignee
University of Jinan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Jinan filed Critical University of Jinan
Priority to CN202010984196.8A priority Critical patent/CN112295617B/en
Publication of CN112295617A publication Critical patent/CN112295617A/en
Application granted granted Critical
Publication of CN112295617B publication Critical patent/CN112295617B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B01 - PHYSICAL OR CHEMICAL PROCESSES OR APPARATUS IN GENERAL
    • B01L - CHEMICAL OR PHYSICAL LABORATORY APPARATUS FOR GENERAL USE
    • B01L3/00 - Containers or dishes for laboratory use, e.g. laboratory glassware; Droppers
    • B01L3/50 - Containers for the purpose of retaining a material to be analysed, e.g. test tubes
    • B01L3/508 - Containers for the purpose of retaining a material to be analysed, e.g. test tubes, rigid containers not provided for above
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Chemical & Material Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Analytical Chemistry (AREA)
  • Hematology (AREA)
  • Clinical Laboratory Science (AREA)
  • Chemical Kinetics & Catalysis (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an intelligent beaker based on experimental scene situation perception, which comprises a multi-modal input perception module, an intention understanding module and an intention fusion module. The multi-modal input perception module receives voice, attitude sensor and visual information. The intention understanding module obtains the voice intention according to the relevance between the voice keywords and the intentions in a voice intention library; obtains the operation behavior intention from the three-dimensional information of the beaker acquired through interaction of the attitude sensor and the distance measuring equipment; and, based on the image acquired by the image acquisition equipment, perceives the motion track of the beaker with SLAM technology and perceives the scene situation with a region generation network to obtain the scene perception intention. The intention fusion module performs fuzzy reasoning on the user intention with fuzzy logic operators and combines the result with the scene perception intention to obtain the final intention fusion result. Through multi-modal fusion and an intention understanding algorithm, the intelligent beaker solves the problems of process monitoring of user behaviors, experimental scene understanding and a realistic quantitative interaction method.

Description

Intelligent beaker based on experimental scene situation perception
Technical Field
The invention belongs to the technical field of chemical experiments in middle schools, and particularly relates to an intelligent beaker based on experimental scene situation perception.
Background
Chemistry is an important middle school subject, and it is grounded in experiments; the beaker plays a central role in middle school chemistry experiments.
In current middle school chemistry classes, some experiments are dangerous, show no obvious phenomena, or cannot be observed at all. Students are often troubled by irregular operation and incorrect experimental steps. Traditional experimental teaching also lacks interaction between teacher and students, which makes it difficult for the teacher to follow each student's progress, to give timely prompts when a wrong operating step occurs, or to guide students one by one. In addition, most middle schools cannot allow students to perform exploratory experiments with large quantities of dangerous reagents.
Disclosure of Invention
In order to solve the above technical problems, the invention provides an intelligent beaker based on experimental scene situation perception. Process monitoring of user behaviors, experimental scene understanding and realistic quantitative interaction are realized through multi-modal fusion and an intention understanding algorithm.
In order to achieve the purpose, the invention adopts the following technical scheme:
an intelligent beaker based on experimental scene situation perception comprises a multi-modal input perception module, an intention understanding module and an intention fusion module;
the multi-modal input perception module is used for receiving voice information and extracting keywords; acquiring the motion track of the beaker and the relative position relation between the beaker and surrounding objects through image acquisition equipment arranged on the beaker; sensing changes of the beaker's attitude through an attitude sensor arranged on the beaker; and measuring the distance between the beaker and surrounding objects through distance measuring equipment;
the intention understanding module is used for obtaining the voice intention according to the relevance between the voice keyword and the intentions in a voice intention library; obtaining the operation behavior intention from the three-dimensional information of the beaker acquired through interaction of the attitude sensor and the distance measuring equipment; and, based on the image acquired by the image acquisition equipment, sensing the beaker motion track with SLAM technology and perceiving the scene situation with a region generation network to obtain the scene perception intention;
the intention fusion module is used for fuzzifying the operation behavior intention through membership functions; fuzzifying the voice intention through a continuous discourse domain; performing fuzzy reasoning on the fuzzified voice intention and operation intention with fuzzy logic operators; and combining the result with the scene perception intention to obtain the final intention fusion result.
Further, an image acquisition device and a distance measurement device are arranged on the same vertical line of the outer wall of the beaker; and an attitude sensor is arranged below the middle horizontal line of the outer wall of the beaker.
Further, the method for obtaining the voice intention according to the relevance between the voice keyword and the intentions in the voice intention library comprises the following steps:
obtaining a voice keyword k from the voice channel, together with the relevance qi of the keyword to each intention in the voice intention library, where q = [q1 q2 ... qm]; the standard results corresponding to all m intentions are V = [v1 v2 ... vm];
calculating the distance degree D between k and V, where D = { |k − vi|·(1 − qi) | i = 1, 2, ..., m }; the vi corresponding to the minimum distance degree is the voice intention.
Further, the method for obtaining the operation behavior intention by interactively acquiring the three-dimensional information of the beaker through the attitude sensor and the distance measuring equipment comprises the following steps:
obtaining the initial inclination angles ρ0, φ0 and γ0 of the beaker through a sensing channel consisting of the attitude sensor and the distance measuring equipment; the sensor acquires the acceleration components Ax, Ay and Az of the beaker on the X, Y and Z axes of the three-dimensional coordinate system; ρ0 is the initial angle between the X axis and the ground, φ0 is the initial angle between the Y axis and the ground, and γ0 is the initial angle of rotation about the Z axis;
solving the angular attitude ρ, φ, γ from the acceleration components using trigonometric relations (formulas given only as images in the source), where ρ is the angle between the X axis and the ground, φ is the angle between the Y axis and the ground, and γ is the angle of rotation about the Z axis; subtracting the initial angles ρ0, φ0, γ0 from ρ, φ, γ gives the current angular attitude ρ1, φ1, γ1 of the beaker;
calculating the inclination angle θ of the beaker by an averaging method (formula given only as an image in the source);
and judging the operation behavior intention of the user according to the current angular attitude of the beaker.
Further, the method for sensing the movement track of the beaker with SLAM technology based on the image acquired by the image acquisition equipment comprises the following steps:
establishing, with proportional factor K, a coordinate mapping relation between the position coordinates (POSx, POSy, POSz) of the image acquisition equipment and the virtual scene coordinates (Ux, Uy, Uz);
taking the position of the image acquisition equipment initially acquired by SLAM as the origin of coordinates, and using the currently acquired coordinate value P[i] of the image acquisition equipment as the displacement value of the user's virtual hand in the virtual scene, so that the user's virtual hand coordinate p(x, y, z) = (p0x + p[i].x, p0y + p[i].y, p0z + p[i].z), where p0(x, y, z) is the initial coordinate position of the virtual hand;
after p(x, y, z) is obtained, the virtual hand transitions smoothly from its initial coordinate position to p(x, y, z), realizing localization of the image acquisition equipment in the coordinate system and perception of the beaker movement track.
Further, the process of obtaining the scene perception intention by using a region generation network to perceive the scene situation, based on the image acquired by the image acquisition equipment, is as follows:
defining the single-image loss function L({pi}, {ti}) = (1/Ncls)·Σi Lcls(pi, pi*) + λ·(1/Nreg)·Σi pi*·Lreg(ti, ti*), and outputting rectangular candidate regions based on the image;
where Lcls(pi, pi*) is the bounding box classification loss function; Lreg(ti, ti*) is the bounding box regression loss function; λ is a balance factor; Lcls(pi, pi*) = −log[pi·pi* + (1 − pi*)(1 − pi)]; pi is the probability that anchor i is predicted to be the target and pi* is the anchor label, 0 denoting a negative sample and 1 a positive sample; Lreg(ti, ti*) = R(ti − ti*), where R is a robust loss function with R(x) = 0.5x² if |x| < 1 and |x| − 0.5 otherwise; ti = {tx, ty, tw, th} is the vector of 4 parameterized coordinates of the predicted candidate box, and ti* is the coordinate vector of the ground-truth box corresponding to a positive anchor.
Further, the fuzzy controller comprises a fuzzification interface, a fuzzy inference engine, a defuzzification interface and a knowledge base;
the fuzzification interface is used for fuzzifying the matching degree qi of the voice keyword through a continuous discourse domain to obtain qi′, and for fuzzifying the inclination angle θ of the beaker and the distance d from the beaker to the front object through membership functions to obtain θ′ and d′ respectively;
the fuzzy inference engine is used for combining θ′, d′ and qi′ with the knowledge base and the fuzzy control rules to complete fuzzy reasoning and obtain the final intention I of intention understanding; the knowledge base is a fuzzy rule base;
and the defuzzification interface is used for outputting the final intention I after fuzzy reasoning.
Further, the inclination angle θ of the beaker and the distance d from the beaker to the front object are fuzzified through membership functions to obtain θ′ and d′ respectively (the membership function formulas are given only as images in the source), where M is the maximum inclination angle and S is the minimum inclination angle; L is the maximum distance from the beaker to the front object and S is the minimum distance from the beaker to the front object.
Further, the fuzzy control rules take the form:
Rj: If θ′ is Am and d′ is Bn and qi′ is Cik then I is Il
for j = 1, 2, ...; m = 1, 2, 3; n = 1, 2, 3; i = 1, 2, ..., 6; k = 1, 2; l = 1, 2, 3
where Rj is the j-th fuzzy rule; Am, Bn, Cik and Il respectively denote the linguistic variables of θ′, d′, qi′ and I in their discourse domains.
The effects described in this summary are only those of the embodiments, not all effects of the invention; one of the above technical solutions has the following advantages or beneficial effects:
the invention provides an intelligent beaker based on experimental scene situation perception, which comprises a multi-mode input perception module, an intention understanding module and an intention fusion module; the multi-module input sensing module is used for receiving voice information and extracting keywords; acquiring a motion track of the beaker and a relative position relation between the beaker and surrounding objects through image acquisition equipment arranged on the beaker; sensing the change of the posture of the beaker by a posture sensor arranged on the beaker; the distance between the beaker and the surrounding object is measured by a distance measuring device. The intention understanding module is used for obtaining the voice intention according to the relevance between the voice keyword and the intention in the voice intention library; the three-dimensional information of the beaker is obtained through interaction of the attitude sensor and the distance measuring equipment to obtain the operation behavior intention; based on the image acquired by the image acquisition equipment, the motion track of the beaker is sensed by adopting a slam technology, and the scene sensing intention is obtained by adopting a region generation network sensing scene situation. The intention fusion module is used for fuzzifying the operation behavior intention through a membership function; fuzzifying the voice intention through a continuous domain; and carrying out fuzzy reasoning on the voice intention and the operation intention after the fuzzification processing by using a fuzzy logic operator, and combining the scene perception intention to obtain an intention fusion final result. According to the intelligent beaker, on one hand, the problem of lack of teacher guidance in the teaching process is solved through an experiment navigation and experiment result scoring system. On the other hand, the intelligent beaker solves the problems of process monitoring of user behaviors, experimental scene understanding, reality quantification interaction method and the like through multi-mode fusion and intention understanding algorithm. Adopt vision, pronunciation and the mode of sensor fusion to interact in the experimentation, accomplish experiment teaching and guide through voice navigation, can also observe the phenomenon that is difficult for observing in traditional experiment in the experiment scene that virtual reality fuses, make the student easily remember the experiment main points, deepen the experiment impression.
Drawings
Fig. 1 is a schematic diagram of a basic structure of an intelligent beaker based on experimental scene situation awareness according to embodiment 1 of the present invention;
fig. 2 is a block diagram of an algorithm of an intelligent beaker based on situation awareness of an experimental scene in embodiment 1 of the present invention;
fig. 3 is a schematic diagram of the region generation network according to embodiment 1 of the present invention;
FIG. 4 is a schematic structural diagram of a fuzzy controller in embodiment 1 of the present invention;
fig. 5 is a diagram of a multi-modal information fusion framework based on fuzzy logic according to embodiment 1 of the present invention.
Detailed Description
In order to clearly explain the technical features of the present invention, the following detailed description of the present invention is provided with reference to the accompanying drawings. The following disclosure provides many different embodiments, or examples, for implementing different features of the invention. To simplify the disclosure of the present invention, the components and arrangements of specific examples are described below. Furthermore, the present invention may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. It should be noted that the components illustrated in the figures are not necessarily drawn to scale. Descriptions of well-known components and processing techniques and procedures are omitted so as to not unnecessarily limit the invention.
Example 1
Embodiment 1 of the invention provides an intelligent beaker based on experimental scene situation perception, which comprises a multi-modal input perception module, an intention understanding module and an intention fusion module.
The multi-modal input perception module is used for receiving voice information and extracting keywords; acquiring the motion track of the beaker and the relative position relation between the beaker and surrounding objects through image acquisition equipment arranged on the beaker; sensing changes of the beaker's attitude through an attitude sensor arranged on the beaker; and measuring the distance between the beaker and surrounding objects through distance measuring equipment.
The intention understanding module is used for obtaining the voice intention according to the relevance between the voice keyword and the intentions in the voice intention library; obtaining the operation behavior intention from the three-dimensional information of the beaker acquired through interaction of the attitude sensor and the distance measuring equipment; and, based on the image acquired by the image acquisition equipment, sensing the motion track of the beaker with SLAM technology and perceiving the scene situation with a region generation network to obtain the scene perception intention.
The intention fusion module is used for fuzzifying the operation behavior intention through membership functions; fuzzifying the voice intention through a continuous discourse domain; performing fuzzy reasoning on the fuzzified voice intention and operation intention with fuzzy logic operators; and combining the result with the scene perception intention to obtain the final intention fusion result.
Fig. 1 is a schematic diagram of a basic structure of an intelligent beaker based on situation awareness of an experimental scene in embodiment 1 of the present invention. An image acquisition device and a distance measuring device are arranged on the same vertical line of the outer wall of the beaker; an attitude sensor is arranged below the middle horizontal line of the outer wall of the beaker.
The image acquisition equipment is a binocular camera that uses the SLAM algorithm to acquire the track of the beaker, perceive the object in front of the beaker, and obtain the relative position relation between the beaker and the object in front by identifying object information in the scene. The attitude sensor perceives the beaker's own attitude in real time and senses changes in its inclination angle. The infrared distance meter senses changes in the distance from the beaker to the front object when the camera loses the target object.
Fig. 2 is a block diagram of the algorithm of the intelligent beaker based on experimental scene situation perception in embodiment 1 of the present invention. The multi-modal input perception module comprises a multi-modal input layer and interactive equipment perception. The multi-modal input layer includes the visual device input, the sensor devices and the voice input device. Accurate intention understanding analyzes the acquired image data, sensor data and voice data, takes the user's behavior operation as input and refines the behavior (for example quantifying water pouring and perceiving the relative position relation of scene objects). The intention fusion layer matches the user behaviors obtained from the two channels, outputs a result if the matching succeeds, and identifies the error type if the matching fails. The scene perception layer identifies scene information through the camera, perceives three-dimensional information of the object in front of the beaker, and judges the user's behavior operation from the perceived scene object information. The interactive application layer establishes an interactive system based on voice navigation and wrong-behavior recognition and combines the user behavior with the operation scene. A data-flow sketch of these layers is given below.
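The following is a minimal sketch of this layer structure as a single data flow, written in Python; the function names, the stubbed channel readings and the returned labels are illustrative assumptions rather than the patent's implementation.

def multimodal_input():
    # Stub readings for the three input channels (speech, vision, sensors).
    return {
        "keywords": ["dilute"],            # speech channel after keyword extraction
        "camera_pose": (0.10, 0.0, 0.05),  # camera displacement from SLAM, metres
        "acceleration": (0.3, 0.1, 0.95),  # attitude sensor components Ax, Ay, Az
        "distance_cm": 1.0,                # range finder: beaker to front object
    }

def intention_understanding(obs):
    # Each channel yields its own intention; concrete versions are sketched later in this text.
    voice_intent = obs["keywords"][0]                                     # keyword-to-intention matching
    behavior_intent = "tilt" if obs["acceleration"][0] > 0.2 else "hold"  # attitude + distance
    scene_intent = "object ahead" if obs["distance_cm"] < 5 else "clear"  # scene perception
    return voice_intent, behavior_intent, scene_intent

def intention_fusion(voice_intent, behavior_intent, scene_intent):
    # Fuzzify the voice and behaviour intents, apply the rule base, then combine with the scene intent.
    return {"final_intent": voice_intent, "behavior": behavior_intent, "context": scene_intent}

print(intention_fusion(*intention_understanding(multimodal_input())))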
The intelligent beaker uses Baidu speech recognition: the input voice is recognized as text by the intelligent speech recognition algorithm and sent back to the experiment terminal through the Internet, and the keywords of the input speech are then extracted with a text keyword extraction technique.
The method for obtaining the voice intention according to the relevance between the voice keyword and the intentions in the voice intention library is as follows: a voice keyword k is obtained from the voice channel together with the relevance qi of the keyword to each intention in the voice intention library, where q = [q1 q2 ... qm] and the standard results corresponding to all m intentions are V = [v1 v2 ... vm]; the distance degree D between k and V is calculated, with D = { |k − vi|·(1 − qi) | i = 1, 2, ..., m }, and the vi corresponding to the minimum distance degree is the voice intention.
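A minimal Python sketch of this matching step follows; the way the distance degree combines |k − vi| with (1 − qi) is a reconstruction of the garbled formula and should be read as an assumption, as are the variable names and example values.

def match_voice_intent(k, v, q):
    # Pick the intention whose standard result v[i] is closest to the keyword code k,
    # weighting the distance by how weakly the keyword relates to that intention.
    #   k : numeric code of the recognised keyword
    #   v : standard results v_1..v_m, one per intention in the intention library
    #   q : relevance q_1..q_m between the keyword and each intention
    distances = [abs(k - vi) * (1.0 - qi) for vi, qi in zip(v, q)]   # assumed distance degree D_i
    best = min(range(len(v)), key=lambda i: distances[i])
    return best, v[best]

# usage: three intentions, the keyword relates most strongly to the second one
index, intent = match_voice_intent(k=2.0, v=[1.0, 2.0, 3.0], q=[0.1, 0.8, 0.3])
print(index, intent)   # -> 1 2.0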
A nine-axis attitude sensor cooperates with the infrared distance measuring sensor to obtain the three-dimensional information of the beaker. The user can rotate and move the real beaker at will; the attitude sensor and the infrared sensor send the perceived information to the computer through serial port communication, so that accurate real-time information can be fed back into the virtual scene. The initial inclination angles ρ0, φ0 and γ0 of the beaker are obtained through the sensing channel consisting of the attitude sensor and the distance measuring equipment; the sensor acquires the acceleration components Ax, Ay and Az of the beaker on the X, Y and Z axes of the three-dimensional coordinate system; ρ0 is the initial angle between the X axis and the ground, φ0 is the initial angle between the Y axis and the ground, and γ0 is the initial angle of rotation about the Z axis.
The angular attitude ρ, φ, γ is solved from the acceleration components using trigonometric relations (formulas given only as images in the source), where ρ is the angle between the X axis and the ground, φ is the angle between the Y axis and the ground, and γ is the angle of rotation about the Z axis. Subtracting the initial angles ρ0, φ0, γ0 from ρ, φ, γ gives the current angular attitude ρ1, φ1, γ1 of the beaker.
The inclination angle θ of the beaker is calculated by an averaging method (formula given only as an image in the source).
The operation behavior intention of the user is judged according to the current angular attitude of the beaker.
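Because the trigonometric and averaging formulas appear only as images, the Python sketch below substitutes the common accelerometer tilt relations and a simple average; treat both, along with all names and example values, as assumptions rather than the patent's own formulas.

import math

def tilt_angles(ax, ay, az):
    # Common accelerometer tilt relations (assumed stand-ins). Angles in degrees.
    rho = math.degrees(math.atan2(ax, math.hypot(ay, az)))    # X axis vs. ground
    phi = math.degrees(math.atan2(ay, math.hypot(ax, az)))    # Y axis vs. ground
    gamma = math.degrees(math.atan2(math.hypot(ax, ay), az))  # rotation about Z
    return rho, phi, gamma

def current_attitude(accel, accel0):
    # Subtract the initial attitude from the current one, as described above.
    rho, phi, gamma = tilt_angles(*accel)
    rho0, phi0, gamma0 = tilt_angles(*accel0)
    return rho - rho0, phi - phi0, gamma - gamma0

def inclination(rho1, phi1):
    # Assumed averaging step for the beaker inclination angle theta.
    return (abs(rho1) + abs(phi1)) / 2.0

rho1, phi1, gamma1 = current_attitude(accel=(0.3, 0.1, 0.95), accel0=(0.0, 0.0, 1.0))
print(inclination(rho1, phi1))   # e.g. used to decide whether the user intends to pour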
The current angular attitude of the beaker is assigned to the attitude of the virtual beaker model in Unity; with the attitude sensor placed in the real beaker, the attitude of the beaker in the virtual scene stays synchronized with the real beaker, and the user's operation behavior intention at that moment can be judged from the beaker's inclination angle.
Based on the image acquired by the image acquisition equipment, the movement track of the beaker is perceived with SLAM technology as follows: a coordinate mapping relation, with proportional factor K, is established between the position coordinates (POSx, POSy, POSz) of the image acquisition equipment and the virtual scene coordinates (Ux, Uy, Uz); the position of the image acquisition equipment initially acquired by SLAM is taken as the origin of coordinates, and the currently acquired coordinate value P[i] of the image acquisition equipment is used as the displacement value of the user's virtual hand in the virtual scene, so that the user's virtual hand coordinate p(x, y, z) = (p0x + p[i].x, p0y + p[i].y, p0z + p[i].z), where p0(x, y, z) is the initial coordinate position of the virtual hand; after p(x, y, z) is obtained, the virtual hand transitions smoothly from its initial coordinate position to p(x, y, z), realizing localization of the image acquisition equipment in the coordinate system and perception of the beaker movement track.
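A small Python sketch of this mapping follows; the direction of the proportional relation (scene = K x camera position) and the numeric values are assumptions made for illustration.

def camera_to_scene(pos, k):
    # Map a camera (image acquisition device) position into virtual-scene units
    # using the proportional relation U = K * POS (assumed direction).
    return tuple(k * c for c in pos)

def virtual_hand_position(p0, displacement):
    # Virtual hand coordinate p = p0 + p[i], where p[i] is the camera displacement
    # relative to the SLAM origin, expressed in scene units.
    return tuple(a + b for a, b in zip(p0, displacement))

# usage: camera moved 0.10 m to the right and 0.05 m forward since SLAM initialisation
p_i = camera_to_scene((0.10, 0.0, 0.05), k=10.0)            # k: scene units per metre (assumed)
p = virtual_hand_position(p0=(1.0, 0.5, 2.0), displacement=p_i)
print(p)   # the virtual hand is then interpolated smoothly towards this position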
In a chemical experiment, various pieces of apparatus are usually arranged on the experiment table, and in the designed virtual experiment scene the vision-equipped beaker needs to identify the categories of the reagents in the scene in order to judge whether the user's operation is correct. Convolutional neural networks such as AlexNet, GoogLeNet and VGGNet have greatly improved accuracy in image classification, but their inputs have a fixed size, so fixed-size candidate regions must be provided for an input image. In our virtual experiment the camera is mounted on the beaker, which is moved at any time, so the camera position is not fixed and its viewing angle is not a top-down view; it is therefore difficult to capture all objects in the scene at once. The scene perception algorithm must consequently be fast, computationally light and accurate. We selected the Faster R-CNN network for deep learning model training; it introduces neural network learning into candidate region proposal and realizes end-to-end learning of region proposal and image classification, greatly reducing the amount of computation.
Fig. 3 is a schematic diagram of the region generation network in embodiment 1 of the present invention. Faster R-CNN mainly consists of three parts: a convolution layer, an RPN layer and a coordinate regression layer. First an image is acquired; then the convolution and pooling operations of the convolutional neural network in the convolution layer extract high-dimensional features from the image; these features are input to the RPN network and the coordinate regression layer. The RPN is a fully convolutional network that simultaneously predicts candidate regions and region scores (including the probability that an object is present) for each position of the input picture. The candidate regions generated by the RPN are passed to the coordinate regression layer, which fine-tunes the target position information, obtains more accurate positions, and outputs the final classification and detection results. The RPN takes an image as input and outputs a batch of rectangular candidate regions, similar in purpose to the Selective Search method used in earlier target detection. Each point of the convolution feature map obtained by feature extraction corresponds to a point in the original picture, called an anchor, and the rectangular boxes generated by sliding over the convolution feature map are called anchor boxes. Each anchor box is given one of two scores depending on whether an object is present, positive anchor or negative anchor, indicating the presence or absence of an object; this process is called bounding box classification. Each rectangular box is then corrected in shape using a transform vector, a process called bounding box regression.
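As a concrete illustration of the anchor mechanism just described, the Python sketch below enumerates anchor boxes over a feature map; the scales, aspect ratios and stride are typical Faster R-CNN defaults used here as assumptions, not values taken from the patent.

def generate_anchors(feat_h, feat_w, stride, scales=(64, 128, 256), ratios=(0.5, 1.0, 2.0)):
    # Enumerate anchor boxes (x1, y1, x2, y2) centred on every feature-map cell,
    # mapped back to image coordinates through the feature stride.
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride   # anchor centre in the image
            for s in scales:                                   # anchor size in pixels
                for r in ratios:                               # width : height ratio
                    w, h = s * (r ** 0.5), s / (r ** 0.5)
                    anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return anchors

print(len(generate_anchors(feat_h=38, feat_w=50, stride=16)))   # 38 * 50 * 9 = 17100 anchors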
A single-image loss function L({pi}, {ti}) = (1/Ncls)·Σi Lcls(pi, pi*) + λ·(1/Nreg)·Σi pi*·Lreg(ti, ti*) is defined, and rectangular candidate regions are output based on the image. Here Lcls(pi, pi*) is the bounding box classification loss function, Lreg(ti, ti*) is the bounding box regression loss function, and λ is a balance factor. The bounding box classification loss is Lcls(pi, pi*) = −log[pi·pi* + (1 − pi*)(1 − pi)], where pi is the probability that anchor i is predicted to be the target and pi* is the anchor label, 0 denoting a negative sample and 1 a positive sample. The bounding box regression loss is Lreg(ti, ti*) = R(ti − ti*), where R is a robust loss function with R(x) = 0.5x² if |x| < 1 and |x| − 0.5 otherwise; ti = {tx, ty, tw, th} is the vector of 4 parameterized coordinates of the predicted candidate box, and ti* is the coordinate vector of the ground-truth box corresponding to a positive anchor.
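The sketch below evaluates this loss with NumPy; the normalisation by Ncls and Nreg follows the usual Faster R-CNN convention and, together with the example arrays, is an assumption rather than something stated in the text.

import numpy as np

def smooth_l1(x):
    # Robust loss R: 0.5 * x^2 for |x| < 1, |x| - 0.5 otherwise.
    x = np.abs(x)
    return np.where(x < 1.0, 0.5 * x ** 2, x - 0.5)

def rpn_loss(p, p_star, t, t_star, lam=1.0):
    # Single-image RPN loss:
    #   L = (1/N_cls) * sum_i L_cls(p_i, p_i*) + lam * (1/N_reg) * sum_i p_i* * L_reg(t_i, t_i*)
    # p      : (N,)   predicted object probabilities per anchor
    # p_star : (N,)   anchor labels, 1 = positive, 0 = negative
    # t      : (N, 4) predicted box offsets (t_x, t_y, t_w, t_h)
    # t_star : (N, 4) target box offsets for the corresponding ground-truth boxes
    eps = 1e-7
    l_cls = -np.log(np.clip(p * p_star + (1 - p_star) * (1 - p), eps, 1.0))
    l_reg = smooth_l1(t - t_star).sum(axis=1)
    n_cls, n_reg = len(p), max(p_star.sum(), 1.0)
    return l_cls.sum() / n_cls + lam * (p_star * l_reg).sum() / n_reg

p = np.array([0.9, 0.2]); p_star = np.array([1.0, 0.0])
t = np.zeros((2, 4)); t_star = np.array([[0.1, 0.0, 0.2, 0.0], [0.0, 0.0, 0.0, 0.0]])
print(rpn_loss(p, p_star, t, t_star))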
Fig. 4 is a schematic structural diagram of the fuzzy controller in embodiment 1 of the present invention. Information fusion by fuzzy logic control expresses the uncertainty of the multi-sensor fusion process directly in the reasoning procedure and realizes multi-modal information fusion through a fuzzy controller. The fuzzy controller is the core of the fuzzy control system and consists of four parts: a fuzzification interface, a knowledge base, a fuzzy inference engine and a defuzzification interface. First, the information acquired by the different sensors is input; second, fuzzy sets and membership functions describe the input information, the membership functions representing the uncertainty of each sensor's information; third, fuzzy rules are established from expert knowledge; finally, fuzzy logic operators carry out fuzzy reasoning to derive the final information fusion result.
Fig. 5 is a diagram of the multi-modal information fusion framework based on fuzzy logic in embodiment 1 of the present invention. The fuzzification interface fuzzifies the matching degree qi of the voice keyword through a continuous discourse domain to obtain qi′, and fuzzifies the inclination angle θ of the beaker and the distance d from the beaker to the front object through membership functions to obtain θ′ and d′ respectively; the fuzzification of the input variables is completed in the fuzzification interface.
The fuzzification interface is in effect the input interface of the fuzzy controller, converting a crisp input variable into a fuzzy quantity. The input variables are fuzzified over continuous discourse domains.
The fuzzy linguistic variables of the beaker inclination angle θ are {S, M, B} = {Small, Middle, Big}, with discourse domain [0, 90], representing 0° to 90°. The inclination angle is S within [0, 30], M within [30, 60] and B within [60, 90].
The fuzzy linguistic variables of the distance d between the beaker and the front object are {S, M, B} = {Small, Middle, Big}, with discourse domain [−6, 6], representing −6 cm to 6 cm. A negative value indicates that the beaker is inside the front object and a positive value that it is outside.
The fuzzy linguistic variables of the voice keyword matching degree qi′ are {Si, Fi} = {Success, Failure}, with discourse domain [0, 1], the matching probability lying between 0 and 1. A matching degree within [0, 0.9] is F and within [0.9, 1] is S; the index i indicates that the voice keyword is matched against the i-th intention in the voice intention library.
The inclination angle θ of the beaker and the distance d from the beaker to the front object are fuzzified through membership functions to obtain θ′ and d′ respectively (the membership function formulas are given only as images in the source), where M is the maximum inclination angle and S is the minimum inclination angle; L is the maximum distance from the beaker to the front object and S is the minimum distance from the beaker to the front object.
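Since the membership functions themselves are published only as images, the Python sketch below substitutes a simple linear normalisation onto [0, 1]; the linear form and the example values are assumptions.

def fuzzify_linear(x, lo, hi):
    # Assumed linear membership mapping of a crisp input onto [0, 1];
    # lo and hi play the role of the minimum (S) and maximum (M or L) values in the text.
    if hi == lo:
        return 0.0
    return min(max((x - lo) / (hi - lo), 0.0), 1.0)

theta_prime = fuzzify_linear(x=45.0, lo=0.0, hi=90.0)   # beaker inclination angle, degrees
d_prime = fuzzify_linear(x=1.0, lo=-6.0, hi=6.0)        # distance to the front object, cm
print(theta_prime, d_prime)   # -> 0.5 0.5833...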
The fuzzy inference engine combines θ′, d′ and qi′ with the knowledge base and the fuzzy control rules to complete fuzzy reasoning and obtain the final intention I of intention understanding; the knowledge base is a fuzzy rule base. The fuzzy control rules take the form:
Rj: If θ′ is Am and d′ is Bn and qi′ is Cik then I is Il
for j = 1, 2, ...; m = 1, 2, 3; n = 1, 2, 3; i = 1, 2, ..., 6; k = 1, 2; l = 1, 2, 3
where Rj is the j-th fuzzy rule; Am, Bn, Cik and Il respectively denote the linguistic variables of θ′, d′, qi′ and I in their discourse domains.
The defuzzification interface outputs the final intention I after fuzzy reasoning.
In the fuzzy reasoning process, the fuzzy inference engine completes fuzzy reasoning from the fuzzified input variables and the fuzzy control rules to obtain the fuzzy output. Summarizing the experience and knowledge of domain experts and operators, 18 fuzzy control rules are obtained for the different voice keywords; some of them are listed below.
θ′    d′    qi′    I
M     S     S1     I1
M     VS    S1     I1
M     S     S2     I2
M     VS    S2     I2
The general form of each fuzzy control rule is an IF-THEN statement:
Rj: If θ′ is Am and d′ is Bn and qi′ is Cik then I is Il
for j = 1, 2, ...; m = 1, 2, 3; n = 1, 2, 3; i = 1, 2, ..., 6; k = 1, 2; l = 1, 2, 3
where Rj is the j-th fuzzy rule; Am, Bn, Cik and Il respectively denote the linguistic variables of θ′, d′, qi′ and I in their discourse domains.
When the i-th voice keyword is detected to be successfully matched, i.e. the linguistic variable is Si, the fuzzy control rules are:
1. If (θ′ is S) and (d′ is S) and (qi′ is S1), then (I is I3);
2. If (θ′ is M) and (d′ is S) and (qi′ is S1), then (I is I1);
3. If (θ′ is B) and (d′ is S) and (qi′ is S1), then (I is I3);
4. If (θ′ is S) and (d′ is M) and (qi′ is S1), then (I is I3);
5. If (θ′ is M) and (d′ is M) and (qi′ is S1), then (I is I1);
6. If (θ′ is B) and (d′ is M) and (qi′ is S1), then (I is I3);
7. If (θ′ is S) and (d′ is B) and (qi′ is S1), then (I is I3);
8. If (θ′ is M) and (d′ is B) and (qi′ is S1), then (I is I3);
9. If (θ′ is B) and (d′ is B) and (qi′ is S1), then (I is I3);
For example, by fuzzy rule 2, when the inclination angle θ′ of the beaker is within [30, 60], the distance d′ from the beaker to the front object is within [−2, 2], and the matching degree qi′ of the voice keyword is S1, i.e. the keyword is successfully matched with intention 1 in the voice intention library, the final intention is I1.
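The Python sketch below evaluates the nine rules above as a simple table lookup once the inputs have been mapped to linguistic labels; the tilt-angle break points come from the text, while the distance break points and every name are assumptions.

RULES = {  # (theta_label, d_label, q_label) -> fused intention, copied from rules 1-9 above
    ("S", "S", "S1"): "I3", ("M", "S", "S1"): "I1", ("B", "S", "S1"): "I3",
    ("S", "M", "S1"): "I3", ("M", "M", "S1"): "I1", ("B", "M", "S1"): "I3",
    ("S", "B", "S1"): "I3", ("M", "B", "S1"): "I3", ("B", "B", "S1"): "I3",
}

def label_theta(theta):
    # Tilt-angle labels from the text: S on [0, 30], M on (30, 60], B on (60, 90].
    return "S" if theta <= 30 else "M" if theta <= 60 else "B"

def label_distance(d):
    # Distance labels; only the S band [-2, 2] is implied by the worked example, the rest is assumed.
    return "S" if -2 <= d <= 2 else "M" if abs(d) <= 4 else "B"

def fuse(theta, d, matched):
    q_label = "S1" if matched else "F1"
    return RULES.get((label_theta(theta), label_distance(d), q_label), "undetermined")

print(fuse(theta=45.0, d=1.0, matched=True))   # -> "I1", matching the worked example of rule 2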
On the one hand, the intelligent beaker addresses the lack of teacher guidance in the teaching process through the experiment navigation and experiment result scoring system. On the other hand, it solves the problems of process monitoring of user behaviors, experimental scene understanding and a realistic quantitative interaction method through multi-modal fusion and the intention understanding algorithm. Vision, voice and sensor fusion are used for interaction during the experiment; experiment teaching and guidance are completed through voice navigation; and phenomena that are hard to observe in traditional experiments can be observed in the virtual-reality-fused experiment scene, making it easy for students to remember the key points of the experiment and deepening their impression of it.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, the scope of the present invention is not limited thereto. Various modifications and alterations will occur to those skilled in the art based on the foregoing description, and it is neither necessary nor possible to exhaust all embodiments here. Any modification or variation that a person skilled in the art can derive without creative effort on the basis of the technical scheme of the invention still falls within the protection scope of the invention.

Claims (9)

1. An intelligent beaker based on experimental scene situation perception, characterized by comprising a multi-modal input perception module, an intention understanding module and an intention fusion module;
the multi-modal input perception module is used for receiving voice information and extracting keywords; acquiring the motion track of the beaker and the relative position relation between the beaker and surrounding objects through image acquisition equipment arranged on the beaker; sensing changes of the beaker's attitude through an attitude sensor arranged on the beaker; and measuring the distance between the beaker and surrounding objects through distance measuring equipment;
the intention understanding module is used for obtaining the voice intention according to the relevance between the voice keyword and the intentions in a voice intention library; obtaining the operation behavior intention from the three-dimensional information of the beaker acquired through interaction of the attitude sensor and the distance measuring equipment; and, based on the image acquired by the image acquisition equipment, sensing the beaker motion track with SLAM technology and perceiving the scene situation with a region generation network to obtain the scene perception intention;
the intention fusion module is used for fuzzifying the operation behavior intention through membership functions; fuzzifying the voice intention through a continuous discourse domain; performing fuzzy reasoning on the fuzzified voice intention and operation intention with fuzzy logic operators; and combining the result with the scene perception intention to obtain the final intention fusion result.
2. The intelligent beaker based on experimental scene situation awareness according to claim 1, wherein an image acquisition device and a distance measurement device are arranged on the same vertical line of the outer wall of the beaker; and an attitude sensor is arranged below the middle horizontal line of the outer wall of the beaker.
3. The intelligent beaker based on experimental scene situation awareness according to claim 1, wherein the method for obtaining the voice intention according to the relevance between the voice keyword and the intentions in the voice intention library comprises:
obtaining a voice keyword k from the voice channel, together with the relevance qi of the keyword to each intention in the voice intention library, where q = [q1 q2 ... qm]; the standard results corresponding to all m intentions are V = [v1 v2 ... vm];
calculating the distance degree D between k and V, where D = { |k − vi|·(1 − qi) | i = 1, 2, ..., m }; the vi corresponding to the minimum distance degree is the voice intention.
4. The intelligent beaker based on experimental scene situation awareness according to claim 1, wherein the method for obtaining the operation behavior intention by interactively acquiring the three-dimensional information of the beaker through the attitude sensor and the distance measuring equipment comprises the following steps:
obtaining the initial inclination angles ρ0, φ0 and γ0 of the beaker through a sensing channel consisting of the attitude sensor and the distance measuring equipment; the sensor acquires the acceleration components Ax, Ay and Az of the beaker on the X, Y and Z axes of the three-dimensional coordinate system; ρ0 is the initial angle between the X axis and the ground, φ0 is the initial angle between the Y axis and the ground, and γ0 is the initial angle of rotation about the Z axis;
solving the angular attitude ρ, φ, γ from the acceleration components using trigonometric relations (formulas given only as images in the source), where ρ is the angle between the X axis and the ground, φ is the angle between the Y axis and the ground, and γ is the angle of rotation about the Z axis; subtracting the initial angles ρ0, φ0, γ0 from ρ, φ, γ gives the current angular attitude ρ1, φ1, γ1 of the beaker;
calculating the inclination angle θ of the beaker by an averaging method (formula given only as an image in the source);
and judging the operation behavior intention of the user according to the current angular attitude of the beaker.
5. The intelligent beaker based on experimental scene situation awareness according to claim 1, wherein, based on the image acquired by the image acquisition equipment, the method for perceiving the movement track of the beaker with SLAM technology comprises the following steps:
establishing, with proportional factor K, a coordinate mapping relation between the position coordinates (POSx, POSy, POSz) of the image acquisition equipment and the virtual scene coordinates (Ux, Uy, Uz);
taking the position of the image acquisition equipment initially acquired by SLAM as the origin of coordinates, and using the currently acquired coordinate value P[i] of the image acquisition equipment as the displacement value of the user's virtual hand in the virtual scene, so that the user's virtual hand coordinate p(x, y, z) = (p0x + p[i].x, p0y + p[i].y, p0z + p[i].z), where p0(x, y, z) is the initial coordinate position of the virtual hand;
after p(x, y, z) is obtained, the virtual hand transitions smoothly from its initial coordinate position to p(x, y, z), realizing localization of the image acquisition equipment in the coordinate system and perception of the beaker movement track.
6. The intelligent beaker based on experimental scene situation awareness according to claim 1, wherein the process of obtaining the scene perception intention by using a region generation network to perceive the scene situation, based on the image acquired by the image acquisition equipment, is as follows:
defining the single-image loss function L({pi}, {ti}) = (1/Ncls)·Σi Lcls(pi, pi*) + λ·(1/Nreg)·Σi pi*·Lreg(ti, ti*), and outputting rectangular candidate regions based on the image;
where Lcls(pi, pi*) is the bounding box classification loss function; Lreg(ti, ti*) is the bounding box regression loss function; λ is a balance factor; Lcls(pi, pi*) = −log[pi·pi* + (1 − pi*)(1 − pi)]; pi is the probability that anchor i is predicted to be the target and pi* is the anchor label, 0 denoting a negative sample and 1 a positive sample; Lreg(ti, ti*) = R(ti − ti*), where R is a robust loss function with R(x) = 0.5x² if |x| < 1 and |x| − 0.5 otherwise; ti = {tx, ty, tw, th} is the vector of 4 parameterized coordinates of the predicted candidate box, and ti* is the coordinate vector of the ground-truth box corresponding to a positive anchor.
7. The intelligent beaker based on experimental scene situation awareness according to claim 1, wherein the fuzzy controller comprises a fuzzification interface, a fuzzy inference engine, a defuzzification interface and a knowledge base;
the fuzzification interface is used for fuzzifying the matching degree qi of the voice keyword through a continuous discourse domain to obtain qi′, and for fuzzifying the inclination angle θ of the beaker and the distance d from the beaker to the front object through membership functions to obtain θ′ and d′ respectively;
the fuzzy inference engine is used for combining θ′, d′ and qi′ with the knowledge base and the fuzzy control rules to complete fuzzy reasoning and obtain the final intention I of intention understanding; the knowledge base is a fuzzy rule base;
and the defuzzification interface is used for outputting the final intention I after fuzzy reasoning.
8. The intelligent beaker based on experimental scene situational awareness of claim 7, wherein the inclination angle θ of the beaker and the distance d from the beaker to the front object are fuzzified through membership functions to obtain θ′ and d′ respectively (membership function formulas given only as images in the source), where M is the maximum inclination angle and S is the minimum inclination angle; L is the maximum distance from the beaker to the front object and S is the minimum distance from the beaker to the front object.
9. The intelligent beaker based on experimental scene situation awareness according to claim 7, wherein the fuzzy control rules take the form:
Rj: If θ′ is Am and d′ is Bn and qi′ is Cik then I is Il
for j = 1, 2, ...; m = 1, 2, 3; n = 1, 2, 3; i = 1, 2, ..., 6; k = 1, 2; l = 1, 2, 3
where Rj is the j-th fuzzy rule; Am, Bn, Cik and Il respectively denote the linguistic variables of θ′, d′, qi′ and I in their discourse domains.
CN202010984196.8A 2020-09-18 2020-09-18 Intelligent beaker based on experimental scene situation perception Active CN112295617B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010984196.8A CN112295617B (en) 2020-09-18 2020-09-18 Intelligent beaker based on experimental scene situation perception

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010984196.8A CN112295617B (en) 2020-09-18 2020-09-18 Intelligent beaker based on experimental scene situation perception

Publications (2)

Publication Number Publication Date
CN112295617A true CN112295617A (en) 2021-02-02
CN112295617B CN112295617B (en) 2022-04-01

Family

ID=74483922

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010984196.8A Active CN112295617B (en) 2020-09-18 2020-09-18 Intelligent beaker based on experimental scene situation perception

Country Status (1)

Country Link
CN (1) CN112295617B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115019240A (en) * 2022-08-04 2022-09-06 成都西交智汇大数据科技有限公司 Grading method, device and equipment for chemical experiment operation and readable storage medium
CN117007825A (en) * 2023-10-07 2023-11-07 北京众驰伟业科技发展有限公司 Reagent automatic identification and positioning system and method for full-automatic coagulation tester

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060056671A1 (en) * 2004-09-15 2006-03-16 Jayati Ghosh Automated feature extraction processes and systems
CN101169867A (en) * 2007-12-04 2008-04-30 北京中星微电子有限公司 Image dividing method, image processing apparatus and system
EP2383683A1 (en) * 2010-04-30 2011-11-02 Alcatel Lucent Method for improving inter-worlds immersion through social interactions analysis and transparent communication services
CN102646200A (en) * 2012-03-08 2012-08-22 武汉大学 Image classifying method and system for self-adaption weight fusion of multiple classifiers
CN107203753A (en) * 2017-05-25 2017-09-26 西安工业大学 A kind of action identification method based on fuzzy neural network and graph model reasoning
CN107958434A (en) * 2017-11-24 2018-04-24 泰康保险集团股份有限公司 Intelligence nurse method, apparatus, electronic equipment and storage medium
CN108288032A (en) * 2018-01-08 2018-07-17 深圳市腾讯计算机系统有限公司 Motion characteristic acquisition methods, device and storage medium
CN109444912A (en) * 2018-10-31 2019-03-08 电子科技大学 A kind of driving environment sensory perceptual system and method based on Collaborative Control and deep learning
CN110286765A (en) * 2019-06-21 2019-09-27 济南大学 A kind of intelligence experiment container and its application method
CN110286835A (en) * 2019-06-21 2019-09-27 济南大学 A kind of interactive intelligent container understanding function with intention
CN111190690A (en) * 2019-12-25 2020-05-22 中科曙光国际信息产业有限公司 Intelligent training device based on container arrangement tool
CN111241943A (en) * 2019-12-31 2020-06-05 浙江大学 Scene recognition and loopback detection method based on background target detection and triple loss in automatic driving scene
US20200184556A1 (en) * 2018-05-06 2020-06-11 Strong Force TX Portfolio 2018, LLC Adaptive intelligence and shared infrastructure lending transaction enablement platform responsive to crowd sourced information
CN111274372A (en) * 2020-01-15 2020-06-12 上海浦东发展银行股份有限公司 Method, electronic device, and computer-readable storage medium for human-computer interaction
CN111414839A (en) * 2020-03-16 2020-07-14 清华大学 Emotion recognition method and device based on gestures
CN111507378A (en) * 2020-03-24 2020-08-07 华为技术有限公司 Method and apparatus for training image processing model
CN111539884A (en) * 2020-04-21 2020-08-14 温州大学 Neural network video deblurring method based on multi-attention machine mechanism fusion
CN111651035A (en) * 2020-04-13 2020-09-11 济南大学 Multi-modal interaction-based virtual experiment system and method

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060056671A1 (en) * 2004-09-15 2006-03-16 Jayati Ghosh Automated feature extraction processes and systems
CN101169867A (en) * 2007-12-04 2008-04-30 北京中星微电子有限公司 Image dividing method, image processing apparatus and system
EP2383683A1 (en) * 2010-04-30 2011-11-02 Alcatel Lucent Method for improving inter-worlds immersion through social interactions analysis and transparent communication services
CN102646200A (en) * 2012-03-08 2012-08-22 武汉大学 Image classifying method and system for self-adaption weight fusion of multiple classifiers
CN107203753A (en) * 2017-05-25 2017-09-26 西安工业大学 A kind of action identification method based on fuzzy neural network and graph model reasoning
CN107958434A (en) * 2017-11-24 2018-04-24 泰康保险集团股份有限公司 Intelligence nurse method, apparatus, electronic equipment and storage medium
CN108288032A (en) * 2018-01-08 2018-07-17 深圳市腾讯计算机系统有限公司 Motion characteristic acquisition methods, device and storage medium
US20200184556A1 (en) * 2018-05-06 2020-06-11 Strong Force TX Portfolio 2018, LLC Adaptive intelligence and shared infrastructure lending transaction enablement platform responsive to crowd sourced information
CN109444912A (en) * 2018-10-31 2019-03-08 电子科技大学 A kind of driving environment sensory perceptual system and method based on Collaborative Control and deep learning
CN110286765A (en) * 2019-06-21 2019-09-27 济南大学 A kind of intelligence experiment container and its application method
CN110286835A (en) * 2019-06-21 2019-09-27 济南大学 A kind of interactive intelligent container understanding function with intention
CN111190690A (en) * 2019-12-25 2020-05-22 中科曙光国际信息产业有限公司 Intelligent training device based on container arrangement tool
CN111241943A (en) * 2019-12-31 2020-06-05 浙江大学 Scene recognition and loopback detection method based on background target detection and triple loss in automatic driving scene
CN111274372A (en) * 2020-01-15 2020-06-12 上海浦东发展银行股份有限公司 Method, electronic device, and computer-readable storage medium for human-computer interaction
CN111414839A (en) * 2020-03-16 2020-07-14 清华大学 Emotion recognition method and device based on gestures
CN111507378A (en) * 2020-03-24 2020-08-07 华为技术有限公司 Method and apparatus for training image processing model
CN111651035A (en) * 2020-04-13 2020-09-11 济南大学 Multi-modal interaction-based virtual experiment system and method
CN111539884A (en) * 2020-04-21 2020-08-14 温州大学 Neural network video deblurring method based on multi-attention machine mechanism fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DI DONG, ZHIQUAN FENG, JIE YUAN, XIN MENG: "A Design of Smart Beaker Structure and Interaction Paradigm Based on Multimodal Fusion Understanding", IEEE ACCESS *
YANG LIN, MA HONGWEI, WANG CHUANWEI, XIA WEI: "Intelligent navigation and control of a campus inspection robot" (校园巡检机器人智能导航与控制), Journal of Xi'an University of Science and Technology (西安科技大学学报) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115019240A (en) * 2022-08-04 2022-09-06 成都西交智汇大数据科技有限公司 Grading method, device and equipment for chemical experiment operation and readable storage medium
CN115019240B (en) * 2022-08-04 2022-11-11 成都西交智汇大数据科技有限公司 Grading method, device and equipment for chemical experiment operation and readable storage medium
CN117007825A (en) * 2023-10-07 2023-11-07 北京众驰伟业科技发展有限公司 Reagent automatic identification and positioning system and method for full-automatic coagulation tester
CN117007825B (en) * 2023-10-07 2023-12-22 北京众驰伟业科技发展有限公司 Reagent automatic identification and positioning system and method for full-automatic coagulation tester

Also Published As

Publication number Publication date
CN112295617B (en) 2022-04-01

Similar Documents

Publication Publication Date Title
KR102266529B1 (en) Method, apparatus, device and readable storage medium for image-based data processing
CN111709409B (en) Face living body detection method, device, equipment and medium
CN111325347B (en) Automatic danger early warning description generation method based on interpretable visual reasoning model
CN110554774B (en) AR-oriented navigation type interactive normal form system
CN112295617B (en) Intelligent beaker based on experimental scene situation perception
CN110148318A (en) A kind of number assiatant system, information interacting method and information processing method
CN112541529A (en) Expression and posture fusion bimodal teaching evaluation method, device and storage medium
CN112990296A (en) Image-text matching model compression and acceleration method and system based on orthogonal similarity distillation
CN114120432A (en) Online learning attention tracking method based on sight estimation and application thereof
CN117523275A (en) Attribute recognition method and attribute recognition model training method based on artificial intelligence
Aly et al. A generative framework for multimodal learning of spatial concepts and object categories: An unsupervised part-of-speech tagging and 3D visual perception based approach
Tan et al. Towards embodied scene description
CN116385937A (en) Method and system for solving video question and answer based on multi-granularity cross-mode interaction framework
CN118293999A (en) Liquid level detection system for reagent bottle
Saleh et al. D-talk: sign language recognition system for people with disability using machine learning and image processing
CN116561262A (en) Test question correcting method and related device
CN112099633A (en) Intelligent experimental method and device for multi-modal perception
Rozaliev et al. Detailed analysis of postures and gestures for the identification of human emotional reactions
Wu et al. Question-driven multiple attention (dqma) model for visual question answer
Zhang et al. Lp-slam: Language-perceptive rgb-d slam system based on large language model
CN113505750B (en) Identification method, identification device, electronic equipment and computer readable storage medium
CN116012866A (en) Method and device for detecting heavy questions, electronic equipment and storage medium
Betancourt et al. A gesture recognition system for the Colombian sign language based on convolutional neural networks
Yao et al. Decision-tree-based algorithm for 3D sign classification
Martínez et al. Identifier of human emotions based on convolutional neural network for assistant robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant