Disclosure of Invention
In order to solve the above technical problems, the invention provides an intelligent beaker based on experimental scene situation perception. Process monitoring of user behaviors, experimental scene understanding, and realistic quantitative interaction are realized through multi-modal fusion and an intention understanding algorithm.
In order to achieve the purpose, the invention adopts the following technical scheme:
an intelligent beaker based on experimental scene situation perception comprises a multi-modal input perception module, an intention understanding module and an intention fusion module;
the multi-modal input perception module is used for receiving voice information and extracting keywords; acquiring the motion track of the beaker and the relative position relation between the beaker and surrounding objects through an image acquisition device arranged on the beaker; sensing changes in the posture of the beaker through an attitude sensor arranged on the beaker; and measuring the distance between the beaker and surrounding objects through a distance measuring device;
the intention understanding module is used for obtaining a voice intention according to the relevance between the voice keywords and the intentions in a voice intention library; obtaining an operation behavior intention from the three-dimensional information of the beaker acquired through the interaction of the attitude sensor and the distance measuring device; and, based on the images acquired by the image acquisition device, sensing the motion track of the beaker by SLAM technology and perceiving the scene situation with a region proposal network to obtain a scene perception intention;
the intention fusion module is used for fuzzifying the operation behavior intention through membership functions; fuzzifying the voice intention through a continuous domain of discourse; and carrying out fuzzy reasoning on the fuzzified voice intention and operation intention with fuzzy logic operators, combining the scene perception intention to obtain the final intention fusion result.
Further, an image acquisition device and a distance measurement device are arranged on the same vertical line of the outer wall of the beaker; and an attitude sensor is arranged below the middle horizontal line of the outer wall of the beaker.
Further, the method for obtaining the voice intention according to the relevance between the voice keyword and the intention in the voice intention library comprises the following steps:
obtaining a voice keyword k from the voice channel, and the relevance qi of the keyword to each intention in the voice intention library, where q = [q1 q2 ... qm]; the m intentions correspond to the standard results V = [v1 v2 ... vm];
calculating the distance D between k and V, where D = {|k − vi| = 1 − qi | i = 1, 2, ..., m}; the vi corresponding to the minimum distance is the voice intention.
Further, the method for obtaining the operation behavior intention from the three-dimensional information of the beaker acquired through the interaction of the attitude sensor and the distance measuring device comprises the following steps:
obtaining the initial inclination angles ρ0, φ0 and γ0 of the beaker through a sensing channel composed of the attitude sensor and the distance measuring device; the sensor acquires the acceleration components Ax, Ay and Az of the beaker on the X, Y and Z axes of the three-dimensional coordinate system; where ρ0 is the initial angle between the X axis and the ground, φ0 is the initial angle between the Y axis and the ground, and γ0 is the initial angle of rotation about the Z axis;
solving the angular attitude (ρ, φ, γ) from the acceleration components using trigonometric function relations, where ρ is the angle between the X axis and the ground, φ is the angle between the Y axis and the ground, and γ is the angle of rotation about the Z axis; subtracting the initial angles ρ0, φ0 and γ0 from the obtained ρ, φ and γ gives the current angular attitude ρ1, φ1 and γ1 of the beaker;
calculating the inclination angle θ of the beaker by an averaging method;
and judging the operation behavior intention of the user according to the current angular attitude of the beaker.
Further, the method for sensing the motion track of the beaker using SLAM technology, based on the images acquired by the image acquisition device, comprises the following steps:
establishing the coordinate mapping relation between the virtual scene and the position of the image acquisition device according to (Ux, Uy, Uz) = K·(POSx, POSy, POSz); where (POSx, POSy, POSz) are the position coordinates of the image acquisition device, (Ux, Uy, Uz) are the virtual scene coordinates, and K is the proportional relation;
taking the position of the image acquisition device initially acquired by the SLAM technique as the origin of coordinates, the currently acquired coordinate value P[i] of the image acquisition device is taken as the displacement of the user's virtual hand in the virtual scene, so that the virtual hand coordinate p(x, y, z) = (p0.x + p[i].x, p0.y + p[i].y, p0.z + p[i].z), where p0(x, y, z) is the initial coordinate position of the virtual hand;
after p(x, y, z) is obtained, the virtual hand transitions smoothly from its initial coordinate position to p(x, y, z), realizing the positioning of the image acquisition device in the coordinate system and the perception of the beaker's motion track.
Further, the process of obtaining the scene perception intention by perceiving the scene situation with a region proposal network, based on the images acquired by the image acquisition device, is as follows:
defining the single-image loss function
L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i*) + λ · (1/N_reg) Σ_i p_i* · L_reg(t_i, t_i*),
and outputting rectangular candidate regions based on the image;
wherein L_cls(p_i, p_i*) is the bounding-box classification loss function; L_reg(t_i, t_i*) is the bounding-box regression loss function; λ is a balance factor;
L_cls(p_i, p_i*) = −log[p_i·p_i* + (1 − p_i*)(1 − p_i)];
p_i is the probability that the anchor is predicted to be a target, and p_i* is the label of the anchor, with 0 representing a negative sample and 1 a positive sample;
L_reg(t_i, t_i*) = R(t_i − t_i*), where R is the robust loss function, expressed as R(x) = 0.5x² if |x| < 1 and R(x) = |x| − 0.5 otherwise;
t_i = {t_x, t_y, t_w, t_h} is a vector of the 4 parameterized coordinates of the predicted candidate box, and t_i* is the coordinate vector of the ground-truth box corresponding to a positive anchor.
Further, the fuzzy controller comprises a fuzzification interface, a fuzzy inference engine, a defuzzification interface and a knowledge base;
the fuzzification interface is used for fuzzifying the matching degree qi of the voice keywords over a continuous domain of discourse to obtain qi′, and fuzzifying the inclination angle θ of the beaker and the distance d from the beaker to the front object through membership functions to obtain θ′ and d′ respectively;
the fuzzy inference engine is used for combining θ′, d′ and qi′ with the knowledge base to complete fuzzy reasoning via the fuzzy control rules, obtaining the final intention I of intention understanding; the knowledge base is a fuzzy rule base;
and the defuzzification interface is used for outputting the final intention I after fuzzy reasoning.
Further, the inclination angle θ of the beaker and the distance d from the beaker to the front object are respectively fuzzified through membership functions to obtain θ′ and d′, wherein:
wherein M is the maximum inclination angle, and S is the minimum inclination angle;
wherein L is the maximum distance from the beaker to the front object; and S is the minimum distance from the beaker to the front object.
Further, the fuzzy control rule is as follows:
Rj: If θ′ is Am and d′ is Bn and qi′ is Cik, then I is Il;
for j = 1, 2, ...; m = 1, 2, 3; n = 1, 2, 3; i = 1, 2, ..., 6; k = 1, 2; l = 1, 2, 3;
wherein Rj is the jth fuzzy rule; Am, Bn, Cik and Il respectively represent the linguistic variables of θ′, d′, qi′ and I in their domains of discourse.
The effects described in this summary are only those of the embodiments, not all of the effects of the invention. The above technical solutions have the following advantages or beneficial effects:
the invention provides an intelligent beaker based on experimental scene situation perception, which comprises a multi-mode input perception module, an intention understanding module and an intention fusion module; the multi-module input sensing module is used for receiving voice information and extracting keywords; acquiring a motion track of the beaker and a relative position relation between the beaker and surrounding objects through image acquisition equipment arranged on the beaker; sensing the change of the posture of the beaker by a posture sensor arranged on the beaker; the distance between the beaker and the surrounding object is measured by a distance measuring device. The intention understanding module is used for obtaining the voice intention according to the relevance between the voice keyword and the intention in the voice intention library; the three-dimensional information of the beaker is obtained through interaction of the attitude sensor and the distance measuring equipment to obtain the operation behavior intention; based on the image acquired by the image acquisition equipment, the motion track of the beaker is sensed by adopting a slam technology, and the scene sensing intention is obtained by adopting a region generation network sensing scene situation. The intention fusion module is used for fuzzifying the operation behavior intention through a membership function; fuzzifying the voice intention through a continuous domain; and carrying out fuzzy reasoning on the voice intention and the operation intention after the fuzzification processing by using a fuzzy logic operator, and combining the scene perception intention to obtain an intention fusion final result. According to the intelligent beaker, on one hand, the problem of lack of teacher guidance in the teaching process is solved through an experiment navigation and experiment result scoring system. On the other hand, the intelligent beaker solves the problems of process monitoring of user behaviors, experimental scene understanding, reality quantification interaction method and the like through multi-mode fusion and intention understanding algorithm. Adopt vision, pronunciation and the mode of sensor fusion to interact in the experimentation, accomplish experiment teaching and guide through voice navigation, can also observe the phenomenon that is difficult for observing in traditional experiment in the experiment scene that virtual reality fuses, make the student easily remember the experiment main points, deepen the experiment impression.
Detailed Description
In order to clearly explain the technical features of the present invention, the invention is described in detail below with reference to the accompanying drawings. The following disclosure provides many different embodiments, or examples, for implementing different features of the invention. To simplify the disclosure, the components and arrangements of specific examples are described below. Furthermore, the present invention may repeat reference numerals and/or letters in the various examples. This repetition is for simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. It should be noted that the components illustrated in the figures are not necessarily drawn to scale. Descriptions of well-known components and processing techniques are omitted so as not to unnecessarily obscure the invention.
Example 1
The embodiment 1 of the invention provides an intelligent beaker based on experimental scene situation perception, which comprises a multi-mode input perception module, an intention understanding module and an intention fusion module.
The multi-modal input perception module is used for receiving voice information and extracting keywords; acquiring the motion track of the beaker and the relative position relation between the beaker and surrounding objects through an image acquisition device arranged on the beaker; sensing changes in the posture of the beaker through an attitude sensor arranged on the beaker; and measuring the distance between the beaker and surrounding objects through a distance measuring device.
The intention understanding module is used for obtaining the voice intention according to the relevance between the voice keywords and the intentions in the voice intention library; obtaining the operation behavior intention from the three-dimensional information of the beaker acquired through the interaction of the attitude sensor and the distance measuring device; and, based on the images acquired by the image acquisition device, sensing the motion track of the beaker by SLAM technology and perceiving the scene situation with a region proposal network to obtain the scene perception intention.
The intention fusion module is used for fuzzifying the operation behavior intention through membership functions; fuzzifying the voice intention through a continuous domain of discourse; and carrying out fuzzy reasoning on the fuzzified voice intention and operation intention with fuzzy logic operators, combining the scene perception intention to obtain the final intention fusion result.
Fig. 1 is a schematic diagram of a basic structure of an intelligent beaker based on situation awareness of an experimental scene in embodiment 1 of the present invention. An image acquisition device and a distance measuring device are arranged on the same vertical line of the outer wall of the beaker; an attitude sensor is arranged below the middle horizontal line of the outer wall of the beaker.
The image acquisition device is a binocular camera; a SLAM algorithm is adopted to acquire the track of the beaker and perceive information about objects in front of it, and the relative position relation between the beaker and the object in front is obtained by identifying object information in the scene. Attitude sensor: used for sensing the posture of the beaker itself in real time and perceiving changes in its inclination angle. Infrared distance meter: used for sensing the change in distance from the beaker to the front object when the camera loses the target object.
Fig. 2 is a block diagram of the algorithm of the intelligent beaker based on experimental scene situation perception in embodiment 1 of the present invention. The multi-modal input perception module comprises a multi-modal input layer and interactive device perception. The multi-modal input layer includes visual device input, sensor devices, and a voice input device. Accurate intention understanding analyzes the acquired image data, sensor data and voice data, takes the user's behavior operations as input, and refines the behavior (such as quantification of water pouring and perception of the relative position relation of scene objects). The intention fusion layer matches the user behaviors obtained from the two channels, outputs the result if the matching succeeds, and identifies the error type if the matching fails. The scene perception layer identifies scene information through the camera, perceives the three-dimensional information of the object in front of the beaker, and judges the user's behavior operation by perceiving scene object information. The interactive application layer establishes an interactive system based on voice navigation and wrong-behavior recognition, combining user behavior with the operation scene.
The intelligent beaker adopts Baidu speech recognition: the input voice is recognized as text by an intelligent speech recognition algorithm and sent back to the experiment terminal through the Internet. The keywords of the input voice are then extracted through a text keyword extraction technique.
The method for obtaining the voice intention according to the relevance between the voice keywords and the intentions in the voice intention library is as follows: obtain a voice keyword k from the voice channel and the relevance qi of the keyword to each intention in the voice intention library, where q = [q1 q2 ... qm]; the m intentions correspond to the standard results V = [v1 v2 ... vm]; calculate the distance D between k and V, where D = {|k − vi| = 1 − qi | i = 1, 2, ..., m}; the vi corresponding to the minimum distance is the voice intention.
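As a concrete illustration of this minimum-distance matching, the following is a minimal Python sketch; the relevance scores and intention labels are hypothetical examples, not values from the invention.

```python
# Minimal sketch of the minimum-distance voice-intention matching described
# above. The relevance scores q_i and the intention labels are illustrative
# assumptions, not values from the patent.

def match_intention(relevances, intentions):
    """Pick the intention whose distance 1 - q_i to the keyword is smallest."""
    distances = [1.0 - q for q in relevances]          # D = {|k - v_i| = 1 - q_i}
    best = min(range(len(distances)), key=distances.__getitem__)
    return intentions[best], distances[best]

# Usage: relevance of the spoken keyword to each of m = 4 intentions.
q = [0.12, 0.95, 0.40, 0.08]                           # hypothetical q_i values
V = ["pour_water", "add_reagent", "stir", "heat"]      # hypothetical intention labels
intent, d = match_intention(q, V)
print(intent, d)                                       # -> add_reagent 0.05
```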
A nine-axis attitude sensor is used in cooperation with an infrared distance measuring sensor to obtain the three-dimensional information of the beaker. The user can rotate and move the physical beaker at will; the attitude sensor and the infrared sensor send the perceived information to the computer through serial port communication, so that accurate real-time information can be fed back into the virtual scene. The initial inclination angles ρ0, φ0 and γ0 of the beaker are obtained through the sensing channel composed of the attitude sensor and the distance measuring device; the sensor acquires the acceleration components Ax, Ay and Az of the beaker on the X, Y and Z axes of the three-dimensional coordinate system; where ρ0 is the initial angle between the X axis and the ground, φ0 is the initial angle between the Y axis and the ground, and γ0 is the initial angle of rotation about the Z axis.
The angular attitude (ρ, φ, γ) is solved from the acceleration components using trigonometric function relations, where ρ is the angle between the X axis and the ground, φ is the angle between the Y axis and the ground, and γ is the angle of rotation about the Z axis; subtracting the initial angles ρ0, φ0 and γ0 from the obtained ρ, φ and γ gives the current angular attitude ρ1, φ1 and γ1 of the beaker.
The inclination angle θ of the beaker is calculated by an averaging method.
The operation behavior intention of the user is judged according to the current angular attitude of the beaker.
The current angular attitude of the beaker is assigned to the posture of the virtual beaker model in Unity, and the attitude sensor is placed in the real beaker, so that the postures of the virtual and real beakers are synchronized; the operation behavior intention of the user at that moment can then be judged from the inclination angle of the beaker.
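The trigonometric relations themselves are not reproduced in the text above; a minimal sketch under the standard accelerometer tilt convention might look as follows, with the averaging step for θ likewise an assumption.

```python
# Sketch of the angular-attitude computation. The patent's trigonometric
# formulas are not reproduced in the text, so the standard accelerometer
# tilt relations are assumed here; the averaging step for theta is likewise
# an assumption.

import math

def angular_attitude(ax, ay, az):
    """Angles of the X and Y axes to the ground and rotation about Z (degrees)."""
    rho = math.degrees(math.atan2(ax, math.hypot(ay, az)))    # X axis vs. ground
    phi = math.degrees(math.atan2(ay, math.hypot(ax, az)))    # Y axis vs. ground
    gamma = math.degrees(math.atan2(math.hypot(ax, ay), az))  # rotation about Z
    return rho, phi, gamma

# Initial pose and current pose from the acceleration components Ax, Ay, Az.
rho0, phi0, gamma0 = angular_attitude(0.0, 0.0, 9.81)  # beaker upright
rho, phi, gamma = angular_attitude(3.2, 1.1, 9.1)      # hypothetical reading
rho1, phi1 = rho - rho0, phi - phi0                    # current angular attitude
theta = (abs(rho1) + abs(phi1)) / 2.0                  # assumed averaging for theta
print(round(theta, 2))
```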
The method for sensing the motion track of the beaker using SLAM technology, based on the images acquired by the image acquisition device, is as follows: establish the coordinate mapping relation between the virtual scene and the position of the image acquisition device according to (Ux, Uy, Uz) = K·(POSx, POSy, POSz), where (POSx, POSy, POSz) are the position coordinates of the image acquisition device, (Ux, Uy, Uz) are the virtual scene coordinates, and K is the proportional relation.
Taking the position of the image acquisition device initially acquired by the SLAM technique as the origin of coordinates, the currently acquired coordinate value P[i] of the image acquisition device is taken as the displacement of the user's virtual hand in the virtual scene, so that the virtual hand coordinate p(x, y, z) = (p0.x + p[i].x, p0.y + p[i].y, p0.z + p[i].z), where p0(x, y, z) is the initial coordinate position of the virtual hand.
After p(x, y, z) is obtained, the virtual hand transitions smoothly from its initial coordinate position to p(x, y, z), realizing the positioning of the image acquisition device in the coordinate system and the perception of the beaker's motion track.
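A minimal sketch of this mapping and hand-update step is given below; the scale factor K, the smoothing loop and all coordinate values are illustrative assumptions.

```python
# Sketch of the virtual-scene coordinate mapping and virtual-hand update.
# The proportional mapping U = K * POS is taken from the text; the scale K,
# the smoothing step, and all coordinate values are illustrative assumptions.

def to_virtual(pos, k=0.01):
    """Map a camera position (from SLAM) into virtual-scene units."""
    return tuple(k * c for c in pos)

def virtual_hand(p0, displacement):
    """p = p0 + P[i]: offset the hand's initial pose by the camera displacement."""
    return tuple(a + b for a, b in zip(p0, displacement))

def smooth_step(current, target, alpha=0.2):
    """Simple linear interpolation toward the target for a smooth transition."""
    return tuple(c + alpha * (t - c) for c, t in zip(current, target))

p0 = (0.0, 1.2, 0.5)                  # hypothetical initial hand position
cam = to_virtual((12.0, 3.0, 40.0))   # current camera coordinates P[i], mapped
target = virtual_hand(p0, cam)
hand = p0
for _ in range(10):                   # move the hand smoothly toward the target
    hand = smooth_step(hand, target)
print(tuple(round(c, 3) for c in hand))
```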
In a chemical experiment, various experimental devices are usually arranged on the experiment table, so in the designed virtual experimental scene the vision-equipped beaker needs to identify the categories of the various reagents in the scene in order to judge the correctness of the user's operations. Convolutional neural networks such as AlexNet, GoogLeNet and VGGNet have greatly improved accuracy in image classification, but their inputs are of fixed size, so fixed-size candidate regions must be supplied for an input image. In our virtual experiment, the camera is mounted on the beaker and the beaker is moved at will, i.e. the camera position is not fixed, and its viewing angle is not a top-down view, so it is difficult to capture images of all objects in the scene at once. The scene perception algorithm therefore needs fast recognition, low computational cost and an accurate recognition rate; we selected the Faster R-CNN network for deep-learning model training. This model introduces neural network learning into the candidate-region proposal, realizes end-to-end learning of region proposal and image classification, and greatly reduces the amount of computation.
Fig. 3 is a schematic diagram of the region proposal network in embodiment 1 of the present invention. Faster R-CNN mainly comprises three parts: the convolutional layers, the RPN, and the coordinate regression layer. First, an image is acquired. Next, features are extracted through the convolution and pooling operations of the convolutional neural network to obtain high-dimensional features; these features are then input to the RPN and the coordinate regression layer. The RPN is a fully convolutional network that can simultaneously predict candidate regions and region scores (including object probability values) for each position of the input picture. The candidate regions generated by the RPN are input to the coordinate regression layer, which fine-tunes the target position information, obtains more accurate positions, and outputs the final classification and detection results. The RPN takes an image as input and outputs a batch of rectangular candidate regions, similar to the Selective Search method in earlier object detection. Each point in the convolutional feature map obtained by feature extraction corresponds to a point in the original picture, called an anchor; the rectangular boxes generated by sliding over the convolutional feature map are called anchor boxes. Each anchor box is given two scores according to whether it contains an object, namely positive anchor and negative anchor, indicating the presence and absence of an object respectively; this process is called bounding-box classification. For each rectangular box, a transform vector is used to correct the box's shape; this process is called bounding-box regression.
The single-image loss function is defined as
L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i*) + λ · (1/N_reg) Σ_i p_i* · L_reg(t_i, t_i*),
and rectangular candidate regions are output based on the image. Here L_cls(p_i, p_i*) is the bounding-box classification loss function, L_reg(t_i, t_i*) is the bounding-box regression loss function, and λ is a balance factor. The bounding-box classification loss function is L_cls(p_i, p_i*) = −log[p_i·p_i* + (1 − p_i*)(1 − p_i)], where p_i is the probability that the anchor is predicted to be a target and p_i* is the label of the anchor, with 0 representing a negative sample and 1 a positive sample. The bounding-box regression loss function is L_reg(t_i, t_i*) = R(t_i − t_i*), where R is the robust loss function, expressed as R(x) = 0.5x² if |x| < 1 and R(x) = |x| − 0.5 otherwise. t_i = {t_x, t_y, t_w, t_h} is a vector of the 4 parameterized coordinates of the predicted candidate box, and t_i* is the coordinate vector of the ground-truth box corresponding to a positive anchor.
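The following NumPy sketch illustrates these loss formulas; the normalisers N_cls and N_reg and the smooth-L1 form of R follow the standard Faster R-CNN formulation and are assumptions here, as are the example anchors.

```python
# Minimal NumPy sketch of the RPN loss defined above. The normalisers and
# the smooth-L1 form of R follow the standard Faster R-CNN formulation;
# they are assumptions, since the patent's formula images are not
# reproduced in the text.

import numpy as np

def smooth_l1(x):
    """Robust loss R: 0.5*x^2 for |x| < 1, |x| - 0.5 otherwise."""
    absx = np.abs(x)
    return np.where(absx < 1, 0.5 * x ** 2, absx - 0.5)

def rpn_loss(p, p_star, t, t_star, lam=1.0):
    """L = (1/N_cls)*sum L_cls + lam*(1/N_reg)*sum p*·L_reg, per the text above."""
    l_cls = -np.log(p * p_star + (1 - p_star) * (1 - p))  # classification term
    l_reg = smooth_l1(t - t_star).sum(axis=1)             # regression term R(t - t*)
    return l_cls.mean() + lam * (p_star * l_reg).sum() / max(p_star.sum(), 1)

# Two hypothetical anchors: one positive (label 1), one negative (label 0).
p = np.array([0.8, 0.3])                # predicted object probabilities p_i
p_star = np.array([1.0, 0.0])           # anchor labels p_i*
t = np.array([[0.1, 0.0, 0.2, 0.1],     # predicted box parameters t_i
              [0.0, 0.0, 0.0, 0.0]])
t_star = np.zeros_like(t)               # ground-truth box parameters t_i*
print(float(rpn_loss(p, p_star, t, t_star)))
```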
Fig. 4 is a schematic structural diagram of the fuzzy controller in embodiment 1 of the present invention. Information fusion by fuzzy logic control expresses the uncertainty of the multi-sensor information fusion process directly in the reasoning process and realizes multi-modal information fusion through a fuzzy controller. The fuzzy controller is the core of the fuzzy control system and consists of four parts: a fuzzification interface, a knowledge base, a fuzzy inference engine and a defuzzification interface. First, the information acquired by the different sensors is input; second, fuzzy sets and membership functions are used to describe the input information, with the membership functions representing the uncertainty of each sensor's information; then, fuzzy rules are established according to expert knowledge; finally, fuzzy logic operators perform fuzzy reasoning to derive the final information fusion result.
Fig. 5 is a diagram of the multi-modal information fusion framework based on fuzzy logic according to embodiment 1 of the present invention. The fuzzification interface fuzzifies the matching degree qi of the voice keywords over a continuous domain of discourse to obtain qi′, and fuzzifies the inclination angle θ of the beaker and the distance d from the beaker to the front object through membership functions to obtain θ′ and d′ respectively; the fuzzification of the input variables is completed in the fuzzification interface.
The fuzzification interface is in fact the input interface of the fuzzy controller; it converts an accurate input variable into a fuzzy quantity. The input variables are fuzzified using continuous domains of discourse.
The fuzzy linguistic variable of the inclination angle θ of the beaker is {S, M, B} = {Small, Middle, Big}, with domain of discourse [0, 90], representing 0° to 90°. The inclination angle of the beaker is S within [0, 30], M within [30, 60] and B within [60, 90].
The fuzzy linguistic variable of the distance d between the beaker and the front object is {S, M, B} = {Small, Middle, Big}, with domain of discourse [−6, 6], representing −6 cm to 6 cm. Negative values indicate that the beaker is inside the front object, and positive values indicate that the beaker is outside it.
The fuzzy linguistic variables of the voice keyword matching degree qi′ are {Si, Fi} = {Success, Failure}, with domain of discourse [0, 1], indicating that the matching probability lies between 0 and 1. The matching degree is F within [0, 0.9] and S within [0.9, 1]; i indicates that the voice keyword matches the ith intention in the voice intention library.
The inclination angle θ of the beaker and the distance d from the beaker to the front object are fuzzified through membership functions to obtain θ′ and d′ respectively, wherein:
wherein M is the maximum inclination angle, and S is the minimum inclination angle;
wherein L is the maximum distance from the beaker to the front object; and S is the minimum distance from the beaker to the front object.
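Since the membership-function formulas themselves are not reproduced above, the following sketch assumes simple triangular membership functions over the stated domain [0, 90] for θ; the breakpoints are illustrative, not taken from the invention.

```python
# Sketch of the fuzzification step. The patent's membership-function images
# are not reproduced, so simple triangular membership functions over the
# stated domain ([0, 90] degrees for theta) are assumed; the breakpoints
# below are illustrative.

def triangular(x, a, b, c):
    """Triangular membership: rises from a to the peak b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_theta(theta):
    """Membership of theta in {Small, Middle, Big} over [0, 90] degrees."""
    return {
        "S": triangular(theta, -1, 0, 30),    # Small peaks at 0 degrees
        "M": triangular(theta, 30, 45, 60),   # Middle peaks at 45 degrees
        "B": triangular(theta, 60, 90, 91),   # Big peaks at 90 degrees
    }

print(fuzzify_theta(40))   # e.g. {'S': 0.0, 'M': 0.667, 'B': 0.0}
```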
The fuzzy inference engine combines θ′, d′ and qi′ with the knowledge base to complete fuzzy reasoning via the fuzzy control rules, obtaining the final intention I of intention understanding; the knowledge base is a fuzzy rule base. The fuzzy control rules are:
Rj: If θ′ is Am and d′ is Bn and qi′ is Cik, then I is Il;
for j = 1, 2, ...; m = 1, 2, 3; n = 1, 2, 3; i = 1, 2, ..., 6; k = 1, 2; l = 1, 2, 3;
wherein Rj is the jth fuzzy rule; Am, Bn, Cik and Il respectively represent the linguistic variables of θ′, d′, qi′ and I in their domains of discourse.
The defuzzification interface outputs the final intention I after fuzzy reasoning.
The fuzzy reasoning process is completed by the fuzzy inference engine according to the fuzzified input variables and the fuzzy control rules, yielding the fuzzy output quantity. Summarizing the experience and knowledge of domain experts and operators, 18 fuzzy control rules are obtained for the different voice keywords, some of which are shown in the table below.
θ′    d′    qi′    I
M     S     S1     I1
M     VS    S1     I1
M     S     S2     I2
M     VS    S2     I2
The general form of each fuzzy control rule is described by IF-THEN, i.e.:
Rj: If θ′ is Am and d′ is Bn and qi′ is Cik, then I is Il;
for j = 1, 2, ...; m = 1, 2, 3; n = 1, 2, 3; i = 1, 2, ..., 6; k = 1, 2; l = 1, 2, 3;
wherein Rj is the jth fuzzy rule; Am, Bn, Cik and Il respectively represent the linguistic variables of θ′, d′, qi′ and I in their domains of discourse.
When the ith voice keyword is detected to match successfully, i.e. the linguistic variable is Si, the fuzzy control rules are as follows:
1. If (θ′ is S) and (d′ is S) and (qi′ is S1), then (I is I3);
2. If (θ′ is M) and (d′ is S) and (qi′ is S1), then (I is I1);
3. If (θ′ is B) and (d′ is S) and (qi′ is S1), then (I is I3);
4. If (θ′ is S) and (d′ is M) and (qi′ is S1), then (I is I3);
5. If (θ′ is M) and (d′ is M) and (qi′ is S1), then (I is I1);
6. If (θ′ is B) and (d′ is M) and (qi′ is S1), then (I is I3);
7. If (θ′ is S) and (d′ is B) and (qi′ is S1), then (I is I3);
8. If (θ′ is M) and (d′ is B) and (qi′ is S1), then (I is I3);
9. If (θ′ is B) and (d′ is B) and (qi′ is S1), then (I is I3);
For example, by fuzzy rule 2, when the inclination angle θ′ of the beaker is within [30, 60], the distance d′ from the beaker to the front object is within [−2, 2], and the matching degree qi′ is S1, i.e. the keyword is successfully matched with intention 1 in the voice intention library, the final intention is I1.
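The rule set above can be evaluated as in the following sketch; the Mamdani-style min/max operators and the crisp test memberships are assumptions for illustration, not operators specified by the invention.

```python
# Sketch of evaluating the nine rules above with min/max fuzzy operators.
# The rule table is taken from the text; the use of min for AND and max for
# aggregation (Mamdani-style) is an assumption, as are the test inputs.

RULES = [  # (theta label, d label, q label) -> intention
    ("S", "S", "S1", "I3"), ("M", "S", "S1", "I1"), ("B", "S", "S1", "I3"),
    ("S", "M", "S1", "I3"), ("M", "M", "S1", "I1"), ("B", "M", "S1", "I3"),
    ("S", "B", "S1", "I3"), ("M", "B", "S1", "I3"), ("B", "B", "S1", "I3"),
]

def infer(theta_mu, d_mu, q_mu):
    """Fire each rule at min(memberships); aggregate per intention with max."""
    out = {}
    for a, b, c, intent in RULES:
        strength = min(theta_mu.get(a, 0.0), d_mu.get(b, 0.0), q_mu.get(c, 0.0))
        out[intent] = max(out.get(intent, 0.0), strength)
    return max(out, key=out.get)

# Hypothetical fuzzified inputs: beaker tilted about 45 degrees, close to the
# front object, keyword 1 matched successfully.
theta_mu = {"S": 0.0, "M": 0.9, "B": 0.1}
d_mu = {"S": 0.7, "M": 0.3, "B": 0.0}
q_mu = {"S1": 1.0}
print(infer(theta_mu, d_mu, q_mu))   # -> I1 (rule 2 dominates)
```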
On the one hand, the intelligent beaker addresses the lack of teacher guidance in the teaching process through experiment navigation and an experiment-result scoring system. On the other hand, it solves process monitoring of user behaviors, experimental scene understanding, realistic quantitative interaction and other problems through multi-modal fusion and an intention understanding algorithm. During the experiment, interaction is carried out by fusing vision, voice and sensors; experiment teaching and guidance are completed through voice navigation; and phenomena that are difficult to observe in traditional experiments can be observed in the virtual-reality-fused experimental scene, making it easy for students to remember the key points of the experiment and deepening their impression of it.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, the scope of the present invention is not limited thereto. Various modifications and alterations will occur to those skilled in the art based on the foregoing description; the embodiments described are neither required nor exhaustive. On the basis of the technical scheme of the invention, various modifications or changes that can be made by a person skilled in the art without creative effort still fall within the protection scope of the invention.