Disclosure of Invention
The invention provides an unmanned control system and a control method suitable for a roundabout scene. Aiming at the driving requirements of the roundabout driving scene, the reinforcement-learning state and action are specially designed according to the characteristics of the driving decision, and the network framework of the reinforcement-learning Actor-Critic architecture is optimized, so that the decision method is better suited to the driving-decision problem of the unmanned roundabout scene.
The invention provides an unmanned control system suitable for a roundabout scene, which comprises a perception and cognition module, a driving control module and a track control module;
the perception cognition module is used for acquiring the running state information of the current vehicle and the environmental vehicle and processing signals;
the driving control module is used for learning appropriate decision parameter values;
and the track control module is used for obtaining the feasible track after the optimization planning.
Another aspect of the present invention provides an unmanned control method for a roundabout scene, which is implemented by an unmanned control system for a roundabout scene according to an aspect of the present invention, comprising the following steps,
step one, designing states and actions in a Markov driving decision process;
the driving decision is modeled as a Markov decision process based on a reinforcement learning method, comprising the design of a state vector S that represents the factors influencing the agent's driving decision, and the design of an action vector A that refines the agent's decision making;
step two, designing a network framework of the Actor;
in the reinforcement-learning Actor-Critic framework, the Actor selects an action according to the state vector, i.e. it represents the driving decision; the state vector comprises two parts, an environment representation and a task representation; by redesigning the network framework of the Actor, different strategies are obtained at different stages and the differing dimensions of the environment representation and the task representation are balanced, so that the intelligent vehicle can accurately identify the driving environment under different conditions and accurately complete the driving task while driving in the roundabout;
step three, designing a return function;
the agent selects an action A in the environment according to the state vector S to obtain a return signal, and updates the strategy according to the return signal.
The unmanned control method suitable for the roundabout scene further comprises the following steps in the state and action design of the Markov driving decision process of step one,
firstly, designing a state variable;
the state variables are used for action selection and value-function estimation in the reinforcement learning algorithm; they comprise an Environment Representation (ER) related to the relative states of the host vehicle and the surrounding vehicles, and a Task Representation (TR) related to the driving task of the host vehicle, wherein the environment representation is used by the agent to make safe decisions and the task representation is used by the agent to complete the driving task;
secondly, designing action variables;
multi-layer driving behaviors are considered at the decision layer; the action vector A representing the driving decision of the host vehicle comprises a discrete macroscopic driving behavior, namely the terminal lateral offset Ty relative to the lane center line, and continuous micro- and meso-level driving behaviors, namely the added decision variables of expected acceleration atar and action time ta; the terminal lateral offset Ty ∈ {-L, 0, L} respectively represents a left lane change, lane keeping and a right lane change; L is the distance between two adjacent lanes; the action vector A = (Ty, atar, ta)T then comprehensively represents the driving decision and is input as an input variable to the lower trajectory-planning layer and vehicle-control layer.
The unmanned control method suitable for the roundabout scene further comprises, in the state variable design of step one:
for the environment representation, in the roundabout a part of the surrounding vehicles are adjacent to the host vehicle; these are the vehicles in direct contact interaction that require attention, and their positions are P1, P2, ..., P7; at time k, the relative lane ΔLn(k), relative velocity Δvn(k), acceleration an(k), relative distance dn(k) and driving intention In(k) of the vehicles at these positions are considered in the environment representation, where the subscript n corresponds to the vehicle information of position number Pn; here the relative lane is calculated as ΔLn(k) = Ln(k) - Lh(k), where Ln(k), Lh(k) are respectively the lane of the vehicle at position Pn and the lane of the host vehicle at time k; the relative velocity is calculated as Δvn(k) = vn(k) - vh(k), where vn(k), vh(k) are respectively the speed of the vehicle at position Pn and the speed of the host vehicle at time k; the driving intention In(k) ∈ {-1, 0, 1} indicates that at time k the vehicle at position Pn intends to change lane left, keep its lane, or change lane right; meanwhile, a human driver makes decisions according to the states of the surrounding vehicles and selects a smooth lane according to the traffic-flow information on a given lane, thereby reducing the probability of congestion and stoppage; the nearby forward and backward traffic flows, namely positions P8, P9, ..., P12, form the other part of the environment representation; the state at positions P8, P9, ..., P12 is represented by the average relative speed Δv̄n(k) and the average headway T̄Hn(k) of the traffic flow at time k; here the headway between vehicle j at position Pn and its preceding vehicle at time k is THn,j(k) = dn,j(k)/vn,j(k), where dn,j(k), vn,j(k) are respectively the relative distance between vehicle j and its preceding vehicle at time k and the speed of vehicle j; then at time k the state variable of each position Pn among P1, P2, ..., P7 is expressed as formula (1),
SPn(k)=(Fn(k),ΔLn(k),Δvn(k),an(k),dn(k),In(k))T, (1)
wherein Fn ∈ {1,0} indicates whether the corresponding position lies on a feasible lane; at time k, the state variable at positions P8, P9, ..., P12 is expressed as formula (2),
SPn(k)=(Δv̄n(k),T̄Hn(k))T, (2)
then at time k, the Environment Representation (ER) is expressed as formula (3),
SER(k)=(SP1(k)T,SP2(k)T,...,SP12(k)T)T, (3)
for the task representation, in the roundabout, the driving control module completes the driving task set in the route navigation plan, namely the intelligent vehicle enters the roundabout from one entrance and then exits from another exit; then at time k, the relative longitudinal distance Δlh(k) and the relative lane ΔLh(k) of the host vehicle with respect to the exit are included in the task representation; the relative longitudinal distance Δlh(k) of the host vehicle relative to the exit is represented by formula (4),
wherein Δαh(k), DE, Dh(k), αE, αh(k) are respectively the central angle of the host vehicle relative to the exit position E at time k, the diameters of the lane where exit E is located and of the lane where the host vehicle is located at time k, and the central angles corresponding to exit position E and to the position of the host vehicle at time k; the relative lane is ΔLh(k) = LE - Lh(k), where LE, Lh(k) are respectively the lane of exit position E and the lane of the host vehicle at time k; then at time k, the Task Representation (TR) is expressed as formula (5),
STR(k)=(Δlh(k),ΔLh(k))T. (5)
The state vector S is then jointly characterized by the environment representation and the task representation designed above.
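Per the design above — seven adjacent positions with six variables each and five traffic-flow positions with two variables each — the environment representation has 7×6 + 5×2 = 52 dimensions and the task representation has 2. A minimal sketch of assembling the state vector S (the numeric entries are hypothetical):

```python
import numpy as np

N_NEIGHBOR, NEIGHBOR_FEATS = 7, 6   # P1..P7: (Fn, dLn, dvn, an, dn, In)
N_FLOW, FLOW_FEATS = 5, 2           # P8..P12: (avg rel. speed, avg headway)

def build_state(neighbors, flows, dl_h, dL_h):
    """Concatenate the environment representation (ER) and the
    task representation (TR) into the state vector S."""
    er = np.concatenate([np.asarray(v, float) for v in neighbors + flows])
    tr = np.array([dl_h, dL_h], float)  # (relative distance, relative lane) to exit
    assert er.size == N_NEIGHBOR * NEIGHBOR_FEATS + N_FLOW * FLOW_FEATS  # 52 dims
    return np.concatenate([er, tr])     # 54-dimensional state vector

neighbors = [[1, 0, -1.2, 0.3, 15.0, 0]] * 7   # hypothetical P1..P7 entries
flows = [[-0.5, 2.1]] * 5                      # hypothetical P8..P12 entries
S = build_state(neighbors, flows, dl_h=30.0, dL_h=-1)
```

The 52/2 dimension split recovered here is the same imbalance the Actor network redesign in step two is meant to compensate for.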
The unmanned control method suitable for the roundabout scene further comprises, in the return-function design of step three, three layers: the safety return rs, the task return rt and the execution return re; the safety return rs(k) at time k is calculated according to the distances from the host vehicle to the vehicles on the host lane Lh(k) and on the target lane Ltar(k) = Lh(k) + sign(Ty(k)), where sign(Ty(k)) denotes the left or right lane-change action selected by the host vehicle at time k; vehicles that will cut into these two lanes within the next 5 s are also included; when the terminal lateral offset Ty(k) relative to the lane center line is 0, the host vehicle performs a lane-keeping action and only the vehicle at position P4 ahead of the host vehicle is considered; when Ty(k) < 0, the vehicles at the four positions P1, P2, P3, P4 are considered. Suppose the distance in the lane direction between the vehicle at position Pn and the host vehicle at time k is dn(k); the safety return rs(k) at this time can then be incrementally calculated as formula (6),
wherein de is the danger distance and dc is the collision distance;
the task return rt(k) at time k is calculated from the following three aspects; the first aspect is the final completion of the intelligent vehicle's driving task of exiting the roundabout, incrementally calculated as formula (7),
wherein |Δlh(k)| = |(αE - αh(k))DE| is the longitudinal distance along the lane between the host vehicle and the exit E, and αE, αh(k), DE are respectively the central angle of exit position E, the central angle of the host vehicle relative to exit position E at time k, and the diameter of the lane where exit E is located; the relative lane is ΔLh(k) = LE - Lh(k), where LE, Lh(k) are respectively the lane of exit position E and the lane of the host vehicle at time k;
the second aspect is related to decisions at different positions of the intelligent vehicle; because the inner lane has higher traffic efficiency, the vehicle tends to select the inner lane to pass through the roundabout faster, and the expected relative lane ΔLexp(k) at time k is calculated as formula (8),
wherein αE, αlc are respectively the central angle of exit position E and the central angle required to complete one lane-change operation, ⌊·⌋ denotes the round-down (floor) operation, and the relative lane is ΔLh(k) = LE - Lh(k), where LE, Lh(k) are respectively the lane of exit position E and the lane of the host vehicle at time k; then another part of the task return rt(k) at time k is incrementally calculated as formula (9),
wherein ΔLexp(k) is the expected relative lane at time k and Ty(k) is the terminal lateral offset relative to the lane center line; meanwhile, when the host vehicle selects a lane-change decision behavior, the preceding vehicles and traffic-flow conditions of the target lane Ltar(k) and the host lane Lh(k) are compared; suppose the preceding vehicles to be compared are at positions P1, P4 and the traffic flows to be compared are at positions P8, P9; the return is then calculated as formulas (10a), (10b), (10c) and (10d),
wherein v1(k), v4(k), TH1(k), TH4(k), d1(k), d4(k) are respectively the speed, the headway relative to the host vehicle and the longitudinal distance of the vehicles at positions P1, P4 at time k, and the average headways of the traffic flows at positions P8, P9 at time k are also used;
the last part of the task return rt(k) at time k is incrementally calculated as formula (11),
rt(k) = rt(k) + k1rt,1 + k2rt,2 + k3rt,3 + k4rt,4 (11)
wherein k1, k2, k3, k4 are parameters;
then, the execution return re(k) at time k is given as formula (12),
wherein k5, k6 are parameters, LT is the total number of lanes in the roundabout, Lh(k) is the lane of the host vehicle at time k, and Ty(k) is the terminal lateral offset relative to the lane center line;
finally, the return r(k) at time k is given as formula (13),
r(k) = rs(k) + rt(k) + re(k) (13)
wherein rs(k), rt(k), re(k) are respectively the safety return, the task return and the execution return at time k.
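Formulas (11) and (13) combine the return terms linearly; a minimal sketch with hypothetical weights k1..k4 and hypothetical sub-return values:

```python
def task_return(rt_prev, subs, weights):
    """Formula (11): incremental task return
    rt <- rt + k1*rt,1 + k2*rt,2 + k3*rt,3 + k4*rt,4."""
    return rt_prev + sum(k * r for k, r in zip(weights, subs))

def total_return(rs, rt, re):
    """Formula (13): r(k) = rs(k) + rt(k) + re(k)."""
    return rs + rt + re

# Hypothetical sub-returns and weight parameters for one time step:
rt = task_return(0.0, subs=[1.0, -0.5, 0.2, 0.0], weights=[1.0, 2.0, 0.5, 1.0])
r = total_return(rs=-0.1, rt=rt, re=0.05)
```

Keeping the three layers as separate terms, as the document does, makes it easy to retune the safety/task/execution trade-off by adjusting only the weight parameters.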
The unmanned control system and the unmanned control method suitable for the roundabout scene can achieve the following beneficial effects:
the unmanned control system and the unmanned control method suitable for the roundabout scene have the following advantages: (1) considering an Environment Representation (ER) related to the relative state of the vehicle and the surrounding vehicle and a Task Representation (TR) related to a vehicle driving task aiming at the driving requirement of the roundabout driving scene so as to better adapt to the driving decision problem of the roundabout unmanned driving scene; (2) the method is based on refined driving decision requirements, and the decided action vector simultaneously comprises a discrete variable pointing to macro driving behavior of lane changing and a continuous variable pointing to micro driving behavior of lane changing, so that better system performance is realized; (3) according to different characteristics and characteristics of an environment characterization (ER) and a task characterization (TR), an Actor network framework of an Actor-critical framework established by a reinforcement learning decision algorithm is specially designed to balance the dimension difference of the two characterization modes; and (4) the return function is designed by considering the performance indexes of safety return, mission return and executive return, so that the intelligent agent can effectively learn to obtain the driving strategy.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the specific embodiments of the present invention and the accompanying drawings. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The technical solutions provided by the embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Example 1
The unmanned control system suitable for the roundabout scene, as shown in fig. 1, comprises a perception and cognition module, a driving control module and a track control module;
the perception cognition module is used for acquiring the running state information of the current vehicle and the environmental vehicle and processing signals;
the driving control module is used for learning appropriate decision parameter values;
and the track control module is used for obtaining the feasible track after the optimization planning.
Example 2
The unmanned control method suitable for the roundabout scene is realized by the unmanned control system suitable for the roundabout scene as described in embodiment 1, and comprises the following steps,
step one, designing states and actions in a Markov driving decision process;
the driving decision may be modeled as a Markov decision process based on a reinforcement learning approach, which comprises the design of a state vector S representing the factors that influence the driving decision of the agent, and the design of an action vector A that refines the agent's decision making. The specific method comprises the following steps:
the first step, the state variable design,
the state variables are used for action selection and value function estimation in the reinforcement learning algorithm, so that the relation between the current state of the intelligent agent and the environment and the characteristics among tasks required to be completed by the current state of the intelligent agent can be accurately represented in the design of the state variables, the sensitivity of the intelligent agent to the environment and the state of the intelligent agent can be improved, the intelligent agent can be helped to reasonably act in the changing environment, and the learning process can be more effective. Meanwhile, the efficiency of the learning algorithm and the learning result are not only related to the design of the return function, but also have a certain degree of relation with the design of the state variable.
In the design of the state variables, the present embodiment considers two parts: an environment representation 1 related to the relative states of the host vehicle and the surrounding vehicles, and a task representation 2 related to the driving task of the host vehicle. The environment representation 1 helps the agent make safe decisions, and the task representation 2 helps the agent smoothly complete the driving task.
For the environment representation 1, the surrounding vehicles in the roundabout can be divided into two parts as shown in fig. 2, numbered as in the figure. The ranges of the different positions are shown in table 1.

Table 1:

Position       Range              Position       Range
P4             THn ∈ [0, 3]       P1, P5         dn ∈ [10, 40]
P2, P6         dn ∈ [-10, 10]     P3, P7         dn ∈ [-40, -10]
P8, P9, P10    dn < 40            P11, P12       dn > -40

In table 1, THn(k) = dn(k)/vh(k), where THn(k), dn(k), vh(k) are respectively the headway of position Pn relative to the host vehicle, the relative distance, and the speed of the host vehicle at time k.
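The ranges in table 1 can be checked programmatically; a minimal sketch (the zone labels and the handling of the overlapping boundary values are illustrative assumptions):

```python
def headway(d_n, v_h):
    """TH_n(k) = d_n(k) / v_h(k): time headway of position P_n
    relative to the host vehicle (table 1 footnote)."""
    return d_n / v_h

def adjacent_zone(d_n):
    """Assign a same-side adjacent position from the relative distance
    d_n, following the ranges in table 1. Boundary values shared by two
    ranges are resolved in favor of the earlier check (an assumption)."""
    if 10 <= d_n <= 40:
        return "P1/P5"   # ahead of the host vehicle
    if -10 <= d_n < 10:
        return "P2/P6"   # alongside the host vehicle
    if -40 <= d_n < -10:
        return "P3/P7"   # behind the host vehicle
    return None          # outside the adjacent-vehicle ranges
```

For example, a vehicle 15 m ahead falls in the P1/P5 range, while one 20 m behind falls in P3/P7.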
Among the vehicles shown in fig. 2, those adjacent to the host vehicle are in direct contact interaction with it and require close attention; their positions are P1, P2, ..., P7. At time k, the relative lane ΔLn(k), relative velocity Δvn(k), acceleration an(k), relative distance dn(k) and driving intention In(k) of the vehicles at these positions are considered in the environment representation 1, where the subscript n corresponds to the vehicle information of position number Pn. Here the relative lane is calculated as ΔLn(k) = Ln(k) - Lh(k), where Ln(k), Lh(k) are respectively the lane of the vehicle at position Pn and the lane of the host vehicle at time k. The relative velocity is calculated as Δvn(k) = vn(k) - vh(k), where vn(k), vh(k) are respectively the speed of the vehicle at position Pn and the speed of the host vehicle at time k. The driving intention In(k) ∈ {-1, 0, 1} indicates that at time k the vehicle at position Pn intends to change lane left, keep its lane, or change lane right. Meanwhile, a human driver makes decisions according to the states of the surrounding vehicles and also considers the traffic-flow information on a given lane; selecting a smooth lane reduces the probability of congestion and stoppage. Thus, the nearby forward and backward traffic flows, namely positions P8, P9, ..., P12, form the other part of the environment representation 1. The state at positions P8, P9, ..., P12 is represented by the average relative speed Δv̄n(k) and the average headway T̄Hn(k) of the traffic flow at time k. Here the headway between vehicle j at position Pn and its preceding vehicle at time k is THn,j(k) = dn,j(k)/vn,j(k), where dn,j(k), vn,j(k) are respectively the relative distance between vehicle j and its preceding vehicle at time k and the speed of vehicle j. From the above, at time k the state variable of each position Pn among P1, P2, ..., P7 can be expressed as formula (1),
SPn(k)=(Fn(k),ΔLn(k),Δvn(k),an(k),dn(k),In(k))T, (1)
wherein Fn ∈ {1,0} indicates whether the corresponding position lies on a feasible lane. At time k, the state variable at positions P8, P9, ..., P12 can be represented by formula (2),
SPn(k)=(Δv̄n(k),T̄Hn(k))T, (2)
Therefore, at time k, the environment representation 1 can be expressed as formula (3),
SER(k)=(SP1(k)T,SP2(k)T,...,SP12(k)T)T, (3)
For the task representation 2, in the roundabout the driving decision module needs to complete the specific driving task in the route navigation plan, i.e. the intelligent vehicle enters the roundabout from one entrance and then exits from another exit. Thus, at time k, the relative longitudinal distance Δlh(k) and the relative lane ΔLh(k) of the host vehicle with respect to the exit are considered in the task representation 2. The relative longitudinal distance Δlh(k) of the host vehicle relative to the exit can be represented by formula (4),
wherein Δαh(k), DE, Dh(k), αE, αh(k) are respectively the central angle of the host vehicle relative to the exit position E at time k, the diameters of the lane where exit E is located and of the lane where the host vehicle is located at time k, and the central angles corresponding to exit position E and to the position of the host vehicle at time k. The relative lane is ΔLh(k) = LE - Lh(k), where LE, Lh(k) are respectively the lane of exit position E and the lane of the host vehicle at time k. Therefore, at time k, the task representation 2 can be expressed as formula (5),
STR(k)=(Δlh(k),ΔLh(k))T. (5)
Finally, the state vector S is jointly characterized by the environment representation 1 and the task representation 2 designed above.
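The relative lane ΔLh(k) = LE − Lh(k) is fully specified above; for the longitudinal distance, this sketch reuses the arc-length expression |(αE − αh(k))DE| quoted alongside formula (7) later in the text, so it is an assumption about formula (4) rather than its exact form:

```python
import math

def task_representation(alpha_E, alpha_h, D_E, L_E, L_h):
    """TR at time k: (relative longitudinal distance, relative lane) of
    the host vehicle with respect to exit E. The arc-length form
    (alpha_E - alpha_h) * D_E is borrowed from the text around formula
    (7); it is an assumption here, not formula (4) itself."""
    dl_h = (alpha_E - alpha_h) * D_E   # longitudinal distance along the lane
    dL_h = L_E - L_h                   # relative lane: Delta L_h = L_E - L_h
    return dl_h, dL_h

# Host vehicle a quarter turn before the exit, one lane further inside:
dl, dL = task_representation(alpha_E=math.pi / 2, alpha_h=0.0, D_E=40.0, L_E=1, L_h=2)
```

As the host vehicle approaches the exit both components shrink toward zero, which is what makes this two-dimensional pair a sufficient task signal.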
The second step, the design of the action variables,
the refined driving decision should consider more driving behaviors in the decision layer. The motion vector A representing the driving decision of the vehicle comprises discrete macroscopic driving behaviors, namely the lateral deviation T of the terminal relative to the central line of the vehicle channelyAnd continuous microscopic driving behavior, i.e. adding a decision variable to the desired acceleration atarTime of action ta. Lateral offset T of terminal relative to central line of laneyAnd e { -L,0, L }, which respectively represent a left lane change, a lane keeping and a right lane change. And L is the distance between two adjacent lanes. Final use motion vector a ═ Ty,atar,ta)TAnd comprehensively representing a more refined driving decision, and inputting the driving decision as an input variable into a lower track planning layer and a vehicle control layer. In particular, when the motion vector a takes different values, it can be described as different driving behaviors as shown in table 2.
TABLE 2

(Ty, atar, ta)T    Description
(-L, 0.5, 4)T      Gentle accelerating left lane change
(-L, 1, 1)T        Accelerated lane keeping
(-L, -1, 1)T       Decelerating lane keeping
(L, 0, 2)T         Fast right lane change at constant speed
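Action vectors like those in table 2 can be decoded mechanically; in this sketch the lane spacing L = 3.5 m and the wording of the behavior strings are illustrative assumptions, not values from the document:

```python
L = 3.5  # assumed lane spacing [m]; the document only requires Ty in {-L, 0, L}

def describe_action(T_y, a_tar, t_a):
    """Map the action vector A = (Ty, atar, ta)^T to a readable
    driving behavior (cf. table 2)."""
    assert T_y in (-L, 0.0, L), "Ty must encode left change, keep, or right change"
    lateral = {-L: "left lane change", 0.0: "lane keeping", L: "right lane change"}[T_y]
    longitudinal = "accelerate" if a_tar > 0 else "decelerate" if a_tar < 0 else "hold speed"
    return f"{longitudinal}, {lateral}, over {t_a} s"

desc = describe_action(-L, 0.5, 4)
```

The same (Ty, atar, ta) triple is what the trajectory-planning layer consumes, so a decoder like this is mainly useful for logging and debugging the decision layer.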
Step two, designing a network framework of the Actor;
the reinforcement learning decision algorithm of the embodiment is built on an Actor-Critic framework. In a reinforcement learning Actor-critical framework, an Actor selects an action according to a state vector, namely, a driving decision is represented. The state vector considered by this patent contains two parts, environment characterization 1 and task characterization 2. These two parts have equal effect in driving decision. For example, when the intelligent vehicle enters a lane change scene, the intelligent agent has more freedom to select actions with higher returns, for example, entering an inner lane or a lane with sparse traffic flow to obtain higher traffic efficiency, and when approaching an exit of the roundabout, the intelligent agent should change the lane outside the lane as much as possible so as to smoothly leave the roundabout from a given exit. These cases cause the state vector to have different policies at different stages. As described in step 21), the dimension of the environment characterization 1 is 52 and the dimension of the task characterization 2 is 2. Such dimension differences can make it difficult for a few-dimensional task representation 2 to function as a state representation as does environment representation 1 in a fully connected BP neural network. Therefore, in order to balance the dimension difference, the patent redesigns the network framework of the Actor, and the specific method is as follows:
as shown in fig. 3, at the input layer, task representation 2 is copied into 26 copies, and environment representation 1 is input to the Actor network, while at the first hidden layer and the second hidden layer, assuming that the number of neurons input by the previous layer is 2m, environment representation 1 is copied into the current layer m times. The above steps are repeated once for both hidden layers. Through the redesign of the network framework of the Actor, the problem of different dimensions of the environment representation 1 and the task representation 2 can be balanced, and finally, the driving environments of different conditions can be accurately identified and the driving tasks can be accurately completed when the intelligent vehicle runs in the roundabout.
Step three, designing a return function;
the agent selects an action A in the environment according to the state vector S to obtain a return signal, and updates the strategy according to the return signal. Therefore, the design of the reward function is closely related to the driving problem, and is the key to effectively learn the driving strategy.
The specific method for designing the return function under the roundabout scene considered in the patent is as follows:
the design of the return function mainly considers the safety return rsTasking reward rtExecutive reporting reThree levels. Time k security report rs(k) Mainly considering the own lane Lh(k) And a target lane Ltar(k)=Lh(k)+sign(Ty(k) Distance of the vehicle from the host vehicle, where sign (T)y(k) Left and right lane-changing actions selected by the host vehicle at time k. Also included are vehicles that will switch in both lanes in the future 5S. In particular, the lateral offset T of the terminal relative to the center line of the laney(k) When the vehicle speed is 0, the vehicle performs a lane keeping operation, and only the front part P of the vehicle4The vehicle of the location needs to be considered. When the terminal is laterally offset T relative to the center line of the laney(k) If < 0, P is considered1,P2,P3,P4Four position vehicles.Suppose a position P at time knAt a distance d from the host vehicle in the lane directionn(k) Then the security at this moment is reported rs(k) Can be incrementally calculated as equation (6),
wherein d iseIs a dangerous distance, dcIs the collision distance.
The task return rt(k) at time k can be calculated from the following three aspects. The first is the final completion of the intelligent vehicle's driving task of exiting the roundabout, which can be incrementally calculated as formula (7),
wherein |Δlh(k)| = |(αE - αh(k))DE| is the longitudinal distance along the lane between the host vehicle and the exit E, and αE, αh(k), DE are respectively the central angle of exit position E, the central angle of the host vehicle relative to exit position E at time k, and the diameter of the lane where exit E is located. The relative lane is ΔLh(k) = LE - Lh(k), where LE, Lh(k) are respectively the lane of exit position E and the lane of the host vehicle at time k.
The second aspect is related to decisions at different positions of the intelligent vehicle. Because the inner lane has higher traffic efficiency, the vehicle tends to select the inner lane to pass through the roundabout faster. The expected relative lane ΔLexp(k) at time k can be calculated as formula (8),
wherein αE, αlc are respectively the central angle of exit position E and the central angle required to complete one lane-change operation, ⌊·⌋ denotes the round-down (floor) operation, and the relative lane is ΔLh(k) = LE - Lh(k), where LE, Lh(k) are respectively the lane of exit position E and the lane of the host vehicle at time k. Thus, another part of the task return rt(k) at time k can be incrementally calculated as formula (9),
wherein ΔLexp(k) is the expected relative lane at time k and Ty(k) is the terminal lateral offset relative to the lane center line. Meanwhile, when the host vehicle selects a lane-change decision behavior, the preceding vehicles and traffic-flow conditions of the target lane Ltar(k) and the host lane Lh(k) are compared. Suppose the preceding vehicles to be compared are at positions P1, P4 and the traffic flows to be compared are at positions P8, P9; the return is then calculated as formulas (10a) to (10d),
wherein v1(k), v4(k), TH1(k), TH4(k), d1(k), d4(k) are respectively the speed, the headway relative to the host vehicle and the longitudinal distance of the vehicles at positions P1, P4 at time k, and the average headways of the traffic flows at positions P8, P9 at time k are also used.
Correspondingly, the last part of the task return rt(k) at time k can be incrementally calculated as formula (11),
rt(k) = rt(k) + k1rt,1 + k2rt,2 + k3rt,3 + k4rt,4 (11)
wherein k1, k2, k3, k4 are parameters.
Then, the execution return re(k) at time k is given as formula (12),
wherein k5, k6 are parameters, LT is the total number of lanes in the roundabout, Lh(k) is the lane of the host vehicle at time k, and Ty(k) is the terminal lateral offset relative to the lane center line.
Finally, the return r(k) at time k is given as formula (13),
r(k) = rs(k) + rt(k) + re(k) (13)
wherein rs(k), rt(k), re(k) are respectively the safety return, the task return and the execution return at time k.
The unmanned control system and the unmanned control method suitable for the roundabout scene belong to the technical field of automatic driving, and relate to a driving-decision method based on reinforcement learning, in which the reinforcement-learning state and action are specially designed according to the characteristics of the driving decision and the network framework of the reinforcement-learning Actor-Critic architecture is optimized, so that the decision method is better suited to the driving decision of the unmanned roundabout scene. Each sub-control system of the automatic driving control system of the unmanned vehicle realizes automatic control through system design; as shown in fig. 1, the system comprises a perception and cognition module, a driving control module and a track control module, and this embodiment mainly relates to the driving control module.
The above description is only an example of the present invention, and is not intended to limit the present invention. Various modifications and alterations to this invention will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.