Disclosure of Invention
In view of the foregoing, it is desirable to provide a data request method, apparatus, computer device, storage medium, and computer program product capable of improving slice response efficiency, so as to address the above technical problems.
In a first aspect, the present application provides a data request method, including:
acquiring a data identifier of data to be requested;
searching for target data in a first-level cache database according to the data identifier; if the target data does not exist in the first-level cache database, searching for the target data in a second-level cache database according to the data identifier;
and if the target data does not exist in the second-level cache database, performing data slicing on global data according to the data identifier to obtain the target data, and sending the target data to the terminal so that the terminal can display the target data.
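The three-step lookup above can be sketched as follows. This is a minimal illustration only: in-memory dictionaries stand in for the two cache databases, and the slicing callback is a hypothetical placeholder, not the claimed implementation.

```python
def request_data(data_id, l1_cache, l2_cache, slice_global_data):
    """Return the slice for data_id: try L1, then L2, then slice on demand."""
    target = l1_cache.get(data_id)      # step 1: first-level cache lookup
    if target is not None:
        return target
    target = l2_cache.get(data_id)      # step 2: second-level cache lookup
    if target is not None:
        return target
    # step 3: neither cache holds the slice, so cut it from the global data
    return slice_global_data(data_id)
```

For example, with `l1_cache = {"a": "slice-a"}` and `l2_cache = {"a": "slice-a", "b": "slice-b"}`, a request for "a" is served from L1, "b" from L2, and any other identifier falls through to the slicing callback.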
In one embodiment, the method further comprises:
and if the target data is obtained by data slicing of the global data, storing the target data into the first-level cache database and the second-level cache database.
In one embodiment, storing the target data into the first-level cache database and the second-level cache database includes:
and after storing the target data into the secondary cache database, storing the target data into the primary cache database.
In one embodiment, storing the target data into the first-level cache database and the second-level cache database includes:
detecting whether the storage capacity of the first-level cache database has reached its upper limit;
screening out data from the first-level cache database when its storage capacity has reached the upper limit;
and storing the target data into the second-level cache database and into the first-level cache database after the screened-out data has been removed.
In one embodiment, screening out data from the primary cache database when its storage capacity reaches the upper limit includes:
screening out data from the primary cache database based on an LRU algorithm when the storage capacity of the primary cache database reaches the upper limit.
In one embodiment, the method further comprises:
under the condition that global data is updated, acquiring an update message, wherein the update message comprises data content and a data identifier;
and traversing the primary cache database and the secondary cache database according to the update message.
In a second aspect, the present application further provides a data request apparatus, including:
the acquisition module is used for acquiring a data identifier of data to be requested;
the searching module is used for searching for target data in the first-level cache database according to the data identifier; if the target data does not exist in the first-level cache database, searching for the target data in the second-level cache database according to the data identifier;
and the slicing module is used for performing data slicing on the global data according to the data identifier to obtain the target data if the target data does not exist in the second-level cache database, and sending the target data to the terminal so that the terminal can display the target data.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory and a processor, the memory stores a computer program, and the processor implements the following steps when executing the computer program:
acquiring a data identifier of data to be requested;
searching for target data in a first-level cache database according to the data identifier; if the target data does not exist in the first-level cache database, searching for the target data in a second-level cache database according to the data identifier;
and if the target data does not exist in the second-level cache database, performing data slicing on global data according to the data identifier to obtain the target data, and sending the target data to the terminal so that the terminal can display the target data.
In a fourth aspect, the present application further provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of:
acquiring a data identifier of data to be requested;
searching for target data in a first-level cache database according to the data identifier; if the target data does not exist in the first-level cache database, searching for the target data in a second-level cache database according to the data identifier;
and if the target data does not exist in the second-level cache database, performing data slicing on global data according to the data identifier to obtain the target data, and sending the target data to the terminal so that the terminal can display the target data.
In a fifth aspect, the present application further provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
acquiring a data identifier of data to be requested;
searching for target data in a first-level cache database according to the data identifier; if the target data does not exist in the first-level cache database, searching for the target data in a second-level cache database according to the data identifier;
and if the target data does not exist in the second-level cache database, performing data slicing on global data according to the data identifier to obtain the target data, and sending the target data to the terminal so that the terminal can display the target data.
The data request method, apparatus, computer device, storage medium, and computer program product described above can improve data request efficiency. During a data request, a data identifier of the data to be requested is acquired; target data is searched for in a first-level cache database according to the data identifier; if the target data does not exist in the first-level cache database, the target data is searched for in a second-level cache database according to the data identifier; and if the target data does not exist in the second-level cache database, data slicing is performed on global data according to the data identifier to obtain the target data, which is then sent to the terminal so that the terminal can display it. The method eliminates the manual preprocessing required by manual slicing, greatly reducing operation and maintenance burden and storage requirements, and at the same time stores the generated slices so that hot spot data is always obtained from the cache, greatly improving the response speed and concurrency capability of the interface.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Segmenting huge volumes of data into data slices is a basic and common operation in the big data field for data mining and data analysis.
Common slicing methods include the grid slicing method and the vector slicing method. Vector slices can be generated either manually or automatically. In the manual mode, slices are pre-generated before a caller uses them, organized into a certain folder structure, and served as static resources to the front end for rendering and display. Although manual slicing is fast and convenient at request time, preprocessing the slice data, including both stock slices and incremental slices, takes a great deal of time and incurs a large manual workload and storage footprint. In the automatic mode, slicing is performed only when a client requests the sliced data; the resulting problem is interface performance: the slicing time for a single request is too long, which greatly degrades the user experience, and under high concurrency the rapidly growing server load also limits the concurrency performance of the framework.
The data request method provided by the present application combines the advantages of the manual and automatic slicing modes. By adopting a two-dimensional power grid dynamic vector slicing method based on a hot spot caching mechanism, the manual preprocessing required by manual slicing is eliminated, greatly reducing operation and maintenance burden and storage requirements. At the same time, a redis hot spot data caching mechanism and the fastdfs distributed caching technology are used to store the generated slices, and an automatic slicing mechanism dynamically cuts slice data on its first request and caches it or updates the cache, so that hot spot data is always obtained from the cache, greatly improving the response speed and concurrency capability of the interface.
In an embodiment, as shown in fig. 1, a data request method is provided, and this embodiment is illustrated by applying the method to a terminal, and it is to be understood that the method may also be applied to a server, and may also be applied to a system including the terminal and the server, and is implemented by interaction between the terminal and the server. In this embodiment, the method includes the steps of:
step 101, obtaining a data identifier of data to be requested.
In the embodiment of the application, a user can input a data request instruction through an interactive interface, and a terminal obtains a data identifier of data to be requested by analyzing the data request instruction.
Optionally, the data identifier of the data to be requested is a unique identifier of the data to be requested, and may be, for example, a number, a code, or the like.
Step 102, searching for target data in a first-level cache database according to the data identifier; and if the target data does not exist in the first-level cache database, searching for the target data in a second-level cache database according to the data identifier.
In the embodiment of the application, sliced data are stored in both the first-level cache database and the second-level cache database, and when a data request is made, the sliced data are searched in the first-level cache database and the second-level cache database. Optionally, in this embodiment of the present application, the first-level cache database may run in a first server, and the second-level cache database may run in a plurality of distributed second servers, where the first-level cache database performs data maintenance by using a redis hot spot data cache mechanism. And the secondary cache database maintains data by adopting a fastdfs distributed cache technology.
The data stored in the first-level cache database is hot spot data, and the data stored in the second-level cache database is all sliced data. The data stored in the first-level cache database must exist in the second-level cache database, but the data stored in the second-level cache database is not necessarily in the first-level cache database.
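The containment relationship just described, namely that every entry in the first-level cache must also exist in the second level but not vice versa, can be stated as a one-line invariant. The function below is an illustrative check, not part of the claimed method:

```python
def cache_invariant_holds(l1_keys, l2_keys):
    # Hot data cached in the first level must be a subset of the
    # complete slice set held in the second level.
    return set(l1_keys) <= set(l2_keys)
```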
In the embodiment of the application, target data is searched in a first-level cache database, and the target data is data corresponding to a data identifier. And if the target data is found in the first-level cache database, sending the target data to the terminal so that the terminal can display the target data. And if the target data is not searched in the first-level cache database, searching in the second-level cache database.
And if the target data is found in the secondary cache database, sending the target data to the terminal so that the terminal can display the target data. And if the target data is not found in the secondary cache database, the target data corresponding to the data identifier is not sliced.
Step 103, if the target data does not exist in the second-level cache database, performing data slicing on the global data according to the data identifier to obtain the target data, and sending the target data to the terminal so that the terminal can display the target data.
In the embodiment of the application, when the target data corresponding to the data identifier has not yet been sliced, neither the first-level cache database nor the second-level cache database stores the target data, so the global data is sliced according to the data identifier to obtain the target data. In the embodiment of the present application, the global data may refer to data in a GIS (Geographic Information System), but it may also refer to any raw data. The method of slicing the global data may be, for example, a vector slicing method or a grid slicing method.
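As a concrete illustration of slicing by identifier, the sketch below assumes a hypothetical "z/x/y" grid identifier (quadtree-style zoom/column/row, as commonly used for map tiles) and computes the geographic bounding box of the requested tile from the extent of the global data. Both the identifier format and the world extent are assumptions for illustration, not taken from the application:

```python
def grid_slice_bounds(data_id, world=(-180.0, -90.0, 180.0, 90.0)):
    """Parse a hypothetical 'z/x/y' slice identifier and return the
    bounding box (minx, miny, maxx, maxy) of that grid tile."""
    z, x, y = (int(p) for p in data_id.split("/"))
    minx, miny, maxx, maxy = world
    n = 2 ** z                          # tiles per axis at zoom level z
    dx = (maxx - minx) / n
    dy = (maxy - miny) / n
    return (minx + x * dx, miny + y * dy,
            minx + (x + 1) * dx, miny + (y + 1) * dy)
```

A real implementation would then clip the global vector or raster data to this box to produce the slice.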
In the data request method described above, a data identifier of the data to be requested is acquired during a data request; target data is searched for in a first-level cache database according to the data identifier; if the target data does not exist in the first-level cache database, the target data is searched for in a second-level cache database according to the data identifier; and if the target data does not exist in the second-level cache database, data slicing is performed on the global data according to the data identifier to obtain the target data, which is then sent to the terminal so that the terminal can display it. The method eliminates the manual preprocessing required by manual slicing, greatly reducing operation and maintenance burden and storage requirements, and at the same time stores the generated slices so that hot spot data is always obtained from the cache, greatly improving the response speed and concurrency capability of the interface.
In an embodiment of the present application, in order to improve first-access efficiency, the global data or data of interest may be pre-sliced, and the pre-sliced results stored in the first-level cache database and the second-level cache database, so that the efficiency of the first request for the data is improved.
In an embodiment of the present application, if the target data is obtained by data slicing of the global data, the target data is stored in the first-level cache database and the second-level cache database.
If the target data is obtained by data slicing of the global data, it indicates that the target data does not exist in the first-level cache database and the second-level cache database at the current moment, and in this case, the target data may be stored in the first-level cache database and the second-level cache database.
If the target data is searched from the first-level cache database or the second-level cache database, the target data is indicated to be already present in the first-level cache database or the second-level cache database, and therefore the target data does not need to be stored again.
Through the two cache layers, the embodiment of the application keeps the cache service balanced between performance and memory consumption.
Optionally, storing the target data into the first-level cache database and the second-level cache database may proceed as follows: the target data is stored into the first-level cache database, and when the first-level cache database is full, the surplus data in the first-level cache database is moved into the second-level cache database in chronological order before the current target data is stored into the first-level cache database. Data slices removed from the first-level cache database are thus preserved in the second-level cache database, and the first-level cache database holds the most recently sliced data.
Optionally, in this embodiment of the application, storing the target data in the first-level cache database and the second-level cache database may refer to storing the target data in the first-level cache database and the second-level cache database at the same time without any sequence, that is, storing the target data in both the first-level cache database and the second-level cache database.
It should be noted that the first-level cache database has an upper capacity limit, whereas the storage space of the second-level cache database can in theory be made unlimited by adding hard disks or the like. The second-level cache database is stored on disk using fastdfs, which ensures the persistence of the cache, while storing the non-hotspot cache on lower-cost hard disk space reduces cost.
Optionally, in this embodiment of the application, the target data is stored in the second-level cache database, and then the target data is stored in the first-level cache database.
All read operations need to access the first-level cache database first and then the second-level cache database; all write operations require operating the secondary cache database first and then the primary cache database.
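The ordering rule above (read L1 before L2; write L2 before L1) can be sketched as follows, again with dictionaries standing in for redis and fastdfs. Writing the persistent second level first means a failure between the two writes can never leave an entry in L1 that is missing from L2:

```python
def store_slice(data_id, slice_data, l1_cache, l2_cache, trace):
    # Write order: second-level (persistent) cache first, then first-level.
    l2_cache[data_id] = slice_data
    trace.append("l2")
    l1_cache[data_id] = slice_data
    trace.append("l1")

def read_slice(data_id, l1_cache, l2_cache):
    # Read order: first-level cache first, then second-level.
    if data_id in l1_cache:
        return l1_cache[data_id]
    return l2_cache.get(data_id)
```

The `trace` list is only there to make the write order observable in a test; a real implementation would issue the redis and fastdfs writes in that sequence.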
Optionally, in an embodiment, as shown in fig. 2, storing the target data into the first-level cache database and the second-level cache database includes:
step 201, detecting whether the storage capacity of the primary cache database has reached the upper limit of the capacity.
In the embodiment of the application, before storing the target data in the first-level cache database, whether the storage capacity of the first-level cache database reaches the upper limit of the capacity may be detected, and if the storage capacity reaches the upper limit of the capacity, it indicates that the first-level cache database is full.
Step 202, screening out data from the primary cache database when its storage capacity has reached the upper limit, and storing the target data into the second-level cache database and into the first-level cache database after the screened-out data has been removed.
In the embodiment of the application, under the condition that the storage capacity of the first-level cache database reaches the upper limit of the capacity, data screening needs to be performed on the first-level cache database, and the screened data is removed from the first-level cache database, so that the first-level cache database has a new storage space for storing target data.
Optionally, in this embodiment of the present application, the process of screening data out of the first-level cache database may include the following steps: the earliest data may be deleted from the first-level cache database in chronological order. Alternatively, for the data in the first-level cache database, the data heat may be counted and the data sorted by heat, the data ranked at the tail deleted from the first-level cache database, and the target data then stored into the first-level cache database.
The method for counting the heat of the data may be, for example: and determining the data heat degree by counting the access times of the data within the preset time length.
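The heat-counting approach just described can be sketched as a sliding-window access counter. The window length and class layout are illustrative choices, not details from the application; in the eviction scheme above, identifiers with the lowest heat would be removed first:

```python
from collections import defaultdict, deque

class HeatCounter:
    """Count accesses per data identifier within a sliding time window."""

    def __init__(self, window_seconds=3600):
        self.window = window_seconds
        self.hits = defaultdict(deque)      # data_id -> access timestamps

    def record(self, data_id, now):
        # Register one access to data_id at time `now` (seconds).
        self.hits[data_id].append(now)

    def heat(self, data_id, now):
        # Heat = number of accesses within the last `window` seconds.
        q = self.hits[data_id]
        while q and q[0] <= now - self.window:   # drop expired accesses
            q.popleft()
        return len(q)
```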
Optionally, in this embodiment of the present application, the process of screening data out of the first-level cache database may also include the following step: when the storage capacity of the primary cache database reaches the upper limit, screening the primary cache database based on an LRU (Least Recently Used) algorithm.
The LRU elimination mechanism works as follows: when a piece of data is accessed, the linked list is traversed; when the node holding that data is found, it is removed from its current position and inserted at the head of the list. If the data has not been cached before, a new node is inserted directly at the head. If the cache is full, the node at the tail of the list is deleted before the new node is inserted.
In the embodiment of the application, target data are stored in the memory by utilizing redis, and the non-hot cache is eliminated by an LRU mechanism, so that the memory is fully utilized.
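The mechanism described above maps directly onto a small LRU cache. This sketch uses Python's OrderedDict in place of a hand-rolled linked list, keeping the head of the ordering as the most-recently-used position to match the "insert at the head, evict from the tail" description; it illustrates the eviction policy only and is not the redis-backed implementation:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: accesses move an entry to the head;
    when full, the tail (least recently used) entry is evicted."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()        # head = most recently used

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key, last=False)   # promote to head
        return self.entries[key]

    def put(self, key, value):
        if key not in self.entries and len(self.entries) >= self.capacity:
            self.entries.popitem(last=True)          # evict the tail
        self.entries[key] = value
        self.entries.move_to_end(key, last=False)    # new/updated -> head
```

Redis itself offers a comparable behavior through its `maxmemory` limit with an LRU eviction policy, which is one way the first-level cache could realize this mechanism in practice.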
On the basis of the foregoing embodiments, an embodiment of the present application further provides a data request method, as shown in fig. 3, the method includes the following steps:
step 301, acquiring an update message when the global data is updated.
In the embodiment of the application, the global data is often updated, and after the global data is updated, an update message can be generated according to the updated data, wherein the update message includes the data content and data identifier of the updated data.
Step 302, traversing the primary cache database and the secondary cache database according to the update message.
In the embodiment of the application, the update message includes the data content and the data identifier of the data that is updated, corresponding data can be found in the first-level cache database and the second-level cache database according to the data identifier, and then the data content before updating is replaced with the data content of the updated data.
In the embodiment of the application, when updating, the first-level cache database may be traversed first, and then the second-level cache database may be traversed.
In an optional implementation manner, when the global data is updated, it may first be checked whether the updated data exists in the second-level cache database; if not, the update ends, since data absent from the second-level cache database cannot exist in the first-level cache database. If it exists, the data in the second-level cache database is updated based on the update message, and it is then checked whether the updated data exists in the first-level cache database; if not, the update ends; if so, the data in the first-level cache database is updated based on the update message, and the update then ends.
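The update path described above amounts to a replace-if-present pass over both caches. In the sketch below, the message layout (a dict with "id" and "content" keys) is an assumed stand-in for the update message of the application:

```python
def propagate_update(update_msg, l1_cache, l2_cache):
    """Replace any cached copy of the updated data in both cache levels;
    identifiers absent from a cache are simply skipped."""
    data_id, content = update_msg["id"], update_msg["content"]
    if data_id in l2_cache:             # second-level (persistent) copy
        l2_cache[data_id] = content
    if data_id in l1_cache:             # first-level (hot) copy
        l1_cache[data_id] = content
```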
In the embodiment of the application, the slice data in the first-level cache database and the second-level cache database are updated, so that the slice data are always the latest data, and the inconsistency of the data is avoided.
The data request method provided by the embodiment of the application uses multiple technical schemes such as an LRU elimination mechanism, a redis primary cache and a fastdfs secondary cache, and controls pre-cutting and updating of data in a slicing process, so that response efficiency and request concurrency of slicing requests are greatly improved.
It should be understood that although the steps in the flowcharts of the above embodiments are shown in sequence as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in the flowcharts of the above embodiments may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be performed at different times; their execution order is likewise not necessarily sequential, and they may be performed in turn or alternately with other steps or with sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the present application further provides a data request apparatus for implementing the above-mentioned data request method. The implementation scheme for solving the problem provided by the device is similar to the implementation scheme described in the above method, so specific limitations in one or more embodiments of the data request device provided below can be referred to the limitations of the data request method in the foregoing, and details are not described here.
In one embodiment, as shown in fig. 4, there is provided a data request apparatus including: an obtaining module 401, a searching module 402 and a slicing module 403, wherein:
an obtaining module 401, configured to obtain a data identifier of data to be requested;
a searching module 402, configured to search for target data in the primary cache database according to the data identifier; and if the target data does not exist in the first-level cache database, search for the target data in the second-level cache database according to the data identifier;
and a slicing module 403, configured to, if the target data does not exist in the secondary cache database, perform data slicing on the global data according to the data identifier to obtain the target data, and send the target data to the terminal so that the terminal displays the target data.
In one embodiment, the slicing module 403 is specifically configured to store the target data into the first-level cache database and the second-level cache database if the target data is obtained by data slicing of the global data.
In one embodiment, the slicing module 403 is specifically configured to store the target data into the primary cache database after storing the target data into the secondary cache database.
In one embodiment, the slicing module 403 is specifically configured to detect whether the storage capacity of the primary cache database has reached an upper capacity limit;
under the condition that the storage capacity of the first-level cache database reaches the upper limit of the capacity, screening data of the first-level cache database;
and storing the target data into the second-level cache database and the first-level cache database after the data is cleared.
In one embodiment, the slicing module 403 is specifically configured to perform data screening on the primary cache database based on an LRU algorithm when the storage capacity of the primary cache database reaches the upper limit of the storage capacity.
In one embodiment, the obtaining module 401 is specifically configured to obtain an update message when the global data is updated, where the update message includes data content and a data identifier;
and to traverse the primary cache database and the secondary cache database according to the update message.
The respective modules in the data request apparatus described above may be implemented in whole or in part by software, hardware, or a combination thereof. Each module may be embedded in hardware form in, or independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 5. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a data request method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 5 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring a data identifier of data to be requested;
searching for target data in a first-level cache database according to the data identifier; if the target data does not exist in the first-level cache database, searching for the target data in a second-level cache database according to the data identifier;
and if the target data does not exist in the second-level cache database, performing data slicing on global data according to the data identifier to obtain the target data, and sending the target data to the terminal so that the terminal can display the target data.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
and if the target data is obtained by data slicing of the global data, storing the target data into the first-level cache database and the second-level cache database.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
and after storing the target data into the secondary cache database, storing the target data into the primary cache database.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
detecting whether the storage capacity of the first-level cache database reaches the upper limit of the capacity;
under the condition that the storage capacity of the first-level cache database reaches the upper limit of the capacity, screening data of the first-level cache database;
and storing the target data into the second-level cache database and the first-level cache database after the data is cleared.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
and under the condition that the storage capacity of the primary cache database reaches the upper limit of the capacity, screening the data of the primary cache database based on an LRU algorithm.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
under the condition that global data is updated, acquiring an update message, wherein the update message comprises data content and a data identifier;
and traversing the primary cache database and the secondary cache database according to the update message.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a data identifier of data to be requested;
searching for target data in a first-level cache database according to the data identifier; if the target data does not exist in the first-level cache database, searching for the target data in a second-level cache database according to the data identifier;
and if the target data does not exist in the second-level cache database, performing data slicing on global data according to the data identifier to obtain the target data, and sending the target data to the terminal so that the terminal can display the target data.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and if the target data is obtained by data slicing of the global data, storing the target data into the first-level cache database and the second-level cache database.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and after storing the target data into the secondary cache database, storing the target data into the primary cache database.
In one embodiment, the computer program when executed by the processor further performs the steps of:
detecting whether the storage capacity of the first-level cache database reaches the upper limit of the capacity;
under the condition that the storage capacity of the first-level cache database reaches the upper limit of the capacity, screening data of the first-level cache database;
and storing the target data into the second-level cache database and into the first-level cache database from which the data has been cleared.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and under the condition that the storage capacity of the primary cache database reaches the upper limit of the capacity, screening the data of the primary cache database based on an LRU algorithm.
In one embodiment, the computer program when executed by the processor further performs the steps of:
under the condition that global data is updated, acquiring an update message, wherein the update message comprises data content and a data identifier;
and traversing the primary cache database and the secondary cache database according to the update message.
In one embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, performs the steps of:
acquiring a data identifier of data to be requested;
searching target data from a first-level cache database according to the data identifier; if the target data does not exist in the first-level cache database, searching the target data from the second-level cache database according to the data identifier;
and if the target data does not exist in the secondary cache database, performing data slicing on the global data according to the data identifier to obtain the target data, and sending the target data to the terminal so that the terminal can display the target data.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and if the target data is obtained by data slicing of the global data, storing the target data into the first-level cache database and the second-level cache database.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and after storing the target data into the secondary cache database, storing the target data into the primary cache database.
In one embodiment, the computer program when executed by the processor further performs the steps of:
detecting whether the storage capacity of the first-level cache database reaches the upper limit of the capacity;
under the condition that the storage capacity of the first-level cache database reaches the upper limit of the capacity, screening data of the first-level cache database;
and storing the target data into the second-level cache database and into the first-level cache database from which the data has been cleared.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and under the condition that the storage capacity of the primary cache database reaches the upper limit of the capacity, screening the data of the primary cache database based on an LRU algorithm.
In one embodiment, the computer program when executed by the processor further performs the steps of:
under the condition that global data is updated, acquiring an update message, wherein the update message comprises data content and a data identifier;
and traversing the primary cache database and the secondary cache database according to the update message.
It should be noted that, the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetic random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory can include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases. The non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like.
The technical features of the above embodiments can be combined arbitrarily. For the sake of brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they are not to be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.