Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are set forth to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the invention.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. That is, these functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Fig. 1 schematically shows a flow chart of a user management method according to a first embodiment of the invention.
Referring to fig. 1, a user management method according to a first embodiment of the present invention includes the steps of:
Step S110, collecting a face image of a user.
Step S120, searching an image library for a first image similar to the face image.
Step S130, performing interpolation amplification processing on the face image according to the size of the first image to obtain a second image with the same size as the first image.
Step S140, generating a clear image according to the first image and the second image, so as to determine information of the user based on the clear image. The information of the user may be age bracket and/or gender information of the user.
In one embodiment of the present invention, the process of generating the clear image from the first image and the second image in step S140 includes: repairing the first image to obtain a third image; performing pixel filling processing on the second image based on a neural network to obtain a fourth image; and synthesizing the clear image from the third image and the fourth image.
In one embodiment of the present invention, determining the information of the user based on the clear image in step S140 includes: extracting face features from the clear image, and determining the age bracket and/or gender information of the user according to the face features.
The user management method shown in fig. 1 enables a repair process to be performed on a captured blurred image, and thus can improve the accuracy of identifying user information (such as age bracket and/or gender information). The execution subject of the user management method shown in fig. 1 may be a terminal or a server. When the execution subject is a terminal, the terminal may collect the face image of the user through a camera; when the execution subject is a server, the server may obtain the face image of the user through a user terminal, i.e., the user terminal sends the collected face image to the server.
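The flow of steps S110 to S140 can be sketched in Python as follows; the similarity score, the nearest-neighbour interpolation, and the fusion by simple averaging are illustrative assumptions of this sketch, standing in for the disclosed repair and pixel-filling processing:

```python
def interpolate_resize(img, out_h, out_w):
    """Nearest-neighbour interpolation amplification (a stand-in for the
    interpolation amplification processing of step S130)."""
    in_h, in_w = len(img), len(img[0])
    out = []
    for r in range(out_h):
        src_r = min(in_h - 1, round(r * in_h / out_h))
        out.append([img[src_r][min(in_w - 1, round(c * in_w / out_w))]
                    for c in range(out_w)])
    return out

def enhance_face_image(face_img, image_library):
    """Sketch of steps S110-S140: search a similar library image (S120),
    interpolation-amplify the captured face to its size (S130), and fuse
    the two into a clearer image (S140). Images are lists of pixel rows."""
    def score(lib_img):  # naive similarity: compare mean brightness
        lib_flat = [p for row in lib_img for p in row]
        face_flat = [p for row in face_img for p in row]
        return -abs(sum(lib_flat) / len(lib_flat)
                    - sum(face_flat) / len(face_flat))
    first = max(image_library, key=score)                           # S120
    second = interpolate_resize(face_img, len(first), len(first[0]))  # S130
    return [[(a + b) // 2 for a, b in zip(r1, r2)]                  # S140
            for r1, r2 in zip(first, second)]
```

On a real system the similarity search, repair, and pixel filling would each be learned models; this sketch only shows how the three steps compose.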
In an embodiment of the present invention, as shown in fig. 2, before step S120, the method further includes: step S111, judging whether the duration of continuously collecting the face image of the user reaches a predetermined duration; and executing step S120 in a case where it is determined that the duration of continuously collecting the face image does not reach the predetermined duration.
In this embodiment, when it is determined that the duration of continuously collecting the face images of the user has not reached the predetermined duration, the collected face images are processed to obtain the age bracket and/or gender information of the user, so that analysis of customer flow and instant customer groups can be realized, thereby achieving the aim of precision marketing.
In an embodiment of the present invention, as shown in fig. 2, the method further includes: step S112, in a case where it is determined that the duration of continuously collecting the face image reaches the predetermined duration, identifying whether the user is a registered user according to the face image.
In this embodiment, when the duration of continuously collecting the face images of the user reaches the predetermined duration, whether the user is a registered user is identified according to the collected face images. This makes it possible to choose when to perform registered-user identification, and avoids the impact on user experience and on the processing performance of the system that would result from performing registered-user identification on every collected face image.
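The duration gate of steps S111 and S112 can be sketched as a small state machine. The face identifier, the external clock, and the 2-second threshold are assumptions of this illustration:

```python
class FacePresenceGate:
    """Sketch of steps S111/S112: track how long the same face has been
    continuously captured, and choose the branch of fig. 2 accordingly."""

    def __init__(self, threshold_seconds=2.0):
        self.threshold = threshold_seconds
        self.current_id = None
        self.first_seen = None

    def observe(self, face_id, now_seconds):
        # A new (or different) face resets the continuous-capture clock.
        if face_id != self.current_id:
            self.current_id = face_id
            self.first_seen = now_seconds
        dwell = now_seconds - self.first_seen
        # Duration reached -> step S112 (identify registered user);
        # otherwise -> step S120 (analyse age bracket / gender).
        return ("identify_registered_user" if dwell >= self.threshold
                else "analyse_demographics")
```

On a real terminal, `face_id` would come from a face tracker and `now_seconds` from the system clock.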
In one embodiment of the present invention, if the user is identified as a registered user, information associated with the user is acquired. Optionally, the information associated with the user includes user portrait information determined from the user's behavioral data, such as historical shopping trails, consumption preferences, financial and merchant data, and the like.
In addition, in an embodiment of the present invention, the user management method further includes: and if the user is identified to be registered as a member, pushing promotion information and/or preferential information to the user according to the information of the user.
In an embodiment of the present invention, if the user is identified as an unregistered user, the user is prompted to register.
In the embodiment of the present invention, the step S112 of identifying whether the user is a registered user according to the face image may be achieved in the following two specific implementations:
Implementation one:
judging whether an image matching the face image can be found among the stored registered-user images; if a matching image is found, determining that the user is a registered user, otherwise determining that the user is an unregistered user.
It should be noted that, when the user registers, the face image of the user may be collected and stored, and further, the recognition may be performed based on the stored face image.
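Implementation one can be sketched as a direct comparison of the face image against the stored registered-user images; the list-of-lists image representation and the mean-difference threshold are assumptions of this illustration:

```python
def matches_registered_image(face_img, stored_imgs, max_mean_diff=10):
    """Declare a match when the mean per-pixel difference between the
    face image and a stored registered-user image of the same size falls
    below a threshold (the threshold value is an assumption)."""
    def mean_diff(a, b):
        total = sum(abs(x - y) for row_a, row_b in zip(a, b)
                    for x, y in zip(row_a, row_b))
        return total / (len(a) * len(a[0]))
    return any(mean_diff(face_img, s) <= max_mean_diff
               for s in stored_imgs if len(s) == len(face_img))
```

A production system would compare learned face embeddings rather than raw pixels, but the decision structure is the same.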
Implementation two:
sending the face image to a designated device so that the designated device judges, according to the face image, whether the user is a registered user; and receiving a determination result returned by the designated device, and identifying whether the user is a registered user according to the determination result.
It should be noted that the designated device may be a server. In implementation two, the collected face image may be sent to a server, and the server determines whether the user is registered.
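On the device side, implementation two reduces to packaging the face image for the designated device and interpreting the returned determination result. The JSON message shape and field names below are illustrative assumptions:

```python
import json

def build_identify_request(face_image_b64, session_id="s-001"):
    """Package the face image; the registered-user decision is deferred
    to the designated device (e.g. a server)."""
    return json.dumps({"type": "identify_user",
                       "session": session_id,
                       "image": face_image_b64})

def parse_identify_response(payload):
    """Interpret the determination result returned by the designated device."""
    return bool(json.loads(payload).get("registered", False))
```

The transport (HTTP, socket, etc.) is omitted; any channel that carries these two messages fits the scheme.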
Fig. 3 schematically shows a flow chart of a user management method according to a third embodiment of the invention.
Referring to fig. 3, a user management method according to a third embodiment of the present invention includes the steps of:
Step S30, acquiring a face image of the user.
In the embodiment of the invention, the face image of the user can be acquired through the camera.
Step S32, identifying whether the user is registered as a member according to the face image.
In the embodiment of the present invention, step S32 may be achieved in the following two specific implementations:
Implementation one:
In an exemplary embodiment of the present invention, step S32 includes: judging whether a member image matching the face image can be found among the stored member images; if a matching member image is found, determining that the user is registered as a member, otherwise determining that the user is not registered as a member.
It should be noted that, when a user registers as a member, a face image of the user may be collected and stored, and member identification may then be performed based on the stored face image.
Implementation two:
In an exemplary embodiment of the present invention, step S32 includes: sending the face image to a server so that the server judges, according to the face image, whether the user is registered as a member; and receiving a determination result returned by the server, and identifying whether the user is registered as a member according to the determination result.
In implementation two, the collected face image may be sent to a server, and the server determines whether the user is registered as a member.
Step S34, if it is identified that the user is registered as a member, acquiring and displaying information of the user.
In an exemplary embodiment of the present invention, the acquiring of the information of the user in step S34 includes: querying a database for information associated with the user; or receiving the information associated with the user sent by the server. That is, the information of the user may be obtained by directly querying the database, or may be received from the server.
It should be noted that the user information obtained and displayed may be user portrait information determined from user behavioral data, such as historical shopping trails, consumption preferences, financial data, and the like.
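The two acquisition paths of step S34 can be sketched as one function with two sources; the `user_profile` table and its columns are illustrative assumptions:

```python
import json
import sqlite3

def get_user_info(pin, db_conn=None, server_payload=None):
    """Step S34 sketch: the user's information comes either from a local
    database query or from a message received from the server."""
    if server_payload is not None:                 # received from the server
        return json.loads(server_payload)
    cur = db_conn.execute(                         # or queried locally
        "SELECT tags FROM user_profile WHERE pin = ?", (pin,))
    row = cur.fetchone()
    return json.loads(row[0]) if row else None
```

Whether the terminal queries locally or receives the data pushed by the server depends only on which deployment (terminal-side or server-side) executes the method.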
In addition, in an embodiment of the present invention, the user management method further includes: and if the user is identified to be registered as a member, pushing promotion information and/or preferential information to the user according to the information of the user.
Referring to fig. 4, the user management method according to the fourth embodiment of the present invention, in addition to including steps S30, S32, and S34 shown in fig. 3, further includes step S36 of prompting the user to perform member registration if it is recognized that the user is not registered as a member.
According to the embodiment of the present invention, on the basis of the user management methods shown in fig. 3 and fig. 4, the method may further include: judging whether the duration of continuously acquiring the face image of the user reaches a predetermined duration; and, when it is determined that the duration of continuously acquiring the face image of the user reaches the predetermined duration, performing step S32 shown in fig. 3 and fig. 4, that is, identifying whether the user is registered as a member according to the face image. The technical solution of this embodiment makes it possible to choose when to perform member identification, and avoids the impact on user experience and on the processing performance of the system that would result from performing member identification on every collected face image.
Optionally, in an embodiment of the present invention, when it is determined that the duration of continuously acquiring the face image does not reach the predetermined duration, the user management method further includes: identifying age bracket and/or gender information of the user according to the face image, and storing the identified information; or sending the face image to a server so that the server identifies the age bracket and/or gender information of the user. The technical solution of this embodiment enables analysis of customer flow and instant customer groups, thereby achieving the aim of precision marketing.
In an embodiment of the present invention, the identifying age and/or gender information of the user according to the face image in the above embodiment includes: processing the face image to obtain a clear image; extracting the human face features in the clear image; and identifying the age bracket and/or the gender information of the user according to the face features.
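The final identification step can be sketched as nearest-centroid classification over the extracted face features; the class labels and centroid values are assumptions, standing in for a trained age/gender classifier:

```python
def classify_demographics(feature_vec, class_centroids):
    """Assign the extracted face-feature vector to the nearest class
    centroid; centroids per (age bracket, gender) class would come from
    prior training, which is outside this sketch."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(class_centroids,
               key=lambda label: sq_dist(feature_vec, class_centroids[label]))
```

Any classifier mapping feature vectors to demographic labels can replace this lookup without changing the surrounding flow.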
Fig. 5 schematically shows a flowchart of processing a face image according to an embodiment of the present invention, which specifically includes:
Step S501, searching an image library for a first image similar to the face image, and performing repair processing on the first image to obtain a second image.
It should be noted that the image library may include a local image library and/or an image library on a network. Since the purpose of processing the image is to identify the age bracket and/or gender information of the user, a first image similar to the face image may be searched from the image library to assist in that identification.
Step S502, performing interpolation amplification processing on the face image according to the size of the first image to obtain a third image with the same size as the first image, and performing pixel filling processing on the third image based on a neural network to obtain a fourth image.
Step S503, generating the clear image based on the second image and the fourth image.
The technical solution of the embodiment shown in fig. 5 enables the collected blurred image to be repaired, thereby improving the accuracy of identifying the age bracket and/or gender information of the user.
It should be noted that the execution subject of the technical solutions of the embodiments shown in fig. 3 to fig. 5 may be an intelligent terminal.
Fig. 6 schematically shows a flow chart of a user management method according to a fifth embodiment of the invention.
Referring to fig. 6, a user management method according to a fifth embodiment of the present invention includes the steps of:
Step S601, receiving a face image sent by a designated device.
In the embodiment of the invention, the specified device can be an intelligent terminal for acquiring the face image.
Step S602, determining, according to the face image, whether the corresponding user is registered as a member, so as to obtain a determination result.
In an exemplary embodiment of the present invention, step S602 includes: judging whether a member image matching the face image can be found among the stored member images; if a matching member image is found, determining that the user is registered as a member, otherwise determining that the user is not registered as a member.
It should be noted that, when a user registers as a member, a face image of the user may be collected and stored, and member identification may then be performed based on the stored face image.
Step S603, feeding back the determination result to the designated device.
In the embodiment of the invention, the determination result is fed back to the designated device, so that the designated device can perform subsequent processing (for example, displaying member information or prompting registration) according to the determination result.
In some embodiments of the present invention, based on the foregoing scheme, the user management method further includes: judging whether the duration of continuously receiving face images of the same user sent by the designated device reaches a predetermined duration; and, when the duration of continuously receiving face images of the same user reaches the predetermined duration, judging whether the user is registered as a member according to the face image. The technical solution of this embodiment makes it possible to choose when to perform member identification, and avoids the impact on user experience and on the processing performance of the system that would result from performing member identification on every collected face image.
Optionally, in an embodiment of the present invention, when it is determined that the duration of continuously receiving face images of the same user does not reach the predetermined duration, the user management method further includes: identifying the age bracket and/or gender information of the user according to the face image, and storing the identified information. The technical solution of this embodiment enables analysis of customer flow and instant customer groups, thereby achieving the aim of precision marketing.
In some embodiments of the present invention, based on the foregoing scheme, the user management method further includes: acquiring information associated with the user when it is determined, according to the face image, that the user is registered as a member; and sending the information associated with the user to the designated device so that the designated device displays that information.
It should be noted that the information associated with the user may be user portrait information determined from the user's behavioral data, such as historical shopping trails, consumption preferences, financial data, and the like.
It should be noted that the execution subject of the technical solution of the embodiment shown in fig. 6 may be a server.
The foregoing embodiments respectively describe the user management method in the embodiments of the present invention from the perspective of the intelligent terminal and the server, and the following describes in detail the technical solution in the embodiments of the present invention from the perspective of the combination of the intelligent terminal and the server.
Referring to fig. 7, the intelligent terminal according to an embodiment of the present invention includes: a terminal main control module, a power supply module, a camera module, and a voice module.
The power supply module is used for supplying power to the intelligent terminal. The camera module is used for capturing a face image of the user in a suitable environment and transmitting it to the server for user analysis. The collected face image may be a base64-encoded image, and the capture frequency may be once every 0.1 seconds. The voice module is used for human-machine interaction with the user.
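The camera module's capture-and-forward behaviour can be sketched as follows; the injected `camera_read` and `send_to_server` callables are assumptions that keep the sketch hardware-independent:

```python
import base64

def capture_loop(camera_read, send_to_server, frames=3):
    """Terminal-side sketch: capture a frame (nominally once every
    0.1 seconds on a real device), base64-encode it, and forward it
    to the server."""
    sent = []
    for _ in range(frames):
        raw = camera_read()                              # raw image bytes
        encoded = base64.b64encode(raw).decode("ascii")  # base64 image
        send_to_server(encoded)
        sent.append(encoded)
        # time.sleep(0.1) would pace the loop on a real device
    return sent
```

On a real terminal, `camera_read` would wrap the camera driver and `send_to_server` the network client.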
Referring to fig. 8, a server according to an embodiment of the present invention includes: the system comprises a main control module, a face recognition module, a feature analysis module, a matching recognition module, a voice synthesis module, a transceiving module and a data storage module.
The face recognition module is used for receiving the face image transmitted from the intelligent terminal, recognizing it, and extracting features from it. Optionally, the face recognition module may perform face recognition based on the Fast R-CNN (Fast Region-based Convolutional Neural Network) algorithm.
This part may involve two aspects: 1. member login or member entry; 2. customer group analysis.
1. For member login or member entry, the acquired face image is usually a clear static face image, so the face can be classified directly by Fast R-CNN.
2. Customer group analysis.
Limited by customers walking quickly through the store and by the camera's own shooting capability, the captured face is, to a large extent, a blurred or overly sparse image. In this case, preprocessing is performed first to restore the image to a clearer face image, so that more accurate age and gender detection can be performed and the merchant can be assisted in customer group analysis. The process of processing an image to obtain a clearer face image is shown in fig. 9, and the main steps are as follows:
(1) Acquire the photo sent by the intelligent terminal, normalize it to 8 × 8 pixels, and record the normalized image as the original image.
(2) Use the adjusting network to match a high-pixel photo for the original image, obtaining the large image most similar to the original image. The matched large image is then processed, for example by passing it through a series of 18-30 low-resolution ResNet blocks and transposed convolutional layers to generate 256 pre-estimated channels for each pixel.
It should be noted that the adjusting network is a scoring-and-matching network: when the original image is input, the image in the database with the highest score is obtained, and this image is the matched large image most similar to the original image.
(3) Use a pixel neural network to add high-definition detail to the original image. Specifically, the pixel neural network has four filters whose types are obtained by prior training (different picture types have different modes, classified into four modes). During processing, interpolation amplification is first performed on the original image; the amplified image is then divided into a plurality of blocks; for each block, the probability that it belongs to each mode is calculated according to its pixel distribution; after the calculation, filtering conversion is performed with the best-matching filter, so as to fill in the amplified image.
(4) Finally, the images processed by the adjusting network and the pixel neural network are combined through a softmax operation to obtain a clearer picture.
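Step (4) can be sketched as a per-pixel softmax combination of the two networks' outputs; four candidate intensity values stand in for the 256 pre-estimated channels to keep the sketch small:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def combine_pixelwise(conditioning_logits, prior_logits):
    """For each pixel, add the adjusting-network ("conditioning") logits
    to the pixel-network ("prior") logits, softmax over the candidate
    intensity values, and emit the most probable value."""
    pixels = []
    for cond, prior in zip(conditioning_logits, prior_logits):
        probs = softmax([c + p for c, p in zip(cond, prior)])
        pixels.append(max(range(len(probs)), key=probs.__getitem__))
    return pixels
```

Sampling from `probs` instead of taking the argmax is another common choice in pixel-recursive models; argmax keeps the sketch deterministic.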
The effect graph obtained after the processing in the above process is shown in fig. 10, and it can be seen that the processing method in the embodiment of the present invention can restore the blurred or excessively sparse image to a clearer face image, so as to facilitate more accurate age and gender detection.
Continuing to refer to fig. 8, the feature analysis module in the server is configured to perform age and gender detection on the extracted features or the processed face image, and to store the detected results for customer group analysis.
The matching identification module is used for comparing the face image with information stored in the database to determine whether the user is registered as a member. If the comparison succeeds, a unique user PIN code is returned; the PIN code corresponds to the user's data tags and personal information. If no corresponding member information can be found in the database, new-user information is returned.
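The matching identification module's behaviour can be sketched as a nearest-match lookup that returns a PIN code or signals a new user; the feature-vector representation, distance metric, and threshold are assumptions of this sketch:

```python
def match_member(face_embedding, member_db, threshold=0.6):
    """Compare the face (as a feature vector) against stored member
    vectors; on success return the unique PIN code, otherwise return
    None to signal a new user."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    best_pin, best_dist = None, float("inf")
    for pin, emb in member_db.items():
        d = dist(face_embedding, emb)
        if d < best_dist:
            best_pin, best_dist = pin, d
    return best_pin if best_dist <= threshold else None
```

A `None` result corresponds to returning new-user information and would trigger the member entry process of step 2 below.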
The voice synthesis module and the transceiving module are used for pushing data so that member information can be displayed on the intelligent terminal, and for realizing interaction with the user.
In the embodiment of the invention, the scheme for realizing user management by the cooperation of the intelligent terminal and the server specifically comprises the following steps:
step 1: the intelligent terminal starts the camera module to continuously scan, intercepts and transmits the face information to the server when the face information is captured, and the server analyzes and processes the face image transmitted by the intelligent terminal, obtains age and gender information corresponding to the face, and transmits the information to the database for storage.
If the same face stays in front of the intelligent terminal for more than a certain duration (for example, 1.5 to 2.5 seconds), the member identification step is entered. If the user has not registered as a member, the user is judged to be a casual customer, and the user's face can then be collected for customer group analysis.
Step 2: when the member identification process is triggered, the server performs Fast R-CNN identification on the face, and if no corresponding face is matched in the database, the member entry process is started. The interface for the member entry process may be as shown in fig. 11.
Step 3: the intelligent terminal displays a two-dimensional code; when the user scans the two-dimensional code, the member binding page is opened, the specific interface of which is shown in fig. 12. When the member information is bound successfully, the server can identify the user's corresponding PIN code in the database.
Step 4: if the server matches a corresponding face in the database, the personal information of the user corresponding to the PIN code, including the user's (online and/or offline) historical shopping trails, consumption preferences, financial and merchant data, and the like, is retrieved from the database according to the user's PIN code. This information is classified into a plurality of data tags to obtain a user portrait, which is sent to the intelligent terminal for display. The display effect may be as shown in fig. 13.
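The tagging in step 4 can be sketched as a rule-based classification of behaviour records into portrait tags; the tag names and rules are illustrative assumptions, not a taxonomy from the disclosure:

```python
def build_user_portrait(behavior_records):
    """Classify a user's raw behaviour records into the data tags that
    form the user portrait."""
    tags = set()
    for rec in behavior_records:
        if rec.get("kind") == "purchase":
            tags.add("category:" + rec.get("category", "unknown"))
        if rec.get("channel") == "online":
            tags.add("shops-online")
        if rec.get("amount", 0) >= 500:
            tags.add("high-spend")
    return sorted(tags)
```

In practice the tags would come from a learned segmentation model rather than fixed rules, but the PIN-to-portrait flow is the same.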
Step 5: for users registered as members, the server retrieves promotion and preferential information corresponding to the user's personal data tags; this information is then displayed on the screen of the intelligent terminal, or converted into an audio file through sound-wave coding technology and pushed to a terminal (such as the intelligent terminal that collected the user's face image, or the user's mobile terminal).
In addition to the above steps, in the embodiment of the present invention, the database may be divided into a member database and a non-member database: the member database supports personalized, customized commodity recommendation and promotion benefits, while the non-member database is aggregated by gender and age, so that the merchant can perform more comprehensive customer analysis for the store.
Fig. 14 schematically shows a block diagram of a user management apparatus according to a first embodiment of the present invention.
Referring to fig. 14, a user management apparatus 1400 according to the first embodiment of the present invention includes: an acquisition unit 1402, a searching unit 1404, an enlarging unit 1406, and a processing unit 1408.
The acquisition unit 1402 is configured to acquire a face image of a user; the searching unit 1404 is configured to search an image library for a first image similar to the face image; the enlarging unit 1406 is configured to perform interpolation amplification processing on the face image according to the size of the first image to obtain a second image with the same size as the first image; and the processing unit 1408 is configured to generate a clear image from the first image and the second image, so as to determine information of the user based on the clear image.
In some embodiments of the present invention, based on the foregoing, the processing unit 1408 is configured to: repairing the first image to obtain a third image; performing pixel filling processing on the second image based on a neural network to obtain a fourth image; and synthesizing the clear image according to the third image and the fourth image.
In some embodiments of the present invention, based on the foregoing solution, the method further includes: a determining unit, configured to determine whether a duration of continuously acquiring the face image of the user reaches a predetermined duration before the searching unit 1404 searches for the first image similar to the face image from the image library;
the searching unit 1404 is configured to search a first image similar to the facial image from an image library when the determining unit determines that the duration of continuously acquiring the facial image does not reach the predetermined duration.
In some embodiments of the present invention, based on the foregoing solution, the method further includes: the identification unit is used for identifying whether the user is a registered user according to the face image under the condition that the judgment unit judges that the duration of continuously collecting the face image reaches the preset duration; an obtaining unit, configured to obtain information associated with the user when the identifying unit identifies that the user is a registered user.
In some embodiments of the present invention, based on the foregoing scheme, the information associated with the user includes: user portrait information determined from the user's behavioral data.
In some embodiments of the present invention, based on the foregoing solution, the method further includes: and the pushing unit is used for pushing promotion information and/or preferential information to the user according to the information associated with the user when the identification unit identifies the user as a registered user.
In some embodiments of the present invention, based on the foregoing solution, the method further includes: and the prompting unit is used for prompting the user to register when the identification unit identifies that the user is an unregistered user.
In some embodiments of the present invention, based on the foregoing solution, the identification unit is configured to: judge whether an image matching the face image can be found among the stored registered-user images; and, if a matching image is found, determine that the user is a registered user, otherwise determine that the user is an unregistered user.
In some embodiments of the present invention, based on the foregoing solution, the identification unit is configured to: send the face image to a designated device so that the designated device judges, according to the face image, whether the user is a registered user; and receive a determination result returned by the designated device, and identify whether the user is a registered user according to the determination result.
In some embodiments of the present invention, based on the foregoing solution, the processing unit is configured to: and extracting the face features in the clear image, and determining the age group and/or gender information of the user according to the face features.
Referring now to FIG. 15, shown is a block diagram of a computer system 1500 suitable for use in implementing an electronic device of an embodiment of the present invention. The computer system 1500 of the electronic device shown in fig. 15 is only an example, and should not bring any limitations to the function and scope of use of the embodiments of the present invention.
As shown in fig. 15, the computer system 1500 includes a Central Processing Unit (CPU) 1501, which can perform various appropriate actions and processes in accordance with a program stored in a Read-Only Memory (ROM) 1502 or a program loaded from a storage section 1508 into a Random Access Memory (RAM) 1503. In the RAM 1503, various programs and data necessary for system operation are also stored. The CPU 1501, the ROM 1502, and the RAM 1503 are connected to each other by a bus 1504. An input/output (I/O) interface 1505 is also connected to the bus 1504.
The following components are connected to the I/O interface 1505: an input portion 1506 including a keyboard, a mouse, and the like; an output portion 1507 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage portion 1508 including a hard disk and the like; and a communication section 1509 including a network interface card such as a LAN card, a modem, or the like. The communication section 1509 performs communication processing via a network such as the internet. A drive 1510 is also connected to the I/O interface 1505 as needed. A removable medium 1511 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 1510 as necessary, so that a computer program read out therefrom is mounted into the storage section 1508 as necessary.
In particular, according to an embodiment of the present invention, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the invention include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 1509, and/or installed from the removable medium 1511. The above-described functions defined in the system of the present application are executed when the computer program is executed by the Central Processing Unit (CPU) 1501.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium, a computer readable storage medium, or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, and the like, or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present invention may be implemented by software or by hardware, and the described units may also be disposed in a processor. The names of these units do not, in any case, constitute a limitation on the units themselves.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the user management method as described in the above embodiments.
For example, the electronic device may implement the following as shown in fig. 1: step S110, collecting a face image of a user; step S120, searching a first image similar to the face image from an image library; step S130, carrying out interpolation amplification processing on the face image according to the size of the first image to obtain a second image with the same size as the first image; step S140, generating a clear image according to the first image and the second image, so as to determine the information of the user based on the clear image.
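The pipeline of steps S110 through S140 can be sketched in code. The following is a minimal, illustrative sketch only: the patent does not specify the similarity measure, the interpolation method, or how the clear image is generated from the first and second images, so this sketch assumes a mean-squared-error thumbnail comparison for the search (S120), bilinear interpolation for the amplification (S130), and a simple weighted blend as a placeholder for the clear-image generation (S140). All function names are hypothetical.

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Resize a 2-D grayscale image with bilinear interpolation (used in S120/S130)."""
    in_h, in_w = img.shape
    ys = np.linspace(0, in_h - 1, out_h)          # sample positions along rows
    xs = np.linspace(0, in_w - 1, out_w)          # sample positions along columns
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def find_similar(face, library):
    """Step S120: return the library image most similar to the face image.
    Assumed similarity measure: MSE between 8x8 thumbnails."""
    def mse(a, b):
        return np.mean((bilinear_resize(a, 8, 8) - bilinear_resize(b, 8, 8)) ** 2)
    return min(library, key=lambda ref: mse(face, ref))

def enhance(face, library, alpha=0.5):
    """Steps S110-S140 as one pipeline (fusion step is a placeholder blend)."""
    first = find_similar(face, library)            # S120: search the image library
    second = bilinear_resize(face, *first.shape)   # S130: upscale face to first's size
    return alpha * first + (1 - alpha) * second    # S140: generate the "clear" image
```

In practice the fusion in S140 would use a learned or reference-guided super-resolution model rather than a fixed blend; the blend here only demonstrates that the two inputs to S140 have matching dimensions after S130.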
As another example, the electronic device may implement the various steps shown in fig. 2-6.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the invention, the features and functions of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided so as to be embodied by a plurality of modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with the necessary hardware. Therefore, the technical solution according to the embodiments of the present invention can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a USB disk, a removable hard disk, etc.) or on a network, and which includes several instructions to enable a computing device (which can be a personal computer, a server, a touch terminal, a network device, etc.) to execute the method according to the embodiments of the present invention.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.