Disclosure of Invention
In view of this, the present disclosure provides an interface display method and apparatus to solve the problem that, while an interface is moving, a user cannot view the content displayed on the interface clearly and in time, and may even miss content of interest.
According to a first aspect of the present disclosure, there is provided an interface display method, where the method is used for a terminal device, and the method includes:
when interface movement is detected, determining whether the movement satisfies a trigger condition for highlighting an avatar;
and when the trigger condition is satisfied, highlighting, in a picture contained in the interface, an avatar related to the picture.
For the above method, in a possible implementation manner, the determining whether the movement satisfies a trigger condition for highlighting the avatar includes:
determining whether the interface movement speed exceeds a speed threshold;
and determining that the movement satisfies the trigger condition when the interface movement speed exceeds the speed threshold.
For the above method, in one possible implementation, highlighting the avatar related to the picture includes:
covering the picture with a floating layer, where the region of the floating layer corresponding to the avatar is transparent and the other regions are opaque or semi-transparent.
For the above method, in one possible implementation, highlighting the avatar related to the picture includes:
performing at least one of the following on the region of the picture other than the region where the avatar is located: adding a mosaic, drawing a frame, cropping, deleting, blurring, graying, darkening, and blacking out.
For the above method, in one possible implementation, highlighting the avatar related to the picture includes:
performing at least one of the following on the region of the picture where the avatar is located: increasing the brightness of the region, increasing the chroma of the region, displaying the region in the middle of the picture, displaying the region dynamically, and displaying the region in the picture in an enlarged manner.
For the above method, in one possible implementation, highlighting the avatar related to the picture includes:
determining related information corresponding to the avatar related to the picture contained in the interface;
and when the related information matches the user's historical behavior, highlighting the avatar corresponding to the related information.
For the above method, in one possible implementation manner, the avatar related to the picture includes:
an avatar derived from an analysis of the user's historical behavior.
According to a second aspect of the present disclosure, there is provided an interface display apparatus, comprising:
an interface movement determining module, configured to determine, when interface movement is detected, whether the movement satisfies a trigger condition for highlighting an avatar;
and an avatar highlighting module, configured to highlight, in a picture contained in the interface, an avatar related to the picture when the trigger condition is satisfied.
For the above apparatus, in a possible implementation manner, the interface movement determining module includes:
a judging submodule, configured to determine whether the interface movement speed exceeds a speed threshold;
and a determining submodule, configured to determine that the movement satisfies the trigger condition when the interface movement speed exceeds the speed threshold.
For the above apparatus, in one possible implementation, the avatar highlighting module includes:
a first display submodule, configured to cover the picture with a floating layer, where the region of the floating layer corresponding to the avatar is transparent and the other regions are opaque or semi-transparent.
For the above apparatus, in one possible implementation, the avatar highlighting module includes:
a second display submodule, configured to perform at least one of the following on the region of the picture other than the region where the avatar is located: adding a mosaic, drawing a frame, cropping, deleting, blurring, graying, darkening, and blacking out.
For the above apparatus, in one possible implementation, the avatar highlighting module includes:
a third display submodule, configured to perform at least one of the following on the region of the picture where the avatar is located: increasing the brightness of the region, increasing the chroma of the region, displaying the region in the middle of the picture, displaying the region dynamically, and displaying the region in the picture in an enlarged manner.
For the above apparatus, in one possible implementation manner, the avatar highlighting module further includes:
a related information determining submodule, configured to determine related information corresponding to the avatar related to the picture contained in the interface;
and an avatar determining submodule, configured to highlight the avatar corresponding to the related information when the related information matches the user's historical behavior.
For the above apparatus, in one possible implementation manner, the avatar related to the picture includes:
an avatar derived from an analysis of the user's historical behavior.
According to a third aspect of the present disclosure, there is provided an interface display apparatus, comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the interface display method described above.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the interface display method described above.
According to the interface display method and apparatus provided by the embodiments of the present disclosure, when the detected movement of the interface satisfies the trigger condition for highlighting an avatar, the avatar related to the picture is highlighted in the picture contained in the interface. In this way, while viewing the pictures displayed on the interface, the user can determine the content a picture shows by looking at the highlighted avatar, avoiding missing content of interest and saving viewing time.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of an interface display method according to an embodiment of the present disclosure. As shown in fig. 1, the method may be applied to a terminal device, and the method may include steps S11 and S12.
In step S11, when interface movement is detected, it is determined whether the movement satisfies a trigger condition for highlighting an avatar.
In this embodiment, the speed of the interface movement may be obtained, and whether the movement satisfies the trigger condition for highlighting the avatar may then be determined according to that speed.
In this embodiment, the terminal device may be any device having the function of presenting video, audio, or other content related to human vision, hearing, smell, touch, or taste, such as a mobile phone, a tablet computer, a smart watch, a vehicle-mounted terminal, an MP3 player, a VR (Virtual Reality) head-mounted display, VR glasses, an AR (Augmented Reality) head-mounted display, AR glasses, an MR (Mixed Reality) head-mounted display, MR glasses, a HUD (Head-Up Display), or a smart television; the disclosure is not limited in this respect. The interface may be an interface associated with multimedia resources such as videos, audio, or pictures. The user may move the interface directly by sliding a finger, may move it through an auxiliary control device such as a handle or a mouse, or, with the assistance of a related device, may move it through gaze, thought (e.g., brain waves), gestures, and the like, which is not limited by this disclosure.
In step S12, when the trigger condition is satisfied, the avatar related to the picture is highlighted in the picture contained in the interface.
In this embodiment, the trigger condition may be set according to the content of the pictures displayed on the interface, the viewing speed at which the user can still clearly see that content, and the like. For example, when the speed of the interface movement exceeds the user's viewing speed, it may be determined that the movement satisfies the trigger condition for highlighting the avatar. The present disclosure is not limited in this respect.
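By way of illustration only (this sketch is not part of the claimed embodiments, and all function and parameter names are hypothetical), the trigger check of steps S11 and S12 could be expressed as a simple comparison between the measured interface movement speed and the user's viewing speed:

```python
def scroll_speed(y0_px: float, t0_s: float, y1_px: float, t1_s: float) -> float:
    """Estimate interface movement speed (px/s) from two scroll samples."""
    return abs(y1_px - y0_px) / (t1_s - t0_s)

def should_highlight(speed_px_s: float, viewing_speed_px_s: float) -> bool:
    """Trigger avatar highlighting when the interface moves faster than the
    speed at which the user can still read the displayed content."""
    return speed_px_s > viewing_speed_px_s
```

For example, a 600 px scroll in half a second gives 1200 px/s, which would trigger highlighting against a viewing speed of 800 px/s.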
In this embodiment, the avatar related to the picture may be the avatar of a person, an animal, or a cartoon character contained in the picture, for example: the avatar of a contact in an instant messaging application shown in the picture; the avatar of an account holder on a social networking site shown in the picture; the faces of characters in thumbnails, posters, or synopses of video resources such as movies and TV shows displayed in the picture; or the faces of people appearing in a variety show. It may also be an avatar that is associated with the content of the corresponding area of the picture but is not itself shown in the picture. For example, if the content of the corresponding area is a video resource such as a movie or TV show, the avatar may be that of an actor who plays a role shown in the thumbnail, poster, or synopsis.
In one possible implementation, the avatar related to the picture may include: an avatar derived from an analysis of the user's historical behavior.
In this implementation, the user's historical behavior may be analyzed to determine the characteristics of avatars the user is likely to be interested in, and an avatar related to the picture and suitable for highlighting may be generated by specific processing according to the determined characteristics and the content of the picture. The specific processing may be, for example, adding text, graphics, or other content related to the content of the picture (or related to the characteristics of the avatars the user is interested in) to an avatar of interest to the user, which is not limited by the present disclosure. For example, if it is determined from the historical behavior of user F that user F likes actor K, then when the picture content is determined to be a TV series starring actor K, text based on the theme of the series or the user's preferences may be added to the avatar of the protagonist played by actor K, and the processed avatar may be used as the avatar related to the picture. Alternatively, an avatar K' of actor K may be obtained based on the user's preferences, text related to the TV series may be added to K' according to the content of the series, and the resulting avatar may be used as the avatar related to the picture.
In this embodiment, the manner of highlighting the avatar in the picture may include: enhancing the display effect of the region of the picture corresponding to the avatar while leaving the rest of the picture unchanged; reducing the display effect of the regions of the picture other than the avatar while leaving the region corresponding to the avatar unchanged; or enhancing the display effect of the region corresponding to the avatar while reducing the display effect of the other regions, so as to make the avatar stand out. The manner of highlighting may be set by those skilled in the art according to actual needs, and the present disclosure is not limited in this respect.
In one possible implementation, when an avatar is already highlighted in a picture contained in the interface and the movement of the interface changes from satisfying the trigger condition for highlighting the avatar to no longer satisfying it, the highlighted avatar may be kept on display for a holding time, or the highlighting may be stopped immediately. The holding time may be set according to the complexity of the content corresponding to the avatar: the more complex the content, the longer the holding time. The present disclosure is not limited in this respect.
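As one hypothetical sketch of the holding-time rule above (the linear formula and the parameter defaults are assumptions, not taken from the disclosure), the holding time could simply grow with a complexity score:

```python
def hold_duration_s(content_complexity: float,
                    base_s: float = 0.8,
                    per_unit_s: float = 0.4) -> float:
    """Keep the highlight visible longer for more complex content:
    a base holding time plus a linear term in the complexity score."""
    return base_s + per_unit_s * max(content_complexity, 0.0)
```

Any monotonically increasing mapping from complexity to holding time would serve the same purpose.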
Fig. 2 shows a flowchart of step S12 in the interface display method according to an embodiment of the present disclosure.
In one possible implementation, as shown in fig. 2, step S12 may include steps S01 and S02.
In step S01, the related information corresponding to the avatar related to the picture contained in the interface is determined.
In this implementation, the related information corresponding to the avatar describes the content the avatar represents. For example, if the content corresponding to the avatar is lead actor S of a certain movie, the related information of the avatar may include the genre, release date, and rating of the movie, the name of the role played by S, and the name of the actor playing that role. Those skilled in the art can set the content included in the related information of the avatar according to actual needs, and the present disclosure does not limit this.
In step S02, when the related information matches the user's historical behavior, the avatar corresponding to the related information is highlighted.
In this implementation, the user's historical behavior may be obtained from the user's browsing records, search records, and the like. The content the user is likely to be interested in can be determined from this historical behavior, and an avatar whose related information matches that content is chosen for highlighting. For example, if it is determined from the historical behavior that user A likes watching comedy movies, then when highlighting avatars for user A, avatars whose related information includes comedy movies are highlighted. If it is determined from the historical behavior that user B likes actor M, then when highlighting avatars for user B, actor M is highlighted in the picture. The method of obtaining the user's historical behavior can be set by those skilled in the art according to actual needs, and the disclosure does not limit this. In this way, avatars that are related to the picture and likely to interest the user can be highlighted according to the user's historical behavior, preventing the user from missing content of interest and saving selection time.
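As an illustrative sketch of the matching step (not part of the claimed embodiments; the field names and keyword-overlap heuristic are assumptions), the related information could be compared against keywords extracted from the user's browsing and search history:

```python
def matches_history(related_info: dict, interest_keywords: set) -> bool:
    """Return True when any term in the avatar's related information
    (genre, actor name, free-form tags) appears among the keywords
    derived from the user's historical behavior."""
    terms = {related_info.get("genre"), related_info.get("actor")}
    terms.update(related_info.get("tags", []))
    terms.discard(None)  # ignore missing fields
    return bool(terms & interest_keywords)
```

A production system would more likely use learned embeddings or collaborative filtering, but the trigger logic is the same: highlight the avatar only when its related information matches the user's interests.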
Fig. 3 shows a flowchart of step S11 in the interface display method according to an embodiment of the present disclosure.
In one possible implementation, as shown in fig. 3, step S11 may include step S111 and step S112.
In step S111, it is determined whether the interface movement speed exceeds a speed threshold.
In this implementation, the speed threshold may be determined according to the complexity of the content displayed in the pictures contained in the interface and the user's reading speed for content of different complexities. The more abundant and complex the displayed content, the slower the user reads and understands it, and the smaller the speed threshold should be.
In step S112, when the interface movement speed exceeds the speed threshold, it is determined that the movement satisfies the trigger condition.
In this way, when the interface movement speed exceeds the speed threshold, the avatar related to the picture contained in the interface can be highlighted for the user, so that the user can grasp the specific content of the picture even when the interface is moving quickly.
Fig. 4 is a schematic diagram illustrating a floating layer in an interface display method according to an embodiment of the disclosure.
In one possible implementation, as shown in fig. 4, highlighting the avatar related to the picture in step S12 may include: covering the picture with a floating layer, where region 1 of the floating layer, corresponding to the avatar, is transparent and the other regions 2 are opaque or semi-transparent.
In this implementation, the other regions 2 of the floating layer may also be filled with colors, patterns, and the like to make the avatar more prominent and remind the user of the content of the picture corresponding to the avatar. The transparency and fill of the other regions 2 can be set by those skilled in the art according to actual needs, and the present disclosure does not limit this.
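Purely as an illustration of the floating-layer idea (the function and its parameters are hypothetical, not part of the claims), the layer can be described as a per-pixel alpha mask that is fully transparent over the avatar region and semi-transparent everywhere else:

```python
def floating_layer_alpha(avatar_box, width, height, layer_alpha=0.6):
    """Build a per-pixel alpha mask for the floating layer: 0.0 (transparent)
    inside avatar_box = (x0, y0, x1, y1) (upper bounds exclusive),
    layer_alpha (semi-transparent) elsewhere."""
    x0, y0, x1, y1 = avatar_box
    return [[0.0 if (x0 <= x < x1 and y0 <= y < y1) else layer_alpha
             for x in range(width)]
            for y in range(height)]
```

Compositing this mask over the picture leaves the avatar untouched while veiling the rest, which is exactly the effect of region 1 versus regions 2 in fig. 4.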
Fig. 5 shows a schematic diagram of an interface display method according to an embodiment of the present disclosure. In one possible implementation, as shown in fig. 5, highlighting the avatar related to the picture in step S12 may further include performing at least one of the following on region 3 of the picture, i.e., the region other than the one where the avatar is located: adding a mosaic, drawing a frame, cropping, deleting, blurring, graying, darkening, and blacking out. In fig. 5, the brightness of region 3 is darkened. This reduces the user's attention to region 3 and in turn draws more attention to the avatar, so that the user can judge, based on the avatar, whether the corresponding multimedia resource (a movie, TV show, photo album, audio, or the like) is content of interest.
It should be understood that other processing can be applied to region 3 according to actual needs, and the present disclosure does not limit this.
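As a minimal sketch of the darkening variant above (pixels are modeled as a nested list of RGB tuples; the function name and scaling factor are assumptions for illustration only):

```python
def dim_outside(pixels, avatar_box, factor=0.4):
    """Darken every RGB pixel outside avatar_box = (x0, y0, x1, y1)
    (upper bounds exclusive); pixels inside the box stay unchanged."""
    x0, y0, x1, y1 = avatar_box
    return [[(r, g, b) if (x0 <= x < x1 and y0 <= y < y1)
             else (int(r * factor), int(g * factor), int(b * factor))
             for x, (r, g, b) in enumerate(row)]
            for y, row in enumerate(pixels)]
```

The other listed treatments (mosaic, blur, grayscale, and so on) would replace the scaling expression with the corresponding per-pixel or per-block operation.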
Fig. 6 shows a schematic diagram of an interface presentation method according to an embodiment of the present disclosure.
In one possible implementation, as shown in fig. 6, highlighting the avatar related to the picture in step S12 may further include performing at least one of the following on the region of the picture where the avatar is located: increasing the brightness of region 1 where the avatar is located, increasing its chroma, displaying it in the middle of the picture, displaying it dynamically, and displaying it in the picture in an enlarged manner.
In this embodiment, when the picture contains only one avatar, the region where the avatar is located may be displayed directly in the middle of the whole picture. When the picture contains multiple avatars, the region where each avatar is located may be centered and/or enlarged within the area of the picture occupied by the corresponding multimedia resource, such as a movie or TV show. It should be understood that the display mode of the region where the avatar is located can be set by those skilled in the art according to actual needs, and the present disclosure does not limit this.
The region where the avatar is located may be a region of any shape containing the avatar, such as a circular or polygonal region, and the present disclosure does not limit this.
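The centering-and-enlarging variant can be sketched as a small geometry helper (illustrative only; the function name, coordinate convention, and default scale are assumptions):

```python
def centered_enlarged_box(area_w, area_h, avatar_box, scale=1.5):
    """Return a new bounding box for the avatar region, scaled by `scale`
    and centered within the (area_w x area_h) area occupied by the
    corresponding multimedia resource. Boxes are (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = avatar_box
    w, h = (x1 - x0) * scale, (y1 - y0) * scale
    cx, cy = area_w / 2, area_h / 2
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
```

For a single avatar, `area_w` and `area_h` would be the dimensions of the whole picture; for multiple avatars, they would be the dimensions of each resource's own area, matching the two cases described above.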
Application example
An application example according to an embodiment of the present disclosure is given below, using "screening movies" as an exemplary application scenario, to facilitate understanding of the flow of the interface display method. Those skilled in the art will understand that the following application example is provided only to facilitate understanding of the embodiments of the present disclosure and should not be construed as limiting them.
Fig. 7a and 7b are schematic diagrams illustrating an application scenario of an interface display method according to an embodiment of the present disclosure. As shown in fig. 7a, without the interface display method provided by the present disclosure, when a user screens movies in a video client application, the interface merely displays the candidate movies without highlighting any of them. In the prior art, while the user rapidly moves the interface through an operation control, the elements in the interface do not change; they simply move quickly in the direction indicated by the user. Because of the fast movement, the user is likely to miss a movie of interest.
As shown in fig. 7b, with the interface display method provided by the present disclosure, when a user screens movies in a video client application, the user controls the movement of the interface by sliding it with a finger. When the interface movement speed exceeds the speed threshold, the avatars related to the picture contained in the current interface are obtained and highlighted, and the regions other than the avatars are dimmed. Even when the interface is moving quickly, the user can learn the specific content of each movie from the avatar displayed in its corresponding area, will not miss content of interest, and has their viewing needs met.
It should be noted that, although the interface display method has been described using the above embodiments as examples, those skilled in the art will understand that the disclosure is not limited thereto. In practice, each step can be set flexibly according to personal preference and/or the actual application scenario, as long as it conforms to the technical solution of the present disclosure.
According to the interface display method provided by the embodiments of the present disclosure, when the detected movement of the interface satisfies the trigger condition for highlighting an avatar, the avatar related to the picture is highlighted in the picture contained in the interface. In this way, while viewing the pictures displayed on the interface, the user can determine the content a picture shows by looking at the highlighted avatar, avoiding missing content of interest and saving viewing time.
Fig. 8 shows a block diagram of an interface display apparatus according to an embodiment of the present disclosure. As shown in fig. 8, the apparatus may include an interface movement determining module 401 and an avatar highlighting module 402. The interface movement determining module 401 is configured to determine, when interface movement is detected, whether the movement satisfies a trigger condition for highlighting an avatar; the avatar highlighting module 402 is configured to highlight, in a picture contained in the interface, an avatar related to the picture when the trigger condition is satisfied.
Fig. 9 shows a block diagram of an interface display apparatus according to an embodiment of the present disclosure.
In one possible implementation, as shown in fig. 9, the interface movement determining module 401 may include a judging submodule 4011 and a determining submodule 4012. The judging submodule 4011 is configured to determine whether the interface movement speed exceeds a speed threshold. The determining submodule 4012 is configured to determine that the movement satisfies the trigger condition when the interface movement speed exceeds the speed threshold.
In one possible implementation, as shown in fig. 9, the avatar highlighting module 402 may include a first display submodule 4021. The first display submodule 4021 is configured to cover the picture with a floating layer, where the region of the floating layer corresponding to the avatar is transparent and the other regions are opaque or semi-transparent.
In one possible implementation, as shown in fig. 9, the avatar highlighting module 402 may include a second display submodule 4022. The second display submodule 4022 is configured to perform at least one of the following on the region of the picture other than the region where the avatar is located: adding a mosaic, drawing a frame, cropping, deleting, blurring, graying, darkening, and blacking out.
In one possible implementation, as shown in fig. 9, the avatar highlighting module 402 may include a third display submodule 4023. The third display submodule 4023 is configured to perform at least one of the following on the region of the picture where the avatar is located: increasing the brightness of the region, increasing its chroma, displaying it in the middle of the picture, displaying it dynamically, and displaying it in the picture in an enlarged manner.
In one possible implementation, as shown in fig. 9, the avatar highlighting module 402 may include a related information determining submodule 4024 and an avatar determining submodule 4025. The related information determining submodule 4024 is configured to determine related information corresponding to the avatar related to the picture contained in the interface. The avatar determining submodule 4025 is configured to highlight the avatar corresponding to the related information when the related information matches the user's historical behavior.
In one possible implementation, the avatar related to the picture may include: an avatar derived from an analysis of the user's historical behavior.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
It should be noted that, although the interface display apparatus has been described using the above embodiments as examples, those skilled in the art will understand that the disclosure is not limited thereto. In practice, each part can be set flexibly according to personal preference and/or the actual application scenario, as long as it conforms to the technical solution of the present disclosure.
According to the interface display apparatus provided by the embodiments of the present disclosure, when the detected movement of the interface satisfies the trigger condition for highlighting an avatar, the avatar related to the picture is highlighted in the picture contained in the interface. In this way, while viewing the pictures displayed on the interface, the user can determine the content a picture shows by looking at the highlighted avatar, avoiding missing content of interest and saving viewing time.
Fig. 10 shows a block diagram of an interface display apparatus 800 according to an embodiment of the present disclosure. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, or the like.
Referring to fig. 10, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power component 806 provides power to the various components of the device 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the device 800. For example, the sensor assembly 814 may detect the open/closed status of the device 800 and the relative positioning of components, such as the display and keypad of the device 800. The sensor assembly 814 may also detect a change in the position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in the temperature of the device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the methods described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the device 800 to perform the above-described methods.
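As one non-limiting illustration (not part of the disclosed embodiments) of the kind of instructions such a storage medium might hold, the darkening option for highlighting the avatar may be sketched as follows; the grayscale-array picture model, class name, and method signature are assumptions introduced for clarity:

```java
// Illustrative sketch of one highlighting option named in the disclosure:
// darkening the area of the picture other than the area where the avatar
// is located. The 8-bit grayscale array model is an assumption.
class AvatarHighlighter {
    /**
     * Returns a copy of the picture in which every pixel outside the avatar's
     * bounding box [left, right) x [top, bottom) is darkened by the given
     * factor (0.0 to 1.0); pixels inside the box are left unchanged.
     * pixels[y][x] holds an 8-bit grayscale value (0-255).
     */
    static int[][] darkenOutsideAvatar(int[][] pixels,
                                       int left, int top,
                                       int right, int bottom,
                                       double factor) {
        int[][] out = new int[pixels.length][];
        for (int y = 0; y < pixels.length; y++) {
            out[y] = new int[pixels[y].length];
            for (int x = 0; x < pixels[y].length; x++) {
                boolean insideAvatar =
                        x >= left && x < right && y >= top && y < bottom;
                out[y][x] = insideAvatar
                        ? pixels[y][x]                  // avatar region kept as-is
                        : (int) (pixels[y][x] * factor); // surroundings darkened
            }
        }
        return out;
    }
}
```

The other listed options (floating layer, mosaic, blurring, graying, enlarging, and so on) would follow the same pattern of applying distinct processing inside and outside the avatar region.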
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or technical improvements over techniques found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.