WO2015062372A1 - Method and apparatus for selecting objects - Google Patents

Method and apparatus for selecting objects

Info

Publication number
WO2015062372A1
Authority
WO
WIPO (PCT)
Prior art keywords
objects
slide operation
user interface
location
selection status
Application number
PCT/CN2014/086610
Other languages
French (fr)
Inventor
Jun Li
Zhiyong ZHUO
Yiju MOU
Zhuo GONG
Xin Wang
Liang SHANG
Original Assignee
Tencent Technology (Shenzhen) Company Limited
Application filed by Tencent Technology (Shenzhen) Company Limited
Publication of WO2015062372A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: Interaction techniques using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883: Interaction techniques using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G06F 3/0481: Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0482: Interaction with lists of selectable items, e.g. menus
    • G06F 3/0484: Interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842: Selection of displayed objects or displayed text elements

Definitions

  • the present application generally relates to the field of computer technology, and more particularly to a method and related apparatus for selecting objects displayed in a user interface.
  • Some application programs developed for mobile terminal devices require a user to select (e.g., using the user's finger) objects displayed in a user interface.
  • a user is typically required to perform multiple selections to select each object individually.
  • such known object-selecting methods are difficult to perform and prone to mistakes, especially for mobile terminal devices equipped with a touchscreen.
  • the known object-selecting methods potentially prolong the process of selection for the user and increase power consumption for the mobile terminal devices.
  • an apparatus includes one or more processors, and memory storing one or more programs to be executed by the one or more processors.
  • the one or more programs include instructions for receiving information of a first slide operation performed on a user interface of the apparatus.
  • the information of the first slide operation indicates a path of the first slide operation from a starting point to an end point on the user interface.
  • the one or more programs include instructions for determining a set of objects from a group of objects displayed in the user interface based on the information of the first slide operation and location information of the group of objects. Each object from the set of objects is associated with at least a portion of the path of the first slide operation.
  • the one or more programs also include instructions for changing a selection status of a first object from the set of objects from a first selection status to a second selection status.
  • the first selection status and the second selection status are different.
  • the first object is associated with the starting point of the first slide operation.
  • the selection status of each object from the group of objects is either the first selection status or the second selection status.
  • the one or more programs further include instructions for updating a selection status of each remaining object from the set of objects such that the updated selection status of each remaining object is the second selection status.
  • the information of the first slide operation includes coordinates of the path of the first slide operation on the user interface.
  • the instruction for determining includes determining the set of objects based on whether the coordinates correspond to the location of the set of objects displayed in the user interface.
  • the information of the first slide operation includes at least one coordinate of the starting point and at least one coordinate of the end point, where the at least one coordinate of the starting point corresponds to the location of the first object, and the at least one coordinate of the end point corresponds to the location of an object (the first object or another object) from the set of objects.
  • the instruction for determining includes determining the set of objects based on 1) whether vertical coordinates of the path of the first slide operation correspond to the location of the set of objects independent of horizontal coordinates of the path of the first slide operation, or 2) whether horizontal coordinates of the path of the first slide operation correspond to the location of the set of objects independent of vertical coordinates of the path of the first slide operation.
  • the one or more programs include instructions for receiving information of a second slide operation performed on the user interface.
  • the information of the second slide operation indicates a path of the second slide operation from a starting point of the second slide operation to an end point of the second slide operation on the user interface.
  • the one or more programs include instructions for determining, based on the information of the second slide operation, whether the starting point of the second slide operation corresponds to an object from the group of objects and whether the end point of the second slide operation corresponds to an object from the group of objects.
  • the one or more programs include instructions for, if the starting point of the second slide operation is determined to correspond to an object from the group of objects and the end point of the second slide operation is determined not to correspond to any object from the group of objects, updating the selection status of the group of objects such that the updated selection status of each object from the group of objects is a predefined selection status.
  • the predefined selection status is one of the first selection status and the second selection status.
  • the information of the second slide operation includes at least one coordinate of the starting point of the second slide operation and at least one coordinate of the end point of the second slide operation.
  • the instruction for determining includes determining whether the at least one coordinate of the starting point of the second slide operation corresponds to the location of objects from the group of objects, and whether the at least one coordinate of the end point of the second slide operation corresponds to the location of objects from the group of objects.
  • an apparatus includes one or more processors, and memory storing one or more programs to be executed by the one or more processors.
  • the one or more programs include instructions for receiving information of a touch operation performed on a user interface of the apparatus.
  • the information of the touch operation indicates a number of touch points of the touch operation.
  • the one or more programs include instructions for determining whether the number of touch points is greater than a threshold number of touch points.
  • the one or more programs include instructions for, if the number of touch points is determined to be greater than the threshold number, identifying a set of objects from a group of objects displayed in the user interface and then performing a predefined operation on each object from the set of objects without performing the predefined operation on any object excluded from the set of objects.
  • Each object from the set of objects is in a first selection status, and each object excluded from the set of objects is in a second selection status different from the first selection status.
  • performing the predefined operation on each object from the set of objects includes changing the location of that object on the user interface.
  • the threshold number of touch points is one.
  • the information of the touch operation indicates a location of the touch operation on the user interface.
  • the one or more programs further comprise instructions for, if the number of touch points is determined to be less than or equal to the threshold number, identifying an object from the group of objects based on the location of the touch operation and location information of the group of objects, and then changing the selection status of the identified object.
  • the location of the touch operation is associated with the location of the identified object. In some instances, a horizontal coordinate of the location of the touch operation corresponds to a horizontal coordinate of the identified object, and a vertical coordinate of the location of the touch operation does not correspond to any vertical coordinate of the identified object.
  • a non-transitory computer readable storage medium stores one or more programs including instructions for execution by one or more processors.
  • the instructions when executed by the one or more processors, cause the processors to perform the method for selecting objects displayed in a user interface of an apparatus as described above.
  • FIG. 1 is a flowchart illustrating a method performed at an apparatus for selecting objects displayed in a user interface of the apparatus in accordance with some embodiments.
  • FIG. 1a is a schematic diagram illustrating a user interface of an apparatus associated with performing the method in FIG. 1.
  • FIG. 2 is a flowchart illustrating a method performed at an apparatus for selecting objects displayed in a user interface of the apparatus in accordance with some embodiments.
  • FIG. 2a is a schematic diagram illustrating a user interface of an apparatus associated with performing the method in FIG. 2.
  • FIG. 3 is a flowchart illustrating a method performed at an apparatus for selecting objects displayed in a user interface of the apparatus in accordance with some embodiments.
  • FIGS. 3a-3b are schematic diagrams illustrating a user interface of an apparatus associated with performing the method in FIG. 3.
  • FIG. 4 is a flowchart illustrating a method performed at an apparatus for selecting objects displayed in a user interface of the apparatus in accordance with some embodiments.
  • FIG. 5a-5b are block diagrams of apparatuses configured to select objects displayed in a user interface of the apparatus in accordance with some embodiments.
  • FIG. 6 is a block diagram illustrating components of an apparatus configured to select objects displayed in a user interface of the apparatus in accordance with some embodiments.
  • FIG. 1 is a flowchart illustrating a method 100 performed at an apparatus for selecting objects displayed in a user interface of the apparatus in accordance with some embodiments.
  • the apparatus performing the method 100 can be any type of electronic device configured to present, to a user of the apparatus, a user interface where one or more objects are displayed.
  • the user can operate the apparatus to manipulate a position of each object that is displayed in the user interface of the apparatus.
  • such an apparatus can be, for example, a cellular phone, a smart phone, a mobile Internet device (MID), a personal digital assistant (PDA), a tablet computer, an e-book reader, a laptop computer, a handheld computer, a desktop computer, a wearable device, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, and/or any other personal electronic device.
  • the apparatus can be referred to as, for example, a client device, a mobile device, a user device, a terminal device, a portable device, and/or the like.
  • the apparatus includes a display device configured to display the user interface including visual representations (e.g. , an icon, an image, a figure, a symbol, etc. ) of one or more objects.
  • a display device can be any device configured to display a user interface such as, for example, a screen, a monitor, a touch screen, a projector, and/or the like.
  • when an application program (e.g., a software application) is executed at the apparatus, the apparatus is configured to display the visual representations of the objects on the user interface.
  • an application program can be, for example, a game (e.g. , a card game, a board game, etc. ) or any other type of application.
  • an object can be, for example, an icon, a list, a figure, an image, a picture, a text block, a table, and/or the like.
  • the application program can be a card game and the objects can be cards used in the card game.
  • the apparatus performing the method 100 can be operatively connected to and communicate with other network devices (e.g. , another apparatus similar to the apparatus performing the method 100) via, for example, one or more networks (e.g. , the Internet) .
  • the user operating the apparatus can be any person (potentially) interested in using the application program. Such a person can be, for example, a player of a card game.
  • the apparatus performing the method 100 includes one or more processors and memory.
  • the method 100 is implemented using instructions or code of the application program that are stored in a non-transitory computer readable storage medium of the apparatus and executed by the one or more processors of the apparatus.
  • the application program is associated with displaying and manipulating visual representations of objects on the user interface of the apparatus. Instructions of the application program are stored in and/or executed at the apparatus. As a result of such an application program being executed, the method 100 is performed to manipulate the visual representations of the objects (simplified as “object” herein) displayed in the user interface. As shown in FIG. 1, the method 100 includes the following steps.
  • the apparatus receives information of a slide operation performed on the user interface of the apparatus.
  • the received information of the slide operation indicates a path of the slide operation from a starting point to an end point on the user interface.
  • a slide operation can be performed by the user using, for example, a finger of the user, an input device of the apparatus (e.g. , a mouse, a keyboard, a touch pen stylus, etc. ) , or any other suitable input method.
  • the slide operation is performed within an operable area of the user interface.
  • an operable area of the user interface can include, for example, any area of the user interface that is not occupied by a predefined icon (e.g., with a specific function) such as, for example, a button to submit a result, a button for going to a next page, a text message block showing an instruction, and/or the like.
  • the operable area of the user interface includes any area where an object being operated can potentially be.
  • the operable area of the user interface is defined as any area of the user interface, on which an operation (e.g. , a slide operation, a touch operation, etc. ) is treated as a valid operation on the objects being operated.
  • location information of an operation (e.g., a touch operation, a slide operation, etc.) is collected by, for example, one or more monitoring instructions or commands executed at the apparatus.
  • UITouches can be used to determine coordinates of a touch point on the user interface.
  • UIGestureRecognizer from the iPhone SDK developer toolkit can also be used to capture location information of a touch point or an operation.
  • MotionEvent can be used to obtain location information of a touch point or an operation.
  • the operating system of the apparatus (e.g., an iPhone device) is enabled to capture four types of UITouch events (i.e., touch operations on the user interface), corresponding to touches that begin, move, end, or are cancelled.
  • information of a number of touch points occurred during the operation can be captured and recorded. Such information can be used to determine, for example, whether the operation is a single-touch point operation (e.g. , a touch by a single finger) or multiple-touch point operation (e.g. , touches by two fingers) . As shown and described below with respect to FIG. 4, different methods can be performed based on the number of touch points of an operation performed on the user interface.
  • when UIGestureRecognizer is used (e.g., on an iPhone device), a derivative of UIGestureRecognizer (e.g., UISwipeGestureRecognizer) can be instantiated and associated with an object displayed on the user interface, to capture information of operations (particularly, slide operations) performed by the user on the user interface.
  • four types of events can be captured by analyzing the state value(s) of the UISwipeGestureRecognizer object.
  • each analyzed state value of the UISwipeGestureRecognizer object can be processed to determine location information of the touch point (s) .
  • the operating system of the apparatus (e.g., an Android device) is enabled to capture the following types of MotionEvent events (i.e., touch operations on the user interface):
  • ACTION_MOVE event which is triggered by a slide operation (e.g. , continuous and moving contact with the screen) performed on the user interface;
  • ACTION_UP which is triggered by disengagement (e. g., of a finger) of contact with the screen (e.g. , a touch screen) , and as a result, an end point of the operation is determined; and
  • ACTION_CANCEL which is triggered by sliding (e.g. , of a finger) out of the screen (e.g. , a touch screen) or the operable area of the user interface, and as a result, an end point of the operation is determined.
  • the getPointerCount() function of the MotionEvent object can be invoked to capture and record the number of touch points that occurred during the operation.
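  • As an illustrative sketch only (not part of the claimed embodiments), the following Java fragment shows one way such MotionEvent information might be captured on an Android device; the OperationRecorder class and its field names are hypothetical.

```java
import android.view.MotionEvent;
import android.view.View;

// Hypothetical listener that records the starting point, end point, and
// touch-point count of an operation performed on a View-based user interface.
public class OperationRecorder implements View.OnTouchListener {
    private float startX, startY;   // starting point of the operation
    private float endX, endY;       // end point of the operation
    private int maxPointerCount;    // number of touch points observed

    @Override
    public boolean onTouch(View view, MotionEvent event) {
        // getPointerCount() reports how many fingers are currently in contact.
        maxPointerCount = Math.max(maxPointerCount, event.getPointerCount());
        switch (event.getActionMasked()) {
            case MotionEvent.ACTION_DOWN:    // initial contact: starting point
                startX = event.getX();
                startY = event.getY();
                break;
            case MotionEvent.ACTION_MOVE:    // slide operation: points along the path
                // event.getX()/getY() give the current coordinate on the path.
                break;
            case MotionEvent.ACTION_UP:      // contact disengaged: end point determined
            case MotionEvent.ACTION_CANCEL:  // slide left the screen or operable area
                endX = event.getX();
                endY = event.getY();
                break;
        }
        return true; // consume the event
    }
}
```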
  • the apparatus determines a set of objects from a group of objects displayed in the user interface based on the information of the slide operation and location information of the group of objects.
  • Each object from the set of objects is associated with at least a portion of the path of the slide operation.
  • the apparatus can determine the set of objects in various suitable methods based on the information of the slide operation and location information of the group of objects.
  • the information of the slide operation includes coordinates of the path of the slide operation on the user interface.
  • the apparatus can be configured to determine the set of objects based on whether the coordinates correspond to the location of objects from the set of objects displayed in the user interface.
  • the information of the slide operation includes at least one coordinate of the starting point and at least one coordinate of the end point.
  • the apparatus can be configured to determine a first object from the group of objects, whose location corresponds to the at least one coordinate of the starting point.
  • the apparatus can be configured to determine a second object from the group of objects, whose location corresponds to the at least one coordinate of the end point.
  • the apparatus can be configured to determine the set of objects from the group of objects based on vertical coordinates of the path of the slide operation corresponding to the location of objects from the set of objects. Such a determination can be independent of horizontal coordinates of the path of the slide operation.
  • the apparatus can be configured to determine the set of objects from the group of objects based on horizontal coordinates of the path of the slide operation corresponding to the location of objects from the set of objects. Such a determination can be independent of vertical coordinates of the path of the slide operation.
  • the apparatus can be configured to determine the set of objects from the group of objects using both vertical and horizontal coordinates of the path of the slide operation.
  • FIG. 1a is a schematic diagram illustrating a user interface 110 of the apparatus associated with performing the method 100 in FIG. 1.
  • a slide operation is performed on a user interface where a group of cards (i.e. , objects) are displayed.
  • the starting point of the slide operation has a horizontal coordinate X2 and a vertical coordinate (not shown in FIG. 1a) .
  • the end point of the slide operation has a horizontal coordinate X3 and a vertical coordinate (not shown in FIG. 1a) .
  • the slide operation has a range of horizontal coordinates [X2, X3] .
  • the determination of a set of cards from the group of cards is based on the horizontal coordinates of the slide operation only, independent of the vertical coordinates of the slide operation. Specifically, the horizontal coordinates of each card are compared to the range [X2, X3]. If a horizontal coordinate of a card falls within the range [X2, X3], the card is selected by the slide operation and thus included in the set of cards. Otherwise, if no horizontal coordinate of the card falls within the range [X2, X3], the card is not selected by the slide operation and thus excluded from the set of cards.
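  • As an illustrative sketch only, the horizontal-range check described for FIG. 1a could be expressed as follows in Java; the Card type and its fields are hypothetical stand-ins for the objects' location information.

```java
import java.util.ArrayList;
import java.util.List;

class Card {
    float left;        // left-most horizontal coordinate of the card
    float width;       // horizontal distance occupied by the card
    boolean selected;  // selection status
}

class SlideSelector {
    // Returns the cards whose horizontal extent overlaps the slide operation's
    // horizontal range [x2, x3]; vertical coordinates are ignored.
    static List<Card> cardsOnPath(List<Card> cards, float x2, float x3) {
        float lo = Math.min(x2, x3);
        float hi = Math.max(x2, x3);
        List<Card> hit = new ArrayList<>();
        for (Card card : cards) {
            boolean overlaps = card.left <= hi && (card.left + card.width) >= lo;
            if (overlaps) {
                hit.add(card);  // some horizontal coordinate of the card falls in [lo, hi]
            }
        }
        return hit;
    }
}
```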
  • the apparatus changes a selection status of at least one object from the set of objects.
  • each object is in one of multiple selection states.
  • each object can be either in a “selected” state or an “unselected” state.
  • the selection status of each object can be changed from one to the other.
  • the apparatus changes a selection status of the first object (corresponding to the starting point of the slide operation) from the set of objects from a first selection status (e.g., “unselected”) to a second selection status (e.g., “selected”).
  • the apparatus updates a selection status of each remaining object from the set of objects such that the updated selection status of each remaining object is the second selection status.
  • the apparatus changes the selection status of the first object (i.e., the object corresponding to the starting point of the slide operation), and updates the selection status of each remaining object from the set of objects to make them identical to the updated selection status of the first object.
  • the apparatus can change the selection statuses of the set of objects according to any other suitable rules. For example, when the selection status of each object from the set of objects is “unselected, ” the apparatus changes all of them to be “selected. ” Similarly, when the selection status of each object from the set of objects is “selected, ” the apparatus changes all of them to be “unselected. ”
  • the apparatus changes the selection status of each “unselected” object to be “selected, ” and changes the selection status of each “selected” object to be “unselected. ”
  • the apparatus updates the selection status of each object from the set of objects such that the updated selection status of each object from the set of objects is a predefined selection status (e.g. , “unselected” or “selected” ) .
  • the apparatus does not change the selection status of any object from the set of objects.
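  • As an illustrative sketch only, one reading of the rule above (toggle the object under the starting point, then propagate its new status to the remaining objects on the path) could look as follows in Java; the Item type is hypothetical.

```java
import java.util.List;

class SelectionUpdater {
    static class Item {
        boolean selected;  // true = "selected", false = "unselected"
    }

    // Sketch: the first item (under the slide operation's starting point) flips
    // its selection status, and every remaining item on the path is updated to
    // match that new status.
    static void applySlide(List<Item> itemsOnPath) {
        if (itemsOnPath.isEmpty()) {
            return;
        }
        Item first = itemsOnPath.get(0);
        first.selected = !first.selected;  // first status -> second status
        for (int i = 1; i < itemsOnPath.size(); i++) {
            itemsOnPath.get(i).selected = first.selected;
        }
    }
}
```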
  • FIG. 2 is a flowchart illustrating a method 200 performed at an apparatus for selecting objects displayed in a user interface of the apparatus in accordance with some embodiments.
  • the apparatus performing the method 200 can be structurally and functionally similar to the apparatus performing the method 100 as described above with respect to FIG. 1.
  • the apparatus performing the method 200 can include one or more processors and memory.
  • the method 200 is implemented using instructions or code of an application program that are stored in a non-transitory computer readable storage medium of the apparatus and executed by the one or more processors of the apparatus.
  • the application program is associated with displaying and manipulating visual representations of objects on the user interface of the apparatus. Instructions of the application program are stored in and/or executed at the apparatus. As a result of such an application program being executed, the method 200 is performed to manipulate the objects displayed in the user interface. As shown in FIG. 2, the method 200 includes the following steps.
  • the apparatus receives information of a touch operation performed on the user interface of the apparatus.
  • the information of the touch operation includes location information of the touch operation. Similar to the operations of S102 of the method 100 shown and described with respect to FIG. 1, the information of the touch operation can include, for example, coordinates of the touch point (s) of the touch operation.
  • the apparatus identifies an object from a group of objects displayed in the user interface based on the location information of the touch operation and location information of the group of objects. Similar to the operations of S102 of the method 100 shown and described with respect to FIG. 1, the apparatus can identify the object from the group of objects based on the location information of the group of objects and coordinates of the touch point(s) of the touch operation. In some embodiments, the identification can be based on both vertical and horizontal coordinates of the touch point(s). In some other embodiments, the identification can be based on only one of the vertical coordinate and the horizontal coordinate of the touch point(s), independent of the other coordinate of the touch point(s).
  • FIG. 2a is a schematic diagram illustrating a user interface 210 of the apparatus associated with performing the method in FIG. 2.
  • a group of cards (i.e., objects) is displayed in the user interface 210.
  • Each card corresponds to a range of horizontal coordinates. Specifically, the left-most horizontal coordinate for the first card (from left to right) is X, and the horizontal distance occupied by each card is L.
  • the range of horizontal coordinates corresponding to the first card is [X, X+L)
  • the range of horizontal coordinates corresponding to the second card is [X+L, X+2L)
  • the range of horizontal coordinates corresponding to the third card is [X+2L, X+3L) , etc.
  • in response to determining the horizontal coordinate of the touch operation to be X1 (regardless of the vertical coordinate of the touch operation), the apparatus can identify which card corresponds to the touch operation by determining which range of horizontal coordinates includes X1. For example, if X + (A-1)*L ≤ X1 < X + A*L, then the touch operation corresponds to the A-th card from the left.
  • the range of horizontal coordinates corresponding to the horizontal axis of the user interface 210 can be divided into a group of small ranges, each of which corresponds to a card from the group of cards displayed in the user interface 210 or a void space (i.e. , associated with no card) .
  • the apparatus can identify the card corresponding to the touch operation by determining which small range the horizontal coordinate of the touch operation falls into.
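  • As an illustrative sketch only, this lookup could be written as follows in Java, assuming the left-most horizontal coordinate of the hand is x, each card occupies width l, and there are cardCount cards; all names are hypothetical.

```java
class CardLookup {
    // Returns the 1-based index A of the card whose horizontal range
    // [x + (A-1)*l, x + A*l) contains x1, or -1 if x1 falls outside the hand
    // (e.g., in a void space). Vertical coordinates are ignored.
    static int cardIndexAt(float x1, float x, float l, int cardCount) {
        if (x1 < x) {
            return -1;
        }
        int a = (int) ((x1 - x) / l) + 1;  // 1-based card index from the left
        return a <= cardCount ? a : -1;
    }
}
```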
  • the apparatus determines a selection status of the identified object and changes the selection status of the identified object. Similar to the operations of S103 of the method 100 shown and described above with respect to FIG. 1, by changing the selection status of the identified object, the apparatus can select an “unselected” object or unselect a “selected” object.
  • FIG. 3 is a flowchart illustrating a method 300 performed at an apparatus for selecting objects displayed in a user interface of the apparatus in accordance with some embodiments.
  • the apparatus performing the method 300 can be structurally and functionally similar to the apparatus performing the method 100 as described above with respect to FIG. 1.
  • the apparatus performing the method 300 can include one or more processors and memory.
  • the method 300 is implemented using instructions or code of an application program that are stored in a non-transitory computer readable storage medium of the apparatus and executed by the one or more processors of the apparatus.
  • the application program is associated with displaying and manipulating visual representations of objects on the user interface of the apparatus. Instructions of the application program are stored in and/or executed at the apparatus. As a result of such an application program being executed, the method 300 is performed to manipulate the objects displayed in the user interface. As shown in FIG. 3, the method 300 includes the following steps.
  • the apparatus receives information of an operation performed on the user interface of the apparatus.
  • the information of the operation includes location information of the operation.
  • the operations are similar to operations of S101 of the method 100 and S201 of the method 200 shown and described above with respect to FIGS. 1 and 2.
  • the location information of the operation includes coordinates of the touch point (s) of the operation.
  • the operation can be, for example, a slide operation, a touch operation, or any other type of operation performed on the user interface by a user of the apparatus.
  • the apparatus determines, based on the location information of the operation, whether the operation is performed within a first predefined area of the user interface or a second predefined area of the user interface.
  • the second predefined area is defined to be an area of the user interface where the objects are displayed; and the first predefined area is defined to include all other operable area excluding the second predefined area.
  • different types of operations are accepted within different areas of the user interface.
  • if an operation that is not accepted in an area is performed within that area, that operation is not accepted by the apparatus. That is, the apparatus does not recognize or respond to the operation.
  • touch operations but not slide operations are accepted within the first predefined area (i.e., the operable area where the objects are not displayed), while slide operations but not touch operations are accepted within the second predefined area (i.e., the area where the objects are displayed).
  • when an operation is detected to be within the first predefined area, the operation is treated as a touch operation but not a slide operation; and when an operation is detected to be within the second predefined area, the operation is treated as a slide operation but not a touch operation.
  • a slide operation performed in the first predefined area is treated as a touch operation, where the starting point of the slide operation is treated as the only touch point of the touch operation.
  • a touch operation performed in the second predefined area is treated as a slide operation having a short path.
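  • As an illustrative sketch only, the area-based routing above could be expressed as follows in Java, assuming the second predefined area (where the objects are displayed) is a known rectangle; the names and rectangle bounds are hypothetical.

```java
class OperationRouter {
    enum Kind { TOUCH, SLIDE }

    // Sketch: an operation whose starting point lies inside the object-display
    // area (the second predefined area) is treated as a slide operation; an
    // operation in the remaining operable area is treated as a touch operation.
    static Kind classify(float startX, float startY,
                         float areaLeft, float areaTop, float areaRight, float areaBottom) {
        boolean inObjectArea = startX >= areaLeft && startX <= areaRight
                && startY >= areaTop && startY <= areaBottom;
        return inObjectArea ? Kind.SLIDE : Kind.TOUCH;
    }
}
```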
  • if the apparatus determines that the operation is performed within the first predefined area of the user interface, then at S303, the apparatus receives information of a touch operation, and identifies an object from a group of objects displayed in the user interface based on location information of the touch operation.
  • the operations are similar to operations of S201-S202 of the method 200 shown and described above with respect to FIG. 2.
  • if the apparatus determines that the operation is performed within the second predefined area of the user interface, then at S304, the apparatus receives information of a slide operation, and identifies a set of objects from the group of objects based on location information of the slide operation.
  • the operations are similar to operations of S101-S102 of the method 100 shown and described above with respect to FIG. 1.
  • the apparatus updates a selection status of each identified object accordingly. Specifically, if the apparatus determines that the operation is performed within the first predefined area of the user interface (at S302) and receives information of a touch operation accordingly (at S303) , the apparatus can perform operations similar to operations of S203 of the method 200 shown and described above with respect to FIG. 2. Otherwise, if the apparatus determines that the operation is performed within the second predefined area of the user interface (at S302) and receives information of a slide operation accordingly (at S304) , the apparatus can perform operations similar to operations of S103 of the method 100 shown and described above with respect to FIG. 1.
  • FIGS. 3a-3b are schematic diagrams illustrating a user interface 310 of the apparatus associated with performing the method 300 in FIG. 3.
  • a group of cards are displayed within an area 330 at the bottom of the user interface 310.
  • the area 330 is the second predefined area.
  • the first predefined area is the remaining operable area of the user interface 310, excluding the area 330 and the other items displayed in the user interface 310.
  • an operation is performed within the first predefined area.
  • the operation is received as a touch operation, and a card from the group of cards is identified based on the location of the touch operation.
  • the apparatus can identify the card from the group of cards based on the horizontal coordinate of the touch operation falling within a range of horizontal coordinates corresponding to the location of that card. The apparatus then changes the selection status of the identified card.
  • an operation is performed within the second predefined area (i.e. , the area 330) .
  • the operation is received as a slide operation, and a set of cards are determined from the group of cards based on location information of the slide operation.
  • the apparatus can determine the set of cards from the group of cards based on the horizontal coordinates of the slide operation falling within a range of horizontal coordinates corresponding to the collective location of the set of cards.
  • the apparatus then updates the selection status of the set of cards in accordance with a predefined rule, as described with respect to S103 of the method 100 in FIG. 1.
  • a slide operation can start from a starting point within the second predefined area (i.e. , the area where the group of objects are displayed) and end at an end point within the first predefined area (i.e. , the operable area where the group of objects are not displayed) .
  • the apparatus can determine, based on location information of the slide operation and location information of the group of objects, whether the starting point corresponds to location of an object from the group of objects and whether the end point corresponds to location of an object from the group of objects.
  • the apparatus can determine whether at least one coordinate of the starting point corresponds to the location of any object from the group of objects, and whether at least one coordinate of the end point corresponds to the location of any object from the group of objects.
  • the apparatus can determine that the starting point corresponds to location of an object from the group of objects (indicating the starting point is within the second predefined area) , while the end point does not correspond to any object from the group of objects (indicating the end point is within the first predefined area) .
  • the apparatus can, for example, optionally identify a set of objects from the group of objects displayed in the user interface and perform a predefined operation on each object from the set of objects or each object from the group of objects. For example, the apparatus can update the selection status of the group of objects such that the updated selection status of each object from the group of objects is a predefined selection status (e.g. , “unselected” ) . That is, in a card game, by detecting a slide operation crossing the border between the first predefined area and the second predefined area, the apparatus can cancel each “selected” card and make all the cards “unselected. ”
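  • As an illustrative sketch only, the border-crossing rule above could look as follows in Java; the Item type and the choice of “unselected” as the predefined selection status follow the card-game example and are hypothetical.

```java
import java.util.List;

class SlideCancelHandler {
    static class Item {
        boolean selected;  // true = "selected", false = "unselected"
    }

    // Sketch: if the slide operation's starting point corresponds to an object
    // but its end point corresponds to no object, every object in the group is
    // reset to the predefined selection status (here, "unselected").
    static void handleSlide(Item startObject, Item endObject, List<Item> group) {
        if (startObject != null && endObject == null) {
            for (Item item : group) {
                item.selected = false;
            }
        }
    }
}
```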
  • FIG. 4 is a flowchart illustrating a method 400 performed at an apparatus for selecting objects displayed in a user interface of the apparatus in accordance with some embodiments.
  • the apparatus performing the method 400 can be structurally and functionally similar to the apparatus performing the method 100 as described above with respect to FIG. 1.
  • the apparatus performing the method 400 can include one or more processors and memory.
  • the method 400 is implemented using instructions or code of an application program that are stored in a non-transitory computer readable storage medium of the apparatus and executed by the one or more processors of the apparatus.
  • the application program is associated with displaying and manipulating visual representations of objects on the user interface of the apparatus. Instructions of the application program are stored in and/or executed at the apparatus. As a result of such an application program being executed, the method 400 is performed to manipulate the objects displayed in the user interface. As shown in FIG. 4, the method 400 includes the following steps.
  • the apparatus receives information of a first touch operation performed on the user interface of the apparatus.
  • the information of the first touch operation includes location information of the first touch operation.
  • the first touch operation can include a single touch point (e.g. , contact by a single finger of a user with a screen of the apparatus) .
  • the operations are similar to operations of S101 of the method 100 and S201 of the method 200 shown and described above with respect to FIGS. 1 and 2.
  • the location information of the first touch operation includes coordinates of the single touch point of the first touch operation.
  • the information of the first touch operation includes information indicating the number of touch points of the first touch operation.
  • the apparatus identifies an object from a group of objects displayed in the user interface based on the location information of the first touch operation and location information of the group of objects.
  • the apparatus determines a selection status of the identified object and changes the selection status of the identified object.
  • the operations of S402-S403 are similar to operations of S202-S203 of the method 200 shown and described above with respect to FIG. 2.
  • the apparatus can identify the object from the group of objects based on only one of the vertical coordinates and the horizontal coordinates of the group of objects and the location information of the first touch operation. For example, the apparatus can identify the object such that a horizontal coordinate of the location of the touch operation corresponds to a horizontal coordinate of the identified object, while a vertical coordinate of the location of the touch operation does not correspond to any vertical coordinate of the identified object.
  • the apparatus receives information of a second touch operation performed on the user interface.
  • the information of the second touch operation indicates a number of touch points of the second touch operation.
  • the apparatus determines whether the number of touch points of the second touch operation is greater than a threshold number of touch points. In some embodiments, such a threshold number can be, for example, one. Thus, the apparatus determines whether the number of touch points of the second touch operation is one or more than one.
  • a touch operation can be determined to have multiple touch points in various methods. For example, a touch operation can be determined to have multiple touch points (e.g. , two touch points) if multiple spatially-separate contacts are made with the screen (e.g. , touch screen) of the apparatus at (substantially) the same time. For instance, two fingers of a user contact the screen at the same time.
  • a touch operation can be determined to have multiple touch points (e.g., two touch points) if multiple temporally-separate contacts (spatially-separate or not) are made with the screen (e.g., a touch screen) of the apparatus within a very short time period. For instance, a finger of a user contacts the screen twice within 0.5 seconds.
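  • As an illustrative sketch only, such a temporal rule could be implemented as follows in Java; the 0.5-second window follows the example above, and the class and method names are hypothetical.

```java
class TapCounter {
    private static final long WINDOW_MS = 500;  // "very short" period, e.g., 0.5 seconds
    private long lastContactMillis = -1;
    private int touchPoints = 0;

    // Sketch: contacts arriving within the window are counted as touch points
    // of the same operation; otherwise a new operation starts with one point.
    int onContact(long nowMillis) {
        if (lastContactMillis >= 0 && nowMillis - lastContactMillis <= WINDOW_MS) {
            touchPoints++;
        } else {
            touchPoints = 1;
        }
        lastContactMillis = nowMillis;
        return touchPoints;
    }
}
```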
  • if the number of touch points of the second touch operation is determined to be less than or equal to the threshold number, the apparatus performs the operations of S402-S403 described above accordingly. For example, the apparatus can identify an object from the group of objects based on the location of the second touch operation and location information of the group of objects (e.g., the location of the second touch operation being associated with the location of the identified object). The apparatus can then change the selection status of the identified object.
  • if the number of touch points of the second touch operation is determined to be greater than the threshold number, the apparatus identifies a set of objects from the group of objects displayed in the user interface and performs a predefined operation on each object from the set of objects.
  • the apparatus can identify the set of objects based on, for example, the selection status of the set of objects. For example, the apparatus can include each “unselected” object from the group of objects into the set of objects, and then change their selection status to “selected. ” For another example, the apparatus can include each “selected” object from the group of objects into the set of objects, and then change their selection status to “unselected. ”
  • the predefined operation includes changing the location of the set of objects on the user interface.
  • the apparatus can include each “selected” card from a group of cards into a set of cards, and then change the location of each card from the set of cards on the user interface to a location farther away from the unselected cards, which indicates that each selected card is played out from the group of cards.
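  • As an illustrative sketch only, the card-game example above (a multi-touch operation playing out the selected cards) could look as follows in Java; the Card type, the threshold of one touch point, and the upward offset used to move the played cards are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

class PlayOutHandler {
    static class Card {
        boolean selected;
        float x, y;  // location of the card on the user interface
    }

    // Sketch of S405-S406: when the touch operation has more touch points than
    // the threshold (one), the predefined operation is performed on each
    // selected card, here modeled as moving it away from the unselected cards.
    static List<Card> onTouch(int pointerCount, List<Card> hand, float playOffsetY) {
        List<Card> played = new ArrayList<>();
        if (pointerCount > 1) {  // threshold number of touch points: one
            for (Card card : hand) {
                if (card.selected) {
                    card.y -= playOffsetY;  // change the card's location on the user interface
                    played.add(card);
                }
            }
        }
        return played;
    }
}
```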
  • FIG. 5a is a block diagram of an apparatus 510 configured to select objects displayed in a user interface of the apparatus 510 in accordance with some embodiments.
  • the apparatus 510 can be structurally and functionally similar to the apparatuses described above with respect to FIGS. 1-4. Similar to the apparatuses described with respect to FIGS. 1-4, the apparatus 510 can be configured to perform the methods 100-400 to select and manipulate objects displayed in the user interface of the apparatus 510.
  • the apparatus 510 includes a first receive module 501, a first identifying module 502, a first update module 503, a determination module 507 and an execution module 508.
  • each module included in the apparatus 510 can be a hardware-based module (e.g. , a digital signal processor (DSP) , a field programmable gate array (FPGA) , etc. ) , a software-based module (e.g. , a module of computer code executed at a processor, a set of processor-readable instructions executed at a processor, etc. ) , or a combination of hardware and software modules.
  • Instructions or code of each module can be stored in a memory of the apparatus 510 (not shown in FIG. 5a) and executed at a processor (e.g. , a CPU) of the apparatus 510 (not shown in FIG. 5a) .
  • the first receive module 501 is configured to, among other functions, receive information of a slide operation performed on the user interface of the apparatus 510.
  • a slide operation can be performed within a first area of the user interface for displaying a group of objects, within a second area of the user interface mutually exclusive from the first area, or crossing the border between the first and second area.
  • the first receive module 501 is configured to determine location information of the slide operation, such as coordinates of a starting point, an end point, and a path of the slide operation.
  • the first receive module 501 is configured to perform operations of S101 of the method 100 shown and described above with respect to FIG. 1.
  • the first identifying module 502 is configured to, among other functions, identify a set of objects from the group of objects based on the information (e.g. , location information such as coordinates) of the slide operation received at the first receive module 501 and location information of the group of objects. In some embodiments, the first identifying module 502 is configured to perform operations of S102 of the method 100 shown and described above with respect to FIG. 1.
  • the first identifying module 502 includes a determination submodule 5021 and a retrieve submodule 5022.
  • the determination submodule 5021 is configured to, among other functions, determine a range of locations (e.g. , a range of coordinates) corresponding to the path of the slide operation. Such a determination can be based on, for example, location information of the starting point, the end point and/or the path of the slide operation.
  • the retrieve submodule 5022 is configured to, among other functions, identify the set of objects from the group of objects based on the range of locations determined by the determination submodule 5021 and location information of the group of objects. For example, the retrieve submodule 5022 is configured to identify each object from the group of objects whose location falls within the range of locations determined by the determination submodule 5021.
  • the first update module 503 is configured to, among other functions, update selection status of objects from the set of objects identified by the first identifying module 502. As described above, the first update module 503 can update the selection status of the set of objects in accordance with various rules. In some embodiments, the first update module 503 is configured to perform operations of S103 of the method 100 shown and described above with respect to FIG. 1.
  • the determination module 507 is configured to, among other functions, determine whether a number of touch points in a touch operation performed on the user interface of the apparatus 510 is greater than a threshold number of touch points. In some embodiments, the determination module 507 is configured to perform operations of S405 of the method 400 shown and described above with respect to FIG. 4.
  • the execution module 508 is configured to, in accordance with the number of touch points of the touch operation being determined by the determination module 507 to be greater than the threshold number, identify a set of objects from the group of objects, and perform a predefined operation on the set of objects. In some embodiments, the execution module 508 is configured to perform operations of S406 of the method 400 shown and described above with respect to FIG. 4.
  • FIG. 5b is a block diagram of another apparatus 520 configured to select objects displayed in a user interface of the apparatus in accordance with some embodiments.
  • the apparatus 520 can be (substantially) identical, or structurally and functionally similar, to the apparatus 510 shown and described above with respect to FIG. 5a. Similar to the apparatuses shown and described with respect to FIGS. 1-5a, the apparatus 520 can be configured to perform the methods 100-400 to select and manipulate objects displayed in the user interface of the apparatus 520.
  • the apparatus 520 includes the same (or substantially the same) determination module 507 and execution module 508 as the apparatus 510. Additionally, the apparatus 520 includes a second receive module 504, a second identifying module 505, and a second update module 506. Similar to those modules included in the apparatus 510 in FIG. 5a, each additional module included in the apparatus 520 can be a hardware-based module (e.g., a DSP, an FPGA, etc.), a software-based module (e.g., a module of computer code executed at a processor, a set of processor-readable instructions executed at a processor, etc.), or a combination of hardware and software modules.
  • Instructions or code of each module included in the apparatus 520 can be stored in a memory of the apparatus 520 (not shown in FIG. 5b) and executed at a processor (e.g. , a CPU) of the apparatus 520 (not shown in FIG. 5b) .
  • the second receive module 504 is configured to, among other functions, receive information of a touch operation performed on the user interface of the apparatus 520.
  • a touch operation can be performed within a first area of the user interface for displaying a group of objects or within a second area of the user interface mutually exclusive from the first area.
  • the second receive module 504 is configured to determine location information of the touch operation, such as coordinates of a touch point of the touch operation.
  • the touch operation includes a single touch point.
  • the second receive module 504 is configured to perform operations of S201 of the method 200 shown and described above with respect to FIG. 2.
  • the second identifying module 505 is configured to, among other functions, identify an object from the group of objects based on the information (e.g. , location information such as coordinates) of the touch operation received at the second receive module 504 and location information of the group of objects. In some embodiments, the second identifying module 505 is configured to perform operations of S202 of the method 200 shown and described above with respect to FIG. 2.
  • the second update module 506 is configured to, among other functions, determine and change a selection status of the object identified by the second identifying module 505. Alternatively, the second update module 506 can update the selection status of the identified object in accordance with any other suitable rule. In some embodiments, the second update module 506 is configured to perform operations of S203 of the method 200 shown and described above with respect to FIG. 2.
  • FIG. 6 is a block diagram illustrating components of an apparatus 600 configured to select objects displayed in a user interface of the apparatus in accordance with some embodiments.
  • the apparatus 600 can be structurally and functionally similar to the apparatuses shown and described above with respect to FIGS. 1-5b. Particularly, the components of the apparatus 600 can be collectively configured to perform the methods 100-400 to select and manipulate objects displayed in the user interface of the apparatus 600.
  • the apparatus 600 includes a processor 680, a memory 620, an input unit 630, a display unit 640, a sensor 650, an audio circuit 660, a Wi-Fi (Wireless Fidelity) module 670, a radio frequency (RF) circuit 610 and a power supply 690.
  • the apparatus 600 can include more or less devices, components and/or modules than those shown in FIG. 6.
  • the structure of the apparatus 600 shown in FIG. 6 does not constitute a limitation for the apparatus 600, and may include more or less components than those illustrated in FIG. 6.
  • the components of the apparatus 600 (shown or not shown in FIG. 6) can be combined and/or arranged in different ways other than that shown in FIG. 6.
  • the RF circuit 610 is configured to send and receive data, and in particular, to send uplink data to and/or receive downlink data from a base station.
  • the RF circuit 610 is configured to send the received data to the processor 680 for further processing.
  • the RF circuit 610 can include, for example, one or more of an antenna, an amplifier, a tuner, an oscillator, a subscriber identity module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, etc.
  • the RF circuit 610 is configured to wirelessly communicate with other networks or devices using any suitable wireless communication protocol such as, for example, GSM (Global System for Mobile communication), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), etc.
  • the memory 620 is configured to store software programs and/or modules.
  • the processor 680 can execute various applications and data processing functions included in the software programs and/or modules stored in the memory 620.
  • the memory 620 includes, for example, a program storage area and a data storage area.
  • the program storage area is configured to store, for example, an operating system and application programs.
  • the data storage area is configured to store data received and/or generated during the use of the apparatus 600 (e.g. , location information of an operation, location information of objects) .
  • the memory 620 can include one or more high-speed RAM devices, non-volatile memory such as a disk storage device or a flash memory device, and/or other volatile solid-state memory devices.
  • the memory 620 also includes a memory controller configured to provide the processor 680 and the input unit 630 with access to the memory 620.
  • the input unit 630 is configured to receive input data and signals (e.g., messages) and also to generate signals caused by operations and manipulations of input devices such as, for example, a user’s finger, a stylus, a keyboard, a mouse, etc.
  • the input unit 630 includes a touch panel 631 (e.g. , a touch screen, a touchpad) and other input devices 632.
  • the other input devices 632 can include, for example, a physical keyboard, a function key (such as a volume control key, a switch key, etc. ) , a trackball, a mouse, a joystick, etc.
  • the touch panel 631 is configured to collect touch operations on or near the touch panel 631 that are performed by a user of the apparatus 600, such as slide operations and touch operations performed by the user using a finger, stylus, touch pen, or any other suitable object or attachment on or near a touch-sensitive surface of the touch panel 631.
  • the touch panel 631 can optionally include a touch detection apparatus and a touch controller.
  • the touch detection apparatus can detect the direction of the touch operation and signals generated by the touch operation, and then transmit the signals to the touch controller.
  • the touch controller can receive the signals from the touch detection apparatus, convert the signals into contact coordinate data, and then send the contact coordinate data to the processor 680.
  • the touch controller can also receive and execute commands received from the processor 680.
  • the touch panel 631 can be implemented using various types of technologies such as, for example, resistive touch screen, capacitive touch screen, infrared ray touch screen, surface acoustic wave (SAW) touch screen, etc.
  • the display unit 640 is configured to display information (e.g. , objects) on various graphical user interfaces (GUIs) of the apparatus 600.
  • the GUIs can include, for example, graphics, text, icons, video, and/or any combination thereof.
  • the display unit 640 includes a display panel 641, which can be, for example, an LCD (Liquid Crystal Display), an LED (Light-Emitting Diode) display, an OLED (Organic Light-Emitting Diode) display, etc.
  • the touch panel 631 can cover the display panel 641. After a touch operation on or near the touch panel 631 is detected, the touch panel 631 transmits information of the touch operation to the processor 680, where the type and/or other information of the touch operation are determined.
  • the processor 680 sends visual information to the display panel 641 based on the determined type of the touch operation.
  • the visual information is then displayed on the display panel 641.
  • the touch panel 631 and the display panel 641 can be integrated into one component for realization of the input and output functions.
  • the apparatus 600 includes at least one sensor 650 such as, for example, a light sensor, a motion sensor, and/or other types of sensors.
  • a light sensor can be, for example, an ambient light sensor or a proximity sensor.
  • the ambient light sensor is configured to adjust the brightness of the display panel 641 according to the light intensity received at the ambient light sensor.
  • the proximity sensor is configured to turn off the display panel 641 and/or backlight when, for example, the apparatus 600 moves near the user’s ear.
  • a motion sensor can be, for example, an acceleration transducer that can measure acceleration in each direction (e.g., 3-axis directions), measure the magnitude and direction of gravity when stationary, and be used in applications for recognizing the posture of the apparatus 600 (e.g., switching between landscape and portrait orientation).
  • the apparatus 600 can also include other sensory devices such as, for example, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and/or the like.
  • the audio circuit 660, the speaker 661 and the microphone 662 collectively provide an audio interface between the user and the apparatus 600.
  • the audio circuit 660 transmits an electric signal converted from audio data to the speaker 661, where the electric signal is converted and output as an acoustical signal by the speaker 661.
  • the microphone 662 converts a collected acoustical signal into an electric signal, which is then sent to and converted to audio data by the audio circuit 660.
  • the audio data is sent to the processor 680 for further processing, and then sent to another terminal device through the RF circuit 610 or stored in the memory 620 for further processing.
  • the audio circuit 660 can also include an earplug jack to enable communication between a peripheral headset and the apparatus 600.
  • a speech message spoken by the user can be received through the microphone 662 and the audio circuit 660.
  • a speech message received from another device can be played using the speaker 661 and the audio circuit 660.
  • the Wi-Fi module 670 is configured to enable Wi-Fi communication between the apparatus 600 and other devices or networks.
  • the Wi-Fi module 670 provides the user with wireless access to broadband Internet.
  • the user can use the Wi-Fi connection to, for example, send and receive E-mails, browse web pages, access streaming media, and so on.
  • the apparatus 600 can operate without such a Wi-Fi module or the Wi-Fi functionality.
  • the processor 680 functions as a control center of the apparatus 600.
  • the processor 680 is configured to operatively connect each component of the apparatus 600 using various interfaces and circuits.
  • the processor 680 is configured to execute the various functions of the apparatus 600 and to perform data processing by operating and/or executing the software programs and/or modules stored in the memory 620 and using the data stored in the memory 620.
  • the processor 680 can include one or more processing cores.
  • an application processor and a modem processor can be integrated at the processor 680.
  • the application processor is configured to monitor and control the operating system, user interfaces, application programs, and so on.
  • the modem processor is configured to control wireless communication.
  • the power supply 690 is used to provide power for the various components of the apparatus 600.
  • the power supply 690 can be, for example, a battery.
  • the power supply 690 can be operatively coupled to the processor 680 via a power management system that controls charging, discharging, power consumption, and/or other functions related to power management.
  • the power supply 690 can include one or more DC and/or AC power source, recharging system, power failure detection circuit, power converter or inverter, power supply status indicator, and/or the like.
  • although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.
  • first ranking criteria could be termed second ranking criteria, and, similarly, second ranking criteria could be termed first ranking criteria, without departing from the scope of the present application.
  • First ranking criteria and second ranking criteria are both ranking criteria, but they are not the same ranking criteria.
  • the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting, ” that a stated condition precedent is true, depending on the context.
  • the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.
  • stages that are not order dependent may be reordered and other stages may be combined or broken out. While some reordering or other groupings are specifically mentioned, others will be apparent to those of ordinary skill in the art, so the ordering and grouping presented herein is not an exhaustive list of alternatives. Moreover, it should be recognized that the stages could be implemented in hardware, firmware, software or any combination thereof.


Abstract

An apparatus including one or more processors and memory storing one or more programs is disclosed. The programs include instructions for receiving information of a slide operation performed on a user interface of the apparatus, the slide operation having a path from a starting point to an end point. The programs include instructions for determining a set of objects displayed in the user interface based on the information and location information of the objects, where each object from the set of objects is associated with a portion of the path. The programs include instructions for changing a selection status of a first object from the set of objects from a first selection status to a second selection status, where the first object is associated with the starting point, and updating a selection status of each remaining object such that the updated selection status of each remaining object is the second selection status.

Description

METHOD AND APPARATUS FOR SELECTING OBJECTS 
RELATED APPLICATION
This application claims priority to Chinese Patent Application Serial No. 201310518421.9, entitled “Method and Apparatus for Selecting Objects, ” filed October 28, 2013, which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
The present application generally relates to the field of computer technology, and more particularly to a method and related apparatus for selecting objects displayed in a user interface.
BACKGROUND
With the popularity and wide use of mobile terminal devices, more and more application programs are developed for mobile terminal devices, especially mobile terminal devices equipped with a touchscreen. Some application programs developed for mobile terminal devices require a user to select objects displayed in a user interface (e.g., using the user’s finger). According to some known object-selecting methods, in order to select multiple objects, a user is typically required to perform multiple selections to select each object individually. However, due to the limited distances between the multiple objects displayed in the user interface, such known object-selecting methods are difficult to perform and prone to mistakes, especially on mobile terminal devices equipped with a touchscreen. As a result, the known object-selecting methods potentially prolong the selection process for the user and increase power consumption for the mobile terminal devices.
Thus, a need exists for a method and related apparatus for efficiently and accurately selecting multiple objects displayed in a user interface of a mobile terminal device.
SUMMARY
The above deficiencies associated with the known object-selecting methods may be reduced or eliminated by the techniques described herein.
In some embodiments, an apparatus includes one or more processors, and memory storing one or more programs to be executed by the one or more processors. The one or more programs include instructions for receiving information of a first slide operation performed on a user interface of the apparatus. The information of the first slide operation indicates a path of the first slide operation from a starting point to an end point on the user interface. The one or more programs include instructions for determining a set of objects from a group of objects displayed in the user interface based on the information of the first slide operation and location information of the group of objects. Each object from the set of objects is associated with at least a portion of the path of the first slide operation.
The one or more programs also include instructions for changing a selection status of a first object from the set of objects from a first selection status to a second selection status. The first selection status and the second selection status are different. The first object is associated with the starting point of the first slide operation. In some instances, the selection status of each object from the group of objects is either the first selection status or the second selection status. The one or more programs further include instructions for updating a selection status of each remaining object from the set of objects such that the updated selection status of each remaining object is the second selection status.
In some instances, the information of the first slide operation includes coordinates of the path of the first slide operation on the user interface. In such instances, the instruction for determining includes determining the set of objects based on whether the coordinates correspond to the location of the set of objects displayed in the user interface. In some instances, the information of the first slide operation includes at least one coordinate of the starting point and at least one coordinate of the end point, where the at least one coordinate of the starting point corresponds to the location of the first object, and the at least one coordinate of the end point corresponds to the location of an object (the first object or another object) from the set of objects.
In some instances, the instruction for determining includes determining the set of objects based on 1) whether vertical coordinates of the path of the first slide operation  correspond to the location of the set of objects independent of horizontal coordinates of the path of the first slide operation, or 2) whether horizontal coordinates of the path of the first slide operation correspond to the location of the set of objects independent of vertical coordinates of the path of the first slide operation.
In some instances, the one or more programs include instructions for receiving information of a second slide operation performed on the user interface. The information of the second slide operation indicates a path of the second slide operation from a starting point of the second slide operation to an end point of the second slide operation on the user interface. In such instances, the one or more programs include instructions for determining, based on the information of the second slide operation, whether the starting point of the second slide operation corresponds to an object from the group of objects and whether the end point of the second slide operation corresponds to an object from the group of objects.
Furthermore, the one or more programs include instructions for, if the starting point of the second slide operation is determined to correspond to an object from the group of objects and the end point of the second slide operation is determined not to correspond to any object from the group of objects, updating the selection status of the group of objects such that the updated selection status of each object from the group of objects is a predefined selection status. In some instances, the predefined selection status is one of the first selection status and the second selection status.
In some instances, the information of the second slide operation includes at least one coordinate of the starting point of the second slide operation and at least one coordinate of the end point of the second slide operation. In such instances, the instruction for determining includes determining whether the at least one coordinate of the starting point of the second slide operation corresponds to the location of objects from the group of objects, and whether the at least one coordinate of the end point of the second slide operation corresponds to the location of objects from the group of objects.
In some embodiments, an apparatus includes one or more processors, and memory storing one or more programs to be executed by the one or more processors. The one or more programs include instructions for receiving information of a touch operation performed on a user interface of the apparatus. The information of the touch operation indicates a number of touch points of the touch operation. The one or more programs include  instructions for determining whether the number of touch points is greater than a threshold number of touch points.
Furthermore, the one or more programs include instructions for, if the number of touch points is determined to be greater than the threshold number, identifying a set of objects from a group of objects displayed in the user interface and then performing a predefined operation on each object from the set of objects without performing the predefined operation on any object excluded from the set of objects. Each object from the set of objects is in a first selection status, and each object excluded from the set of objects is in a second selection status different from the first selection status. In some instances, performing the predefined operation on each object from the set of objects includes changing the location of that object on the user interface. In some instances, the threshold number of touch points is one.
In some instances, the information of the touch operation indicates a location of the touch operation on the user interface. The one or more programs further comprise instructions for, if the number of touch points is determined to be less than or equal to the threshold number, identifying an object from the group of objects based on the location of the touch operation and location information of the group of objects, and then changing the selection status of the identified object. The location of the touch operation is associated with the location of the identified object. In some instances, a horizontal coordinate of the location of the touch operation corresponds to a horizontal coordinate of the identified object, and a vertical coordinate of the location of the touch operation does not correspond to any vertical coordinate of the identified object.
In some embodiments, a non-transitory computer readable storage medium stores one or more programs including instructions for execution by one or more processors. The instructions, when executed by the one or more processors, cause the processors to perform the method for selecting objects displayed in a user interface of an apparatus as described above.
Various advantages of the present application are apparent in light of the descriptions below.
BRIEF DESCRIPTION OF THE DRAWINGS
The aforementioned features and advantages of the present application as well as additional features and advantages thereof will be more clearly understood hereinafter as a result of a detailed description of preferred embodiments when taken in conjunction with the drawings.
FIG. 1 is a flowchart illustrating a method performed at an apparatus for selecting objects displayed in a user interface of the apparatus in accordance with some embodiments.
FIG. 1a is a schematic diagram illustrating a user interface of an apparatus associated with performing the method in FIG. 1.
FIG. 2 is a flowchart illustrating a method performed at an apparatus for selecting objects displayed in a user interface of the apparatus in accordance with some embodiments.
FIG. 2a is a schematic diagram illustrating a user interface of an apparatus associated with performing the method in FIG. 2.
FIG. 3 is a flowchart illustrating a method performed at an apparatus for selecting objects displayed in a user interface of the apparatus in accordance with some embodiments.
FIGS. 3a-3b are schematic diagrams illustrating a user interface of an apparatus associated with performing the method in FIG. 3.
FIG. 4 is a flowchart illustrating a method performed at an apparatus for selecting objects displayed in a user interface of the apparatus in accordance with some embodiments.
FIGS. 5a-5b are block diagrams of apparatuses configured to select objects displayed in a user interface of the apparatus in accordance with some embodiments.
FIG. 6 is a block diagram illustrating components of an apparatus configured to select objects displayed in a user interface of the apparatus in accordance with some embodiments.
Like reference numerals refer to corresponding parts throughout the several views of the drawings.
DESCRIPTION OF EMBODIMENTS
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the subject matter presented herein. But it will be apparent to one skilled in the art that the subject matter may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
In order to make the objectives, technical solutions, and advantages of the present application comprehensible, embodiments of the present application are further described in detail below with reference to the accompanying drawings.
FIG. 1 is a flowchart illustrating a method 100 performed at an apparatus for selecting objects displayed in a user interface of the apparatus in accordance with some embodiments. The apparatus performing the method 100 can be any type of electronic device configured to present, to a user of the apparatus, a user interface where one or more objects are displayed. The user can operate the apparatus to manipulate a position of each object that is displayed in the user interface of the apparatus. In some embodiments, such an apparatus can be, for example, a cellular phone, a smart phone, a mobile Internet device (MID), a personal digital assistant (PDA), a tablet computer, an e-book reader, a laptop computer, a handheld computer, a desktop computer, a wearable device, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, and/or any other personal electronic device. In some embodiments, the apparatus can be referred to as, for example, a client device, a mobile device, a user device, a terminal device, a portable device, and/or the like.
In some embodiments, the apparatus includes a display device configured to display the user interface including visual representations (e.g. , an icon, an image, a figure, a symbol, etc. ) of one or more objects. Such a display device can be any device configured to  display a user interface such as, for example, a screen, a monitor, a touch screen, a projector, and/or the like.
In some embodiments, when an application program (e.g. , a software application) of the apparatus is executed at the apparatus, the apparatus is configured to display the visual representations of the objects on the user interface. Such an application program can be, for example, a game (e.g. , a card game, a board game, etc. ) or any other type of application. Such an object can be, for example, an icon, a list, a figure, an image, a picture, a text block, a table, and/or the like. For example, as shown in FIGS. 1a, 2a, 3a and 3b described below, the application program can be a card game and the objects can be cards used in the card game.
In some embodiments, the apparatus performing the method 100 can be operatively connected to and communicate with other network devices (e.g., another apparatus similar to the apparatus performing the method 100) via, for example, one or more networks (e.g., the Internet). Structure, modules and components of the apparatus are further shown and described below with respect to FIGS. 5a, 5b and 6. In some embodiments, the user operating the apparatus can be any person (potentially) interested in using the application program. Such a person can be, for example, a player of a card game.
The apparatus performing the method 100 includes one or more processors and memory. In some embodiments, the method 100 is implemented using instructions or code of the application program that are stored in a non-transitory computer readable storage medium of the apparatus and executed by the one or more processors of the apparatus. In such embodiments, the application program is associated with displaying and manipulating visual representations of objects on the user interface of the apparatus. Instructions of the application program are stored in and/or executed at the apparatus. As a result of such an application program being executed, the method 100 is performed to manipulate the visual representations of the objects (simplified as “object” herein) displayed in the user interface. As shown in FIG. 1, the method 100 includes the following steps.
At S101, the apparatus receives information of a slide operation performed on the user interface of the apparatus. The received information of the slide operation indicates a path of the slide operation from a starting point to an end point on the user interface. In some embodiments, such a slide operation can be performed by the user using, for example, a  finger of the user, an input device of the apparatus (e.g. , a mouse, a keyboard, a touch pen stylus, etc. ) , or any other suitable input method.
In some embodiments, the slide operation is performed within an operable area of the user interface. Such an operable area of the user interface can include, for example, any area of the user interface that is not occupied by a predefined icon (e.g., with a specific given function) such as, for example, a button to submit a result, a button for going to a next page, a text message block showing an instruction, and/or the like. In some embodiments, the operable area of the user interface includes any area where an object being operated can potentially be. In some embodiments, the operable area of the user interface is defined as any area of the user interface on which an operation (e.g., a slide operation, a touch operation, etc.) is treated as a valid operation on the objects being operated.
In some embodiments, location information of an operation (e.g., a touch operation, a slide operation, etc.) performed by the user on the user interface is collected by, for example, one or more monitoring instructions or commands executed at the apparatus. For example, for an iPhone device functioning as the apparatus, UITouches can be used to determine coordinates of a touch point on the user interface. Alternatively, UIGestureRecognizer from the developer toolkit (the iPhone SDK) can also be used to capture location information of a touch point or an operation. For another example, for an Android device, MotionEvent can be used to obtain location information of a touch point or an operation.
To be more specific, for example, when UITouches is used, the operating system of the apparatus (e.g. , an iPhone device) is enabled to capture the following four types of UITouches events (i.e. , touch operation on the user interface) :
1) (void) touchesBegan: (NSSet *) touches withEvent: (UIEvent *) event, which is triggered by the initial contact (e.g. , of a finger) with the screen (e.g. , a touch screen) , and as a result, a starting point of the operation is determined;
2) (void) touchesMoved: (NSSet *) touches withEvent: (UIEvent *) event, which is triggered by a slide operation (e.g. , continuous and moving contact with the screen) performed on the user interface;
3) (void) touchesEnded: (NSSet *) touches withEvent: (UIEvent *) event, which is triggered by disengagement (e.g. , of a finger) of contact with the screen (e.g. , a touch screen) , and as a result, an end point of the operation is determined; and
4) (void) touchesCancelled: (NSSet *) touches withEvent: (UIEvent *) event, which is triggered by sliding (e.g. , of a finger) out of the screen (e.g. , a touch screen) or the operable area of the user interface, and as a result, an end point of the operation is determined.
Additionally, information about the number of touch points that occurred during the operation can be captured and recorded. Such information can be used to determine, for example, whether the operation is a single-touch-point operation (e.g., a touch by a single finger) or a multiple-touch-point operation (e.g., touches by two fingers). As shown and described below with respect to FIG. 4, different methods can be performed based on the number of touch points of an operation performed on the user interface.
For another example, when UIGestureRecognizer is used (e.g., on an iPhone device), a subclass of UIGestureRecognizer (e.g., UISwipeGestureRecognizer) can be instantiated and attached to the user interface to capture information of operations (particularly, slide operations) performed by the user on the user interface. The following four types of events can be captured by analyzing state value(s) of the UISwipeGestureRecognizer object:
1) a UIGestureRecognizerStateBegan event, which is triggered by the initial contact (e.g. , of a finger) with the screen (e.g. , a touch screen) as part of a slide operation performed on the user interface;
2) a UIGestureRecognizerStateChanged event, which is triggered by a slide operation (e.g. , continuous and moving contact with the screen) performed on the user interface;
3) a UIGestureRecognizerStateEnded event, which is triggered by disengagement (e.g. , of a finger) of contact with the screen (e.g. , a touch screen) ; and
4) a UIGestureRecognizerStateCancelled event, which is triggered by sliding (e. g., of a finger) out of the screen (e.g. , a touch screen) or the operable area of the user interface.
Subsequently, each analyzed state value of the UISwipeGestureRecognizer object can be processed to determine location information of the touch point (s) .
For yet another example, when MotionEvent is used, the operating system of the apparatus (e.g. , an Android device) is enabled to capture the following four types of MotionEvent events (i.e. , touch operation on the user interface) :
1) a MotionEvent.ACTION_DOWN event, which is triggered by the initial contact (e.g., of a finger) with the screen (e.g., a touch screen), and as a result, a starting point of the operation is determined;
2) a MotionEvent.ACTION_MOVE event, which is triggered by a slide operation (e.g., continuous and moving contact with the screen) performed on the user interface;
3) a MotionEvent.ACTION_UP event, which is triggered by disengagement (e.g., of a finger) of contact with the screen (e.g., a touch screen), and as a result, an end point of the operation is determined; and
4) a MotionEvent.ACTION_CANCEL event, which is triggered by sliding (e.g., of a finger) out of the screen (e.g., a touch screen) or the operable area of the user interface, and as a result, an end point of the operation is determined.
Additionally, similar to the case of UITouches, the getPointerCount() function of the MotionEvent object can be invoked to capture and record the number of touch points that occurred during the operation.
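By way of a non-limiting illustration of the Android approach described above, the following sketch shows how the four MotionEvent actions and the pointer count might be captured in a custom view; the class name CardTouchView and the placement of the handling logic are assumptions made for illustration and are not part of the disclosure.
    import android.content.Context;
    import android.view.MotionEvent;
    import android.view.View;

    // Illustrative sketch only: capture the four MotionEvent actions and the
    // number of touch points described above within a custom Android view.
    public class CardTouchView extends View {

        public CardTouchView(Context context) {
            super(context);
        }

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            int pointerCount = event.getPointerCount(); // number of touch points
            float x = event.getX();                     // horizontal coordinate
            float y = event.getY();                     // vertical coordinate

            switch (event.getActionMasked()) {
                case MotionEvent.ACTION_DOWN:
                    // initial contact: a starting point of the operation is determined
                    break;
                case MotionEvent.ACTION_MOVE:
                    // continuous, moving contact: part of a slide operation
                    break;
                case MotionEvent.ACTION_UP:
                    // disengagement of contact: an end point of the operation is determined
                    break;
                case MotionEvent.ACTION_CANCEL:
                    // contact left the screen or the operable area: treated as an end point
                    break;
            }
            return true; // consume the event so subsequent events keep arriving
        }
    }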
At S102, the apparatus determines a set of objects from a group of objects displayed in the user interface based on the information of the slide operation and location information of the group of objects. Each object from the set of objects is associated with at least a portion of the path of the slide operation. The apparatus can determine the set of objects using various suitable methods based on the information of the slide operation and the location information of the group of objects.
In some embodiments, for example, the information of the slide operation includes coordinates of the path of the slide operation on the user interface. In such embodiments, the apparatus can be configured to determine the set of objects based on whether the coordinates correspond to the location of objects from the set of objects displayed in the user interface. Specifically, for example, the information of the slide operation includes at least one coordinate of the starting point and at least one coordinate of the end point. The apparatus can be configured to determine a first object from the group of objects, whose location corresponds to the at least one coordinate of the starting point.  Similarly, the apparatus can be configured to determine a second object from the group of objects, whose location corresponds to the at least one coordinate of the end point.
Furthermore, in some embodiments, the apparatus can be configured to determine the set of objects from the group of objects based on vertical coordinates of the path of the slide operation corresponding to the location of objects from the set of objects. Such a determination can be independent of horizontal coordinates of the path of the slide operation. Alternatively, the apparatus can be configured to determine the set of objects from the group of objects based on horizontal coordinates of the path of the slide operation corresponding to the location of objects from the set of objects. Such a determination can be independent of vertical coordinates of the path of the slide operation. In some embodiments, the apparatus can be configured to determine the set of objects from the group of objects using both vertical and horizontal coordinates of the path of the slide operation.
As an example, FIG. 1a is a schematic diagram illustrating a user interface 110 of the apparatus associated with performing the method 100 in FIG. 1. As shown in FIG. 1a, a slide operation is performed on a user interface where a group of cards (i.e. , objects) are displayed. The starting point of the slide operation has a horizontal coordinate X2 and a vertical coordinate (not shown in FIG. 1a) . Similarly, the end point of the slide operation has a horizontal coordinate X3 and a vertical coordinate (not shown in FIG. 1a) . Thus, the slide operation has a range of horizontal coordinates [X2, X3] .
In this example, the determination of a set of cards from the group of cards is based on the horizontal coordinates of the slide operation only, independent of the vertical coordinates of the slide operation. Specifically, the horizontal coordinates of each card are compared to the range [X2, X3]. If a horizontal coordinate of a card falls within the range [X2, X3], the card is selected by the slide operation and is thus included in the set of cards. Otherwise, if no horizontal coordinate of a card falls within the range [X2, X3], the card is not selected by the slide operation and is thus excluded from the set of cards.
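A minimal sketch of this horizontal-only test, under the assumption that each card exposes its left-most horizontal coordinate and its width, might look as follows; the Card type and its field names are hypothetical and are not taken from the disclosure.
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical card record used only for illustration.
    class Card {
        float leftX;      // left-most horizontal coordinate of the card
        float width;      // horizontal distance occupied by the card
        boolean selected; // current selection status
    }

    class SlideSelection {
        // Return the cards whose horizontal extent overlaps the slide range
        // [x2, x3]; vertical coordinates are ignored, as in the example above.
        static List<Card> cardsInRange(List<Card> cards, float x2, float x3) {
            float lo = Math.min(x2, x3);
            float hi = Math.max(x2, x3);
            List<Card> hit = new ArrayList<>();
            for (Card c : cards) {
                if (c.leftX <= hi && c.leftX + c.width >= lo) {
                    hit.add(c);
                }
            }
            return hit;
        }
    }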
Returning to FIG. 1, at S103, the apparatus changes a selection status of at least one object from the set of objects. In some embodiments, at any given time each object is in one of multiple selection states. For example, each object can be either in a “selected” state or an “unselected” state. The selection status of each object can be changed from one to the other.
In some embodiments, the apparatus changes a selection status of the first object (corresponding to the starting point of the slide operation) from the set of objects from a first selection status (e.g., “unselected”) to a second selection status (e.g., “selected”). The apparatus then updates a selection status of each remaining object from the set of objects such that the updated selection status of each remaining object is the second selection status. In other words, the apparatus changes the selection status of the first object (i.e., the object corresponding to the starting point of the slide operation), and updates the selection status of each remaining object from the set of objects to make them identical to the updated selection status of the first object.
In some embodiments, the apparatus can change the selection statuses of the set of objects according to any other suitable rules. For example, when the selection status of each object from the set of objects is “unselected, ” the apparatus changes all of them to be “selected. ” Similarly, when the selection status of each object from the set of objects is “selected, ” the apparatus changes all of them to be “unselected. ”
For another example, when a portion of the set of objects are “unselected” and the remaining objects from the set of objects are “selected, ” the apparatus changes the selection status of each “unselected” object to be “selected, ” and changes the selection status of each “selected” object to be “unselected. ”
For yet another example, regardless of the initial selection status of the set of objects, the apparatus updates the selection status of each object from the set of objects such that the updated selection status of each object from the set of objects is a predefined selection status (e.g. , “unselected” or “selected” ) . In this scenario, it is possible that the apparatus does not change the selection status of any object from the set of objects.
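The first of the rules above (toggle the object at the starting point of the slide and propagate its new status to the remaining objects on the path) can be sketched as follows; representing selection statuses as booleans is an assumption made purely for illustration.
    import java.util.List;

    class SelectionUpdate {
        // Sketch of one rule described above: flip the status of the first object
        // (the one at the starting point of the slide) and set every remaining
        // object on the path to that same status. true = "selected".
        static void applySlide(List<Boolean> statusesOnPath) {
            if (statusesOnPath.isEmpty()) {
                return;
            }
            boolean newStatus = !statusesOnPath.get(0);
            for (int i = 0; i < statusesOnPath.size(); i++) {
                statusesOnPath.set(i, newStatus);
            }
        }
    }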
FIG. 2 is a flowchart illustrating a method 200 performed at an apparatus for selecting objects displayed in a user interface of the apparatus in accordance with some embodiments. The apparatus performing the method 200 can be structurally and functionally similar to the apparatus performing the method 100 as described above with respect to FIG. 1. Particularly, the apparatus performing the method 200 can include one or more processors and memory.
In some embodiments, the method 200 is implemented using instructions or code of an application program that are stored in a non-transitory computer readable storage medium of the apparatus and executed by the one or more processors of the apparatus. In such embodiments, the application program is associated with displaying and manipulating visual representations of objects on the user interface of the apparatus. Instructions of the application program are stored in and/or executed at the apparatus. As a result of such an application program being executed, the method 200 is performed to manipulate the objects displayed in the user interface. As shown in FIG. 2, the method 200 includes the following steps.
At S201, the apparatus receives information of a touch operation performed on the user interface of the apparatus. The information of the touch operation includes location information of the touch operation. Similar to the operations of S102 of the method 100 shown and described with respect to FIG. 1, the information of the touch operation can include, for example, coordinates of the touch point (s) of the touch operation.
At S202, the apparatus identifies an object from a group of objects displayed in the user interface based on the location information of the touch operation and location information of the group of objects. Similar to the operations of S102 of the method 100 shown and described with respect to FIG. 1, the apparatus can identify the object from the group of objects based on the location information of the group of objects and coordinates of the touch point(s) of the touch operation. In some embodiments, the identification can be based on both vertical and horizontal coordinates of the touch point(s). In some other embodiments, the identification can be based on only one of the vertical coordinate and the horizontal coordinate of the touch point(s), independent of the other coordinate of the touch point(s).
As an example, FIG. 2a is a schematic diagram illustrating a user interface 210 of the apparatus associated with performing the method in FIG. 2. As shown in FIG. 2a, a group of cards (i.e. , objects) are displayed roughly in a row at the bottom of the user interface 210. Each card corresponds to a range of horizontal coordinates. Specifically, the left-most horizontal coordinate for the first card (from left to right) is X, and the horizontal distance occupied by each card is L. Thus, the range of horizontal coordinates corresponding to the first card is [X, X+L) , the range of horizontal coordinates corresponding to the second card is  [X+L, X+2L) , the range of horizontal coordinates corresponding to the third card is [X+2L, X+3L) , etc.
Accordingly, in response to determining the horizontal coordinate of the touch operation as X1 (regardless of the vertical coordinate of the touch operation) , the apparatus can identify which card corresponds to the touch operation by determining which range of horizontal coordinates includes X1. For example, if X+ (A-1) *L ≤ X1 < X+A*L, then the touch operation corresponds to the A-th card from the left.
In some embodiments, to expedite the calculation, the range of horizontal coordinates corresponding to the horizontal axis of the user interface 210 can be divided into a group of small ranges, each of which corresponds to a card from the group of cards displayed in the user interface 210 or a void space (i.e. , associated with no card) . Thus, the apparatus can identify the card corresponding to the touch operation by determining which small range the horizontal coordinate of the touch operation falls into.
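The arithmetic in this example reduces to an index computation, sketched below; the 0-based return value and the parameter names are assumptions made for illustration only.
    class TouchHitTest {
        // Map the horizontal coordinate x1 of a touch to a card index for a row
        // of equally wide cards: the A-th card (counting from 1, left to right)
        // covers [X + (A-1)*L, X + A*L). Returns a 0-based index, or -1 if the
        // touch falls outside the row.
        static int cardIndexAt(float x1, float rowLeftX, float cardWidth, int cardCount) {
            if (x1 < rowLeftX) {
                return -1;
            }
            int index = (int) Math.floor((x1 - rowLeftX) / cardWidth);
            return index < cardCount ? index : -1;
        }
    }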
At S203, the apparatus determines a selection status of the identified object and changes the selection status of the identified object. Similar to the operations of S103 of the method 100 shown and described above with respect to FIG. 1, by changing the selection status of the identified object, the apparatus can select an “unselected” object or unselect a “selected” object.
FIG. 3 is a flowchart illustrating a method 300 performed at an apparatus for selecting objects displayed in a user interface of the apparatus in accordance with some embodiments. The apparatus performing the method 300 can be structurally and functionally similar to the apparatus performing the method 100 as described above with respect to FIG. 1. Particularly, the apparatus performing the method 300 can include one or more processors and memory.
In some embodiments, the method 300 is implemented using instructions or code of an application program that are stored in a non-transitory computer readable storage medium of the apparatus and executed by the one or more processors of the apparatus. In such embodiments, the application program is associated with displaying and manipulating visual representations of objects on the user interface of the apparatus. Instructions of the application program are stored in and/or executed at the apparatus. As a result of such an application program being executed, the method 300 is performed to manipulate the objects  displayed in the user interface. As shown in FIG. 3, the method 300 includes the following steps.
At S301, the apparatus receives information of an operation performed on the user interface of the apparatus. The information of the operation includes location information of the operation. The operations are similar to operations of S101 of the method 100 and S201 of the method 200 shown and described above with respect to FIGS. 1 and 2. In some embodiments, the location information of the operation includes coordinates of the touch point (s) of the operation. The operation can be, for example, a slide operation, a touch operation, or any other type of operation performed on the user interface by a user of the apparatus.
At S302, the apparatus determines, based on the location information of the operation, whether the operation is performed within a first predefined area of the user interface or a second predefined area of the user interface. In some embodiments, the second predefined area is defined to be an area of the user interface where the objects are displayed; and the first predefined area is defined to include all other operable area excluding the second predefined area.
In some embodiments, various types of operations are accepted within each different area of the user interface. In such embodiments, when an operation not accepted in an area is performed within that area, that operation is not accepted by the apparatus. That is, the apparatus does not recognize the operation or respond to the operation. For example, touch operations but not slide operations are accepted within the first predefined area (i.e., the operable area where the objects are not displayed), while slide operations but not touch operations are accepted within the second predefined area (i.e., the area where the objects are displayed).
In some embodiments, when an operation is detected to be within the first predefined area, the operation is treated as a touch operation but not a slide operation; and when an operation is detected to be within the second predefined area, the operation is treated as a slide operation but not a touch operation. As a result, a slide operation performed in the first predefined area is treated as a touch operation, where the starting point of the slide operation is treated as the only touch point of the touch operation. Similarly, a touch  operation performed in the second predefined area is treated as a slide operation having a short path.
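One possible way to realize this area-based dispatch is sketched below; the vertical bounds of the object-display area and the handler names are hypothetical, and an actual implementation could equally use rectangles supplied by the layout.
    class AreaDispatch {
        // Hypothetical vertical bounds of the second predefined area (the strip
        // of the user interface where the objects are displayed).
        static final float OBJECT_AREA_TOP = 800f;
        static final float OBJECT_AREA_BOTTOM = 1000f;

        // Operations whose starting point lies inside the object area are treated
        // as slide operations (S304); all other operable-area operations are
        // treated as touch operations whose only touch point is the starting
        // point (S303), as described above.
        static void dispatch(float startX, float startY, float endX) {
            boolean inObjectArea = startY >= OBJECT_AREA_TOP && startY <= OBJECT_AREA_BOTTOM;
            if (inObjectArea) {
                handleSlide(startX, endX);
            } else {
                handleTouch(startX);
            }
        }

        static void handleSlide(float startX, float endX) { /* determine a set of objects */ }
        static void handleTouch(float x) { /* identify a single object */ }
    }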
If the apparatus determines that the operation is performed within the first predefined area of the user interface, at S303, the apparatus receives information of a touch operation, and identifies an object from a group of objects displayed in the user interface based on location information of the touch operation. The operations are similar to operations of S201-S202 of the method 200 shown and described above with respect to FIG. 2.
Otherwise, if the apparatus determines that the operation is performed within the second predefined area of the user interface, at S304, the apparatus receives information of a slide operation, and identifies a set of objects from the group of objects based on location information of the slide operation. The operations are similar to operations of S101-S102 of the method 100 shown and described above with respect to FIG. 1.
At S305, the apparatus updates a selection status of each identified object accordingly. Specifically, if the apparatus determines that the operation is performed within the first predefined area of the user interface (at S302) and receives information of a touch operation accordingly (at S303) , the apparatus can perform operations similar to operations of S203 of the method 200 shown and described above with respect to FIG. 2. Otherwise, if the apparatus determines that the operation is performed within the second predefined area of the user interface (at S302) and receives information of a slide operation accordingly (at S304) , the apparatus can perform operations similar to operations of S103 of the method 100 shown and described above with respect to FIG. 1.
As an example, FIGS. 3a-3b are schematic diagrams illustrating a user interface 310 of the apparatus associated with performing the method 300 in FIG. 3. As shown in FIGS. 3a-3b, a group of cards are displayed within an area 330 at the bottom of the user interface 310. Thus, the area 330 is the second predefined area. The first predefined area is other operable area in the user interface 310 excluding the other items displayed in the user interface 310.
As shown in FIG. 3a, an operation is performed within the first predefined area. Thus, the operation is received as a touch operation, and a card from the group of cards is identified based on the location of the touch operation. For example, as described above  with respect to S202 of the method 200 in FIG. 2, the apparatus can identify the card from the group of cards based on the horizontal coordinate of the touch operation falling within a range of horizontal coordinates corresponding to the location of that card. The apparatus then changes the selection status of the identified card.
As shown in FIG. 3b, an operation is performed within the second predefined area (i.e. , the area 330) . Thus, the operation is received as a slide operation, and a set of cards are determined from the group of cards based on location information of the slide operation. For example, as described above with respect to S102 of the method 100 in FIG. 1, the apparatus can determine the set of cards from the group of cards based on the horizontal coordinates of the slide operation falling within a range of horizontal coordinates corresponding to the collective location of the set of cards. The apparatus then updates the selection status of the set of cards in accordance with a predefined rule, as described with respect to S103 of the method 100 in FIG. 1.
In some embodiments, although not shown in FIGS. 3a and 3b, a slide operation can start from a starting point within the second predefined area (i.e. , the area where the group of objects are displayed) and end at an end point within the first predefined area (i.e. , the operable area where the group of objects are not displayed) . For example, the apparatus can determine, based on location information of the slide operation and location information of the group of objects, whether the starting point corresponds to location of an object from the group of objects and whether the end point corresponds to location of an object from the group of objects. Specifically, in some embodiments, the apparatus can determine whether at least one coordinate of the starting point corresponds to the location of any object from the group of objects, and whether at least one coordinate of the end point corresponds to the location of any object from the group of objects. The apparatus can determine that the starting point corresponds to location of an object from the group of objects (indicating the starting point is within the second predefined area) , while the end point does not correspond to any object from the group of objects (indicating the end point is within the first predefined area) .
In such embodiments, as a result of the slide operation crossing the border between the first predefined area and the second predefined area, the apparatus can, for example, optionally identify a set of objects from the group of objects displayed in the user interface and perform a predefined operation on each object from the set of objects or each  object from the group of objects. For example, the apparatus can update the selection status of the group of objects such that the updated selection status of each object from the group of objects is a predefined selection status (e.g. , “unselected” ) . That is, in a card game, by detecting a slide operation crossing the border between the first predefined area and the second predefined area, the apparatus can cancel each “selected” card and make all the cards “unselected. ”
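A compact sketch of this border-crossing behavior is given below; representing the statuses as booleans and passing pre-computed hit-test results for the starting and end points are simplifying assumptions.
    import java.util.List;

    class CrossBorderReset {
        // If the slide starts on an object (second predefined area) but its end
        // point corresponds to no object (first predefined area), reset every
        // object to the predefined status, here "unselected" (false).
        static void onSlideEnded(boolean startOnObject, boolean endOnObject,
                                 List<Boolean> selectionStatuses) {
            if (startOnObject && !endOnObject) {
                for (int i = 0; i < selectionStatuses.size(); i++) {
                    selectionStatuses.set(i, false);
                }
            }
        }
    }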
FIG. 4 is a flowchart illustrating a method 400 performed at an apparatus for selecting objects displayed in a user interface of the apparatus in accordance with some embodiments. The apparatus performing the method 400 can be structurally and functionally similar to the apparatus performing the method 100 as described above with respect to FIG. 1. Particularly, the apparatus performing the method 400 can include one or more processors and memory.
In some embodiments, the method 400 is implemented using instructions or code of an application program that are stored in a non-transitory computer readable storage medium of the apparatus and executed by the one or more processors of the apparatus. In such embodiments, the application program is associated with displaying and manipulating visual representations of objects on the user interface of the apparatus. Instructions of the application program are stored in and/or executed at the apparatus. As a result of such an application program being executed, the method 400 is performed to manipulate the objects displayed in the user interface. As shown in FIG. 4, the method 400 includes the following steps.
At S401, the apparatus receives information of a first touch operation performed on the user interface of the apparatus. The information of the first touch operation includes location information of the first touch operation. The first touch operation can include a single touch point (e.g. , contact by a single finger of a user with a screen of the apparatus) . The operations are similar to operations of S101 of the method 100 and S201 of the method 200 shown and described above with respect to FIGS. 1 and 2. In some embodiments, the location information of the first touch operation includes coordinates of the single touch point of the first touch operation. In some embodiments, the information of the first touch operation includes information indicating the number of touch points of the first touch operation.
At S402, the apparatus identifies an object from a group of objects displayed in the user interface based on the location information of the first touch operation and location information of the group of objects. At S403, the apparatus determines a selection status of the identified object and changes the selection status of the identified object. The operations of S402-S403 are similar to operations of S202-S203 of the method 200 shown and described above with respect to FIG. 2.
Particularly, in some embodiments, the apparatus can identify the object from the group of objects based on only one of the vertical coordinates and the horizontal coordinates of the group of objects and the location information of the first touch operation. For example, the apparatus can identify the object such that a horizontal coordinate of the location of the touch operation corresponds to a horizontal coordinate of the identified object, while a vertical coordinate of the location of the touch operation does not correspond to any vertical coordinate of the identified object.
At S404, the apparatus receives information of a second touch operation performed on the user interface. The information of the second touch operation indicates a number of touch points of the second touch operation. At S405, the apparatus determines whether the number of touch points of the second touch operation is greater than a threshold number of touch points. In some embodiments, such a threshold number can be, for example, one. Thus, the apparatus determines whether the number of touch points of the second touch operation is one or more than one.
In some embodiments, a touch operation can be determined to have multiple touch points in various ways. For example, a touch operation can be determined to have multiple touch points (e.g., two touch points) if multiple spatially-separate contacts are made with the screen (e.g., touch screen) of the apparatus at (substantially) the same time. For instance, two fingers of a user contact the screen at the same time. For another example, a touch operation can be determined to have multiple touch points (e.g., two touch points) if multiple temporally-separate contacts (spatially-separate or not) are made with the screen (e.g., touch screen) of the apparatus within a very short time period. For instance, a finger of a user contacts the screen twice within 0.5 seconds.
In some embodiments, if the number of touch points of the second touch operation is equal to or less than the threshold number, the apparatus performs the operations  of S402-S403 described above accordingly. For example, the apparatus can identify an object from the group of objects based on the location of the second touch operation and location information of the group of objects (e.g. , the location of the second touch operation being associated with the location of the identified object) . The apparatus can then change the selection status of the identified object.
At S406, if the number of touch points of the second touch operation is greater than the threshold number, the apparatus identifies a set of objects from the group of objects displayed in the user interface and performs a predefined operation on each object from the set of objects. In some embodiments, the apparatus can identify the set of objects based on, for example, the selection status of the set of objects. For example, the apparatus can include each “unselected” object from the group of objects into the set of objects, and then change their selection status to “selected.” For another example, the apparatus can include each “selected” object from the group of objects into the set of objects, and then change their selection status to “unselected.”
In some embodiments, the predefined operation includes changing the location of the set of objects on the user interface. For example, in a card game application, the apparatus can include each “selected” card from a group of cards into a set of cards, and then change the location of each card from the set of cards on the user interface to a location farther away from the unselected cards, which indicates that each selected card is played out from the group of cards.
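Combining the touch-point threshold of S405 with the card-game example above might look roughly as follows; the CardView type, the threshold value and the vertical offset used to move a played card are all illustrative assumptions.
    import java.util.List;

    class MultiTouchPlay {
        static final int TOUCH_POINT_THRESHOLD = 1; // e.g., a threshold of one touch point
        static final float PLAY_OFFSET = 120f;      // hypothetical upward offset in pixels

        // Hypothetical object holding a selection status and a vertical position.
        static class CardView {
            boolean selected;
            float y;
        }

        // If the touch operation has more touch points than the threshold, perform
        // the predefined operation on every selected card: here, move it upward on
        // the user interface to indicate it has been played out of the group.
        static void onTouch(int touchPointCount, List<CardView> cards) {
            if (touchPointCount <= TOUCH_POINT_THRESHOLD) {
                return; // handled as a single-point touch elsewhere (S402-S403)
            }
            for (CardView c : cards) {
                if (c.selected) {
                    c.y -= PLAY_OFFSET;
                }
            }
        }
    }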
FIG. 5a is a block diagram of an apparatus 510 configured to select objects displayed in a user interface of the apparatus 510 in accordance with some embodiments. The apparatus 510 can be structurally and functionally similar to the apparatuses described above with respect to FIGS. 1-4. Similar to the apparatuses described with respect to FIGS. 1-4, the apparatus 510 can be configured to perform the methods 100-400 to select and manipulate objects displayed in the user interface of the apparatus 510.
As shown in FIG. 5a, the apparatus 510 includes a first receive module 501, a first identifying module 502, a first update module 503, a determination module 507 and an execution module 508. In some embodiments, each module included in the apparatus 510 can be a hardware-based module (e.g., a digital signal processor (DSP), a field programmable gate array (FPGA), etc.), a software-based module (e.g., a module of computer code executed at a processor, a set of processor-readable instructions executed at a processor, etc.), or a combination of hardware and software modules. Instructions or code of each module can be stored in a memory of the apparatus 510 (not shown in FIG. 5a) and executed at a processor (e.g., a CPU) of the apparatus 510 (not shown in FIG. 5a).
Specifically, the first receive module 501 is configured to, among other functions, receive information of a slide operation performed on the user interface of the apparatus 510. In some embodiments, as described above, such a slide operation can be performed within a first area of the user interface for displaying a group of objects, within a second area of the user interface mutually exclusive from the first area, or crossing the border between the first and second areas. In some embodiments, the first receive module 501 is configured to determine location information of the slide operation, such as coordinates of a starting point, an end point, and a path of the slide operation. In some embodiments, the first receive module 501 is configured to perform operations of S101 of the method 100 shown and described above with respect to FIG. 1.
The first identifying module 502 is configured to, among other functions, identify a set of objects from the group of objects based on the information (e.g., location information such as coordinates) of the slide operation received at the first receive module 501 and location information of the group of objects. In some embodiments, the first identifying module 502 is configured to perform operations of S102 of the method 100 shown and described above with respect to FIG. 1.
In some embodiments, as shown in FIG. 5a, the first identifying module 502 includes a determination submodule 5021 and a retrieve submodule 5022. In such embodiments, the determination submodule 5021 is configured to, among other functions, determine a range of locations (e.g., a range of coordinates) corresponding to the path of the slide operation. Such a determination can be based on, for example, location information of the starting point, the end point and/or the path of the slide operation.
The retrieve submodule 5022 is configured to, among other functions, identify the set of objects from the group of objects based on the range of locations determined by the determination submodule 5021 and location information of the group of objects. For example, the retrieve submodule 5022 is configured to identify each object from the group of objects whose location falls within the range of locations determined by the determination submodule 5021.
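By way of a non-authoritative sketch, the determination submodule 5021 and the retrieve submodule 5022 might cooperate as follows, assuming a one-dimensional range of horizontal coordinates along the slide path; the function names and the data layout are hypothetical.

```python
def determine_range(path_points):
    """Determination submodule (sketch): derive the range of horizontal
    coordinates covered by the slide path, given as (x, y) points."""
    xs = [x for x, _ in path_points]
    return min(xs), max(xs)


def retrieve_objects(objects, coordinate_range):
    """Retrieve submodule (sketch): return every object whose location falls
    within the range determined from the slide path."""
    low, high = coordinate_range
    return [obj for obj in objects if low <= obj['x'] <= high]


# A slide from x=40 to x=210 identifies the objects located at x=50 and x=150.
path = [(40, 300), (120, 295), (210, 310)]
objects = [{'id': 1, 'x': 50}, {'id': 2, 'x': 150}, {'id': 3, 'x': 260}]
assert [o['id'] for o in retrieve_objects(objects, determine_range(path))] == [1, 2]
```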
The first update module 503 is configured to, among other functions, update the selection status of objects from the set of objects identified by the first identifying module 502. As described above, the first update module 503 can update the selection status of the set of objects in accordance with various rules. In some embodiments, the first update module 503 is configured to perform operations of S103 of the method 100 shown and described above with respect to FIG. 1.
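One of the update rules described in this application (and recited in claim 1) could be sketched as follows; the object representation is the same hypothetical one used in the sketches above.

```python
def update_selection(identified):
    """Flip the selection status of the first identified object (the object
    associated with the starting point of the slide), then give every
    remaining identified object that same new status."""
    if not identified:
        return
    new_status = ('unselected' if identified[0]['status'] == 'selected'
                  else 'selected')
    for obj in identified:
        obj['status'] = new_status
```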
The determination module 507 is configured to, among other functions, determine whether a number of touch points in a touch operation performed on the user interface of the apparatus 510 is greater than a threshold number of touch points. In some embodiments, the determination module 507 is configured to perform operations of S405 of the method 400 shown and described above with respect to FIG. 4.
The execution module 508 is configured to, in accordance with the number of touch points of the touch operation being determined by the determination module 507 to be greater than the threshold number, identify a set of objects from the group of objects, and perform a predefined operation on the set of objects. In some embodiments, the execution module 508 is configured to perform operations of S406 of the method 400 shown and described above with respect to FIG. 4.
FIG. 5b is a block diagram of another apparatus 520 configured to select objects displayed in a user interface of the apparatus in accordance with some embodiments. The apparatus 520 can be (substantially) identical, or structurally and functionally similar, to the apparatus 510 shown and described above with respect to FIG. 5a. Similar to the apparatuses shown and described with respect to FIGS. 1-5a, the apparatus 520 can be configured to perform the methods 100-400 to select and manipulate objects displayed in the user interface of the apparatus 520.
As shown in FIG. 5b, the apparatus 520 includes the same (or substantially the same) modules as the apparatus 510: the determination module 507 and the execution module 508. Additionally, the apparatus 520 includes a second receive module 504, a second identifying module 505, and a second update module 506. Similar to those modules included in the apparatus 510 in FIG. 5a, each additional module included in the apparatus 520 can be a hardware-based module (e.g., a DSP, an FPGA, etc.), a software-based module (e.g., a module of computer code executed at a processor, a set of processor-readable instructions executed at a processor, etc.), or a combination of hardware and software modules. Instructions or code of each module included in the apparatus 520 can be stored in a memory of the apparatus 520 (not shown in FIG. 5b) and executed at a processor (e.g., a CPU) of the apparatus 520 (not shown in FIG. 5b).
Specifically, the second receive module 504 is configured to, among other functions, receive information of a touch operation performed on the user interface of the apparatus 520. In some embodiments, as described above, such a touch operation can be performed within a first area of the user interface for displaying a group of objects or within a second area of the user interface mutually exclusive from the first area. In some embodiments, the second receive module 504 is configured to determine location information of the touch operation, such as coordinates of a touch point of the touch operation. In some embodiments, the touch operation includes a single touch point. In some embodiments, the second receive module 504 is configured to perform operations of S201 of the method 200 shown and described above with respect to FIG. 2.
The second identifying module 505 is configured to, among other functions, identify an object from the group of objects based on the information (e.g., location information such as coordinates) of the touch operation received at the second receive module 504 and location information of the group of objects. In some embodiments, the second identifying module 505 is configured to perform operations of S202 of the method 200 shown and described above with respect to FIG. 2.
The second update module 506 is configured to, among other functions, determine and change a selection status of the object identified by the second identifying module 505. Alternatively, the second update module 506 can update the selection status of the identified object in accordance with any other suitable rule. In some embodiments, the second update module 506 is configured to perform operations of S203 of the method 200 shown and described above with respect to FIG. 2.
FIG. 6 is a block diagram illustrating components of an apparatus 600 configured to select objects displayed in a user interface of the apparatus in accordance with some embodiments. The apparatus 600 can be structurally and functionally similar to the  apparatuses shown and described above with respect to FIGS. 1-5b. Particularly, the components of the apparatus 600 can be collectively configured to perform the methods 100-400 to select and manipulate objects displayed in the user interface of the apparatus 600.
As shown in FIG. 6, the apparatus 600 includes a processor 680, a memory 620, an input unit 630, a display unit 640, a sensor 650, an audio circuit 660, a Wi-Fi (Wireless Fidelity) module 670, a radio frequency (RF) circuit 610 and a power supply 690. In some embodiments, the apparatus 600 can include more or fewer devices, components and/or modules than those shown in FIG. 6. One skilled in the art understands that the structure shown in FIG. 6 does not constitute a limitation for the apparatus 600, which may include more or fewer components than those illustrated in FIG. 6. Furthermore, the components of the apparatus 600 (shown or not shown in FIG. 6) can be combined and/or arranged in ways other than that shown in FIG. 6.
The RF circuit 610 is configured to send and receive data, and in particular, to send uplink data to and/or receive downlink data from a base station. The RF circuit 610 is configured to send the received data to the processor 680 for further processing. The RF circuit 610 can include, for example, one or more of an antenna, an amplifier, a tuner, an oscillator, a subscriber identity module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, etc. The RF circuit 610 is configured to wirelessly communicate with other networks or devices using any suitable wireless communication protocol such as, for example, GSM (Global System for Mobile communication), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), etc.
The memory 620 is configured to store software programs and/or modules. The processor 680 can execute various applications and data processing functions included in the software programs and/or modules stored in the memory 620. The memory 620 includes, for example, a program storage area and a data storage area. The program storage area is configured to store, for example, an operating system and application programs. The data storage area is configured to store data received and/or generated during the use of the apparatus 600 (e.g., location information of an operation, location information of objects). The memory 620 can include one or more of high-speed RAM, non-volatile memory (e.g., a disk storage device or a flash memory device), and/or other solid-state memory devices. In some embodiments, the memory 620 also includes a memory controller configured to provide the processor 680 and the input unit 630 with access to the memory 620.
The input unit 630 is configured to receive input data and signals (e.g., messages) and also generate signals caused by operations and manipulations of input devices such as, for example, a user’s finger, a stylus or touch pen, a keyboard, a mouse, etc. Specifically, the input unit 630 includes a touch panel 631 (e.g., a touch screen, a touchpad) and other input devices 632. The other input devices 632 can include, for example, a physical keyboard, a function key (such as a volume control key, a switch key, etc.), a trackball, a mouse, a joystick, etc.
The touch panel 631 is configured to collect touch operations on or near the touch panel 631 that are performed by a user of the apparatus 600, such as slide operations and touch operations performed by the user using a finger, stylus, touch pen, or any other suitable object or attachment on or near a touch-sensitive surface of the touch panel 631. In some embodiments, the touch panel 631 can optionally include a touch detection apparatus and a touch controller. The touch detection apparatus can detect the direction of the touch operation and signals generated by the touch operation, and then transmit the signals to the touch controller. The touch controller can receive the signals from the touch detection apparatus, convert the signals into contact coordinate data, and then send the contact coordinate data to the processor 680. The touch controller can also receive and execute commands received from the processor 680. The touch panel 631 can be implemented using various types of technologies such as, for example, resistive touch screen, capacitive touch screen, infrared ray touch screen, surface acoustic wave (SAW) touch screen, etc.
The display unit 640 is configured to display information (e.g., objects) on various graphical user interfaces (GUIs) of the apparatus 600. The GUIs can include, for example, graphs, text, icons, video, and/or any combination thereof. The display unit 640 includes a display panel 641, which can be, for example, an LCD (Liquid Crystal Display), an LED (Light-Emitting Diode) display, an OLED (Organic Light-Emitting Diode) display, etc. Furthermore, the touch panel 631 can cover the display panel 641. After a touch operation on or near the touch panel 631 is detected, the touch panel 631 transmits information of the touch operation to the processor 680, where the type and/or other information of the touch operation are determined. The processor 680 sends visual information to the display panel 641 based on the determined type of the touch operation. The visual information is then displayed on the display panel 641. Although shown in FIG. 6 as two separate components for the input and output functions respectively, in other embodiments, the touch panel 631 and the display panel 641 can be integrated into one component for realization of the input and output functions.
The apparatus 600 includes at least one sensor 650 such as, for example, a light sensor, a motion sensor, and/or other types of sensors. A light sensor can be, for example, an ambient light sensor or a proximity sensor. The ambient light sensor is configured to adjust the brightness of the display panel 641 according to the light intensity received at the ambient light sensor. The proximity sensor is configured to turn off the display panel 641 and/or backlight when, for example, the apparatus 600 moves near the user’s ear. A motion sensor can be, for example, an acceleration transducer that can measure acceleration in each direction (e.g., along three axes), measure the magnitude and direction of gravity when stationary, be used in applications for recognition of the posture of the apparatus 600 (e.g., horizontal and vertical screen switching, games, magnetometer posture calibration), be used in applications related to vibration recognition (e.g., pedometer, percussion), and/or the like. Additionally, although not shown in FIG. 6, the apparatus 600 can also include other sensory devices such as, for example, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and/or the like.
The audio circuit 660, the speaker 661 and the microphone 662 collectively provide an audio interface between the user and the apparatus 600. The audio circuit 660 transmits an electric signal converted from audio data to the speaker 661, where the electric signal is converted and output as an acoustical signal by the speaker 661. The microphone 662 converts a collected acoustical signal into an electric signal, which is then sent to and converted into audio data by the audio circuit 660. The audio data is sent to the processor 680 for further processing, and then sent to another terminal device through the RF circuit 610 or stored in the memory 620 for further processing. The audio circuit 660 can also include an earphone jack to enable communication between a peripheral headset and the apparatus 600. In some embodiments, a speech message spoken by the user can be received through the microphone 662 and the audio circuit 660. Similarly, a speech message received from another device can be played using the speaker 661 and the audio circuit 660.
The Wi-Fi module 670 is configured to enable Wi-Fi communication between the apparatus 600 and other devices or networks. For example, the Wi-Fi module 670 provides the user with wireless access to broadband Internet. As a result, the user can use the Wi-Fi connection to, for example, send and receive e-mails, browse web pages, access streaming media, and so on. Although shown in FIG. 6 as including the Wi-Fi module 670, in some other embodiments the apparatus 600 can operate without such a Wi-Fi module or the Wi-Fi functionality.
The processor 680 functions as a control center of the apparatus 600. The processor 680 is configured to operatively connect each component of the apparatus 600 using various interfaces and circuits. The processor 680 is configured to execute the various functions of the apparatus 600 and to perform data processing by operating and/or executing the software programs and/or modules stored in the memory 620 and using the data stored in the memory 620. In some embodiments, the processor 680 can include one or more processing cores. In some embodiments, an application processor and a modem processor can be integrated at the processor 680. The application processor is configured to monitor and control the operating system, user interfaces, application programs, and so on. The modem processor is configured to control wireless communication.
The power supply 690 is configured to provide power for the various components of the apparatus 600. The power supply 690 can be, for example, a battery. The power supply 690 can be operatively coupled to the processor 680 via a power management system that controls charging, discharging, power consumption, and/or other functions related to power management. In some embodiments, the power supply 690 can include one or more DC and/or AC power sources, recharging systems, power failure detection circuits, power converters or inverters, power supply status indicators, and/or the like.
While particular embodiments are described above, it will be understood that it is not intended to limit the present application to these particular embodiments. On the contrary, the present application includes alternatives, modifications and equivalents that are within the spirit and scope of the appended claims. Numerous specific details are set forth in order to provide a thorough understanding of the subject matter presented herein. But it will be apparent to one of ordinary skill in the art that the subject matter may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, first ranking criteria could be termed second ranking criteria, and, similarly, second ranking criteria could be termed first ranking criteria, without departing from the scope of the present application. First ranking criteria and second ranking criteria are both ranking criteria, but they are not the same ranking criteria.
The terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the present application. As used in the description of the present application and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, operations, elements, components, and/or groups thereof.
As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting” that a stated condition precedent is true, depending on the context. Similarly, the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.
Although some of the various drawings illustrate a number of logical stages in a particular order, stages that are not order dependent may be reordered and other stages may be combined or broken out. While some reorderings or other groupings are specifically mentioned, others will be obvious to those of ordinary skill in the art, so the alternatives presented herein are not exhaustive. Moreover, it should be recognized that the stages could be implemented in hardware, firmware, software or any combination thereof.
The foregoing description, for purpose of explanation, has been described with reference to specific implementations. However, the illustrative discussions above are not intended to be exhaustive or to limit the present application to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The implementations were chosen and described in order to best explain principles of the present application and its practical applications, to thereby enable others skilled in the art to best utilize the present application and various implementations with various modifications as are suited to the particular use contemplated. Implementations include alternatives, modifications and equivalents that are within the spirit and scope of the appended claims. Numerous specific details are set forth in order to provide a thorough understanding of the subject matter presented herein. But it will be apparent to one of ordinary skill in the art that the subject matter may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the implementations.

Claims (20)

  1. An apparatus, comprising:
    one or more processors; and
    memory storing one or more programs to be executed by the one or more processors, the one or more programs comprising instructions for:
    receiving information of a slide operation performed on a user interface of the apparatus, the information of the slide operation indicating a path of the slide operation from a starting point to an end point on the user interface;
    determining a set of objects from a plurality of objects displayed in the user interface based on the information of the slide operation and location information of the plurality of objects, each object from the set of objects being associated with at least a portion of the path of the slide operation;
    changing a selection status of a first object from the set of objects from a first selection status to a second selection status different from the first selection status, the first object being associated with the starting point of the slide operation; and
    updating a selection status of each remaining object from the set of objects such that the updated selection status of each remaining object is the second selection status.
  2. The apparatus of claim 1, wherein the information of the slide operation includes coordinates of the path of the slide operation on the user interface, and the instruction for determining includes determining the set of objects based on whether the coordinates correspond to the location of objects from the set of objects displayed in the user interface.
  3. The apparatus of claim 2, wherein the information of the slide operation includes at least one coordinate of the starting point and at least one coordinate of the end point, the at least one coordinate of the starting point corresponding to the location of the first object, the at least one coordinate of the end point corresponding to the location of an object from the set of objects.
  4. The apparatus of claim 2, wherein the determining based on whether the coordinates correspond to the location of objects from the set of objects displayed in the user interface includes determining the set of objects based on:
    1) whether vertical coordinates of the path of the slide operation correspond to the location of objects from the set of objects displayed in the user interface independent of horizontal coordinates of the path of the slide operation, or
    2) whether horizontal coordinates of the path of the slide operation correspond to the location of objects from the set of objects displayed in the user interface independent of vertical coordinates of the path of the slide operation.
  5.  The apparatus of claim 1, wherein the slide operation is a first slide operation, the one or more programs further comprising instructions for:
    receiving information of a second slide operation performed on the user interface, the information of the second slide operation indicating a path of the second slide operation from a starting point of the second slide operation to an end point of the second slide operation on the user interface;
    determining, based on the information of the second slide operation, whether the starting point of the second slide operation corresponds to an object from the plurality of objects and whether the end point of the second slide operation corresponds to an object from the plurality of objects;
    if the starting point of the second slide operation is determined to correspond to an object from the plurality of objects and the end point of the second slide operation is determined not to correspond to any object from the plurality of objects, updating the selection status of the plurality of objects such that the updated selection status of each object from the plurality of objects is a predefined selection status.
  6.  The apparatus of claim 5, wherein the information of the second slide operation includes at least one coordinate of the starting point of the second slide operation and at least one coordinate of the end point of the second slide operation, and the instruction for determining includes determining whether the at least one coordinate of the starting point of the second slide operation corresponds to the location of objects from the plurality of objects, and whether the at least one coordinate of the end point of the second slide operation corresponds to the location of objects from the plurality of objects.
  7.  An apparatus, comprising:
    one or more processors; and 
    memory storing one or more programs to be executed by the one or more processors, the one or more programs comprising instructions for:
    receiving information of a touch operation performed on a user interface of the apparatus, the information of the touch operation indicating a number of touch points of the touch operation;
    determining whether the number of touch points is greater than a threshold number of touch points;
    if the number of touch points is determined to be greater than the threshold number, identifying a set of objects from a plurality of objects displayed in the user interface, and performing a predefined operation on each object from the set of objects without performing the predefined operation on any object excluded from the set of objects, each object from the set of objects being in a first selection status, each object excluded from the set of objects being in a second selection status different from the first selection status.
  8. The apparatus of claim 7, wherein performing the predefined operation on each object from the set of objects includes changing the location of that object on the user interface.
  9. The apparatus of claim 7, wherein the threshold number of touch points is one.
  10. The apparatus of claim 7, wherein the information of the touch operation indicates a location of the touch operation on the user interface, the one or more programs further comprising instructions for:
    if the number of touch points is determined to be less than or equal to the threshold number, identifying an object from the plurality of objects based on the location of the touch operation and location information of the plurality of objects, and changing the selection status of the identified object, the location of the touch operation being associated with the location of the identified object.
  11. The apparatus of claim 10, wherein a horizontal coordinate of the location of the touch operation corresponds to a horizontal coordinate of the identified object, and a vertical coordinate of the location of the touch operation does not correspond to any vertical coordinate of the identified object.
  12. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which, when executed by one or more processors, cause the processors to perform operations comprising:
    receiving information of a slide operation performed on a user interface, the information of the slide operation indicating a path of the slide operation from a starting point to an end point on the user interface;
    determining a set of objects from a plurality of objects displayed in the user interface based on the information of the slide operation and location information of the plurality of  objects, each object from the set of objects being associated with at least a portion of the path of the slide operation;
    changing a selection status of a first object from the set of objects from a first selection status to a second selection status different from the first selection status, the first object being associated with the starting point of the slide operation; and 
    updating a selection status of each remaining object from the set of objects such that the updated selection status of each remaining object is the second selection status.
  13. The non-transitory computer readable storage medium of claim 12, wherein the information of the slide operation includes coordinates of the path of the slide operation on the user interface, and the instruction for determining includes determining the set of objects based on whether the coordinates correspond to the location of objects from the set of objects displayed in the user interface.
  14. The non-transitory computer readable storage medium of claim 13, wherein the determining based on whether the coordinates correspond to the location of objects from the set of objects displayed in the user interface includes determining the set of objects based on:
    1) whether vertical coordinates of the path of the slide operation correspond to the location of objects from the set of objects displayed in the user interface independent of horizontal coordinates of the path of the slide operation, or
    2) whether horizontal coordinates of the path of the slide operation correspond to the location of objects from the set of objects displayed in the user interface independent of vertical coordinates of the path of the slide operation.
  15. The non-transitory computer readable storage medium of claim 12, wherein the slide operation is a first slide operation, the one or more programs further comprising instructions for:
    receiving information of a second slide operation performed on the user interface, the information of the second slide operation indicating a path of the second slide operation from a starting point of the second slide operation to an end point of the second slide operation on the user interface;
    determining, based on the information of the second slide operation, whether the starting point of the second slide operation corresponds to an object from the plurality of objects and whether the end point of the second slide operation corresponds to an object from the plurality of objects;
    if the starting point of the second slide operation is determined to correspond to an object from the plurality of objects and the end point of the second slide operation is determined not to correspond to any object from the plurality of objects, updating the selection status of the plurality of objects such that the updated selection status of each object from the plurality of objects is a predefined selection status.
  16. The non-transitory computer readable storage medium of claim 15, wherein the information of the second slide operation includes at least one coordinate of the starting point of the second slide operation and at least one coordinate of the end point of the second slide operation, and the instruction for determining includes determining whether the at least one coordinate of the starting point of the second slide operation corresponds to the location of objects from the plurality of objects, and whether the at least one coordinate of the end point of the second slide operation corresponds to the location of objects from the plurality of objects.
  17. The non-transitory computer readable storage medium of claim 12, wherein the one or more programs further comprise instructions for:
    receiving information of a touch operation performed on the user interface, the information of the touch operation indicating a number of touch points of the touch operation; 
    determining whether the number of touch points is greater than a threshold number of touch points;
    if the number of touch points is determined to be greater than the threshold number, performing a predefined operation on each object from the plurality of objects that has a predefined selection status without performing the predefined operation on any remaining object from the plurality of objects.
  18. The non-transitory computer readable storage medium of claim 17, wherein performing the predefined operation on an object from the plurality of objects includes changing the location of that object on the user interface.
  19.  The non-transitory computer readable storage medium of claim 17, wherein the information of the touch operation indicates a location of the touch operation on the user interface, the one or more programs further comprising instructions for:
    if the number of touch points is determined to be less than or equal to the threshold number, identifying an object from the plurality of objects based on the location of the touch  operation and location information of the plurality of objects, and changing the selection status of the identified object, the location of the touch operation being associated with the location of the identified object.
  20. The non-transitory computer readable storage medium of claim 17, wherein a vertical coordinate of the location of the touch operation corresponds to a vertical coordinate of the identified object and a horizontal coordinate of the location of the touch operation does not correspond to any horizontal coordinate of the identified object.
PCT/CN2014/086610 2013-10-28 2014-09-16 Method and apparatus for selecting objects WO2015062372A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310518421.9 2013-10-28
CN201310518421.9A CN104571908B (en) 2013-10-28 2013-10-28 A kind of method and apparatus of Object Selection

Publications (1)

Publication Number Publication Date
WO2015062372A1 true WO2015062372A1 (en) 2015-05-07

Family

ID=53003296

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/086610 WO2015062372A1 (en) 2013-10-28 2014-09-16 Method and apparatus for selecting objects

Country Status (3)

Country Link
CN (1) CN104571908B (en)
TW (1) TW201516844A (en)
WO (1) WO2015062372A1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105094423B (en) * 2015-07-03 2018-06-29 施政 Electronic plane interactive system and method
CN105100458B (en) * 2015-07-08 2018-09-14 努比亚技术有限公司 A kind of device and method of quick selecting object
JP2017120584A (en) * 2015-12-28 2017-07-06 ソニー株式会社 Information processor, information processing method, and program
CN110754073A (en) * 2017-09-25 2020-02-04 深圳市欢太科技有限公司 Content selection method, electronic device, storage medium, and computer program product
CN108837507A (en) * 2018-05-29 2018-11-20 网易(杭州)网络有限公司 Virtual item control method and device, electronic equipment, storage medium
CN109364474A (en) * 2018-09-13 2019-02-22 腾讯科技(深圳)有限公司 Card interaction scenarios, the manipulation implementation method in card institute dependent game and device
CN109614021A (en) * 2018-10-29 2019-04-12 阿里巴巴集团控股有限公司 Exchange method, device and equipment
CN109568954B (en) * 2018-11-30 2020-08-28 广州要玩娱乐网络技术股份有限公司 Weapon type switching display method and device, storage medium and terminal
CN110083288B (en) * 2019-04-22 2021-04-16 百度在线网络技术(北京)有限公司 Display interface control method, device and system, computing equipment and readable medium
CN110215687B (en) * 2019-07-04 2023-03-24 网易(杭州)网络有限公司 Game object selection method and device
CN110215695B (en) * 2019-07-04 2023-03-24 网易(杭州)网络有限公司 Game object selection method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120179963A1 (en) * 2011-01-10 2012-07-12 Chiang Wen-Hsiang Multi-touch electronic device, graphic display interface thereof and object selection method of multi-touch display
CN102760029B (en) * 2011-04-29 2016-04-20 汉王科技股份有限公司 The method and apparatus of operating list on display interface
CN102662511B (en) * 2012-03-24 2016-11-16 上海量明科技发展有限公司 Method and the terminal of operation it is controlled by touch screen

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120188191A1 (en) * 2009-09-29 2012-07-26 Yu Chen Method and electronic device for gesture recognition
JP2012165920A (en) * 2011-02-15 2012-09-06 Universal Entertainment Corp Gaming machine
CN102262507A (en) * 2011-06-28 2011-11-30 中兴通讯股份有限公司 Method and device for realizing object batch selection through multipoint touch-control
CN103777882A (en) * 2012-10-24 2014-05-07 腾讯科技(深圳)有限公司 Multiterm selection method and device based on touch screen
CN103941973A (en) * 2013-01-22 2014-07-23 腾讯科技(深圳)有限公司 Batch selection method and device and touch screen terminal

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109683799A (en) * 2018-12-30 2019-04-26 努比亚技术有限公司 A kind of control method by sliding, equipment and computer readable storage medium
CN109683799B (en) * 2018-12-30 2021-07-23 努比亚技术有限公司 Sliding control method and device and computer readable storage medium

Also Published As

Publication number Publication date
CN104571908B (en) 2019-05-24
CN104571908A (en) 2015-04-29
TW201516844A (en) 2015-05-01

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14858013

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 06.10.2016)

122 Ep: pct application non-entry in european phase

Ref document number: 14858013

Country of ref document: EP

Kind code of ref document: A1