CN106293397A - Display object processing method and terminal - Google Patents


Info

Publication number
CN106293397A
CN106293397A
Authority
CN
China
Prior art keywords
terminal
display
display object
interface
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610639524.4A
Other languages
Chinese (zh)
Other versions
CN106293397B (en)
Inventor
徐冬成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201610639524.4A priority Critical patent/CN106293397B/en
Publication of CN106293397A publication Critical patent/CN106293397A/en
Application granted granted Critical
Publication of CN106293397B publication Critical patent/CN106293397B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847 Interaction techniques to control parameter settings, e.g. interaction with sliders or dials

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the invention discloses a method for processing a display object, including: receiving a user's first operation on an object display interface of a first terminal, the first operation selecting a first display object among at least one display object of the object display interface; in response to the first operation, invoking a posture detection unit of the first terminal; obtaining, through the posture detection unit, current posture information characterizing the current posture of the first terminal; determining, according to a preset rule, a target display parameter corresponding to the current posture information; and controlling the first display object to be displayed with the target display parameter in a text editing interface of the first terminal. An embodiment of the invention also discloses a terminal.

Description

Display object processing method and terminal
Technical Field
The invention relates to the field of terminal application, in particular to a display object processing method and a terminal.
Background
With the continuous development of science and technology, electronic technology has advanced rapidly and the variety of electronic products keeps growing; through various types of electronic equipment, people enjoy the many conveniences this development brings.
At present, as people's social needs grow, they often need to use network emoticons when chatting via instant messaging, writing articles, or posting comments. Network emoticons may be symbolic emoticons or picture expressions. When editing text, a user who wants to add a picture expression can select the one needed from the picture expressions stored in advance.
However, whether on a smartphone, a tablet, or a laptop, picture expressions are displayed in a pre-designed manner, and there is no way for the user to edit them.
Disclosure of Invention
In view of this, the embodiments of the present invention are intended to provide a method for processing a display object and a terminal, so as to improve the intelligence of the terminal and facilitate user operations.
To achieve this purpose, the technical scheme of the invention is realized as follows:
In a first aspect, an embodiment of the present invention provides a method for processing a display object, including: receiving a first operation of a user on an object display interface of a first terminal, where the first operation is used to select a first display object among at least one display object of the object display interface; in response to the first operation, invoking a posture detection unit of the first terminal; obtaining current posture information of the first terminal through the posture detection unit; determining a target display parameter corresponding to the current posture information according to a preset rule; and controlling the first display object to be displayed with the target display parameter in a text editing interface of the first terminal.
In a second aspect, an embodiment of the present invention provides a terminal, including: an operation receiving unit, a calling unit, an obtaining unit, a determining unit, and a control unit. The operation receiving unit is configured to receive a first operation of a user on an object display interface of the terminal, where the first operation is used to select a first display object among at least one display object of the object display interface; the calling unit is configured to call, in response to the first operation, a posture detection unit for detecting the current posture of the terminal; the obtaining unit is configured to obtain, through the posture detection unit, current posture information representing the current posture of the terminal; the determining unit is configured to determine a target display parameter corresponding to the current posture information according to a preset rule; and the control unit is configured to control the first display object to be displayed with the target display parameter in a text editing interface of the terminal.
In a third aspect, an embodiment of the present invention provides a terminal, including: a display screen for displaying an object display interface and a text editing interface; a posture sensor for acquiring the current posture of the terminal; and a processor for receiving a first operation of a user on the object display interface, where the first operation is used to select a first display object among at least one display object of the object display interface; calling the posture sensor in response to the first operation; obtaining, through the posture sensor, current posture information representing the current posture of the terminal; determining a target display parameter corresponding to the current posture information according to a preset rule; and controlling the first display object to be displayed with the target display parameter in the text editing interface.
Embodiments of the invention provide a display object processing method and a terminal. The first terminal associates its own posture with an object display parameter of a display object, so that it can obtain the corresponding display parameter of the first display object from its posture. Suppose the first terminal selects a "smile" picture expression according to the user's first operation and detects that it is currently rotated 30° clockwise; it can then determine the corresponding target display parameter and control the "smile" picture expression to be displayed with that parameter, so the user sees the expression rotated 30° clockwise in the text editing interface. This processing method therefore improves the intelligence of the terminal, facilitates user operation, and provides a good user experience.
Drawings
FIG. 1 is a first schematic diagram of an application interface in an embodiment of the invention;
FIG. 2 is a flowchart illustrating a method for processing a display object according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an attitude direction of the first terminal in the embodiment of the present invention;
FIG. 4 is a second diagram of an application interface in an embodiment of the invention;
FIG. 5 is a third diagram illustrating an application interface in an embodiment of the invention;
FIG. 6 is a fourth illustration of an application interface in an embodiment of the invention;
FIG. 7 is a schematic flow chart illustrating a method for processing a display object according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
fig. 9 is another schematic structural diagram of a terminal in the embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
The embodiment of the invention provides a display object processing method which can be applied to terminals such as smart phones, tablet computers, notebook computers, smart watches and the like. The terminal is provided with an instant chat application, a mail application, a blog application, a network social application and the like, and a user can perform instant chat, edit mails, write articles, make comments on social statuses of friends and the like through the applications. These applications can provide an application interface, as shown in fig. 1, that includes an object display interface 11 and a text editing interface 12. At least one display object, such as at least one picture expression, may be displayed in the object display interface. The text edited by the user, the picture expression added by the user and the like can be displayed in the text editing interface. Of course, other contents may also be displayed in the object display interface and the text editing interface, and the embodiment of the present invention is not particularly limited.
The following describes a method for processing a display object according to an embodiment of the present invention with reference to the terminal.
Fig. 2 is a schematic flowchart of a processing method of a display object in an embodiment of the present invention, as shown in fig. 2, the method includes:
S201: receiving a first operation of a user on an object display interface of a first terminal;
Here, the first operation is used to select a first display object among the at least one display object of the object display interface.
Here, the object display interface may be provided when the user sends an instant message, edits an email, writes a blog, or posts a comment on a friend status on a social network site while performing an instant chat, and at this time, the user may select one display object, i.e., the first display object, from at least one display object of the object display interface through a first operation such as clicking, long-pressing, and sliding. At this time, the first terminal receives the first operation.
It should be noted that the display object may be a picture expression, a symbolic expression, or the like, and may also be another display object. In the embodiment of the present invention, a display object is used as a picture expression for explanation.
S202: responding to the first operation, and calling a posture detection unit in the first terminal;
In practical applications, the posture detection unit may be a gyroscope, an acceleration sensor, a gravity sensor, or the like; the embodiment of the present invention is not particularly limited.
S203: acquiring current attitude information for representing the current attitude of the first terminal through an attitude detection unit;
For example, suppose the first terminal is rotated 30° about the z-axis toward the x-axis, as shown in fig. 3, i.e. rotated 30° clockwise. The first terminal then invokes a posture detection unit such as a gyroscope, which reports readings in the x, y, and z directions; since the first terminal rotates only within a two-dimensional plane, only the x and y readings are nonzero. In the embodiment of the present invention, taking the clockwise direction as positive and the counterclockwise direction as negative, the first terminal can determine from these two readings the angle of rotation about the z-axis, namely +30°, so the current posture information may be α = +30°. If the first terminal is instead rotated 30° counterclockwise, it determines from the gyroscope readings that it is rotated -30° about the z-axis, and the current posture information may be represented as α = -30°. Of course, the opposite convention (clockwise negative, counterclockwise positive) may also be assumed; the embodiment of the present invention is not particularly limited.
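The sign convention above can be sketched with `atan2` over the two nonzero readings. This is an illustrative sketch only: the helper name is hypothetical, and real sensor frames and units vary by platform.

```python
import math

def rotation_about_z(ax: float, ay: float) -> float:
    """Estimate the terminal's rotation about the z-axis, in degrees,
    from the x and y components of the gravity/acceleration reading.
    Clockwise is taken as positive, matching the text's convention.
    Hypothetical helper with idealized sensor axes."""
    # With the terminal upright, gravity lies along +y, so atan2(0, g) = 0.
    # Tilting toward +x increases the angle, read here as clockwise.
    return math.degrees(math.atan2(ax, ay))
```

With this convention, a reading of (0, 9.8) gives 0° (upright) and equal x/y components give 45°.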
S204: determining target display parameters corresponding to the current attitude information according to a preset rule;
In a specific implementation process, S204 may include: determining, according to a preset correspondence between posture information and rotation angles, the to-be-rotated angle of the first display object corresponding to the current posture information; and generating the target display parameter from the to-be-rotated angle.
Here, the first terminal stores in advance the correspondence between the terminal's posture information and the rotation angle of a display object. After obtaining the current posture information in S203, say α = 30°, the first terminal can determine the to-be-rotated angle of the first display object: for a "smile" picture expression, the to-be-rotated angle is A = 30°, which may be denoted [smile-r30]. If α = 90°, the to-be-rotated angle of the "smile" picture expression may be denoted [smile-r90]; and if α = -60°, that is, the first terminal is rotated 60° counterclockwise about the z-axis, it may be denoted [smile-l60]. Of course, other representations may be used; the embodiment of the present invention is not limited to these.
Further, the to-be-rotated angles for the four common standard directions of a display object (top, bottom, left, and right) can be expressed as [object-RT], [object-RB], [object-RL], and [object-RR]. This format for the to-be-rotated angle merely makes it convenient for the first terminal to modify the angle value and is given for reference only; the embodiment of the present invention is not specifically limited.
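The bracketed tokens can be produced by a small formatting helper. The token grammar itself is only the text's informal illustration, so the function below is a loose sketch.

```python
def rotation_token(name: str, angle_deg: int) -> str:
    """Encode a display object's to-be-rotated angle in the bracketed
    form the text illustrates: 'r' for clockwise (right) rotation and
    'l' for counterclockwise (left), e.g. [smile-r30], [smile-l60]."""
    if angle_deg >= 0:
        return "[{}-r{}]".format(name, angle_deg)
    return "[{}-l{}]".format(name, -angle_deg)
```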
Then, the first terminal generates the four-dimensional (4×4) matrix of the picture expression from the to-be-rotated angle; this matrix is the target display parameter.
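A plausible form of that matrix is the standard 4×4 homogeneous rotation about the z-axis; the patent does not spell out its exact layout, so the following is an assumption.

```python
import math

def rotation_matrix(angle_deg: float):
    """Build a 4x4 homogeneous transform (the "four-dimensional matrix")
    rotating a display object by angle_deg about the z-axis. Offered as
    a sketch of the target display parameter, not the patent's exact
    matrix."""
    t = math.radians(angle_deg)
    c, s = math.cos(t), math.sin(t)
    return [
        [c, -s, 0.0, 0.0],    # rotates the x/y coordinates
        [s,  c, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0], # z unchanged
        [0.0, 0.0, 0.0, 1.0], # homogeneous row
    ]
```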
Of course, the object display parameter may also be a parameter of another display attribute, such as a display mode or an animation effect. For example, when the current posture information is α = +30°, the target display parameter may specify that the first display object enters the text edit box from the lower-left corner at 30°. In practical applications, those skilled in the art can set this themselves; the embodiment of the present invention is not specifically limited.
S205: controlling the first display object to be displayed with the target display parameter in the text editing interface of the first terminal.
Here, after the first terminal determines the target display parameter corresponding to the current posture information, it controls the first display object to be displayed with that parameter in the text editing interface. For example, the first terminal controls a first display object such as the "smile" picture expression 41 to be displayed in the text editing interface 12 using the 4×4 matrix corresponding to [smile-r30]; the user then sees the text editing interface shown in fig. 4.
The above method is described below by way of specific examples.
Suppose, taking the first terminal as a smartphone as an example, that the user is chatting via an instant chat application on the smartphone; the smartphone then displays the application interface shown in fig. 1.
Then, the above method comprises:
Firstly, the mobile phone receives a first operation of the user;
Here, the first operation selects the "smile" picture expression in the object display interface; at this point, the mobile phone receives the user's selection of the "smile" picture expression.
Secondly, the mobile phone responds to the first operation and calls a gyroscope of the mobile phone;
Here, the user rotates the phone after selecting the "smile" picture expression; the gyroscope then detects the current pose of the phone.
Thirdly, the mobile phone obtains its current posture information through the gyroscope, for example α = 30°;
Specifically, the mobile phone can calculate its current posture through the mathematical function atan or atan2: the angle of the phone's rotation about the z-axis is computed via atan2(acceleration.x, acceleration.y), which gives the angle between the gravity direction and the y-axis; from this the current posture is derived, yielding α = 30°.
Fourthly, the mobile phone determines, according to the correspondence between posture information and rotation angle, the to-be-rotated angle of the first display object corresponding to α = 30°, namely A = 30°, or [smile-r30].
Fifthly, the smartphone generates the 4×4 matrix of the "smile" picture expression, i.e. its target display parameter, from the angle A = 30°.
Sixthly, the mobile phone controls the "smile" picture expression to be displayed in the text editing interface using that matrix. The application interface the user sees then shows the "smile" picture expression 41 rotated 30° clockwise in the text editing interface 12, as in fig. 4.
Thus, in the embodiment of the invention, the first terminal associates its posture with the object display parameter of a display object, so that it can obtain the corresponding display parameter of the first display object from its posture. Suppose the first terminal selects the "smile" picture expression according to the user's first operation and detects that it is currently rotated 30° clockwise; it can then determine the corresponding target display parameter and control the "smile" picture expression to be displayed accordingly. The user sees the expression rotated 30° clockwise in the text editing interface, which improves the intelligence of the terminal, facilitates user operation, and provides a good user experience.
Based on the foregoing embodiment, note that in practice most terminals have an interface rotation function, i.e. the application interface rotates in sync with the terminal's posture. In that case, however the user rotates the terminal, the relative position of the display object and the application interface as seen by the user does not change. In order to display display objects in different directions, after S204 the method may further include: detecting whether the interface rotation function of the first terminal is enabled; if it is enabled, generating at least one second display object at a preset rotation angle based on the target display parameter; and controlling the at least one second display object to be displayed.
That is, after obtaining the target display parameter of the first display object, the first terminal may detect whether its interface rotation function is enabled; if the first display object is enabled, the first terminal can rotate the first display object according to a preset rotation angle on the basis of the target display parameter corresponding to the current posture information of the first terminal, so that at least one second display object is generated and displayed. For example, the target display parameter obtained by the first terminal is [ smile-r 30], then the first terminal can determine that it is rotated by 30 °, and the first display object, i.e., the "smile" picture expression, is to be rotated by 30 °. Then, the first terminal continues to rotate by 90 °, 180 °, or 270 ° on the basis of 30 °, to generate three second display objects, and finally, as shown in fig. 5, the first terminal controls the first display object 51 and the three second display objects 52 to be displayed.
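The variant generation reduces to adding the preset offsets to the first object's target angle. A sketch, with the helper name and the offsets-as-parameter being illustrative choices:

```python
def variant_angles(base_angle: int, offsets=(90, 180, 270)):
    """Angles of the extra "second display objects" generated when the
    interface rotation function is enabled: the first object's target
    angle plus each preset offset, wrapped into [0, 360)."""
    return [(base_angle + off) % 360 for off in offsets]
```

For a target angle of 30°, this yields variants at 120°, 210°, and 300°, matching the 90°/180°/270° continuation described above.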
In another embodiment of the present invention, after the step of detecting whether the interface rotation function of the first terminal is enabled, if the first terminal determines that the interface rotation function is enabled, the display object set corresponding to the first display object stored in advance may be read, for example, as shown in fig. 6, the first display object 61 is a "smile" picture expression, and there may be four second display objects 63 in the display object set 62, that is, the "smile" picture expression itself and the picture expressions generated after clockwise rotating the "smile" picture expression by 90 °, 180 °, and 270 °, respectively. Next, the first terminal displays the set of display objects, i.e., the four second display objects.
Further, after the first terminal executes the step of controlling the at least one second display object to be displayed, the method may further include: obtaining a second operation on the at least one second display object, where the second operation is used to select a third display object among the at least one second display object; and controlling the third display object to be displayed in response to the second operation.
Here, still referring to fig. 5, after the first terminal displays the at least one second display object, the user may also select one of the at least one second display object, i.e., a third display object, from the at least one second display object, for example, the user selects the second display object rotated clockwise by 120 ° as the third display object. At this time, the first terminal obtains a second operation of the user, and then, the first terminal controls the display of the third display object in response to the second operation, and at this time, the third display object may be displayed in the text editing interface.
In other embodiments of the present invention, besides providing second display objects rotated by several preset angles, after detecting whether the interface rotation function is enabled, S202 may further include: if the interface rotation function is enabled, disabling it and then invoking the posture detection unit. That is, after detecting that the interface rotation function is on, the first terminal disables it; the user may then rotate the first terminal, and the first terminal invokes the posture detection unit to acquire its current posture information as in S202. If the interface rotation function is not enabled, the first terminal executes S202 directly after S201 and invokes the posture detection unit to collect current posture information. Interference from the interface rotation function is thus avoided, the target display parameter of the first display object is obtained accurately from the first terminal's current posture information, and the to-be-rotated angle of the first display object displayed with the target display parameter is consistent with the posture of the first terminal, further improving the intelligence of the terminal, facilitating user operation, and providing a good user experience.
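That control flow can be sketched over a hypothetical terminal object; the attribute and method names below are assumptions, not an actual platform API.

```python
def acquire_pose(terminal):
    """Flow from the text: if the interface rotation function is
    enabled, disable it before reading the pose, so the OS does not
    compensate the rotation away. 'terminal' is a hypothetical object
    exposing the two members used here."""
    if terminal.interface_rotation_enabled:
        terminal.interface_rotation_enabled = False  # forbid rotation
    return terminal.read_pose()  # as in S202/S203
```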
Therefore, in the embodiment of the invention, for the first terminal with the interface rotation function enabled, for each first display object, several second display objects with different display angles can be provided for the user to select. Therefore, the influence of the interface rotation function on the adjustment of the display object is effectively avoided, the intelligent degree of the terminal is further improved, the user operation is facilitated, and good user experience is provided.
Based on the foregoing embodiment, in the embodiment of the present invention, since the user generally needs to input the picture emoticon in the application interface when using the instant chat application, the mail application, the blog application, the social networking application, and the like, after the first terminal executes S204, the picture emoticon may be sent to the second terminal. Here, the second terminal may be a terminal such as a smart phone, a tablet computer, a notebook computer, or a smart watch, may also be a server of an instant chat application, a mail application, a blog application, or a social networking application, and may also be a server of an input method application, which is not specifically limited in the embodiment of the present invention.
Then, in order to enable the second terminal to display the first display object with the target display parameter as the first terminal does, after S204, the method further includes: packaging the first display object and the target display parameter into information to be sent; and sending the information to be sent to the second terminal.
The information to be sent is used for indicating the second terminal to display the first display object according to the target display parameters.
Then, after receiving the information sent by the first terminal, the second terminal parses the information to obtain the first display object and the target display parameter thereof, and then the second terminal may control the first display object to be displayed with the target display parameter. Of course, when the second terminal is an application server, the first display object does not need to be displayed, and the first display object and the target display parameter thereof are stored in an associated manner.
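One way to realize the packaging and parsing steps is sketched below. The JSON layout and field names are hypothetical; the patent does not define a wire format.

```python
import json

def pack_message(object_id: str, angle_deg: float) -> str:
    """Bundle the first display object and its target display parameter
    into the 'information to be sent'."""
    return json.dumps({"object": object_id, "rotate": angle_deg})

def unpack_message(payload: str):
    """Second-terminal side: parse the message to recover the display
    object identifier and the parameter to display it with."""
    msg = json.loads(payload)
    return msg["object"], msg["rotate"]
```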
Further, in other embodiments of the present invention, after S204, the method may further include: adjusting the first display object according to the target display parameter to obtain a fourth display object; and sending the fourth display object to the second terminal.
For example, after obtaining the to-be-rotated angle A of the "smile" picture expression, the first terminal may rotate the expression 30° clockwise to generate a fourth display object and send it to the second terminal; the second terminal can then display it directly without adjustment after receiving it, further improving the intelligence of the terminal, facilitating user operation, and providing a good user experience.
Based on the foregoing embodiments, the following describes the above method, taking a mobile phone as the first terminal.
As shown in fig. 7, the method includes:
S701: receiving the picture expression selected by the user;
S702: judging whether the mobile phone supports the function of inputting angled expressions; if yes, executing S703; otherwise, setting the rotation angle of the picture expression to 0° and executing S705;
S703: acquiring the angle of the mobile phone through a gyroscope or gravity sensor;
Here, the mobile phone reads the current acceleration in the x, y, and z directions through the interface of the gyroscope or gravity sensor, then judges whether the acceleration values in the three directions are legal; if they are, S704 is executed, otherwise the rotation angle of the picture expression is set to 0° and S705 is executed. Next, the mobile phone calculates its current rotation angle from the x, y, and z acceleration values through a mathematical function: the current rotation angle A of the phone is M_PI minus the angle to the y-axis, i.e. A = M_PI - atan2(x, y).
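The "legal" check is not specified; one plausible reading is a magnitude sanity check against gravity, sketched here with an assumed nominal value and threshold.

```python
def readings_legal(x: float, y: float, z: float,
                   g: float = 9.8, tol: float = 3.0) -> bool:
    """Hypothetical sanity check for 'judging whether the acceleration
    values are legal': the magnitude of the reading should be close to
    gravity. The formula and threshold are assumptions, not from the
    patent."""
    mag = (x * x + y * y + z * z) ** 0.5
    return abs(mag - g) <= tol
```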
S704: the mobile phone calculates, from the angle A, the angle by which the picture expression is to be rotated; here, the angle B to be rotated of the picture expression is B = M_PI - atan2(x, y).
S705: the mobile phone displays, stores, and/or sends the picture expression and the angle to be rotated.
Here, if necessary, the mobile phone may transmit the picture expression and the angle to be rotated to the second terminal.
After that, the mobile phone may also parse the angle B to be rotated to generate a four-dimensional matrix that controls the rotated display of the picture expression; then, an existing display-object processing method or a system method is called to display the picture expression through the four-dimensional matrix.
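If the "four-dimensional matrix" is read as the usual 4 x 4 homogeneous transformation matrix of graphics APIs (an assumption, since the patent does not define the term), generating it from the angle B can be sketched as:

```python
import math

def rotation_matrix_4x4(angle_rad):
    """Build a 4 x 4 homogeneous matrix (row-major) that rotates by
    angle_rad about the z axis, one plausible reading of the
    'four-dimensional matrix' used to display the picture expression."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [
        [c,  -s,  0.0, 0.0],
        [s,   c,  0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ]
```

At angle 0 this is the identity, so an unrotated expression displays unchanged.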
At this time, the picture expression viewed by the user is rotated.
Based on the same inventive concept, embodiments of the present invention provide a terminal, which is consistent with the first terminal described in one or more of the foregoing embodiments.
As shown in fig. 8, the terminal 80 includes: an operation receiving unit 81, a calling unit 82, an obtaining unit 83, a determining unit 84, and a control unit 85. The operation receiving unit 81 is configured to receive a first operation of a user on an object display interface of the terminal, where the first operation is used to select a first display object in at least one display object of the object display interface; the calling unit 82 is configured to call, in response to the first operation, a posture detection unit configured to detect a current posture of the terminal; the obtaining unit 83 is configured to obtain, through the posture detection unit, current posture information representing the current posture of the terminal; the determining unit 84 is configured to determine a target display parameter corresponding to the current posture information according to a preset rule; and the control unit 85 is configured to control the first display object to be displayed according to the target display parameter in a text editing interface of the terminal.
In other embodiments of the present invention, the determining unit 84 is specifically configured to determine, according to a preset correspondence between posture information and rotation angles, an angle to be rotated of the first display object corresponding to the current posture information, and to generate the target display parameter according to the angle to be rotated.
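One possible form of the preset correspondence between posture information and rotation angles is a nearest-preset lookup. The preset set (0°, 90°, 180°, 270°) and the function name are illustrative assumptions:

```python
def target_rotation(current_angle_deg, presets=(0, 90, 180, 270)):
    """Map the current posture angle to the nearest preset rotation
    angle, treating angles as circular (wrap-around at 360 degrees)."""
    def circular_distance(p):
        d = abs(current_angle_deg - p) % 360
        return min(d, 360 - d)
    return min(presets, key=circular_distance)
```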
In other embodiments of the present invention, the terminal may further include a detection unit and an acquisition unit. The detection unit is configured to detect, in response to the first operation, whether the interface rotation function of the terminal is enabled. The acquisition unit is configured to acquire, if the interface rotation function is enabled, a display object set corresponding to the first display object, where the display object set is composed of at least one second display object, and each second display object is generated by rotating the first display object by at least one preset angle. The control unit is further configured to control the at least one second display object to be displayed.
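The display object set of second display objects, and the selection of a third display object by the second operation, can be sketched as follows; representing each second display object as an (object, angle) pair is an assumption for illustration:

```python
def second_display_objects(first_object, preset_angles=(90, 180, 270)):
    """The display object set: each second display object is modeled as
    the first display object paired with one preset rotation angle."""
    return [(first_object, angle) for angle in preset_angles]

def select_third_object(candidates, index):
    """The second operation: the user picks one of the displayed second
    display objects, which becomes the third display object."""
    return candidates[index]
```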
In other embodiments of the present invention, the obtaining unit is further configured to obtain, after the control unit controls the at least one second display object to be displayed, a second operation performed by the user on the at least one second display object, where the second operation is used to select a third display object in the at least one second display object; and the control unit is further configured to control, in response to the second operation, the third display object to be displayed.
In other embodiments of the present invention, the calling unit is specifically configured to, if the interface rotation function is enabled, disable the interface rotation function and call the posture detection unit.
In other embodiments of the present invention, the terminal may further include a sending unit, configured to encapsulate the first display object and the target display parameter into information to be sent, and to send the information to be sent to a second terminal, where the information to be sent is used to instruct the second terminal to control the first display object to be displayed according to the target display parameter.
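The encapsulation performed by the sending unit can be sketched as a simple JSON message; the field names are illustrative, not from the patent:

```python
import json

def pack_message(display_object_id, rotate_deg):
    """Encapsulate the first display object (referenced by an id here)
    and the target display parameter into one message for the second
    terminal."""
    return json.dumps({"object": display_object_id,
                       "rotate_deg": rotate_deg})

def unpack_message(message):
    """Second terminal: recover the object and the angle at which it
    must be displayed."""
    data = json.loads(message)
    return data["object"], data["rotate_deg"]
```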
In other embodiments of the present invention, the terminal may further include a sending unit; the control unit is further configured to adjust the first display object according to the target display parameter to obtain a fourth display object; and the sending unit is configured to send the fourth display object to the second terminal.
Here, it should be noted that the description of the terminal embodiments is similar to that of the method embodiments and has the same beneficial effects, and is therefore omitted. For technical details not disclosed in the terminal embodiments of the present invention, please refer to the description of the method embodiments; for brevity, details are not repeated here.
Based on the same inventive concept, an embodiment of the present invention provides a terminal. As shown in fig. 9, the terminal 90 includes: a display screen 91, configured to display an object display interface and a text editing interface; an attitude sensor 92, configured to acquire a current attitude of the terminal; and a processor 93, configured to receive a first operation of a user on the object display interface, where the first operation is used to select a first display object in at least one display object of the object display interface; call the attitude sensor in response to the first operation; obtain, through the attitude sensor, current attitude information representing the current attitude of the terminal; determine, according to a preset rule, a target display parameter corresponding to the current attitude information; and control the first display object to be displayed according to the target display parameter in the text editing interface.
In other embodiments of the present invention, the processor is configured to determine, according to a preset correspondence between attitude information and rotation angles, an angle to be rotated of the first display object corresponding to the current attitude information; and generate the target display parameter according to the angle to be rotated.
In other embodiments of the present invention, the processor is configured to detect, in response to the first operation, whether an interface rotation function of the terminal is enabled; if the interface rotation function is enabled, acquire a display object set corresponding to the first display object, where the display object set is composed of at least one second display object, and each second display object is generated by rotating the first display object by at least one preset angle; and control the at least one second display object to be displayed.
In other embodiments of the present invention, the processor is configured to obtain a second operation performed by the user on the at least one second display object, where the second operation is used to select a third display object in the at least one second display object; and responding to the second operation, and controlling the display of the third display object.
In other embodiments of the present invention, the processor is configured to disable the interface rotation function and call the attitude sensor if the interface rotation function is enabled.
In other embodiments of the present invention, the processor is configured to encapsulate the first display object and the target display parameter into information to be sent; and sending the information to be sent to a second terminal, wherein the information to be sent is used for indicating the second terminal to control the first display object to be displayed according to the target display parameters.
In other embodiments of the present invention, the processor is configured to adjust the first display object according to the target display parameter to obtain a fourth display object, and send the fourth display object to the second terminal.
In practical applications, the processor may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor. It is to be understood that the electronic device implementing the functions of the processor may be another device, which is not specifically limited in the embodiments of the present invention.
Here, it should be noted that the description of the terminal embodiments is similar to that of the method embodiments and has the same beneficial effects, and is therefore omitted. For technical details not disclosed in the terminal embodiments of the present invention, please refer to the description of the method embodiments; for brevity, details are not repeated here.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

It should be understood that, in the various embodiments of the present invention, the sequence numbers of the above processes do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention. The serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division of the units is only a logical functional division, and there may be other divisions in actual implementation, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may serve as a separate unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps of the method embodiments may be implemented by program instructions and related hardware; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a Read Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, if the integrated unit of the present invention is implemented in the form of a software functional module and sold or used as a separate product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present invention may, in essence or in the part contributing to the prior art, be embodied in the form of a software product; the software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A method for processing a display object, comprising:
receiving a first operation of a user on an object display interface of a first terminal, wherein the first operation is used for selecting a first display object in at least one display object of the object display interface;
responding to the first operation, and calling a posture detection unit of the first terminal;
obtaining, by the posture detection unit, current posture information representing a current posture of the first terminal;
determining, according to a preset rule, a target display parameter corresponding to the current posture information;
and controlling, in a text editing interface of the first terminal, the first display object to be displayed according to the target display parameter.
2. The method according to claim 1, wherein the determining, according to a preset rule, a target display parameter corresponding to the current posture information includes:
determining, according to a preset correspondence between posture information and rotation angles, an angle to be rotated of the first display object corresponding to the current posture information;
and generating the target display parameter according to the angle to be rotated.
3. The method of claim 1, wherein after the determining the target display parameter corresponding to the current pose information, the method further comprises:
detecting whether an interface rotation function of the first terminal is enabled;
if the interface rotation function is enabled, generating at least one second display object according to a preset rotation angle based on the target display parameter;
and controlling the at least one second display object to be displayed.
4. The method of claim 3, wherein after said controlling the display of the at least one second display object, the method further comprises:
obtaining a second operation of the user on the at least one second display object, wherein the second operation is used for selecting a third display object in the at least one second display object;
and responding to the second operation, and controlling the third display object to display.
5. The method of claim 3, wherein the calling a posture detection unit of the first terminal comprises:
if the interface rotation function is enabled, disabling the interface rotation function and calling the posture detection unit.
6. The method according to any one of claims 1 to 5, wherein after the determining the target display parameter corresponding to the current posture information, the method further comprises:
packaging the first display object and the target display parameter into information to be sent;
and sending the information to be sent to a second terminal, wherein the information to be sent is used for indicating the second terminal to control the first display object to be displayed according to the target display parameter.
7. The method according to any one of claims 1 to 5, wherein after the determining the target display parameter corresponding to the current posture information, the method further comprises:
adjusting the first display object according to the target display parameter to obtain a fourth display object;
and sending the fourth display object to a second terminal.
8. A terminal, comprising: the device comprises an operation receiving unit, a calling unit, an obtaining unit, a determining unit and a control unit; wherein,
the operation receiving unit is used for receiving a first operation of a user on an object display interface of the terminal, wherein the first operation is used for selecting a first display object in at least one display object of the object display interface;
the calling unit is used for responding to the first operation and calling a posture detection unit for detecting a current posture of the terminal;
the obtaining unit is used for obtaining, through the posture detection unit, current posture information representing the current posture of the terminal;
the determining unit is used for determining, according to a preset rule, a target display parameter corresponding to the current posture information;
the control unit is configured to control the first display object to be displayed according to the target display parameter in a text editing interface of the first terminal.
9. The terminal of claim 8, further comprising: a detection unit and a generation unit;
the detection unit is used for detecting whether an interface rotation function of the terminal is enabled after the determining unit determines the target display parameter corresponding to the current posture information;
the generating unit is used for generating at least one second display object according to a preset rotation angle based on the target display parameter if the interface rotation function is enabled;
the control unit is used for controlling the display of the at least one second display object.
10. A terminal, comprising:
the display screen is used for displaying an object display interface and a text editing interface;
the attitude sensor is used for acquiring the current attitude of the terminal;
the processor is used for receiving a first operation of a user on the object display interface, wherein the first operation is used for selecting a first display object in at least one display object of the object display interface; calling the attitude sensor in response to the first operation; obtaining, through the attitude sensor, current attitude information representing the current attitude of the terminal; determining, according to a preset rule, a target display parameter corresponding to the current attitude information; and controlling the first display object to be displayed according to the target display parameter in the text editing interface.
CN201610639524.4A 2016-08-05 2016-08-05 A kind of processing method and terminal showing object Active CN106293397B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610639524.4A CN106293397B (en) 2016-08-05 2016-08-05 A kind of processing method and terminal showing object

Publications (2)

Publication Number Publication Date
CN106293397A true CN106293397A (en) 2017-01-04
CN106293397B CN106293397B (en) 2019-07-16

Family

ID=57666075

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610639524.4A Active CN106293397B (en) 2016-08-05 2016-08-05 A kind of processing method and terminal showing object

Country Status (1)

Country Link
CN (1) CN106293397B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107102835A (en) * 2017-02-10 2017-08-29 珠海市魅族科技有限公司 A kind of display control method and system
CN109618054A (en) * 2018-12-21 2019-04-12 北京金山安全软件有限公司 Incoming call interface display method and device, electronic equipment and storage medium
WO2019149171A1 (en) * 2018-02-05 2019-08-08 阿里巴巴集团控股有限公司 Session processing method, device, and electronic apparatus

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101174178A (en) * 2006-10-31 2008-05-07 佛山市顺德区顺达电脑厂有限公司 Method and device for viewing graphic pattern
US20110234750A1 (en) * 2010-03-24 2011-09-29 Jimmy Kwok Lap Lai Capturing Two or More Images to Form a Panoramic Image
CN103793141A (en) * 2014-02-11 2014-05-14 广州市久邦数码科技有限公司 Achieving method and system for control over icon rotation
CN104252358A (en) * 2013-06-28 2014-12-31 孕龙科技股份有限公司 Method for making and applying emoticons and system for making and applying emoticons
CN105320403A (en) * 2014-07-31 2016-02-10 三星电子株式会社 Method and device for providing content
CN105653165A (en) * 2015-12-24 2016-06-08 小米科技有限责任公司 Method and device for regulating character display

Also Published As

Publication number Publication date
CN106293397B (en) 2019-07-16

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant