CN116301527A - Display control method and device, electronic equipment and medium - Google Patents

Display control method and device, electronic equipment and medium

Info

Publication number
CN116301527A
Authority
CN
China
Prior art keywords
transformation
selected target
display
information
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310266080.4A
Other languages
Chinese (zh)
Other versions
CN116301527B (en)
Inventor
张伟峰 (Zhang Weifeng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Likong Yuantong Technology Co ltd
Original Assignee
Beijing Likong Yuantong Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Likong Yuantong Technology Co ltd filed Critical Beijing Likong Yuantong Technology Co ltd
Priority to CN202310266080.4A priority Critical patent/CN116301527B/en
Publication of CN116301527A publication Critical patent/CN116301527A/en
Application granted granted Critical
Publication of CN116301527B publication Critical patent/CN116301527B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0484: for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842: Selection of displayed objects or displayed text elements
    • G06F 3/04845: for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/0486: Drag-and-drop
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a display control method and device, electronic equipment and a medium. The display control method comprises the following steps: acquiring first attribute information of a selected target; in response to a transformation instruction acting on the selected target in the current transformation scene, acquiring information to be transformed of the selected target, the information to be transformed comprising a transformation distance; adjusting the first attribute information based on the information to be transformed to obtain second attribute information of the selected target, the second attribute information comprising second display coordinates of the selected target after transformation processing; and updating the second display coordinates based on the current transformation scene to obtain third display coordinates, and displaying the transformed selected target at the third display coordinates. The display control method provided by the invention helps reduce the number of display control operations the user must perform when transforming the selected target and improves the user experience.

Description

Display control method and device, electronic equipment and medium
Technical Field
The present invention relates to the field of display control, and in particular, to a display control method and apparatus, an electronic device, and a medium.
Background
In the related art, after a transformable selected target is transformed in the current transformation scene, there is a visual position difference between its display position after the transformation and its display position before the transformation.
When the user needs to transform the selected target multiple times, display control performed in this way forces the user, after completing each transformation, to judge whether the current display position of the selected target needs a translation adjustment so that the visual position of the selected target in the next transformation meets the user's requirement.
Display control performed in this way is therefore cumbersome and degrades the user experience. Accordingly, there is a need for a display control method capable of reducing user operations.
Disclosure of Invention
Therefore, the technical problem to be solved by the invention is to overcome the defect in the prior art that controlling the display of a selected target requires cumbersome user operations and degrades the user experience, and to provide a display control method and device, electronic equipment and a medium.
According to a first aspect, an embodiment of the present invention provides a display control method, including:
acquiring first attribute information of a selected target;
responding to a transformation instruction acting on the selected target in a current transformation scene, and acquiring information to be transformed of the selected target, wherein the information to be transformed comprises a transformation distance;
adjusting the first attribute information based on the information to be transformed to obtain second attribute information of the selected target, wherein the first attribute information comprises a first display coordinate and first size information of the selected target in the current transformation scene, and the second attribute information comprises a second display coordinate of the selected target after transformation;
and updating the second display coordinates based on the current transformation scene to obtain third display coordinates, and displaying the selected target after transformation processing on the third display coordinates.
In this manner, the visual position of the selected target after the transformation processing is the same as its visual position before the transformation processing, so the user does not need to manually adjust the display position of the transformed selected target. This reduces the number of display control operations required when transforming the selected target and improves the user experience.
With reference to the first aspect, in a first embodiment of the first aspect, determining the transformation distance includes:
determining a drag point of the selected target for executing transformation in response to the transformation instruction, and determining historical coordinates of the drag point from the first attribute information;
based on the information to be transformed, determining the latest coordinates of the transformed drag points;
and determining the transformation distance of the selected target in the current transformation scene according to the distance between the latest coordinate and the historical coordinate.
With reference to the first embodiment of the first aspect, in a second embodiment of the first aspect, the first attribute information includes first size information, and the second attribute information includes second size information;
the adjusting the first attribute information based on the information to be transformed to obtain second attribute information of the selected target includes:
judging whether the drag point is a target drag point or not;
if the drag point is not the target drag point, adjusting the first size information according to the transformation distance to obtain second size information of the selected target after transformation processing;
and if the drag point is the target drag point, reversely adjusting the first size information according to the transformation distance to obtain second size information of the selected target after transformation processing.
With reference to the first embodiment or the second embodiment of the first aspect, in a third embodiment of the first aspect, the determining, based on the information to be transformed, a latest coordinate of the drag point includes:
if the current transformation scene is a rotation transformation scene, determining the dragging point as a center point;
acquiring the rotation angle of the selected target from the information to be transformed;
obtaining a second display coordinate according to the first display coordinate, the historical coordinate and the rotation angle;
and determining the latest coordinate of the center point based on the second display coordinate, second size information in the second attribute information and the rotation angle.
With reference to the third embodiment of the first aspect, in a fourth embodiment of the first aspect, the second size information includes a latest width and a latest height of the selected target after the transformation process; the latest coordinates comprise a latest abscissa and a latest ordinate;
the updating the second display coordinates based on the current transformation scene to obtain third display coordinates includes:
if the rotation transformation scene is a target transformation scene, taking half of the latest width as a transverse adjustment value of an abscissa and half of the latest height as a longitudinal adjustment value of an ordinate;
taking the difference between the latest abscissa and the transverse adjustment value as a third display abscissa and the difference between the latest ordinate and the longitudinal adjustment value as a third display ordinate;
obtaining a third display coordinate according to the third display abscissa and the third display ordinate;
and taking the third display coordinates as the updated coordinates of the second display coordinates.
With reference to the first aspect, in a fifth embodiment of the first aspect, the obtaining, in response to a transformation instruction applied to the selected target in the current transformation scene, information to be transformed of the selected target includes:
responding to a transformation instruction acting on the selected target in a current transformation scene, determining a drag point of the selected target for executing transformation in response to the transformation instruction, and acquiring historical coordinates of the drag point;
responding to the end of transformation, and acquiring the latest coordinates of the dragging point;
if the current transformation scene is a similarity transformation scene, acquiring a hybrid transformation matrix;
substituting the historical coordinates, the latest coordinates and the first display coordinates into the hybrid transformation matrix and performing inverse matrix processing to obtain the information to be transformed of the selected target after transformation processing, wherein the information to be transformed comprises any one or more of the following: the rotation angle, scaling factor and skew angle of the selected target.
With reference to the fifth embodiment of the first aspect, in a sixth embodiment of the first aspect, the hybrid transformation matrix is composed of a rotation matrix, a scaling matrix and a skew matrix.
According to a second aspect, an embodiment of the present invention further provides a display control apparatus, including:
a first acquisition unit configured to acquire first attribute information of a selected target;
the second acquisition unit is used for responding to a transformation instruction acting on the selected target in the current transformation scene to acquire information to be transformed of the selected target, wherein the information to be transformed comprises a transformation distance;
the adjusting unit is used for adjusting the first attribute information based on the information to be transformed to obtain second attribute information of the selected target, the first attribute information comprises a first display coordinate and first size information of the selected target in the current transformation scene, and the second attribute information comprises a second display coordinate of the selected target after transformation processing;
and the display unit is used for updating the second display coordinates based on the current transformation scene to obtain third display coordinates, and displaying the selected target after transformation processing on the third display coordinates.
According to a third aspect, an embodiment of the present invention further provides an electronic device, including a memory and a processor, where the memory and the processor are communicatively connected to each other, and the memory stores computer instructions, and the processor executes the computer instructions, thereby executing the display control method according to any one of the first aspect and optional embodiments thereof.
According to a fourth aspect, embodiments of the present invention also provide a computer-readable storage medium storing computer instructions for causing the computer to execute the display control method of any one of the first aspect and its alternative embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of a display control method according to an exemplary embodiment.
Fig. 2 is a schematic diagram of a display after rotation transformation according to an exemplary embodiment.
Fig. 3 is a schematic diagram of another display after rotation transformation according to an exemplary embodiment.
Fig. 4 is a flowchart of another display control method according to an exemplary embodiment.
Fig. 5 is a schematic diagram of a display after similarity transformation according to an exemplary embodiment.
Fig. 6 is a schematic diagram of another display after similarity transformation according to an exemplary embodiment.
Fig. 7 is a structural block diagram of a display control apparatus according to an exemplary embodiment.
Fig. 8 is a schematic diagram of a hardware structure of an electronic device according to an exemplary embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are some, but not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort fall within the scope of protection of the present invention.
In the related art, after a transformable selected target is transformed in the current transformation scene, there is a visual position difference between its display position after the transformation and its display position before the transformation.
When the user needs to transform the selected target multiple times, display control performed in this way forces the user, after completing each transformation, to judge whether the current display position of the selected target needs a translation adjustment so that the visual position of the selected target in the next transformation meets the user's requirement. For example: if translation is needed, the current display position of the selected target must be translated so that the adjusted display position meets the user's visual position requirement for the next transformation of the selected target.
Display control of the selected target performed in this way is cumbersome and degrades the user experience.
To solve the foregoing problems, an embodiment of the present invention provides a display control method for an electronic device. It should be noted that the execution body of the display control method may be a display control apparatus, and the apparatus may be implemented as part or all of the electronic device by software, hardware, or a combination of software and hardware. The electronic device may be a terminal, a client or a server, and the server may be a single server or a server cluster composed of multiple servers. In the following method embodiments, the execution body is an electronic device.
The display control method provided by the invention is applicable to scenarios in which transformation processing is performed on a selected target on the display interface of the electronic device. The selected target may include a geometric figure or an image, which is not limited in the present invention. The display control method provided by the invention can be embedded in any drawing application (APP), so that a user can transform a selected target using the display control method provided by the invention while drawing in the APP.
The display control method provided by the invention comprises the following steps: acquiring first attribute information of a selected target, the first attribute information comprising first display coordinates and first size information of the selected target in the current transformation scene; in response to a transformation instruction acting on the selected target in the current transformation scene, acquiring information to be transformed of the selected target; adjusting the first attribute information based on the information to be transformed to obtain second attribute information of the selected target, the second attribute information comprising second display coordinates of the selected target after transformation processing; and updating the second display coordinates based on the current transformation scene, and displaying the transformed selected target at the third display coordinates.
According to the display control method provided by the invention, the first attribute information of the selected target can be adjusted based on the information to be transformed of the selected target to obtain the second attribute information. To prevent an excessive difference between the display coordinates of the transformed selected target and the first display coordinates in the first attribute information from degrading the user's visual experience, the second display coordinates in the second attribute information are adjusted based on the first display coordinates to obtain third display coordinates for displaying the transformed selected target. This reduces the visual position difference between the third display coordinates and the first display coordinates, so the user does not need to manually adjust the display position of the transformed selected target, which reduces the number of display control operations on the selected target and improves the user experience.
Fig. 1 is a flowchart of a display control method according to an exemplary embodiment. As shown in fig. 1, the display control method includes the following steps S101 to S104.
In step S101, first attribute information of a selected target is acquired.
In an embodiment of the present invention, the selected target may be a geometric figure, for example: a dot, a line, a planar pattern, a solid pattern, etc. The selected target may also be a picture, a conventional component, a picture source, an outline source (polyline, polygon, Bezier curve), etc., which is not limited in the present invention. The current transformation scene may be a translation transformation scene, a rotation transformation scene or a similarity transformation scene, and may be set according to actual requirements.
In step S102, information to be transformed of the selected target is acquired in response to a transformation instruction acting on the selected target in the current transformation scene.
In the embodiment of the invention, the transformation instruction is an instruction triggered when the user needs to transform the selected target on the display interface. The transformation instruction comprises information to be transformed, and the information to be transformed comprises a transformation distance. By acquiring the information to be transformed, the transformation process to be performed on the selected target can be determined, so that targeted transformation processing can be carried out to meet the user's control requirement.
In step S103, the first attribute information is adjusted based on the information to be transformed, so as to obtain second attribute information of the selected target.
In an embodiment of the present invention, the first attribute information may be understood as the initial information of the selected target before it responds to the transformation instruction. The first attribute information at least comprises the first display coordinates and first size information of the selected target in the current transformation scene. The second attribute information may be understood as the attribute information of the selected target after it has been transformed in response to the transformation instruction. The second attribute information includes the second display coordinates of the selected target after the transformation processing.
Based on the information to be transformed, the transformation process to be performed on the selected target can be determined; targeted transformation processing can then be carried out on the selected target, and the first attribute information is adjusted to obtain the second attribute information of the selected target.
In step S104, the second display coordinates are updated based on the current transformed scene to obtain third display coordinates, and the selected object after the transformation processing is displayed on the third display coordinates.
In the embodiment of the invention, to prevent an excessive displacement difference between the second display coordinates of the transformed selected target and the first display coordinates from degrading the user's visual experience, the second display coordinates are adjusted based on the first display coordinates in the first attribute information to obtain the third display coordinates at which the transformed selected target is to be displayed.
Through the above embodiment, the visual position at which the selected target is displayed after the transformation processing is the same as the visual position at which it was displayed before the transformation processing, so the user does not need to manually adjust the display position of the transformed selected target. This reduces the number of display control operations required when transforming the selected target and improves the user experience.
The following embodiment explains the determination process of the transformation distance in detail.
In the present invention, the drag point at which the selected target performs the transformation in response to the transformation instruction is determined, and the historical coordinates of the drag point are determined from the first attribute information. The latest coordinates of the drag point after the transformation processing are determined based on the information to be transformed. The transformation distance of the selected target in the current transformation scene is then determined from the distance between the latest coordinates and the historical coordinates.
Specifically, when the drag point is selected, a transformation instruction is triggered and the historical coordinates of the drag point are obtained, i.e., the coordinates of the drag point before it is moved. The drag point may be any point in the eight directions of the target: upper, lower, left, right, upper left, upper right, lower left and lower right. By tracking the movement of the drag point, the information to be transformed is obtained and the latest coordinates of the drag point after the transformation are determined. The transformation distance of the selected target in the current transformation scene can then be determined from the distance between the latest coordinates and the historical coordinates. For example, if the movement is along the upper or lower direction, the transformation distance is determined from the movement change along the vertical axis. If the movement is along the left or right direction, the transformation distance is determined from the movement change along the horizontal axis. If the movement is along the upper left, upper right, lower left or lower right direction, the transformation distance is determined from the movement changes along both the horizontal axis and the vertical axis.
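The axis-gated distance rule above can be sketched in a few lines of TypeScript. This is a minimal illustration only; the names Point, DragDirection and getTransformDistance are assumptions made for the sketch, not identifiers from the patent:
// Sketch of the transformation-distance logic; all names are illustrative.
type Point = { x: number; y: number };
type DragDirection =
  "up" | "down" | "left" | "right" |
  "upLeft" | "upRight" | "downLeft" | "downRight";
function getTransformDistance(
  historical: Point,   // drag-point coordinates when the drag started
  latest: Point,       // drag-point coordinates when the drag ended
  direction: DragDirection,
): { dx: number; dy: number } {
  const dx = latest.x - historical.x;
  const dy = latest.y - historical.y;
  switch (direction) {
    case "up":
    case "down":
      return { dx: 0, dy };   // vertical handles: only the y-axis change counts
    case "left":
    case "right":
      return { dx, dy: 0 };   // horizontal handles: only the x-axis change counts
    default:
      return { dx, dy };      // corner handles: both axes contribute
  }
}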
In an embodiment, the first attribute information includes first size information and the second attribute information includes second size information. The first size information may be understood as the size information of the selected target before the transformation processing is performed; it includes at least the historical height and historical width of the selected target. When the first attribute information is adjusted based on the information to be transformed, the adjustment can be made according to whether the drag point is a target drag point, so as to obtain the second attribute information of the selected target.
Specifically, a target drag point may be understood as a drag point that affects the visual position display of the target during the drag. The target drag point may be determined based on the drag direction, for example: the upper-left, upper-right, left and lower-left drag points may be determined as target drag points.
If the drag point is not a target drag point, the visual position difference between the selected target displayed on the display interface at the second display coordinates and the selected target displayed at the first display coordinates before the transformation processing is small or negligible, so the first size information can be adjusted directly according to the transformation distance to obtain the second size information of the transformed selected target. If the drag point is a target drag point, the selected target displayed at the second display coordinates differs noticeably in visual position from the selected target displayed at the first display coordinates before the transformation processing; to compensate for this visual position difference, the first size information is adjusted in reverse according to the transformation distance to obtain the second size information of the transformed selected target.
For example, suppose the transformation distance comprises a lateral transformation distance and a longitudinal transformation distance. When the drag point is not a target drag point, the second size information is determined as follows: latest height = historical height + longitudinal transformation distance; latest width = historical width + lateral transformation distance. When the drag point is a target drag point, the second size information is determined as follows: latest height = historical height - longitudinal transformation distance; latest width = historical width - lateral transformation distance.
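A short TypeScript sketch of this size adjustment, under the assumption that some predicate isTargetDragPoint encodes the target-drag-point rule described above (the function and parameter names are illustrative):
// Non-target handles grow with the drag; target handles are adjusted in
// reverse to compensate for the visual position difference.
function adjustSize(
  historyWidth: number,
  historyHeight: number,
  dx: number,   // lateral transformation distance
  dy: number,   // longitudinal transformation distance
  isTargetDragPoint: boolean,
): { newW: number; newH: number } {
  const sign = isTargetDragPoint ? -1 : 1;
  return {
    newW: historyWidth + sign * dx,
    newH: historyHeight + sign * dy,
  };
}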
In another embodiment, if the current transformation scene is a rotation transformation scene, the drag point is determined to be the center point. The rotation angle of the selected target is acquired from the information to be transformed. The second display coordinates are obtained from the first display coordinates, the historical coordinates and the rotation angle, and the latest coordinates of the center point are determined based on the second display coordinates, the second size information in the second attribute information, and the rotation angle.
Specifically, if the rotation angle is θ, the rotation radian Rad of the drag point can be determined by the following formula:
Rad = θ × π / 180
Taking the first display coordinates as (x, y) and the historical coordinates as (centerx, centery) as an example, the second display coordinates (leftpointx, leftpointy) are determined by the following formulas:
leftpointx = (x - centerx) * cos(Rad) - (y - centery) * sin(Rad) + centerx;
leftpointy = (x - centerx) * sin(Rad) + (y - centery) * cos(Rad) + centery.
The second size information includes the latest width (newW) and the latest height (newH) of the selected target after the transformation processing. The latest coordinates of the center point (newCenterx, newCentery) are determined using the following formulas:
newCenterx = ((leftpointx + newW/2) - leftpointx) * cos(Rad) - ((leftpointy + newH/2) - leftpointy) * sin(Rad) + (leftpointx + newW/2);
newCentery = ((leftpointx + newW/2) - leftpointx) * sin(Rad) + ((leftpointy + newH/2) - leftpointy) * cos(Rad) + (leftpointy + newH/2).
In yet another embodiment, if the rotation transformation scene is the target transformation scene, half of the latest width is taken as the lateral adjustment value of the abscissa, and half of the latest height is taken as the longitudinal adjustment value of the ordinate. The difference between the latest abscissa and the lateral adjustment value is taken as the third display abscissa, and the difference between the latest ordinate and the longitudinal adjustment value is taken as the third display ordinate. The third display coordinates are obtained from the third display abscissa and the third display ordinate and are taken as the updated second display coordinates. That is, the third display coordinates (X, Y) are determined using the following formulas: X = newCenterx - newW/2; Y = newCentery - newH/2. Determining the third display coordinates in this way ensures that, when the selected target is displayed on the display interface after the rotation transformation ends, the third display coordinates do not differ visually much from the first display coordinates, so no further translation adjustment of the rotated selected target is needed, which helps improve the user experience.
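The rotation-scene computation above (radian conversion, second display coordinates, latest center point, third display coordinates) can be tied together in one TypeScript sketch. Variable names mirror the description (leftpoint, newCenter); the function name is an assumption, and the y-components use +cos as in a standard 2D rotation:
function rotationSceneUpdate(
  x: number, y: number,               // first display coordinates
  centerX: number, centerY: number,   // historical coordinates of the drag (center) point
  theta: number,                      // rotation angle in degrees
  newW: number, newH: number,         // latest width and latest height
): { thirdX: number; thirdY: number } {
  const rad = (theta * Math.PI) / 180;
  // Second display coordinates: rotate (x, y) around the center point.
  const leftpointX = (x - centerX) * Math.cos(rad) - (y - centerY) * Math.sin(rad) + centerX;
  const leftpointY = (x - centerX) * Math.sin(rad) + (y - centerY) * Math.cos(rad) + centerY;
  // Latest center point, following the formulas above.
  const newCenterX = (newW / 2) * Math.cos(rad) - (newH / 2) * Math.sin(rad) + (leftpointX + newW / 2);
  const newCenterY = (newW / 2) * Math.sin(rad) + (newH / 2) * Math.cos(rad) + (leftpointY + newH / 2);
  // Third display coordinates: shift back by half the latest size so the
  // visual position matches the position before the transformation.
  return { thirdX: newCenterX - newW / 2, thirdY: newCenterY - newH / 2 };
}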
In an example, if the current transformation scene is not the target transformation scene, the third display coordinate and the second display coordinate are the same display coordinate, thereby helping to reduce the display control cost.
In an implementation scenario, a selected target (a rectangle) is rotated in a rotation transformation scene in the conventional manner, and the resulting display is shown in fig. 2. With the display control method provided by the invention, the same selected target is rotated in the rotation transformation scene, and the resulting display is shown in fig. 3. The display control method provided by the invention thus allows the user to transform the selected target multiple times without the display position changing visually after each transformation, which reduces the number of display control operations on the selected target and improves the user experience.
Fig. 4 is a flowchart of another display control method according to an exemplary embodiment. As shown in fig. 4, the display control method includes the following steps.
In step S401, first attribute information of a selected target is acquired.
In step S402, in response to a transformation instruction acting on the selected object in the current transformation scene, a drag point at which the selected object performs transformation in response to the transformation instruction is determined, and history coordinates of the drag point are acquired.
In step S403, in response to the end of the transformation, the latest coordinates of the drag point are acquired.
In the embodiment of the invention, the position at which the drag point stops being dragged can be determined by tracking the drag track of the drag point, so as to obtain the latest coordinates of the drag point.
In step S404, if the current transformation scene is a similarity transformation scene, a hybrid transformation matrix is acquired.
In the embodiment of the invention, the number of selected targets may be one or more, determined according to actual requirements, which is not limited in the present invention. A similarity transformation scene includes a transformation scene in which skew transformation processing is performed on at least one selected target, or a transformation scene in which rotation transformation processing is performed on multiple selected targets at the same time.
The hybrid transformation matrix can be obtained by freely combining several matrices: a rotation matrix, a scaling matrix and a skew matrix. Preferably, a hybrid transformation matrix composed of the rotation matrix, the scaling matrix and the skew matrix is selected as the hybrid transformation matrix of the similarity transformation scene, which helps improve the accuracy of the information to be transformed.
In step S405, the historical coordinates, the latest coordinates and the first display coordinates are substituted into the hybrid transformation matrix and inverse matrix processing is performed, so as to obtain the information to be transformed of the selected target after transformation processing.
In the embodiment of the invention, the information to be transformed comprises any one or more of the following: the rotation angle, scaling factor and skew angle of the selected target.
In step S406, the first attribute information is adjusted based on the information to be transformed, so as to obtain second attribute information of the selected target.
In step S407, the second display coordinates are updated based on the similarity transformation scene to obtain third display coordinates, and the transformed selected target is displayed at the third display coordinates.
Through the above embodiment, the visual position at which the selected target is displayed after the transformation processing is the same as the visual position at which it was displayed before the transformation processing, so the user does not need to manually adjust the display position of the transformed selected target. This reduces the number of display control operations required when transforming the selected target and improves the user experience.
In an implementation scenario, the rotation matrix, the scaling matrix and the skew matrix are all 3×3 matrices of the following form:
| a  c  e |
| b  d  f |        (1)
| 0  0  1 |
If the matrix is the rotation matrix, the letters in formula (1) are expressed as follows:
a = cos(Rad), b = sin(Rad), c = -sin(Rad), d = cos(Rad), e = 0, f = 0. Rad is the radian value determined from the rotation angle.
If the matrix is the scaling matrix, the letters in formula (1) are expressed as follows:
a = scaleX, b = 0, c = 0, d = scaleY, e = 0, f = 0. scaleX represents the lateral scaling factor of the selected target, and scaleY represents the longitudinal scaling factor of the selected target.
If the matrix is the skew matrix, the letters in formula (1) are expressed as follows:
a = 1, b = tan(π/180 × skewY), c = tan(π/180 × skewX), d = 1, e = 0, f = 0. skewX represents the lateral skew angle of the selected target, and skewY represents the longitudinal skew angle of the selected target.
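For illustration, the three base matrices can be constructed directly in the (a, b, c, d, e, f) form of formula (1), which mirrors the canvas/SVG affine convention; the TypeScript helper names are assumptions, not from the patent:
type Mat = { a: number; b: number; c: number; d: number; e: number; f: number };
// Rotation matrix for an angle given in degrees.
const rotateMatrix = (deg: number): Mat => {
  const rad = (deg * Math.PI) / 180;
  return { a: Math.cos(rad), b: Math.sin(rad), c: -Math.sin(rad), d: Math.cos(rad), e: 0, f: 0 };
};
// Scaling matrix with lateral and longitudinal factors.
const scaleMatrix = (scaleX: number, scaleY: number): Mat =>
  ({ a: scaleX, b: 0, c: 0, d: scaleY, e: 0, f: 0 });
// Skew matrix with lateral and longitudinal skew angles in degrees.
const skewMatrix = (skewX: number, skewY: number): Mat =>
  ({ a: 1, b: Math.tan((Math.PI / 180) * skewY), c: Math.tan((Math.PI / 180) * skewX), d: 1, e: 0, f: 0 });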
If the hybrid transformation matrix is composed of the rotation matrix, the scaling matrix and the skew matrix, it can be obtained by mixing (multiplying) the matrices as follows:
Matrix = multiply(multiply(RotateMatrix, SkewMatrix), ScaleMatrix);
Matrix.a * scaleX; Matrix.b * scaleY; Matrix.c * scaleX; Matrix.d * scaleY.
Matrix represents the hybrid transformation matrix, multiply represents matrix multiplication, RotateMatrix represents the rotation matrix, SkewMatrix represents the skew matrix, and ScaleMatrix represents the scaling matrix.
When two matrices are multiplied to form a mixed matrix, the letters in formula (1) are expressed as follows:
a=Matrix1.a*Matrix2.a+Matrix1.c*Matrix2.b;
b=Matrix1.b*Matrix2.a+Matrix1.d*Matrix2.b;
c=Matrix1.a*Matrix2.c+Matrix1.c*Matrix2.d;
d=Matrix1.b*Matrix2.c+Matrix1.d*Matrix2.d;
e=Matrix1.a*Matrix2.e+Matrix1.c*Matrix2.f+Matrix1.e;
f=Matrix1.b*Matrix2.e+Matrix1.d*Matrix2.f+Matrix1.f。
Matrix1 represents the hybrid transformation matrix of the selected target before the similarity transformation is performed, and Matrix2 represents the hybrid transformation matrix of the selected target after the similarity transformation is completed.
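The component formulas above translate directly into a multiply helper, with Matrix1 as the left operand and Matrix2 as the right operand. A minimal TypeScript sketch, with the Mat type repeated so the snippet stands alone:
type Mat = { a: number; b: number; c: number; d: number; e: number; f: number };
// Component-wise affine matrix multiplication, exactly as listed above.
function multiply(m1: Mat, m2: Mat): Mat {
  return {
    a: m1.a * m2.a + m1.c * m2.b,
    b: m1.b * m2.a + m1.d * m2.b,
    c: m1.a * m2.c + m1.c * m2.d,
    d: m1.b * m2.c + m1.d * m2.d,
    e: m1.a * m2.e + m1.c * m2.f + m1.e,
    f: m1.b * m2.e + m1.d * m2.f + m1.f,
  };
}
// Hybrid matrix in the order given earlier: rotate, then skew, then scale.
// const matrix = multiply(multiply(rotateMatrix(30), skewMatrix(10, 0)), scaleMatrix(2, 2));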
By performing inverse matrix processing on the hybrid transformation matrix, the rotation angle (θ), the scaling factors (scaleX, scaleY) and the skew angle (skewX) can be obtained. When the calculation is performed by the electronic device, the following procedure may be used:
piBy180 = π/180; rotate = atan2(b, a)/piBy180;
scaleX = sqrt(pow(a, 2) + pow(b, 2)); scaleY = (a*d - b*c)/scaleX;
skewX = rotate + atan2(c, d)/piBy180; skewY = 0.
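The same decomposition as a small TypeScript function, returning the angles in degrees; the function name is an assumption, and the Mat type is repeated for self-containment:
type Mat = { a: number; b: number; c: number; d: number; e: number; f: number };
// Recover rotation angle, scaling factors and skew angle from a mixed matrix.
function decompose(m: Mat): { rotate: number; scaleX: number; scaleY: number; skewX: number; skewY: number } {
  const piBy180 = Math.PI / 180;
  const rotate = Math.atan2(m.b, m.a) / piBy180;
  const scaleX = Math.sqrt(m.a ** 2 + m.b ** 2);
  const scaleY = (m.a * m.d - m.b * m.c) / scaleX;
  const skewX = rotate + Math.atan2(m.c, m.d) / piBy180;
  return { rotate, scaleX, scaleY, skewX, skewY: 0 };
}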
In another implementation scenario, a selected target (a rectangle) undergoes similarity transformation in a similarity transformation scene in the conventional manner, and the resulting display is shown in fig. 5. With the display control method provided by the invention, the same selected target undergoes similarity transformation in the similarity transformation scene, and the resulting display is shown in fig. 6. The display control method provided by the invention thus allows the user to transform the selected target multiple times without the display position changing visually after each transformation, which reduces the number of display control operations on the selected target and improves the user experience.
Based on the same inventive concept, the invention further provides a display control device.
Fig. 7 is a block diagram illustrating a structure of a display control apparatus according to an exemplary embodiment. As shown in fig. 7, the display control apparatus includes a first acquisition unit 701, a second acquisition unit 702, an adjustment unit 703, and an execution unit 704.
A first obtaining unit 701, configured to obtain first attribute information of a selected object, where the attribute information includes first display coordinates and first size information of the selected object in a current transformation scene;
a second obtaining unit 702, configured to obtain information to be transformed of the selected target in response to a transformation instruction acting on the selected target in a current transformation scene, where the information to be transformed includes a transformation distance;
an adjusting unit 703, configured to adjust first attribute information based on information to be transformed to obtain second attribute information of the selected object, where the first attribute information includes a first display coordinate and first size information of the selected object in the current transformation scene, and the second attribute information includes a second display coordinate of the selected object after transformation processing;
and the execution unit 704 is configured to update the second display coordinates based on the current transformation scene to obtain third display coordinates, and display the selected target after the transformation processing on the third display coordinates.
In one embodiment, the second acquisition unit 702 determines the transformation distance by means of: a first determining unit, configured to determine the drag point at which the selected target performs the transformation in response to the transformation instruction, and to determine the historical coordinates of the drag point from the first attribute information; a second determining unit, configured to determine the latest coordinates of the drag point after the transformation processing based on the information to be transformed; and a third determining unit, configured to determine the transformation distance of the selected target in the current transformation scene according to the distance between the latest coordinates and the historical coordinates.
In another embodiment, the first attribute information includes first size information and the second attribute information includes second size information. The adjusting unit includes: the judging unit is used for judging whether the drag point is a target drag point or not; the first adjusting subunit is used for adjusting the first size information according to the transformation distance if the drag point is not the target drag point to obtain second size information after the selected target is transformed; and the second adjusting subunit is used for reversely adjusting the first size information according to the transformation distance if the dragging point is the target dragging point to obtain second size information after the selected target is subjected to transformation processing.
In yet another embodiment, the second determining unit includes: a drag point determining unit, configured to determine the drag point as the center point if the current transformation scene is a rotation transformation scene; a third acquisition unit, configured to acquire the rotation angle of the selected target from the information to be transformed; a first coordinate determining unit, configured to obtain the second display coordinates from the first display coordinates, the historical coordinates and the rotation angle; and a second coordinate determining unit, configured to determine the latest coordinates of the center point based on the second display coordinates, the second size information in the second attribute information, and the rotation angle.
In yet another embodiment, the second size information includes a latest width and a latest height of the selected object after the transformation process; the latest coordinates include the latest abscissa and the latest ordinate. The execution unit 704 includes: the first execution subunit is used for taking half of the latest width as a transverse adjustment value of an abscissa and half of the latest height as a longitudinal adjustment value of an ordinate if the rotation transformation scene is a target transformation scene; a second execution subunit configured to take a difference between the latest abscissa and the lateral adjustment value as a third display abscissa and a difference between the latest ordinate and the longitudinal adjustment value as a third display ordinate; and the third execution subunit is used for obtaining a third display coordinate according to the third display abscissa and the third display ordinate, and taking the third display coordinate as the coordinate after updating the second display coordinate.
In yet another embodiment, the second acquisition unit 702 includes: a first acquisition subunit, configured to, in response to a transformation instruction acting on the selected target in the current transformation scene, determine the drag point at which the selected target performs the transformation and acquire the historical coordinates of the drag point; a second acquisition subunit, configured to acquire the latest coordinates of the drag point in response to the end of the transformation; a third acquisition subunit, configured to acquire a hybrid transformation matrix if the current transformation scene is a similarity transformation scene; and an information acquisition unit, configured to substitute the historical coordinates, the latest coordinates and the first display coordinates into the hybrid transformation matrix and perform inverse matrix processing to obtain the information to be transformed of the selected target after transformation processing, where the information to be transformed comprises any one or more of the following: the rotation angle, scaling factor and skew angle of the selected target.
In yet another embodiment, the hybrid transformation matrix is composed of a rotation matrix, a scaling matrix and a skew matrix.
For the specific limitations and beneficial effects of the display control apparatus, reference may be made to the limitations of the display control method above, which are not repeated here. The modules described above may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in hardware in, or independent of, a processor in the electronic device, or stored as software in a memory in the electronic device, so that the processor can call and execute the operations corresponding to the modules.
Fig. 8 is a schematic diagram of a hardware structure of an electronic device according to an exemplary embodiment. As shown in fig. 8, the device includes one or more processors 810 and a memory 820, the memory 820 including persistent memory, volatile memory and a hard disk, one processor 810 being illustrated in fig. 8. The apparatus may further include: an input device 830 and an output device 840.
Processor 810, memory 820, input device 830, and output device 840 may be connected by a bus or other means, for example in fig. 8.
The processor 810 may be a central processing unit (Central Processing Unit, CPU). The processor 810 may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or a combination thereof. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 820, as a non-transitory computer-readable storage medium including persistent memory, volatile memory and a hard disk, can be used to store non-transitory software programs, non-transitory computer-executable programs and modules, such as the program instructions/modules corresponding to the display control method in the embodiments of the present application. The processor 810 executes the various functional applications and data processing of the server by running the non-transitory software programs, instructions and modules stored in the memory 820, i.e., implements any of the display control methods described above.
Memory 820 may include a storage program area and a storage data area; the storage program area may store an operating system and at least one application program required for a function, and the storage data area may store data created according to use of the device, etc. In addition, memory 820 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, memory 820 may optionally include memory located remotely from processor 810, which may be connected to the data processing apparatus via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 830 may receive input numeric or character information and generate key signal inputs related to user settings and function control. The output device 840 may include a display device such as a display screen.
One or more modules are stored in the memory 820 that, when executed by the one or more processors 810, perform the methods illustrated in fig. 1-6.
The product can execute the method provided by the embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method. Technical details which are not described in detail in the present embodiment can be found in the embodiments shown in fig. 1 to 6.
The embodiment of the invention also provides a non-transitory computer storage medium storing computer-executable instructions, where the computer-executable instructions can execute the display control method in any of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), a solid-state drive (SSD), or the like; the storage medium may also comprise a combination of the above kinds of memories.
Obviously, the above embodiments are merely examples given for clarity of description and are not intended to limit the implementations. Other variations or modifications in different forms may be made by those of ordinary skill in the art on the basis of the above description. It is neither necessary nor possible to list all implementations exhaustively here. Variations or modifications derived from the above that remain within the scope of the invention shall fall within the protection scope of the invention.

Claims (10)

1. A display control method, characterized in that the method comprises:
acquiring first attribute information of a selected target;
responding to a transformation instruction acting on the selected target in a current transformation scene, and acquiring information to be transformed of the selected target, wherein the information to be transformed comprises a transformation distance;
adjusting the first attribute information based on the information to be transformed to obtain second attribute information of the selected target, wherein the first attribute information comprises a first display coordinate and first size information of the selected target in the current transformation scene, and the second attribute information comprises a second display coordinate of the selected target after transformation;
and updating the second display coordinates based on the current transformation scene to obtain third display coordinates, and displaying the selected target after transformation processing on the third display coordinates.
2. The method of claim 1, wherein the transformation distance is determined by:
determining a drag point of the selected target for executing transformation in response to the transformation instruction, and determining historical coordinates of the drag point from the first attribute information;
based on the information to be transformed, determining the latest coordinates of the transformed drag points;
and determining the transformation distance of the selected target in the current transformation scene according to the distance between the latest coordinate and the historical coordinate.
3. The method of claim 2, wherein
the first attribute information includes first size information, and the second attribute information includes second size information;
the adjusting the first attribute information based on the information to be transformed to obtain second attribute information of the selected target includes:
judging whether the drag point is a target drag point or not;
if the drag point is not the target drag point, adjusting the first size information according to the transformation distance to obtain second size information of the selected target after transformation processing;
and if the drag point is the target drag point, reversely adjusting the first size information according to the transformation distance to obtain second size information of the selected target after transformation processing.
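A sketch of the size adjustment in claim 3, under the assumption that dragging an ordinary handle grows the selected target with the drag, while the target drag point (e.g. the opposite handle) applies the same transformation distance with reversed sign. The handle semantics are an assumption, not the patent's definition.

interface Size { width: number; height: number; }

// Obtain the second size information from the first size information.
function adjustSize(first: Size, dx: number, dy: number, isTargetDragPoint: boolean): Size {
  const sign = isTargetDragPoint ? -1 : 1; // reverse adjustment for the target drag point
  return {
    width: Math.max(0, first.width + sign * dx),
    height: Math.max(0, first.height + sign * dy),
  };
}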
4. A method according to claim 2 or 3, wherein said determining the latest coordinates of the drag point based on the information to be transformed comprises:
if the current transformation scene is a rotation transformation scene, determining the drag point as a center point;
acquiring the rotation angle of the selected target from the information to be transformed;
obtaining a second display coordinate according to the first display coordinate, the historical coordinate and the rotation angle;
and determining the latest coordinate of the center point based on the second display coordinate, second size information in the second attribute information and the rotation angle.
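A sketch of the rotation case in claim 4: the drag point is treated as the center, the first display coordinate is rotated about it by the rotation angle to obtain the second display coordinate, and the center's latest coordinate is then derived from that corner, the second size information and the angle. The top-left anchoring convention is an assumption.

interface Point { x: number; y: number; }

// Rotate point p about a center by angleRad (counter-clockwise).
function rotateAbout(p: Point, center: Point, angleRad: number): Point {
  const cos = Math.cos(angleRad), sin = Math.sin(angleRad);
  const dx = p.x - center.x, dy = p.y - center.y;
  return { x: center.x + dx * cos - dy * sin, y: center.y + dx * sin + dy * cos };
}

// Latest coordinate of the center point from the rotated top-left corner,
// the latest width/height and the rotation angle: corner + rotated half-diagonal.
function centerFromCorner(corner: Point, width: number, height: number, angleRad: number): Point {
  const cos = Math.cos(angleRad), sin = Math.sin(angleRad);
  const hx = width / 2, hy = height / 2;
  return { x: corner.x + hx * cos - hy * sin, y: corner.y + hx * sin + hy * cos };
}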
5. The method of claim 4, wherein:
the second size information comprises the latest width and the latest height of the selected target after transformation processing; the latest coordinates comprise a latest abscissa and a latest ordinate;
the updating the second display coordinates based on the current transformation scene to obtain third display coordinates includes:
if the rotation transformation scene is a target transformation scene, taking half of the latest width as a transverse adjustment value of an abscissa and half of the latest height as a longitudinal adjustment value of an ordinate;
taking the difference between the latest abscissa and the transverse adjustment value as a third display abscissa and the difference between the latest ordinate and the longitudinal adjustment value as a third display ordinate;
and obtaining a third display coordinate according to the third display abscissa and the third display ordinate, and taking the third display coordinate as the updated coordinate of the second display coordinate.
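A sketch of the coordinate update in claim 5: subtracting half of the latest width and half of the latest height from the latest (center) coordinate converts a center anchor into a top-left display anchor. Names are illustrative.

interface Point { x: number; y: number; }

// Third display coordinate from the latest center coordinate and the latest size.
function thirdDisplayCoordinate(center: Point, latestWidth: number, latestHeight: number): Point {
  return {
    x: center.x - latestWidth / 2,  // transverse adjustment value
    y: center.y - latestHeight / 2, // longitudinal adjustment value
  };
}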
6. The method according to claim 1, wherein the obtaining information to be transformed of the selected target in response to a transformation instruction applied to the selected target in a current transformation scene includes:
responding to a transformation instruction acting on the selected target in a current transformation scene, determining a drag point of the selected target for executing transformation in response to the transformation instruction, and acquiring historical coordinates of the drag point;
responding to the end of the transformation, and acquiring the latest coordinates of the drag point;
if the current transformation scene is a similar transformation scene, acquiring a mixed transformation matrix;
substituting the historical coordinates, the latest coordinates and the first display coordinates into the mixed transformation matrix and performing inverse matrix processing to obtain information to be transformed of the selected target after transformation processing, wherein the information to be transformed comprises any one or more of the following information: the rotation angle, scaling factor and miter angle of the selected target.
7. The method of claim 6, wherein the mixed transformation matrix consists of a rotation matrix, a scaling matrix, and a chamfer matrix.
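A sketch of claims 6 and 7, assuming the mixed transformation matrix is the 2x2 linear part M = R(theta) * S(sx, sy) * K(phi), where R is the rotation matrix, S the scaling matrix and K a shear ("chamfer") matrix. Recovering the rotation angle, the scale factors and the shear ("miter") angle from M is then a closed-form decomposition; the claim's "inverse matrix processing" is represented here by that solve. Matrix order and conventions are assumptions.

type Mat2 = [[number, number], [number, number]];

// M = R(theta) * S(sx, sy) * K(phi), with K = [[1, tan(phi)], [0, 1]].
function compose(theta: number, sx: number, sy: number, phi: number): Mat2 {
  const cos = Math.cos(theta), sin = Math.sin(theta), t = Math.tan(phi);
  return [
    [sx * cos, sx * cos * t - sy * sin],
    [sx * sin, sx * sin * t + sy * cos],
  ];
}

// Recover (theta, sx, sy, phi) from a non-degenerate M.
function decompose(m: Mat2): { theta: number; sx: number; sy: number; phi: number } {
  const a = m[0][0], c = m[0][1], b = m[1][0], d = m[1][1];
  const theta = Math.atan2(b, a);                       // rotation angle
  const sx = Math.hypot(a, b);                          // x scale factor
  const sy = (a * d - b * c) / sx;                      // y scale factor (det / sx)
  const phi = Math.atan2(a * c + b * d, a * a + b * b); // shear ("miter") angle
  return { theta, sx, sy, phi };
}

For a non-degenerate matrix, decompose(compose(theta, sx, sy, phi)) recovers the original parameters up to angle wrapping.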
8. A display control apparatus, characterized in that the apparatus comprises:
a first acquisition unit configured to acquire first attribute information of a selected target;
the second acquisition unit is used for responding to a transformation instruction acting on the selected target in the current transformation scene to acquire information to be transformed of the selected target, wherein the information to be transformed comprises a transformation distance;
the adjusting unit is used for adjusting the first attribute information based on the information to be transformed to obtain second attribute information of the selected target, the first attribute information comprises a first display coordinate and first size information of the selected target in the current transformation scene, and the second attribute information comprises a second display coordinate of the selected target after transformation processing;
and the display unit is used for updating the second display coordinates based on the current transformation scene to obtain third display coordinates, and displaying the selected target after transformation processing at the third display coordinates.
9. An electronic device comprising a memory and a processor, the memory and the processor being communicatively coupled to each other, the memory having stored therein computer instructions, the processor executing the computer instructions to perform the display control method of any of claims 1-7.
10. A computer-readable storage medium storing computer instructions for causing the computer to execute the display control method according to any one of claims 1 to 7.
CN202310266080.4A 2023-03-13 2023-03-13 Display control method and device, electronic equipment and medium Active CN116301527B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310266080.4A CN116301527B (en) 2023-03-13 2023-03-13 Display control method and device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN116301527A (en) 2023-06-23
CN116301527B (en) 2023-11-21

Family

ID=86788338

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310266080.4A Active CN116301527B (en) 2023-03-13 2023-03-13 Display control method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN116301527B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113467600A (en) * 2020-03-31 2021-10-01 深圳光峰科技股份有限公司 Information display method, system and device based on augmented reality and projection equipment
CN113835521A (en) * 2021-09-02 2021-12-24 北京城市网邻信息技术有限公司 Scene view switching method and device, electronic equipment and readable medium
CN114100124A (en) * 2020-08-31 2022-03-01 上海米哈游天命科技有限公司 Game scene presenting method, device, equipment and medium in shooting game
US20230011625A1 (en) * 2019-12-24 2023-01-12 Nec Corporation Information processing device, control method, and storage medium
CN115619905A (en) * 2022-10-24 2023-01-17 北京力控元通科技有限公司 Primitive editing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN116301527B (en) 2023-11-21

Similar Documents

Publication Publication Date Title
CN111897639B (en) Image augmentation method, image augmentation device, computer device, and storage medium
CN108924525B (en) Image brightness adjusting method and device, computer equipment and storage medium
WO2020211812A1 (en) Aircraft landing method and apparatus
CN109961064B (en) Identity card text positioning method and device, computer equipment and storage medium
CN112164032B (en) Dispensing method and device, electronic equipment and storage medium
CN105430331A (en) Method and device for adjusting display direction of monitor image
CN113850807B (en) Image sub-pixel matching positioning method, system, device and medium
WO2020151153A1 (en) Image processing method and apparatus, and computer device and storage medium
US10783685B2 (en) Banner image generation
US10922859B2 (en) Vector art object deformation techniques
CN113759342B (en) Laser radar scanning method and device, computer equipment and storage medium
CN116301527B (en) Display control method and device, electronic equipment and medium
DE202015009118U1 (en) Dynamic integration of offline and online suggestions in a geographic application
CN114049268A (en) Image correction method, image correction device, electronic equipment and computer-readable storage medium
CN106780308B (en) Image perspective transformation method
EP4386610A1 (en) Intelligent rechecking method and apparatus for graph model
CN113240578B (en) Image special effect generation method and device, electronic equipment and storage medium
CN111951349B (en) Method and device for adjusting graph vertex types and electronic equipment
DE112016006020T5 (en) PICTORIAL PROCESSOR FOR PORTABLE DEVICE
CN112258386A (en) Image deformation acceleration processing method and device, electronic equipment and readable storage medium
CN111125281B (en) Map self-adaptive adjusting method and system, computer equipment and medium
CN111324826B (en) Map viewpoint dynamic method and device based on Mapbox in user following mode
CN117237441B (en) Sub-pixel positioning method, sub-pixel positioning system, electronic equipment and medium
US11812153B2 (en) Systems and methods for fisheye camera calibration and bird's-eye-view image generation in a simulation environment
US20240202049A1 (en) Gateway apparatus and method for dynamically applying api settings

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant