CN115933949A - Coordinate conversion method and device, electronic equipment and storage medium - Google Patents

Coordinate conversion method and device, electronic equipment and storage medium

Info

Publication number
CN115933949A
CN115933949A (application CN202211528710.2A)
Authority
CN
China
Prior art keywords
canvas
screen
target point
position coordinates
size
Prior art date
Legal status
Pending
Application number
CN202211528710.2A
Other languages
Chinese (zh)
Inventor
孟召龙
张晓程
Current Assignee
Shanghai Hongji Information Technology Co Ltd
Original Assignee
Shanghai Hongji Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Hongji Information Technology Co Ltd filed Critical Shanghai Hongji Information Technology Co Ltd
Priority to CN202211528710.2A
Publication of CN115933949A
Legal status: Pending

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a coordinate conversion method and device, an electronic device, and a storage medium. The method includes: acquiring the relative position coordinates of a target point in the canvas of an image file; and converting the relative position coordinates according to the mapping ratio between the canvas and the screen displaying the canvas, to obtain the absolute position coordinates of the target point on the screen. In this scheme, the relative position coordinates of the target point in the canvas of the image file are converted through the mapping ratio between the canvas and the screen displaying the canvas, so that the absolute position coordinates of the target point on the screen are obtained and can be used to determine the selected target object, which improves the accuracy with which a program determines the selected target object on the screen.

Description

Coordinate conversion method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing and coordinate transformation, and in particular, to a coordinate transformation method and apparatus, an electronic device, and a storage medium.
Background
Currently, when Computer Aided Design (CAD) software is operated through an application program, the coordinates on the canvas of a CAD file are relative position coordinates, and the canvas of the CAD file can be scaled without limit. It is therefore difficult for a program to control the mouse precisely enough to select the corresponding point or graphic on the screen, so current programs have low accuracy when selecting a target object on the screen.
Disclosure of Invention
An object of the embodiments of the present application is to provide a coordinate transformation method, an apparatus, an electronic device, and a storage medium, which are used to solve the problem of low accuracy of selecting a target object on a screen by a program.
The embodiment of the application provides a coordinate conversion method, which comprises the following steps: acquiring the relative position coordinates of a target point in the canvas of the image file; and converting the relative position coordinates according to the mapping proportion between the canvas and a screen displaying the canvas to obtain the absolute position coordinates of the target point on the screen. In the implementation process of the scheme, the relative position coordinates of the target point in the canvas of the image file are converted through the mapping proportion between the canvas and the screen displaying the canvas, so that the absolute position coordinates of the target point on the screen are obtained, the selected target object is determined by using the absolute position coordinates of the target point on the screen, and the accuracy of the program for determining the selected target object on the screen is improved.
Optionally, in this embodiment of the present application, obtaining the relative position coordinates of the target point in the canvas of the image file includes: determining an area box containing the target point according to the size of the screen and the size of the canvas; moving the target point to the center point coordinates of the area box according to the relative position coordinates; and converting the center point coordinates according to the mapping relationship between the area box and the canvas to obtain the relative position coordinates of the target point in the canvas. In this scheme, the center point coordinates are converted according to the mapping relationship between the area box and the canvas to obtain the relative position coordinates of the target point in the canvas, the absolute position coordinates of the target point on the screen are then used to determine the selected target object, and the accuracy with which the program determines the selected target object on the screen is improved.
Optionally, in this embodiment of the present application, determining an area box containing the target point according to the size of the screen and the size of the canvas includes: judging whether the size of the canvas is smaller than the size of the screen; if so, determining the canvas as the area box, and otherwise determining a randomly selected preset-proportion area in the screen as the area box, wherein the size of the area box is smaller than the size of the screen. In this scheme, by determining a randomly selected preset-proportion area in the screen as the area box, the center point coordinates are converted according to the mapping relationship between the area box and the canvas, which improves the accuracy with which a program converts the center point coordinates.
Optionally, in this embodiment of the application, before converting the relative position coordinates according to a mapping ratio between the canvas and a screen on which the canvas is displayed, the method further includes: acquiring a width value of a transverse shaft of a canvas and a size width value of a screen; and dividing the width value of the horizontal axis of the canvas by the size width value of the screen to obtain the mapping ratio.
Optionally, in this embodiment of the application, before converting the relative position coordinates according to a mapping ratio between the canvas and a screen on which the canvas is displayed, the method further includes: acquiring a length value of a longitudinal axis of a canvas and a size length value of a screen; and dividing the length value of the longitudinal axis of the canvas by the length value of the size of the screen to obtain the mapping proportion.
Optionally, in this embodiment of the present application, after obtaining the absolute position coordinates of the target point on the screen, the method further includes: and responding to the interactive operation on the absolute position coordinates, selecting a target object in the image file by using the absolute position coordinates, and performing interactive operation on the target object. In the implementation process of the scheme, the target object in the image file is selected by using the absolute position coordinates, and the target object is subjected to interactive operation, so that the accuracy of the interactive operation of the program on the target object on the screen is improved.
Optionally, in this embodiment of the present application, the target point is a pixel point of a target object, and the target object includes: an element, graphic, or image layer in the canvas.
An embodiment of the present application further provides a coordinate transformation apparatus, including: the relative coordinate acquisition module is used for acquiring the relative position coordinates of the target point in the canvas of the image file; and the absolute coordinate obtaining module is used for converting the relative position coordinate according to the mapping proportion between the canvas and the screen displaying the canvas to obtain the absolute position coordinate of the target point on the screen.
Optionally, in an embodiment of the present application, the relative coordinate obtaining module includes: an area box determining submodule, configured to determine an area box containing the target point according to the size of the screen and the size of the canvas; a center coordinate moving submodule, configured to move the target point to the center point coordinates of the area box according to the relative position coordinates; and a center coordinate conversion submodule, configured to convert the center point coordinates according to the mapping relationship between the area box and the canvas to obtain the relative position coordinates of the target point in the canvas.
Optionally, in an embodiment of the present application, the area block determination sub-module includes: the canvas size judging submodule is used for judging whether the size of the canvas is smaller than that of the screen or not; and the area box determining submodule is used for determining the canvas as the area box if the size of the canvas is smaller than that of the screen, otherwise, determining a preset proportion area randomly selected from the screen as the area box, wherein the size of the area box is smaller than that of the screen.
Optionally, in an embodiment of the present application, the coordinate transformation apparatus further includes: the transverse shaft width acquisition module is used for acquiring a transverse shaft width value of the canvas and a size width value of the screen; and the first proportion obtaining module is used for dividing the width value of the transverse axis of the canvas by the size width value of the screen to obtain the mapping proportion.
Optionally, in an embodiment of the present application, the coordinate transformation apparatus further includes: the vertical axis size obtaining module is used for obtaining the vertical axis length value of the canvas and the size length value of the screen; and the second proportion obtaining module is used for dividing the length value of the longitudinal axis of the canvas by the length value of the size of the screen to obtain the mapping proportion.
Optionally, in an embodiment of the present application, the coordinate transformation apparatus further includes: and the interactive operation processing module is used for responding to the interactive operation on the absolute position coordinates, selecting the target object in the image file by using the absolute position coordinates and carrying out interactive operation on the target object.
Optionally, in this embodiment of the present application, the target point is a pixel point of a target object, and the target object includes: an element, graphic, or image layer in the canvas.
An embodiment of the present application further provides an electronic device, including: a processor and a memory, the memory storing processor-executable machine-readable instructions, the machine-readable instructions when executed by the processor performing the method as described above.
Embodiments of the present application also provide a computer-readable storage medium having a computer program stored thereon, where the computer program is executed by a processor to perform the method as described above.
Additional features and advantages of embodiments of the present application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of embodiments of the present application.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flowchart of a coordinate transformation method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating a process of converting relative position coordinates into absolute position coordinates according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating a process for determining a region block according to an embodiment of the present application;
FIG. 4 is a diagram illustrating a specific calculation process of center point coordinates provided in an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating two determination methods of the mapping ratio provided by the embodiment of the present application;
fig. 6 is a schematic structural diagram of a coordinate transformation apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as presented in the figures, is not intended to limit the scope of the embodiments of the present application, as claimed, but is merely representative of selected embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application without making any creative effort belong to the protection scope of the embodiments of the present application.
It is to be understood that "first" and "second" in the embodiments of the present application are used to distinguish similar objects. Those skilled in the art will appreciate that the terms "first," "second," etc. do not denote any order or quantity, nor do the terms "first," "second," etc. denote any order or importance.
Before describing the coordinate transformation method provided by the embodiment of the present application, some concepts related to the embodiment of the present application are described:
computer Aided Design (CAD) refers to Design software developed for Computer Aided (e.g., autoCAD) for two-dimensional drawing, detailed drawing, design documentation and basic three-dimensional Design, and has a good user interface for performing various operations through an interactive menu or command line manner.
Robotic Process Automation (RPA) technology can simulate the keyboard and mouse operations an employee performs on a computer in daily work, and can replace a human in executing operations such as logging in to a system, operating software, reading and writing data, downloading files, and reading mail. As the virtual workforce of an enterprise, such automation robots free staff from repetitive, low-value work and let them put their energy into higher-value work, so that the enterprise can reduce costs and increase benefits while carrying out its digital and intelligent transformation.
An RPA robot is a software robot that takes over manual tasks in business processes and interacts with a computer's front-end systems the way a human does. It can therefore be regarded as a software robot running on a personal PC or a server which, by imitating the operations a user performs on the computer, automatically repeats activities such as retrieving mail, downloading attachments, logging in to systems, and processing and analyzing data, and does so quickly, accurately, and reliably.
It should be noted that the coordinate conversion method provided in the embodiments of the present application may be executed by an electronic device, where the electronic device refers to a device terminal or a server having the function of executing a computer program. The device terminal includes, for example: a smart phone, a personal computer, a tablet computer, a personal digital assistant, or a mobile internet access device. A server refers to a device that provides computing services over a network, such as an x86 server or a non-x86 server, where non-x86 servers include mainframes, minicomputers, and UNIX servers.
Application scenarios to which the coordinate conversion method is applicable are described below; these include, but are not limited to, RPA drawing scenes and drawing scenes in program-controlled drawing software (e.g., CAD, Photoshop, 3D Studio Max, SketchUp, SolidWorks, Blender, Maya, etc.). Taking an RPA drawing scene as an example: although both the RPA robot and the traditional physical robot address the speed and accuracy of human work through specific rules, the traditional physical robot combines software and hardware and can only work when its software runs with the support of specific hardware, whereas the RPA robot exists purely at the software level and can be deployed on any PC or server to complete the specified work as long as the corresponding RPA software is installed.
In a specific practical process, it has been noted that although the RPA robot can acquire the relative position coordinates in an image file edited by software such as CAD, Photoshop, 3D Studio Max, SketchUp, SolidWorks, Blender, or Maya, when selecting or operating a target object (e.g., a graphic) corresponding to those relative position coordinates, it is difficult to control the mouse so that it selects the target object.
The reason is that the mouse coordinates acquired by the RPA robot are actually absolute position coordinates on the screen where the mouse is located, whereas the above software (CAD, Photoshop, 3D Studio Max, SketchUp, SolidWorks, Blender, Maya, etc.) can only provide the relative position coordinates of the target point of the target object (e.g., a graphic) in the image file, and the relative position coordinates and the absolute position coordinates belong to two completely different coordinate systems.
Having recognized the above drawbacks, the inventors proposed, after intensive research, a coordinate conversion method that can convert the relative position coordinates of a target point in an image file into the absolute position coordinates of that target point on the screen. In practice, the coordinate conversion method can be used to enhance or extend the functions of the RPA robot, so that the RPA robot accurately converts the relative position coordinates of a target point in an image file edited by software such as CAD, Photoshop, 3D Studio Max, SketchUp, SolidWorks, Blender, or Maya into the absolute position coordinates of the target point on the screen, thereby accurately selecting and operating the graphic or image corresponding to the target point, and finally realizing automated drawing processing, saving labor cost, and improving processing efficiency.
Please refer to a schematic flow chart of a coordinate transformation method provided in the embodiment of the present application shown in fig. 1; the coordinate conversion method has the main idea that the relative position coordinate of a target point in the canvas of an image file is converted into the absolute position coordinate of the target point on the screen through the mapping proportion between the canvas and the screen displaying the canvas, so that the effect of accurately determining one pixel point of a corresponding target object on the screen through a program is achieved. The embodiment of the coordinate conversion method may include:
step S110: and acquiring the relative position coordinates of the target point in the canvas of the image file.
It will be appreciated that the canvas of an image file is typically infinitely scalable, such as: the canvas in the image file edited by software such as CAD is infinitely zoomed, so that when a program selects and operates a target object in the CAD image file, only the relative position coordinate of the target point in the canvas of the image file can be obtained, and it is difficult to accurately obtain the absolute position coordinate of the target point on the target object on the screen.
Step S120: and converting the relative position coordinates according to the mapping proportion between the canvas and a screen displaying the canvas to obtain the absolute position coordinates of the target point on the screen.
Please refer to fig. 2, which illustrates a schematic diagram of a process for converting a relative position coordinate into an absolute position coordinate according to an embodiment of the present application; when converting the relative position coordinates of the target point into the absolute position coordinates of the target point on the screen, two cases are divided: in the first case (see left branch in fig. 2), when the size of the canvas is larger than that of the screen, the screen can display only a partial image of the area near the target point of the canvas and cannot display all the images in the canvas; in the second case (see the right branch in fig. 2), when the size of the canvas is smaller than or equal to the size of the screen, the screen can display the entire canvas, i.e., the screen can display all the images in the canvas. The absolute position coordinates of the target point on the screen may be used to select a target object in the image file, and the target point may be a pixel point of the target object (e.g., an element in a canvas in a CAD image file, a CAD irregular figure, a CAD tile, an image layer, or a CAD image object).
Alternatively, when the size of the canvas is larger than the size of the screen, in order to identify a specific area (e.g., the lower left area in fig. 2) partial image displayed in the screen, a template (e.g., a CAD canvas template) in a pre-registered canvas template library may be used to match with the current screen display image, i.e., a specific area partial image of the current screen display image is matched from the canvas template library, so as to identify information (where the information includes, but is not limited to, coordinate information of the canvas at four opposite corners of the screen, and a ratio between the canvas and the screen, etc.) of the specific area partial image displayed in the screen, and then, a target point may be selected from all pixel points in the specific area partial image to perform coordinate conversion. The canvas template library can be a library for storing CAD canvas style templates, and the size and the color of the canvas of the CAD can be set according to specific conditions for different CAD software or CAD users.
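For illustration, the template-matching step just described could be implemented with a standard matcher such as OpenCV's matchTemplate, as in the following Python sketch; the function name, the 0.8 similarity threshold, and the choice of OpenCV are assumptions made for this example rather than details taken from the embodiment.

    import cv2

    def locate_canvas_region(screen_img, template_img, threshold=0.8):
        """Locate a pre-registered canvas template within a screenshot.

        Returns (x1, y1, x2, y2) of the matched region on the screen, or None
        if no match exceeds the assumed similarity threshold.
        """
        result = cv2.matchTemplate(screen_img, template_img, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val < threshold:
            return None  # the template does not appear on the current screen
        h, w = template_img.shape[:2]
        x, y = max_loc  # top-left corner of the best match
        return x, y, x + w, y + h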
It is understood that the target point may be obtained through an Application Programming Interface (API) of the application program, may be selected from a target object (e.g., a target pixel point, an irregular CAD figure, a CAD block, or a CAD image object in a CAD image file) with the mouse, or may be determined and selected after its coordinates are input through the keyboard.
In the implementation process, the mapping proportion between the canvas and the screen displaying the canvas is used for converting the relative position coordinates of the target point in the canvas of the image file to obtain the absolute position coordinates of the target point on the screen, and the target object in the image file is determined according to the absolute position coordinates of the target point, so that the purpose of accurately determining one pixel point of the corresponding target object on the screen through the program is effectively achieved, and the accuracy of the program for determining the selected target object on the screen is improved.
Please refer to fig. 3, which illustrates a schematic process diagram of determining a region block according to an embodiment of the present application; as an optional implementation of the step S110, when obtaining the relative position coordinate of the target point in the canvas of the image file, the target point may be obtained through an area box containing the target point, and the implementation may include:
step S111: and determining an area box containing the target point according to the size of the screen and the size of the canvas.
The embodiment of step S111 described above is, for example: an executable program compiled or interpreted from a preset programming language judges whether the size of the canvas is smaller than the size of the screen; programming languages that may be used include, for example, C, C++, Java, BASIC, JavaScript, LISP, Shell, Perl, Ruby, Python, and PHP. If the size of the canvas is smaller than the size of the screen, the canvas itself is determined to be the area box, and the size of the area box is then smaller than the size of the screen. If the size of the canvas is larger than or equal to the size of the screen, a randomly selected preset-proportion area in the screen is determined to be the area box. The size of the area box can be set according to the specific situation; generally, the minimum size of the area box can be set to a preset proportion of the screen size (for example, 10% or 20% of the screen size), and the maximum size of the area box can cover the whole canvas (for example, the same size as the screen). In this scheme, by determining a randomly selected preset-proportion area in the screen as the area box, the center point coordinates are converted according to the mapping relationship between the area box and the canvas, which improves the accuracy with which the program converts the center point coordinates.
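As a minimal sketch of how step S111 might be implemented in Python: the function name, the (x1, y1, x2, y2) return convention, and the 20% default preset proportion are assumptions made for illustration only.

    import random

    def determine_area_box(canvas_w, canvas_h, screen_w, screen_h, preset_ratio=0.2):
        """Return an area box (x1, y1, x2, y2) containing the target point.

        If the canvas is smaller than the screen, the whole canvas serves as the
        area box; otherwise a randomly placed box covering an assumed preset
        proportion of the screen is used.
        """
        if canvas_w < screen_w and canvas_h < screen_h:
            return 0, 0, canvas_w, canvas_h  # the canvas itself is the area box
        box_w = int(screen_w * preset_ratio)
        box_h = int(screen_h * preset_ratio)
        x1 = random.randint(0, screen_w - box_w)  # random placement within the screen
        y1 = random.randint(0, screen_h - box_h)
        return x1, y1, x1 + box_w, y1 + box_h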
Step S112: and moving the target point to the central point coordinate of the area box according to the relative position coordinate.
The embodiment of step S112 described above is, for example: it will be appreciated that during the move, the area box or the canvas may also be scaled at the same time (i.e., the relative size between the area box and the canvas changes) so that the location of the area box within the canvas is easier to identify. For example, the application program inputs the operating instruction "zoom" in the command bar of the CAD software and then inputs the center point coordinates in the command bar, so that the CAD software obtains the relative position coordinates of the target point in the canvas of the image file and then automatically adjusts the display window according to those relative position coordinates, moving the target point to the center point coordinates of the area box.
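As a hypothetical sketch of this step, an RPA program might drive the CAD command bar with simulated keystrokes, for example using the pyautogui library; the exact command sequence (ZOOM, the Center option, the center point, and the magnification) is an assumption that will differ between CAD programs and configurations.

    import pyautogui

    def center_view_on_point(x, y, magnification="1x"):
        """Hypothetical: type a ZOOM Center sequence into the CAD command bar so
        that the point (x, y), given in canvas coordinates, becomes the center of
        the display window. The keystrokes are illustrative assumptions only."""
        pyautogui.write("zoom\n", interval=0.05)
        pyautogui.write("c\n", interval=0.05)                 # choose the Center option
        pyautogui.write(f"{x},{y}\n", interval=0.05)          # center point coordinates
        pyautogui.write(f"{magnification}\n", interval=0.05)  # display magnification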
Step S113: and converting the coordinates of the central point according to the mapping relation between the area square block and the canvas to obtain the relative position coordinates of the target point in the canvas.
Please refer to fig. 4, which illustrates a schematic diagram of the specific calculation process of the center point coordinates provided in the embodiment of the present application; the center point coordinates may be obtained through calculation by the application program, or the mapping relationship between the area box and the canvas may be obtained through the API of the application program and the center point coordinates converted according to that mapping relationship to obtain the relative position coordinates of the target point in the canvas. The embodiment of step S113 described above is, for example: in practice, the target point may also be moved to other fixed relative position coordinates of the area box according to the relative position coordinates; moving the target point to the center point coordinates of the area box is simply for the convenience of identifying and calculating the mapping relationship. The specific calculation process of the center point coordinates is as follows: assuming that the minimum abscissa of the area box on the screen is x1, the maximum abscissa is x2, the minimum ordinate is y1, and the maximum ordinate is y2, the diagonal corners of the area box on the screen can be represented as (x1, y1) and (x2, y2), and the center point coordinates, i.e., the relative position coordinates of the target point in the canvas, can be expressed as ((x1 + x2)/2, (y1 + y2)/2).
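A one-line Python sketch of this calculation, using the same corner notation as above:

    def area_box_center(x1, y1, x2, y2):
        """Center point of the area box whose diagonal corners on the screen are
        (x1, y1) and (x2, y2)."""
        return (x1 + x2) / 2, (y1 + y2) / 2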
In the implementation process of the scheme, the coordinate of the central point is converted according to the mapping relation between the area square frame and the canvas to obtain the relative position coordinate of the target point in the canvas, and the target object in the image file is determined according to the absolute position coordinate of the target point, so that the pixel point of the corresponding target object on the screen is effectively determined accurately through the program, and the accuracy of the program for determining the selected target object on the screen is improved.
Please refer to fig. 5, which illustrates two schematic diagrams of the determination of the mapping ratio provided in the embodiment of the present application; as an alternative embodiment of the above step S120, the mapping ratio may be determined in a first manner, and before the relative position coordinates are converted according to the mapping ratio between the canvas and the screen displaying the canvas, the mapping ratio may be determined by a width value of a horizontal axis of the canvas and a width value of a size of the screen, and the embodiment includes:
step S121: and acquiring the width value of the horizontal axis of the canvas and the size width value of the screen.
Step S122: and dividing the width value of the transverse axis of the canvas by the size width value of the screen to obtain the mapping ratio.
The embodiments of the above steps S121 to S122 are, for example: referring to fig. 5, the horizontal-axis width value of the canvas may be represented as (M2 - M1), where M2 represents the maximum abscissa of the canvas and M1 represents the minimum abscissa of the canvas. Likewise, the size width value of the screen may be represented as (N2 - N1), where N2 represents the maximum abscissa of the screen and N1 represents the minimum abscissa of the screen. Dividing the horizontal-axis width value (M2 - M1) of the canvas by the size width value (N2 - N1) of the screen, the mapping ratio can therefore be expressed as (M2 - M1)/(N2 - N1).
As an optional implementation of the foregoing step S120, the mapping ratio may be determined in a second manner, and before the relative position coordinates are converted according to the mapping ratio between the canvas and the screen displaying the canvas, the mapping ratio may also be determined by a length value of a longitudinal axis of the canvas and a length value of a size of the screen, where the implementation includes:
step S123: and acquiring a length value of a longitudinal axis of the canvas and a size length value of the screen.
Step S124: and dividing the length value of the longitudinal axis of the canvas by the length value of the size of the screen to obtain the mapping proportion.
The embodiments of the above steps S123 to S124 are, for example: referring to fig. 5, the vertical-axis length value of the canvas may be represented as (P2 - P1), where P2 represents the maximum ordinate of the canvas and P1 represents the minimum ordinate of the canvas. Likewise, the size length value of the screen may be represented as (Q2 - Q1), where Q2 represents the maximum ordinate of the screen and Q1 represents the minimum ordinate of the screen. Dividing the vertical-axis length value (P2 - P1) of the canvas by the size length value (Q2 - Q1) of the screen, the mapping ratio can therefore be expressed as (P2 - P1)/(Q2 - Q1).
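Combining the two manners above, both mapping ratios can be computed together, as in the following Python sketch; the function and argument names follow the notation of fig. 5 and are otherwise assumptions.

    def mapping_ratios(m1, m2, n1, n2, p1, p2, q1, q2):
        """Horizontal and vertical mapping ratios between the canvas and the screen.

        (m2 - m1) is the horizontal-axis width of the canvas, (n2 - n1) the size
        width of the screen, (p2 - p1) the vertical-axis length of the canvas, and
        (q2 - q1) the size length of the screen.
        """
        ratio_x = (m2 - m1) / (n2 - n1)  # first manner: width values
        ratio_y = (p2 - p1) / (q2 - q1)  # second manner: length values
        return ratio_x, ratio_y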
The embodiment of converting the relative position coordinates according to the mapping ratio between the canvas and the screen on which the canvas is displayed, as in step S120 above, may include: first, after the mapping ratio is obtained, the canvas relative distance vectors of the four corners are obtained. For example, assuming that the coordinates of the upper left corner of the canvas are (a1, a2) and the coordinates of the upper left corner of the area box are (b1, b2), the canvas relative distance vector between the two can be calculated as (b1 - a1, b2 - a2); the canvas relative distance vectors of the other three corners are calculated similarly. Then, the canvas relative distance vector is converted into a screen relative distance vector according to the mapping ratio; specifically, the screen relative distance vector can be expressed as ((b1 - a1) × (N2 - N1)/(M2 - M1), (b2 - a2) × (Q2 - Q1)/(P2 - P1)), where M2 - M1 represents the horizontal-axis width value of the canvas, N2 - N1 represents the size width value of the screen, P2 - P1 represents the vertical-axis length value of the canvas, and Q2 - Q1 represents the size length value of the screen. Finally, the relative position coordinates of the target point in the canvas are read from memory (the coordinate data can be regarded as a two-dimensional vector) and added to the screen relative distance vector component by component, which yields the absolute position coordinates of the target point on the screen.
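Putting the pieces together, the conversion just described can be sketched in Python as follows; this is one reading of the embodiment under the notation above, not a definitive implementation, and all names are assumptions.

    def to_absolute_coordinates(relative_xy, canvas_corner, box_corner, ratio_x, ratio_y):
        """Convert the relative position coordinates of the target point into
        absolute screen coordinates.

        canvas_corner is a canvas corner such as (a1, a2), box_corner the matching
        area-box corner such as (b1, b2), and ratio_x / ratio_y the canvas-to-screen
        mapping ratios. The canvas relative distance vector is divided by the
        mapping ratios to obtain the screen relative distance vector, which is then
        added to the relative position coordinates component by component.
        """
        a1, a2 = canvas_corner
        b1, b2 = box_corner
        screen_dx = (b1 - a1) / ratio_x  # canvas distance expressed in screen units
        screen_dy = (b2 - a2) / ratio_y
        rel_x, rel_y = relative_xy
        return rel_x + screen_dx, rel_y + screen_dy  # absolute position on the screen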
As an optional embodiment of the above coordinate conversion method, after obtaining absolute position coordinates of the target point on the screen, the target object may be further selected according to the absolute position coordinates, and the embodiment may include:
step S130: and responding to the interactive operation on the absolute position coordinates, selecting a target object in the image file by using the absolute position coordinates, and performing interactive operation on the target object.
The embodiment of step S130 described above is, for example: the electronic device responds to the interactive operation on the absolute position coordinates, selects the target object in the image file using the absolute position coordinates, and performs the interactive operation on the target object. Alternatively, the electronic device receives an interactive operation request sent by a target device through the Hypertext Transfer Protocol (HTTP) or Hypertext Transfer Protocol Secure (HTTPS), parses from the request the relative position coordinates of the target point in the canvas of the image file, converts the relative position coordinates according to the mapping ratio between the canvas and the screen displaying the canvas to obtain the absolute position coordinates of the target point on the screen, and finally selects the target object in the image file using the absolute position coordinates and performs the interactive operation on it, where the interactive operation includes, but is not limited to: a drag operation, a right-click menu operation, a shortcut-key operation, and the like. As an optional implementation of the coordinate conversion method, the target object may include: an element, graphic, or image layer in the canvas.
In the implementation process of the scheme, the target object in the image file is selected by using the absolute position coordinates, and the target object is interactively operated, so that the accuracy of the program for interactively operating the target object on the screen is improved.
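As an illustrative example, the interactive operation at the converted absolute position coordinates could be issued by an RPA program through a GUI-automation library such as pyautogui; the specific calls, offsets, and timing below are assumptions rather than part of the embodiment.

    import pyautogui

    def interact_with_target(abs_x, abs_y, drag_dx=0, drag_dy=0):
        """Click the target object at its absolute screen coordinates and,
        optionally, drag it; right-click menu and shortcut-key operations could
        be issued in the same way. Offsets and timing are illustrative."""
        pyautogui.click(abs_x, abs_y)  # select the target object on the screen
        if drag_dx or drag_dy:
            pyautogui.dragRel(drag_dx, drag_dy, duration=0.5)  # example drag operation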
Please refer to fig. 6, which illustrates a schematic structural diagram of a coordinate transformation apparatus provided in an embodiment of the present application. The embodiment of the present application provides a coordinate transformation apparatus 200, including:
a relative coordinate obtaining module 210, configured to obtain a relative position coordinate of the target point in a canvas of the image file.
And an absolute coordinate obtaining module 220, configured to convert the relative position coordinate according to a mapping ratio between the canvas and a screen on which the canvas is displayed, so as to obtain an absolute position coordinate of the target point on the screen.
Optionally, in an embodiment of the present application, the relative coordinate obtaining module includes:
and the area box determining sub-module is used for determining an area box containing the target point according to the size of the screen and the size of the canvas.
And the central coordinate moving submodule is used for moving the target point to the central point coordinate of the area box according to the relative position coordinate.
And the center coordinate conversion submodule is used for converting the center point coordinates according to the mapping relationship between the area box and the canvas to obtain the relative position coordinates of the target point in the canvas.
Optionally, in an embodiment of the present application, the area block determination sub-module includes:
and the canvas size judging submodule is used for judging whether the size of the canvas is smaller than that of the screen.
And the area box determining submodule is used for determining the canvas as the area box if the size of the canvas is smaller than that of the screen, otherwise, determining a preset proportion area randomly selected in the screen as the area box, wherein the size of the area box is smaller than that of the screen.
Optionally, in an embodiment of the present application, the coordinate transformation apparatus further includes:
and the transverse axis width acquisition module is used for acquiring the transverse axis width value of the canvas and the size width value of the screen.
And the first proportion obtaining module is used for dividing the width value of the transverse axis of the canvas by the size width value of the screen to obtain the mapping proportion.
Optionally, in an embodiment of the present application, the coordinate transformation apparatus further includes:
and the vertical axis size obtaining module is used for obtaining the vertical axis length value of the canvas and the size length value of the screen.
And the second proportion obtaining module is used for dividing the length value of the longitudinal axis of the canvas by the length value of the size of the screen to obtain the mapping proportion.
Optionally, in an embodiment of the present application, the coordinate transformation apparatus further includes:
and the interactive operation processing module is used for responding to the interactive operation on the absolute position coordinates, selecting the target object in the image file by using the absolute position coordinates and carrying out interactive operation on the target object.
Optionally, in this embodiment of the present application, the target point is a pixel point of a target object, and the target object includes: an element, graphic, or image layer in the canvas.
It should be understood that the apparatus corresponds to the above embodiment of the coordinate conversion method and can perform the steps involved in that method embodiment; the specific functions of the apparatus can be found in the description above, and a detailed description is omitted here as appropriate to avoid redundancy. The apparatus includes at least one software functional module that can be stored in memory in the form of software or firmware or solidified in the operating system (OS) of the device.
An electronic device provided in an embodiment of the present application includes: a processor and a memory, the memory storing processor-executable machine-readable instructions, the machine-readable instructions when executed by the processor performing the method as above.
Embodiments of the present application also provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the method as above is performed. The computer-readable storage medium may be implemented by any type of volatile or nonvolatile storage device or combination thereof, such as a Static Random Access Memory (SRAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), an Erasable Programmable Read-Only Memory (EPROM), a Programmable Read-Only Memory (PROM), a Read-Only Memory (ROM), a magnetic Memory, a flash Memory, a magnetic disk, or an optical disk.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
In addition, functional modules of the embodiments in the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part. Furthermore, in the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the embodiments of the present application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
The above description is only an alternative embodiment of the embodiments of the present application, but the scope of the embodiments of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the embodiments of the present application, and all the changes or substitutions should be covered by the scope of the embodiments of the present application.

Claims (10)

1. A coordinate conversion method, comprising:
acquiring the relative position coordinates of a target point in the canvas of the image file;
and converting the relative position coordinate according to the mapping proportion between the canvas and a screen displaying the canvas to obtain the absolute position coordinate of the target point on the screen.
2. The method of claim 1, wherein obtaining the relative position coordinates of the target point in the canvas of the image file comprises:
determining an area box containing a target point according to the size of the screen and the size of the canvas;
moving the target point to the central point coordinate of the area square frame according to the relative position coordinate;
and converting the coordinates of the central point according to the mapping relation between the area box and the canvas to obtain the relative position coordinates of the target point in the canvas.
3. The method of claim 2, wherein determining the region box containing the target point according to the size of the screen and the size of the canvas comprises:
judging whether the size of the canvas is smaller than that of the screen;
and if so, determining the canvas as the area box, otherwise, determining a randomly selected preset proportion area in the screen as the area box, wherein the size of the area box is smaller than that of the screen.
4. The method of claim 1, further comprising, prior to the converting the relative position coordinates according to a mapping ratio between the canvas and a screen on which the canvas is displayed:
acquiring a width value of a transverse shaft of the canvas and a size width value of the screen;
and dividing the width value of the transverse axis of the canvas by the size width value of the screen to obtain the mapping ratio.
5. The method of claim 1, further comprising, prior to the converting the relative position coordinates according to a mapping ratio between the canvas and a screen on which the canvas is displayed:
acquiring a length value of a longitudinal axis of the canvas and a size length value of the screen;
and dividing the length value of the longitudinal axis of the canvas by the length value of the size of the screen to obtain the mapping proportion.
6. The method according to any one of claims 1-5, further comprising, after said obtaining absolute position coordinates of said target point on said screen:
and responding to the interactive operation on the absolute position coordinates, selecting a target object in the image file by using the absolute position coordinates, and performing interactive operation on the target object.
7. The method of claim 6, wherein the target point is a pixel of the target object, and wherein the target object comprises: an element, graphic, or image layer in the canvas.
8. A coordinate conversion apparatus, characterized by comprising:
the relative coordinate acquisition module is used for acquiring the relative position coordinates of the target point in the canvas of the image file;
and the absolute coordinate obtaining module is used for converting the relative position coordinate according to the mapping proportion between the canvas and a screen displaying the canvas to obtain the absolute position coordinate of the target point on the screen.
9. An electronic device, comprising: a processor and a memory, the memory storing machine-readable instructions executable by the processor, the machine-readable instructions, when executed by the processor, performing the method of any of claims 1 to 7.
10. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the method of any one of claims 1 to 7.
CN202211528710.2A 2022-11-30 2022-11-30 Coordinate conversion method and device, electronic equipment and storage medium Pending CN115933949A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211528710.2A CN115933949A (en) 2022-11-30 2022-11-30 Coordinate conversion method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211528710.2A CN115933949A (en) 2022-11-30 2022-11-30 Coordinate conversion method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115933949A 2023-04-07

Family

ID=86551684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211528710.2A Pending CN115933949A (en) 2022-11-30 2022-11-30 Coordinate conversion method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115933949A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117115570A (en) * 2023-10-25 2023-11-24 成都数联云算科技有限公司 Canvas-based image labeling method and Canvas-based image labeling system
CN117115570B (en) * 2023-10-25 2023-12-29 成都数联云算科技有限公司 Canvas-based image labeling method and Canvas-based image labeling system

Similar Documents

Publication Publication Date Title
CN109771951B (en) Game map generation method, device, storage medium and electronic equipment
CN114902294B (en) Fine-grained visual recognition in mobile augmented reality
US20210256708A1 (en) Interactive image matting using neural networks
CN108564082B (en) Image processing method, device, server and medium
US10726599B2 (en) Realistic augmentation of images and videos with graphics
US10204447B2 (en) 2D image processing for extrusion into 3D objects
AU2019205973A1 (en) Responsive resize
US11314400B2 (en) Unified digital content selection system for vector and raster graphics
JP2014035656A (en) Image processing apparatus, image processing method, and program
US20170212661A1 (en) 3D Model Generation from 2D Images
CN112717414B (en) Game scene editing method and device, electronic equipment and storage medium
CN115933949A (en) Coordinate conversion method and device, electronic equipment and storage medium
US10846889B2 (en) Color handle generation for digital image color gradients using machine learning
US20230162413A1 (en) Stroke-Guided Sketch Vectorization
CN113838134A (en) Image key point detection method, device, terminal and storage medium
KR20120075626A (en) Apparatus and method for processing electric navigational chart in web-based service
US9075494B2 (en) Systems and methods for performing object selection
CN112734900A (en) Baking method, baking device, baking equipment and computer-readable storage medium of shadow map
CN109242941B (en) Three-dimensional object synthesis using visual guidance as part of a two-dimensional digital image
CN115375847A (en) Material recovery method, three-dimensional model generation method and model training method
CN115908116A (en) Image processing method, device, equipment and storage medium
CN115455510A (en) Automatic drawing method, electronic equipment and storage medium
CN113126866B (en) Object determination method, device, electronic equipment and storage medium
CN113810626A (en) Video fusion method, device and equipment based on three-dimensional map and storage medium
CN112346579A (en) Pointing and interaction control device for large-scale desktop model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination