CN115752487A - Scene map display method and device based on WEB and storage medium - Google Patents
- Publication number
- CN115752487A CN115752487A CN202211027350.8A CN202211027350A CN115752487A CN 115752487 A CN115752487 A CN 115752487A CN 202211027350 A CN202211027350 A CN 202211027350A CN 115752487 A CN115752487 A CN 115752487A
- Authority
- CN
- China
- Prior art keywords
- robot
- picture
- web page
- web
- icon
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The invention discloses a WEB-based scene map display method, which comprises: obtaining a map image scanned by a robot during navigation; converting the map image into a first picture and adding the first picture to the HTML (hypertext markup language) file of a WEB page; obtaining the real-time position coordinates of the robot, converting them into the robot's pixel coordinates in the first picture, deriving from these the position of the robot's icon in the WEB page, and adding the icon to the HTML file; and rendering the HTML file so that the navigation map and the robot's real-time position are displayed in the WEB page. The invention displays the robot's navigation map and real-time position through an ordinary WEB page, solving the prior-art problem that rendering and presenting the position requires customized professional client software or an additional map-software plug-in, and thereby reducing software and equipment costs. The invention also discloses a WEB-based scene map display device and a storage medium.
Description
Technical Field
The present invention relates to map rendering, and in particular, to a method, an apparatus, and a storage medium for displaying a scene map based on WEB.
Background
With the development of the artificial intelligence industry, robots appear in more and more usage scenarios, and users need a convenient way to monitor a robot's position and related data on a large screen in real time, or to perform various operations on the robot. At present, maps are usually rendered by customized professional client software, which offers high fidelity and a good display effect but entails high customization costs. Alternatively, plug-ins of third-party map software such as AMap (Gaode Maps) or Baidu Maps are used: the robot's coordinates are converted into the coordinates used by the map plug-in and then rendered. For indoor scenes, however, such map plug-ins often lack complete road and building information and cannot reproduce the original appearance of a small-scale scene; moreover, this approach requires additional GPS equipment to acquire coordinates, increasing equipment costs.
Disclosure of Invention
In order to overcome the defects of the prior art, one of the objectives of the present invention is to provide a WEB-based scene map display method, which can solve the problems of high customization cost or high equipment cost in displaying a scene map in the prior art.
The second objective of the present invention is to provide a scene map display device based on WEB, which can solve the problems of high customization cost or high equipment cost in displaying the scene map in the prior art.
The invention also aims to provide a storage medium, which can solve the problems of high customization cost or high equipment cost and the like in scene map display in the prior art.
One of the purposes of the invention is realized by adopting the following technical scheme:
the scene map display method based on WEB comprises the following steps:
a map acquisition step: acquiring a map image of the robot navigation scanning;
a picture conversion step: converting the map image into a first picture in a preset format, and adding the first picture into an HTML (hypertext markup language) file of a WEB page;
a coordinate acquisition step: acquiring real-time position coordinates of the robot;
a coordinate conversion step: converting the real-time position coordinates of the robot into pixel coordinates of the robot in the first picture, further obtaining the position of an icon of the robot in a WEB page according to the pixel coordinates of the robot in the first picture, and adding the icon of the robot to an HTML file of the WEB page according to the position of the icon of the robot in the WEB page;
a rendering step: and rendering the HTML file of the WEB page to display the map navigated by the robot and the real-time position of the robot in the navigated map in the WEB page.
Further, the map image is a pgm image; the preset format is any one of png format, JPG format or JPEG format.
Further, adding the first picture to an HTML file of a WEB page in the picture conversion step specifically includes: adding an &lt;img&gt; tag whose src attribute is the file path of the first picture to the HTML file of the WEB page, or writing the file path of the first picture into the background-image style attribute of a tag in the HTML file.
Further, the coordinate conversion step includes: firstly, acquiring a parameter resolution and a parameter origin according to a yaml file; then calculating to obtain the pixel coordinate of the robot in the first picture according to the real-time position coordinate of the robot, the parameter resolution, the parameter origin and the height attribute of the first picture; finally, obtaining the position of the icon of the robot in the WEB page according to the pixel coordinate of the robot in the first picture, and adding the icon of the robot to an HTML file of the WEB page according to the position of the icon of the robot in the WEB page; the yaml file is acquired when the map image of the robot navigation scan is acquired;
the calculation formula of the pixel coordinates of the robot in the first picture is as follows:
y=(Y-originy)/resolution,
x=(X-originx)/resolution;
(x, y) are the pixel coordinates of the robot in the first picture; (X, Y) are the real-time position coordinates of the robot; the parameter resolution is the scale of the picture's scaling conversion; the parameter origin is the origin coordinate of the map image, namely (originx, originy); height is the height attribute of the first picture;
wherein, the position coordinates of the icon of the robot in the first picture are:
y'=height-y=height-(Y-originy)/resolution,
x'=x=(X-originx)/resolution;
wherein, (x ', y') is the position coordinates of the icon of the robot in the first picture; height is the height attribute of the first picture.
Further, adding the icon of the robot to the HTML file of the WEB page at the position of the icon of the robot in the WEB page specifically includes: adding a CSS style to the HTML file, with its top attribute value set to y' and its left attribute value set to x'.
Further, the method further comprises a travel route generation step: obtaining a driving route of the robot according to a plurality of positions of the robot in the first picture, adding the stroke() function of a &lt;canvas&gt; tag to the HTML file to draw the driving route of the robot in the first picture, and then executing the rendering step to display the driving route of the robot in the WEB page.
Further, the method further comprises a forbidden region generation step: obtaining the coordinate range of the forbidden area of the robot, converting the coordinate range of the forbidden area of the robot into the coordinate range of the forbidden area in the first picture, adding the fill() function of a &lt;canvas&gt; tag to the HTML file to draw the forbidden area in the first picture, and then executing the rendering step, so that the forbidden area of the robot is displayed in the first picture of the WEB page.
Further, the rendering step further comprises: when the forbidden area changes and a new forbidden area is to be rendered, first erasing the original forbidden area and then rendering the newly drawn graphic.
The second purpose of the invention is realized by adopting the following technical scheme:
the WEB-based scene map display device comprises a memory and a processor, wherein a scene map display program running on the processor is stored on the memory, the scene map display program is a computer program, and the processor executes the scene map display program to realize the steps of the WEB-based scene map display method adopted as one of the purposes of the invention.
The third purpose of the invention is realized by adopting the following technical scheme:
a storage medium which is a computer-readable storage medium having stored thereon a scene map display program which is a computer program, the conversion program, when executed by a processor, realizing the steps of a WEB-based scene map display method employed as one of the objects of the present invention.
Compared with the prior art, the invention has the beneficial effects that:
the method and the device have the advantages that the map image of the navigation of the robot is converted into the first picture and then added into the HTML file of the WEB page, so that the display of the map is realized at the WEB end, and meanwhile, the position of the robot in the first picture is obtained after the real-time coordinate of the robot is converted, so that the real-time position of the robot is displayed in the map, and the problems of high customization cost, high equipment cost and the like caused by the fact that professional software or other additional plug-ins or equipment is needed to realize the map display in the prior art are solved.
Drawings
Fig. 1 is a flowchart of the WEB-based scene map display method according to the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and the detailed description, and it should be noted that any combination of the embodiments or technical features described below can be used to form a new embodiment without conflict.
Example one
Based on the defects in the prior art, the present invention provides a WEB-based scene map display method, by which the robot's map can be monitored without additional equipment or the installation of extra programs or software.
As shown in fig. 1, the present invention provides a preferred embodiment, a method for displaying a scene map based on WEB, including the following steps:
s1, acquiring a map image of the robot navigation scanning.
The map image is an image of the map obtained by the robot's navigation scanning: during operation, the robot scans the map with its own equipment to obtain the corresponding map image. Preferably, the map image in the present invention is an image in pgm format.
And S2, converting the map image into a first picture in a preset format, and adding the first picture in the preset format into the HTML file.
Preferably, the picture in the preset format in the present invention can be a png, JPG or JPEG picture, i.e., a common picture format.
Because the map image scanned by the robot cannot be added directly to the HTML file of the WEB page, it needs to be converted into a picture in a format such as png, so that the first picture can be added directly to the HTML file of the WEB page and the map displayed in the WEB page.
For example, in this embodiment, the first picture is added to the HTML file by adding an &lt;img&gt; tag to the HTML file. Specifically, an &lt;img&gt; tag whose src attribute is set to the path of the png picture can be added to the HTML file; alternatively, the storage path of the first picture is written into the background-image style attribute of a tag in the HTML file.
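A minimal sketch of the two approaches, expressed as markup strings; the file name map.png and the container dimensions are placeholders, not values from the patent:

```javascript
// Two ways to place the converted map picture in the page.
// "map.png" is a placeholder path for the first picture.

// Option 1: an <img> tag whose src points at the converted picture.
const imgTag = '<img id="map" src="map.png" alt="navigation map">';

// Option 2: a container whose CSS background-image loads the picture.
const divTag =
  '<div id="map" style="background-image: url(map.png); ' +
  'width: 400px; height: 400px;"></div>';

console.log(imgTag);
console.log(divTag);
```

Either fragment, once inserted into the page's HTML, makes the browser load and display the converted map picture.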
and S3, acquiring the real-time position coordinates of the robot.
Specifically, the real-time position coordinates of the robot can be obtained through communication between the WEB end and the background server: the background server obtains the robot's real-time position coordinates and transmits them to the WEB end. The WEB end and the background server communicate bidirectionally using the WebSocket protocol, so that the real-time position coordinates acquired by the background server are received. For example, the WEB end establishes a connection with the background server by instantiating an object with new WebSocket() in a JS file, and then listens with the onmessage event to receive the robot's real-time position coordinates from the background server.
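A sketch of this wiring, assuming (this is not specified in the patent) that the server pushes JSON messages of the shape {"x": ..., "y": ...}; the URL and the updateRobotIcon callback are likewise hypothetical:

```javascript
// Parse one pushed message into the robot's real-time map coordinates.
// The JSON shape {"x": ..., "y": ...} is an assumed protocol.
function parsePose(message) {
  const data = JSON.parse(message);
  return { X: data.x, Y: data.y }; // real-time position in map coordinates
}

// Browser-side wiring (not executed here):
// const ws = new WebSocket("ws://backend.example/robot"); // hypothetical URL
// ws.onmessage = (event) => {
//   const pose = parsePose(event.data);
//   updateRobotIcon(pose); // hypothetical rendering callback
// };

console.log(parsePose('{"x": 1.5, "y": -2.0}'));
```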
Meanwhile, the real-time position coordinates are coordinates in the map image, and are not coordinates of the robot in the first picture, and therefore, the coordinates need to be converted.
And S4, converting the real-time position coordinates of the robot into pixel coordinates of the robot in the first picture.
And S5, obtaining the position of the icon of the robot in the WEB page according to the pixel coordinate of the robot in the first picture, and adding the icon of the robot to an HTML file of the WEB page.
The real-time position coordinates of the robot are acquired by the background server, and are the real-time coordinates of the robot in the map image and not the coordinates of the robot in the first picture. Therefore, after the real-time position coordinates of the robot are acquired, the real-time position coordinates are converted into pixel coordinates of the robot in the first picture.
More preferably, when the real-time position coordinates of the robot are converted into pixel coordinates in the first picture, two parameters from the yaml file are used: resolution and origin. The parameter resolution is the scale of the scaling conversion, and the parameter origin is the origin coordinate of the map image, namely (originx, originy). The yaml file is used for map coordinate calibration; it matches the map image (the pgm file) and is generated at the same time as the map image. Since an image in pgm format cannot be displayed directly on a WEB page, the map image needs to be converted into a picture the WEB page can recognize, and the yaml file is used for coordinate calibration during this conversion. The file is acquired synchronously when the robot's navigation map image is acquired.
Let the pixel coordinates of the robot in the first picture be (x, y) and the real-time position coordinates of the robot be (X, Y); the transformation between the two follows the following formula:
y=(Y-originy)/resolution,
x=(X-originx)/resolution. (1)
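Formula (1) can be sketched as a small conversion function; the sample resolution and origin values below are illustrative, not taken from the patent (0.25 is chosen so the arithmetic is exact):

```javascript
// Formula (1): map coordinates (X, Y) -> pixel coordinates (x, y)
// in the first picture. resolution and origin come from the yaml file.
function mapToPixel(X, Y, resolution, originx, originy) {
  return {
    x: (X - originx) / resolution,
    y: (Y - originy) / resolution,
  };
}

// Illustrative values: resolution 0.25 (map units per pixel),
// map origin at (-10, -10).
console.log(mapToPixel(0, 0, 0.25, -10, -10)); // { x: 40, y: 40 }
```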
After the pixel coordinates of the robot in the first picture are obtained, the position of the robot's icon in the first picture can be obtained using absolute positioning. The conversion amounts to translating and rotating between the rectangular coordinate systems of the two planes and projecting the coordinates of one system into the other.
Since absolute positioning is referenced to the upper-left corner of the parent tag, while formula (1) is referenced to the lower-left corner of the first picture, the ordinate of the robot's pixel coordinates in the first picture must be mirrored. That is, the position coordinates of the robot's icon in the first picture are:
y'=height-y=height-(Y-originy)/resolution,
x'=x=(X-originx)/resolution. (2)
wherein height is a height attribute of the first picture, and (x ', y') is a position coordinate of the icon of the robot in the first picture.
In this way, the icon of the robot can be added to the HTML file according to the icon's position coordinates in the first picture. Specifically, this is implemented by adding a CSS style to the HTML file, setting its top attribute value to y' and its left attribute value to x'.
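A sketch of formula (2) plus the resulting style rule; the specific numbers and the inline-style form of the output are illustrative assumptions:

```javascript
// Formula (2): mirror the y axis, because absolute positioning measures
// from the top-left corner while formula (1) measures from the
// bottom-left corner of the first picture.
function iconPosition(x, y, height) {
  return { left: x, top: height - y };
}

// Hypothetical inline style for the robot's icon element.
function iconStyle(x, y, height) {
  const p = iconPosition(x, y, height);
  return `position: absolute; left: ${p.left}px; top: ${p.top}px;`;
}

console.log(iconStyle(200, 150, 400));
// position: absolute; left: 200px; top: 250px;
```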
and S6, rendering the HTML file of the WEB page to display a map of the robot navigation and the real-time position of the robot in the navigation map in the WEB page.
Preferably, in order to further provide good experience for the user, the invention can also display a driving route, a forbidden area and the like for the user through a WEB page.
Specifically, the method obtains the driving route from a plurality of calculated position coordinates of the robot's icon in the first picture, and draws the route in the first picture with the stroke() function of a &lt;canvas&gt; tag in the HTML file. Thus, when the WEB page is rendered, the driving route of the robot is displayed in the page.
For the &lt;canvas&gt; tag, the position attribute is set to absolute and the z-index is set so that it overlays the map tag; the stroke() function is then called to draw the route.
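A minimal sketch of this route drawing with the canvas 2D API; in the browser the context would come from getContext("2d"), while here a recording stub stands in for it so the call sequence can be checked:

```javascript
// Draw the driving route through the icon positions (x', y') that were
// already converted by formula (2).
function drawRoute(ctx, points) {
  if (points.length < 2) return;
  ctx.beginPath();
  ctx.moveTo(points[0].x, points[0].y);
  for (let i = 1; i < points.length; i++) {
    ctx.lineTo(points[i].x, points[i].y);
  }
  ctx.stroke(); // renders the polyline through the visited positions
}

// In a browser:
// const ctx = document.getElementById("route").getContext("2d");
// Here, a minimal recording stub stands in for the real context.
const calls = [];
const stub = {
  beginPath: () => calls.push("beginPath"),
  moveTo: (x, y) => calls.push(`moveTo(${x},${y})`),
  lineTo: (x, y) => calls.push(`lineTo(${x},${y})`),
  stroke: () => calls.push("stroke"),
};
drawRoute(stub, [{ x: 0, y: 0 }, { x: 10, y: 5 }, { x: 20, y: 5 }]);
console.log(calls);
```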
Similarly, the forbidden area of the robot can be drawn from the coordinate range of its forbidden area. Specifically, the coordinate range of the robot's forbidden area is obtained and converted into the corresponding coordinate range in the first picture; the forbidden area is drawn in the first picture with the fill() function of a &lt;canvas&gt; tag according to that range, and the rendering step is then executed, so that the forbidden area of the robot is displayed in the first picture of the WEB page.
For example, for the forbidden area, the four coordinate values of the robot's forbidden area are first obtained, then converted by the above formula into four position coordinates in the first picture, and the corresponding forbidden area is drawn on the first picture with the fill() function.
In addition, since the robot operates in real time, the driving route, the forbidden area and so on change in real time; therefore, before a new graphic is rendered, the clearRect() function is called to erase the original graphic, and the new graphic is then rendered. For example, the rendering step further includes: when the forbidden area changes and a new forbidden area is to be rendered, calling the clearRect() function to erase the original forbidden area and then rendering the newly drawn graphic.
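The erase-then-redraw cycle can be sketched as follows; the rectangle parameters are illustrative, and as above a recording stub replaces the browser's canvas context:

```javascript
// Redraw a forbidden area: erase the whole layer with clearRect(), then
// fill the new rectangle. (cx, cy, w, h) describe the area's converted
// coordinate range in the first picture (illustrative representation).
function redrawForbiddenArea(ctx, canvasWidth, canvasHeight, cx, cy, w, h) {
  ctx.clearRect(0, 0, canvasWidth, canvasHeight); // wipe the old graphic
  ctx.beginPath();
  ctx.rect(cx, cy, w, h);
  ctx.fill(); // draw the new forbidden area
}

const ops = [];
const stub = {
  clearRect: (...a) => ops.push(["clearRect", ...a]),
  beginPath: () => ops.push(["beginPath"]),
  rect: (...a) => ops.push(["rect", ...a]),
  fill: () => ops.push(["fill"]),
};
redrawForbiddenArea(stub, 400, 400, 50, 60, 30, 20);
console.log(ops.map((o) => o[0]).join(" -> "));
// clearRect -> beginPath -> rect -> fill
```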
In addition, when setting &lt;canvas&gt; tags in the HTML file of the WEB page, to avoid different graphics becoming indistinguishable, the invention stores each type of graphic in a separate &lt;canvas&gt; tag. When a graphic changes, the original data of the corresponding &lt;canvas&gt; tag is cleared first, and then new data is added to the tag and drawn and rendered.
In addition, when the data volume is extremely large and changes frequently in real time, the invention also improves the smoothness of the picture by controlling the push frequency of the WebSocket or by using an off-screen canvas cache at the WEB end. The invention can display the map's relevant information and the robot's changing position in real time directly in a browser WEB page, and is suitable for most scenarios such as real-time monitoring and large-screen data display.
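One way to realize the frequency control mentioned above is a client-side gate that drops updates arriving faster than a minimum interval; the 100 ms interval and the injectable clock are assumptions for illustration, since the patent only says the push frequency is controlled:

```javascript
// Gate that lets a redraw through at most once per intervalMs.
// A clock function is injected so the behaviour can be shown deterministically.
function makeThrottle(intervalMs, now = Date.now) {
  let last = -Infinity;
  return function shouldRedraw() {
    const t = now();
    if (t - last >= intervalMs) {
      last = t;
      return true;
    }
    return false;
  };
}

// Deterministic clock: updates arrive at t = 0, 50, 120 ms; with a
// 100 ms minimum interval only the 1st and 3rd trigger a redraw.
const times = [0, 50, 120];
let i = 0;
const gate = makeThrottle(100, () => times[i++]);
const results = [gate(), gate(), gate()];
console.log(results); // [ true, false, true ]
```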
Example two
The invention also provides a WEB-based scene map display device, which comprises a memory and a processor, wherein the memory is stored with a scene map display program running on the processor, the scene map display program is a computer program, and the processor executes the scene map display program to realize the following steps:
a map acquisition step: acquiring a map image of the robot navigation scanning;
a picture conversion step: converting the map image into a first picture in a preset format, and adding the first picture into an HTML (hypertext markup language) file of a WEB page;
a coordinate acquisition step: acquiring real-time position coordinates of the robot;
and a coordinate conversion step: converting the real-time position coordinates of the robot into pixel coordinates of the robot in the first picture, further obtaining the position of an icon of the robot in a WEB page according to the pixel coordinates of the robot in the first picture, and adding the icon of the robot to an HTML file of the WEB page according to the position of the icon of the robot in the WEB page;
a rendering step: and rendering the HTML file of the WEB page to display the map navigated by the robot and the real-time position of the robot in the navigated map in the WEB page.
Further, the map image is a pgm image; the preset format is any one of png format, JPG format or JPEG format.
Further, adding the first picture to an HTML file of a WEB page in the picture conversion step specifically includes: adding an &lt;img&gt; tag whose src attribute is the file path of the first picture to the HTML file of the WEB page, or writing the file path of the first picture into the background-image style attribute of a tag in the HTML file.
Further, the coordinate conversion step includes: firstly, acquiring a parameter resolution and a parameter origin according to a yaml file; then calculating to obtain the pixel coordinate of the robot in the first picture according to the real-time position coordinate of the robot, the parameter resolution, the parameter origin and the height attribute of the first picture; finally, obtaining the position of the icon of the robot in the WEB page according to the pixel coordinate of the robot in the first picture, and adding the icon of the robot to an HTML file of the WEB page according to the position of the icon of the robot in the WEB page; the yaml file is obtained when a map image of the robot navigation scan is obtained;
the calculation formula of the pixel coordinates of the robot in the first picture is as follows:
y=(Y-originy)/resolution,
x=(X-originx)/resolution;
(x, y) are the pixel coordinates of the robot in the first picture; (X, Y) are the real-time position coordinates of the robot; the parameter resolution is the scale of the picture's scaling conversion; the parameter origin is the origin coordinate of the map image, namely (originx, originy); height is the height attribute of the first picture;
wherein, the position coordinates of the icon of the robot in the first picture are:
y'=height-y=height-(Y-originy)/resolution,
x'=x=(X-originx)/resolution;
wherein, (x ', y') is the position coordinates of the icon of the robot in the first picture; height is the height attribute of the first picture.
Further, adding the icon of the robot to the HTML file of the WEB page at the position of the icon of the robot in the WEB page specifically includes: adding a CSS style to the HTML file, with its top attribute value set to y' and its left attribute value set to x'.
Further, the method further comprises a travel route generation step: obtaining a driving route of the robot according to a plurality of positions of the robot in the first picture, adding the stroke() function of a &lt;canvas&gt; tag to the HTML file to draw the driving route of the robot in the first picture, and then executing the rendering step to display the driving route of the robot in the WEB page.
Further, the method further comprises a forbidden region generation step: obtaining the coordinate range of the forbidden area of the robot, converting the coordinate range of the forbidden area of the robot into the coordinate range of the forbidden area in the first picture, adding the fill() function of a &lt;canvas&gt; tag to the HTML file to draw the forbidden area in the first picture, and then executing the rendering step, so that the forbidden area of the robot is displayed in the first picture of the WEB page.
Further, the rendering step further comprises: when the forbidden area changes and a new forbidden area is to be rendered, first erasing the original forbidden area and then rendering the newly drawn graphic.
EXAMPLE III
A storage medium which is a computer-readable storage medium having stored thereon a scene map display program which is a computer program and which, when executed by a processor, realizes the steps of:
a map acquisition step: acquiring a map image of the robot navigation scanning;
a picture conversion step: converting the map image into a first picture in a preset format, and adding the first picture into an HTML (hypertext markup language) file of a WEB page;
a coordinate acquisition step: acquiring real-time position coordinates of the robot;
a coordinate conversion step: converting the real-time position coordinates of the robot into pixel coordinates of the robot in the first picture, further obtaining the position of an icon of the robot in a WEB page according to the pixel coordinates of the robot in the first picture, and adding the icon of the robot to an HTML file of the WEB page according to the position of the icon of the robot in the WEB page;
a rendering step: and rendering the HTML file of the WEB page to display the map navigated by the robot and the real-time position of the robot in the navigated map in the WEB page.
Further, the map image is a pgm image; the preset format is any one of png format, JPG format or JPEG format.
Further, adding the first picture to an HTML file of a WEB page in the picture conversion step specifically includes: adding an &lt;img&gt; tag whose src attribute is the file path of the first picture to the HTML file of the WEB page, or writing the file path of the first picture into the background-image style attribute of a tag in the HTML file.
Further, the coordinate converting step includes: firstly, acquiring a parameter resolution and a parameter origin according to a yaml file; then calculating to obtain a pixel coordinate of the robot in the first picture according to the real-time position coordinate of the robot, the parameter resolution, the parameter origin and the height attribute of the first picture; finally, obtaining the position of the icon of the robot in the WEB page according to the pixel coordinate of the robot in the first picture, and adding the icon of the robot to an HTML file of the WEB page according to the position of the icon of the robot in the WEB page; the yaml file is acquired when the map image of the robot navigation scan is acquired;
the calculation formula of the pixel coordinates of the robot in the first picture is as follows:
y=(Y-originy)/resolution,
x=(X-originx)/resolution;
(x, y) are the pixel coordinates of the robot in the first picture; (X, Y) are the real-time position coordinates of the robot; the parameter resolution is the scale of the picture's scaling conversion; the parameter origin is the origin coordinate of the map image, namely (originx, originy); height is the height attribute of the first picture;
wherein, the position coordinates of the icon of the robot in the first picture are:
y'=height-y=height-(Y-originy)/resolution,
x'=x=(X-originx)/resolution;
wherein, (x ', y') is the position coordinates of the icon of the robot in the first picture; height is the height attribute of the first picture.
Further, adding the icon of the robot to the HTML file of the WEB page at the position of the icon of the robot in the WEB page specifically includes: adding a CSS style to the HTML file, with its top attribute value set to y' and its left attribute value set to x'.
Further, the method further comprises a travel route generation step: obtaining a driving route of the robot according to a plurality of positions of the robot in the first picture, adding the stroke() function of a &lt;canvas&gt; tag to the HTML file to draw the driving route of the robot in the first picture, and then executing the rendering step to display the driving route of the robot in the WEB page.
Further, the method further comprises a forbidden region generation step: obtaining the coordinate range of the forbidden area of the robot, converting the coordinate range of the forbidden area of the robot into the coordinate range of the forbidden area in the first picture, adding the fill() function of a &lt;canvas&gt; tag to the HTML file to draw the forbidden area in the first picture, and then executing the rendering step, so that the forbidden area of the robot is displayed in the first picture of the WEB page.
Further, the rendering step further comprises: when the forbidden area changes and a new forbidden area is to be rendered, first erasing the original forbidden area and then rendering the newly drawn graphic.
The above embodiments are only preferred embodiments of the present invention, and the scope of the present invention should not be limited thereby, and any insubstantial changes and substitutions made by those skilled in the art based on the present invention are intended to be covered by the claims.
Claims (10)
1. The scene map display method based on WEB is characterized by comprising the following steps:
a map acquisition step: acquiring a map image of the robot navigation scanning;
a picture conversion step: converting the map image into a first picture in a preset format, and adding the first picture into an HTML (hypertext markup language) file of a WEB page;
a coordinate acquisition step: acquiring real-time position coordinates of the robot;
a coordinate conversion step: converting the real-time position coordinates of the robot into pixel coordinates of the robot in the first picture, further obtaining the position of an icon of the robot in a WEB page according to the pixel coordinates of the robot in the first picture, and adding the icon of the robot to an HTML file of the WEB page according to the position of the icon of the robot in the WEB page;
a rendering step: and rendering the HTML file of the WEB page to display a map navigated by the robot and the real-time position of the robot in the navigated map in the WEB page.
2. The WEB-based scene map display method according to claim 1, wherein the map image is a pgm image; the preset format is any one of png format, JPG format or JPEG format.
3. The method for displaying a scene map based on WEB according to claim 2, wherein the adding the first picture to an HTML file of a WEB page in the picture conversion step specifically includes: adding an &lt;img&gt; tag whose src attribute is the file path of the first picture to the HTML file of the WEB page, or writing the file path of the first picture into the background-image style attribute of a tag in the HTML file.
4. The WEB-based scene map display method according to claim 2, wherein the coordinate transformation step includes: firstly, acquiring a parameter resolution and a parameter origin according to a yaml file; then calculating to obtain a pixel coordinate of the robot in the first picture according to the real-time position coordinate of the robot, the parameter resolution, the parameter origin and the height attribute of the first picture; finally, obtaining the position of the icon of the robot in the WEB page according to the pixel coordinate of the robot in the first picture, and adding the icon of the robot to an HTML file of the WEB page according to the position of the icon of the robot in the WEB page; the yaml file is obtained when a map image of the robot navigation scan is obtained;
the pixel coordinates of the robot in the first picture are calculated as:
y = (Y - originy) / resolution,
x = (X - originx) / resolution;
wherein (x, y) are the pixel coordinates of the robot in the first picture; (X, Y) are the real-time position coordinates of the robot; the parameter resolution is the scale of the picture conversion (map units per pixel); the parameter origin = (originx, originy) gives the origin coordinates of the map image; and height is the height attribute of the first picture;
the position coordinates of the icon of the robot in the first picture are then:
y' = height - y = height - (Y - originy) / resolution,
x' = x = (X - originx) / resolution;
wherein (x', y') are the position coordinates of the icon of the robot in the first picture, and height is the height attribute of the first picture.
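Under the common ROS map_server yaml convention (an assumption here; the claim only names resolution, origin, and height), the conversion can be sketched in TypeScript. All identifiers and sample values are illustrative:

```typescript
// Metadata read from the map's yaml file plus the picture height.
interface MapMeta {
  resolution: number;                 // map units (meters) per pixel
  origin: { x: number; y: number };   // world coordinates of the map origin
  height: number;                     // height attribute of the first picture, in pixels
}

// (X, Y) world coordinates -> (x, y) pixel coordinates in the first picture.
function worldToPixel(X: number, Y: number, m: MapMeta): { x: number; y: number } {
  return {
    x: (X - m.origin.x) / m.resolution,
    y: (Y - m.origin.y) / m.resolution,
  };
}

// Image pixel rows grow downward while map rows grow upward,
// hence the height - y flip for the icon's vertical position.
function iconPosition(X: number, Y: number, m: MapMeta): { x: number; y: number } {
  const { x, y } = worldToPixel(X, Y, m);
  return { x, y: m.height - y };
}
```

For example, with resolution 0.25, origin (-10, -10), and height 400, a robot at world (2.5, -5) lands at pixel (50, 20), so its icon is placed at (50, 380).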
5. The method for displaying a scene map based on WEB according to claim 4, wherein adding the icon of the robot to the HTML file of the WEB page at the position of the icon of the robot in the WEB page specifically comprises: adding a CSS style for the icon to the HTML file, with its top property set to y' and its left property set to x'.
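Claim 5's absolute positioning of the icon can be sketched as follows; the element id and icon file name are illustrative assumptions:

```typescript
// Emit an absolutely positioned icon whose top/left CSS properties are the
// icon's picture coordinates (x', y') computed in claim 4.
function robotIconHtml(xPrime: number, yPrime: number): string {
  return `<img id="robot-icon" src="robot.png" ` +
    `style="position: absolute; top: ${yPrime}px; left: ${xPrime}px;">`;
}
```

The icon's containing element would need position: relative so the offsets are measured from the map picture rather than the page.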
6. The WEB-based scene map display method according to claim 1, further comprising a travel route generation step: obtaining the travel route of the robot from a plurality of positions of the robot in the first picture, adding the stroke() function of a <canvas> tag to the HTML file to draw the travel route of the robot in the first picture, and then executing the rendering step so that the travel route of the robot is displayed in the WEB page.
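A sketch of the route drawing in claim 6, assuming a 2D canvas context. The context interface below is pared down to the calls used so the function can run outside a browser; in a page, ctx would come from canvas.getContext("2d"):

```typescript
// Minimal subset of CanvasRenderingContext2D used for stroking the route.
interface Ctx2D {
  beginPath(): void;
  moveTo(x: number, y: number): void;
  lineTo(x: number, y: number): void;
  stroke(): void;
}

// Connect the robot's successive pixel positions and stroke the polyline.
function drawRoute(ctx: Ctx2D, points: Array<{ x: number; y: number }>): void {
  if (points.length < 2) return; // a route needs at least two positions
  ctx.beginPath();
  ctx.moveTo(points[0].x, points[0].y);
  for (const p of points.slice(1)) ctx.lineTo(p.x, p.y);
  ctx.stroke();
}
```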
7. The WEB-based scene map display method according to claim 1, further comprising a forbidden region generating step: obtaining the coordinate range of the forbidden region of the robot, converting it into the coordinate range of the forbidden region in the first picture, adding the fill() function of a <canvas> tag to the HTML file to draw the forbidden region in the first picture, and then executing the rendering step so that the forbidden region of the robot is displayed in the first picture of the WEB page.
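The forbidden-region drawing in claim 7 can be sketched with the same pared-down context approach as the route example; corner coordinates and the polygon assumption are illustrative:

```typescript
// Minimal subset of CanvasRenderingContext2D used for filling the region.
interface FillCtx {
  beginPath(): void;
  moveTo(x: number, y: number): void;
  lineTo(x: number, y: number): void;
  closePath(): void;
  fill(): void;
}

// Fill the forbidden region as a closed polygon given its corners
// in first-picture pixel coordinates.
function drawForbiddenZone(ctx: FillCtx, corners: Array<{ x: number; y: number }>): void {
  if (corners.length < 3) return; // a filled region needs at least three corners
  ctx.beginPath();
  ctx.moveTo(corners[0].x, corners[0].y);
  for (const c of corners.slice(1)) ctx.lineTo(c.x, c.y);
  ctx.closePath();
  ctx.fill();
}
```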
8. The WEB-based scene map display method according to claim 7, wherein the rendering step further comprises: when the forbidden region changes, the original forbidden region is first erased, and the newly drawn graphic is then rendered.
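The erase-then-redraw behavior of claim 8 might look like this for a rectangular zone (a simplifying assumption; the claim does not restrict the zone's shape). clearRect erases the old drawing before fillRect draws the new one:

```typescript
// Minimal subset of CanvasRenderingContext2D used for updating the zone.
interface ZoneCtx {
  clearRect(x: number, y: number, w: number, h: number): void;
  fillRect(x: number, y: number, w: number, h: number): void;
}

type Rect = { x: number; y: number; w: number; h: number };

// Erase the previous forbidden zone (if any), then draw the new one.
function updateForbiddenZone(ctx: ZoneCtx, oldZone: Rect | null, newZone: Rect): void {
  if (oldZone) ctx.clearRect(oldZone.x, oldZone.y, oldZone.w, oldZone.h);
  ctx.fillRect(newZone.x, newZone.y, newZone.w, newZone.h);
}
```

Drawing the zone on its own <canvas> layer above the map picture keeps clearRect from erasing the map itself.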
9. A WEB-based scene map display apparatus comprising a memory and a processor, wherein the memory stores a scene map display program, being a computer program, that runs on the processor, and the processor, when executing the scene map display program, implements the steps of the WEB-based scene map display method according to any one of claims 1 to 8.
10. A storage medium, being a computer-readable storage medium on which a scene map display program, being a computer program, is stored, wherein the scene map display program, when executed by a processor, implements the steps of the WEB-based scene map display method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211027350.8A CN115752487A (en) | 2022-08-25 | 2022-08-25 | Scene map display method and device based on WEB and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115752487A true CN115752487A (en) | 2023-03-07 |
Family
ID=85349329
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||