CN110647698B - Page loading method and device, electronic equipment and readable storage medium


Info

Publication number
CN110647698B
Authority
CN
China
Prior art keywords: page, rendering, data, data source, request
Legal status: Active
Application number: CN201910741270.0A
Other languages: Chinese (zh)
Other versions: CN110647698A (en)
Inventors: 薛洪立, 王东川, 沈军, 王艳辉
Current Assignee: Visionvera Information Technology Co Ltd
Original Assignee: Visionvera Information Technology Co Ltd
Application filed by Visionvera Information Technology Co Ltd
Priority to CN201910741270.0A
Publication of CN110647698A
Application granted
Publication of CN110647698B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 - Details of database functions independent of the retrieved data types
    • G06F16/95 - Retrieval from the web
    • G06F16/957 - Browsing optimisation, e.g. caching or content distillation
    • G06F16/9574 - Browsing optimisation, e.g. caching or content distillation, of access to content, e.g. by caching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 - Details of database functions independent of the retrieved data types
    • G06F16/95 - Retrieval from the web
    • G06F16/957 - Browsing optimisation, e.g. caching or content distillation
    • G06F16/9577 - Optimising the visualization of content, e.g. distillation of HTML documents

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The embodiment of the invention relates to the technical field of data processing, and provides a page loading method, a page loading device, electronic equipment and a readable storage medium. The method is applied to a client and comprises the following steps: sending a page initial loading request to a background server; receiving a data source and a first state code returned by the background server in response to the page initial loading request; caching the data source; extracting first page data corresponding to the page initial loading request from the cached data source according to the first state code and rendering and displaying the first page data; sending a page switching request to the background server; and extracting second page data corresponding to the page switching request from the cached data source according to a second state code returned by the background server in response to the page switching request, and rendering and displaying the second page data. When the page is switched, the data required for rendering is extracted from the cached data source, which reduces the frequency of data requests between the client and the background server and improves rendering efficiency.

Description

Page loading method and device, electronic equipment and readable storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a page loading method, an apparatus, an electronic device, and a computer-readable storage medium.
Background
Currently, the page switching manner in the related art is to switch pages according to the network addresses of the pages; that is, after the client triggers a page switching request, the client jumps from the currently displayed page to the page corresponding to the triggered network address.
In the process of implementing the invention, the inventor found that the related art has at least the following problems: when the client performs page jumping or refreshes the display, the page rendering and display efficiency is low, the user has to spend more time waiting for the page to be displayed, and the use experience is poor.
Disclosure of Invention
In view of the above problems, embodiments of the present invention are proposed to provide a page loading method, apparatus, electronic device, and computer-readable storage medium that overcome or at least partially solve the above problems.
In a first aspect, an embodiment of the present invention discloses a page loading method, where the method is applied to a client, and the method includes:
sending a page initial loading request to a background server;
receiving a data source and a first state code returned by the background server in response to the page initial loading request;
caching the data source;
extracting first page data corresponding to the page initial loading request from a cached data source according to the first state code to perform rendering display;
sending a page switching request to the background server;
receiving a second state code returned by the background server in response to the page switching request;
and extracting second page data corresponding to the page switching request from a cached data source according to the second state code for rendering and displaying.
Optionally, the caching the data source includes:
inquiring first page data corresponding to the page initial loading request in the data source according to the first state code;
caching the data source starting from the first page data.
Optionally, the extracting, according to the first status code, first page data corresponding to the page initial loading request from a cached data source for rendering and displaying includes:
judging whether the first page data is cached completely;
under the condition that the first page data is cached completely, extracting the first page data according to the first state code;
acquiring a first rendering authority aiming at the page initial loading request in the first state code;
and rendering and displaying the first page data according to the first rendering authority.
Optionally, the method further comprises:
and caching the data source after the first page data is extracted while rendering and displaying the first page data.
Optionally, the extracting, according to the second state code, second page data corresponding to the page switching request from a cached data source for rendering and displaying includes:
acquiring a display area to be switched corresponding to the page switching request;
extracting second page data corresponding to the display area to be switched from a cached data source according to the second state code;
acquiring a second rendering permission aiming at the page switching request in the second state code;
and rendering the second page data according to the second rendering permission and replacing the display area to be switched.
Optionally, the method further comprises:
and when a closing operation for the page triggered by the user is detected, clearing the cached data source.
In a second aspect, an embodiment of the present invention further discloses a page loading apparatus, where the apparatus is applied to a client, and the apparatus includes:
the first sending module is used for sending a page initial loading request to the background server;
the first receiving module is used for receiving a data source and a first state code returned by the background server in response to the page initial loading request;
the cache module is used for caching the data source;
the first rendering and displaying module is used for extracting first page data corresponding to the page initial loading request from a cached data source according to the first state code to perform rendering and displaying;
the second sending module is used for sending a page switching request to the background server;
a second receiving module, configured to receive a second state code returned by the background server in response to the page switching request;
and the second rendering and displaying module is used for extracting second page data corresponding to the page switching request from a cached data source according to the second state code to perform rendering and displaying.
Optionally, the cache module includes:
the query submodule is used for querying first page data corresponding to the page initial loading request in the data source according to the first state code;
and the first cache submodule is used for caching the data source from the first page data.
Optionally, the first rendering and displaying module comprises:
the judgment submodule is used for judging whether the first page data is cached completely;
the first extraction submodule is used for extracting the first page data under the condition that the first page data is cached completely;
a first obtaining submodule, configured to obtain a first rendering permission in the first status code, where the first rendering permission is for the page initial loading request;
and the first rendering and displaying submodule is used for rendering and displaying the first page data according to the first rendering authority.
Optionally, the apparatus further comprises:
and the asynchronous cache module is used for caching the data source after the first page data is extracted while rendering and displaying the first page data.
Optionally, the second rendering and displaying module comprises:
the second obtaining sub-module is used for obtaining a display area to be switched corresponding to the page switching request;
the second extraction submodule is used for extracting second page data corresponding to the display area to be switched from a cached data source according to the second state code;
a third obtaining sub-module, configured to obtain a second rendering permission for the page switching request in the second state code;
and the second rendering and displaying submodule is used for rendering the second page data according to the second rendering authority and replacing the display area to be switched.
Optionally, the apparatus further comprises:
and the clearing module is used for clearing the cached data source when the closing operation aiming at the page triggered by the user is detected.
In a third aspect, an embodiment of the present invention further provides an electronic device, including:
one or more processors; and one or more machine readable media having instructions stored thereon, which when executed by the one or more processors, cause the electronic device to perform the page loading method as described in any of the first aspects above.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program for causing a processor to execute the page loading method according to any one of the above first aspects.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, when the client carries out page initial loading, a page initial loading request is sent to the background server, and then a first state code and a data source returned by the background server are received. After receiving the data source and the first state code, the client caches the data source, extracts first page data for the page initial loading request from the cached data source according to the first state code, and finally renders and displays the first page data according to the first state code to complete rendering and displaying of the initial page.
And then, when the page is switched, the client sends a page switching request to the background server, the background server returns the second state code, and the client extracts the second page data aiming at the page switching request from the cached data source according to the received second state code to perform rendering display. In the invention, when the client side switches the page, the corresponding page data is extracted from the cached data source according to the state code, and the background server returns the corresponding state code when responding to the page switching request, so that the client side can finish the rendering display of the switched page according to the state code, thereby reducing the frequency of data requests between the client side and the background server and improving the rendering efficiency.
Drawings
FIG. 1 is a schematic networking diagram of a video network of the present invention;
FIG. 2 is a schematic diagram of a hardware structure of a node server according to the present invention;
FIG. 3 is a schematic diagram of a hardware structure of an access switch of the present invention;
FIG. 4 is a schematic diagram of a hardware structure of an ethernet protocol conversion gateway according to the present invention;
FIG. 5 is a schematic diagram of an application scenario of a page loading method according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating steps of a page loading method according to an embodiment of the present invention;
FIG. 7 is a flowchart of an embodiment of a page loading method according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of an embodiment of a page loading apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The video network is an important milestone in network development. It is a real-time network that can realize real-time transmission of high-definition video, pushing numerous internet applications toward high-definition video and high-definition face-to-face communication.
The video network adopts real-time high-definition video switching technology and can integrate dozens of required services, such as video, voice, pictures, text, communication and data, on one network platform, including high-definition video conferencing, video monitoring, intelligent monitoring analysis, emergency command, digital broadcast television, time-shifted television, network teaching, live broadcast, VOD on demand, television mail, Personal Video Recorder (PVR), intranet (self-office) channels, intelligent video broadcast control, information distribution and the like, and realizes high-definition-quality video broadcast through a television or a computer.
To better understand the embodiments of the present invention, the video network is described below:
some of the technologies applied in the video networking are as follows:
network Technology (Network Technology)
Network technology innovation in the video network improves the traditional Ethernet to cope with the potentially huge video traffic on the network. Unlike pure network Packet Switching or network Circuit Switching, the video network technology adopts Packet Switching to meet streaming requirements. The video network technology has the flexibility, simplicity and low cost of packet switching, together with the quality and security guarantees of circuit switching, thereby realizing seamless connection of whole-network switched virtual circuits and data formats.
Switching Technology
The video network adopts two advantages of asynchronism and packet switching of the Ethernet, eliminates the defects of the Ethernet on the premise of full compatibility, has end-to-end seamless connection of the whole network, is directly communicated with a user terminal, and directly bears an IP data packet. The user data does not require any format conversion across the entire network. The video networking is a higher-level form of the Ethernet, is a real-time exchange platform, can realize the real-time transmission of the whole-network large-scale high-definition video which cannot be realized by the existing Internet, and pushes a plurality of network video applications to high-definition and unification.
Server Technology
The server technology on the video networking and unified video platform is different from the traditional server, the streaming media transmission of the video networking and unified video platform is established on the basis of connection orientation, the data processing capacity of the video networking and unified video platform is independent of flow and communication time, and a single network layer can contain signaling and data transmission. For voice and video services, the complexity of video networking and unified video platform streaming media processing is much simpler than that of data processing, and the efficiency is greatly improved by more than one hundred times compared with that of a traditional server.
Storage Technology
The super-high speed storage technology of the unified video platform adopts the most advanced real-time operating system in order to adapt to the media content with super-large capacity and super-large flow, the program information in the server instruction is mapped to the specific hard disk space, the media content is not passed through the server any more, and is directly sent to the user terminal instantly, and the general waiting time of the user is less than 0.2 second. The optimized sector distribution greatly reduces the mechanical motion of the magnetic head track seeking of the hard disk, the resource consumption only accounts for 20% of that of the IP internet of the same grade, but concurrent flow which is 3 times larger than that of the traditional hard disk array is generated, and the comprehensive efficiency is improved by more than 10 times.
Network Security Technology
The structural design of the video network completely eliminates the network security problem troubling the internet structurally by the modes of independent service permission control each time, complete isolation of equipment and user data and the like, generally does not need antivirus programs and firewalls, avoids the attack of hackers and viruses, and provides a structural carefree security network for users.
Service Innovation Technology
The unified video platform integrates services and transmission; whether for a single user, a private network user or a network aggregate, it only needs to be connected automatically once. The user terminal, set-top box or PC connects directly to the unified video platform to obtain a variety of multimedia video services in various forms. The unified video platform adopts a menu-style configuration table instead of traditional complex application programming, so complex applications can be realized with very little code, enabling unlimited new service innovation.
Networking of the video network is as follows:
the video network is a centralized control network structure, and the network can be a tree network, a star network, a ring network and the like, but on the basis of the centralized control node, the whole network is controlled by the centralized control node in the network.
As shown in fig. 1, the video network is divided into an access network and a metropolitan network.
The devices of the access network part can be mainly classified into 3 types: node server, access switch, terminal (including various set-top boxes, coding boards, memories, etc.). The node server is connected to an access switch, which may be connected to a plurality of terminals and may be connected to an ethernet network.
The node server is a node which plays a centralized control function in the access network and can control the access switch and the terminal. The node server can be directly connected with the access switch or directly connected with the terminal.
Similarly, devices of the metropolitan network portion may also be classified into 3 types: a metropolitan area server, a node switch and a node server. The metro server is connected to a node switch, which may be connected to a plurality of node servers.
The node server is a node server of the access network part, namely the node server belongs to both the access network part and the metropolitan area network part.
The metropolitan area server is a node which plays a centralized control function in the metropolitan area network and can control a node switch and a node server. The metropolitan area server can be directly connected with the node switch or directly connected with the node server.
Therefore, the whole video network is a network structure with layered centralized control, and the network controlled by the node server and the metropolitan area server can be in various structures such as tree, star and ring.
The access network part can form a unified video platform (the part in the dotted circle), and a plurality of unified video platforms can form a video network; each unified video platform may be interconnected via metropolitan area and wide area video networking.
1. Video networking device classification
1.1 devices in the video network of the embodiment of the present invention can be mainly classified into 3 types: server, exchanger (including Ethernet protocol conversion gateway), terminal (including various set-top boxes, code board, memory, etc.). The video network as a whole can be divided into a metropolitan area network (or national network, global network, etc.) and an access network.
1.2 wherein the devices of the access network part can be mainly classified into 3 types: node server, access exchanger (including Ethernet protocol conversion gateway), terminal (including various set-top boxes, coding board, memory, etc.).
The specific hardware structure of each access network device is as follows:
a node server:
as shown in fig. 2, the system mainly includes a network interface module 201, a switching engine module 202, a CPU module 203, and a disk array module 204;
the network interface module 201, the CPU module 203, and the disk array module 204 all enter the switching engine module 202; the switching engine module 202 performs an operation of looking up the address table 205 on the incoming packet, thereby obtaining the direction information of the packet; and stores the packet in a queue of the corresponding packet buffer 206 based on the packet's steering information; if the queue of the packet buffer 206 is nearly full, it is discarded; the switching engine module 202 polls all packet buffer queues for forwarding if the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero. The disk array module 204 mainly implements control over the hard disk, including initialization, read-write, and other operations on the hard disk; the CPU module 203 is mainly responsible for protocol processing with an access switch and a terminal (not shown in the figure), configuring an address table 205 (including a downlink protocol packet address table, an uplink protocol packet address table, and a data packet address table), and configuring the disk array module 204.
The access switch:
as shown in fig. 3, the network interface module mainly includes a network interface module (a downlink network interface module 301 and an uplink network interface module 302), a switching engine module 303 and a CPU module 304;
wherein, the packet (uplink data) coming from the downlink network interface module 301 enters the packet detection module 305; the packet detection module 305 detects whether the Destination Address (DA), the Source Address (SA), the packet type, and the packet length of the packet meet the requirements, and if so, allocates a corresponding stream identifier (stream-id) and enters the switching engine module 303, otherwise, discards the stream identifier; the packet (downstream data) coming from the upstream network interface module 302 enters the switching engine module 303; the incoming data packet of the CPU module 304 enters the switching engine module 303; the switching engine module 303 performs an operation of looking up the address table 306 on the incoming packet, thereby obtaining the direction information of the packet; if the packet entering the switching engine module 303 is from the downstream network interface to the upstream network interface, the packet is stored in the queue of the corresponding packet buffer 307 in association with the stream-id; if the queue of the packet buffer 307 is nearly full, it is discarded; if the packet entering the switching engine module 303 is not from the downlink network interface to the uplink network interface, the data packet is stored in the queue of the corresponding packet buffer 307 according to the guiding information of the packet; if the queue of the packet buffer 307 is nearly full, it is discarded.
The switching engine module 303 polls all packet buffer queues and may include two cases:
if the queue is from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queued packet counter is greater than zero; 3) obtaining a token generated by a code rate control module;
if the queue is not from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero.
The rate control module 308 is configured by the CPU module 304, and generates tokens for packet buffer queues from all downstream network interfaces to upstream network interfaces at programmable intervals to control the rate of upstream forwarding.
The CPU module 304 is mainly responsible for protocol processing with the node server, configuration of the address table 306, and configuration of the code rate control module 308.
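For reference, the forwarding conditions described above for the packet buffer queues can be summarized in a small sketch. This is an illustrative TypeScript model, not the switch's actual firmware; the type and function names are assumptions.

```typescript
// Illustrative sketch (not the patented hardware) of the forwarding conditions
// the access switch checks while polling its packet buffer queues.
interface BufferQueue {
  fromDownlinkToUplink: boolean; // queue carries upstream (downlink interface -> uplink interface) traffic
  packetCounter: number;         // queued packet counter
}

interface PortState {
  sendBufferFull: boolean;       // state of the port send buffer
}

interface TokenSource {
  takeToken(): boolean;          // token generated by the rate control module
}

function canForward(q: BufferQueue, port: PortState, rateControl: TokenSource): boolean {
  if (port.sendBufferFull) return false;   // 1) the port send buffer must not be full
  if (q.packetCounter <= 0) return false;  // 2) the queue packet counter must be greater than zero
  if (q.fromDownlinkToUplink) {
    return rateControl.takeToken();        // 3) upstream queues also need a rate-control token
  }
  return true;                             // other queues need only conditions 1) and 2)
}
```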
Ethernet protocol conversion gateway
As shown in fig. 4, the apparatus mainly includes a network interface module (a downlink network interface module 401 and an uplink network interface module 402), a switching engine module 403, a CPU module 404, a packet detection module 405, a rate control module 408, an address table 406, a packet buffer 407, a MAC adding module 409, and a MAC deleting module 410.
Wherein, the data packet coming from the downlink network interface module 401 enters the packet detection module 405; the packet detection module 405 detects whether the ethernet MAC DA, the ethernet MAC SA, the ethernet length or frame type, the video network destination address DA, the video network source address SA, the video network packet type, and the packet length of the packet meet the requirements, and if so, allocates a corresponding stream identifier (stream-id); then, the MAC deletion module 410 subtracts MAC DA, MAC SA, length or frame type (2byte) and enters the corresponding receiving buffer, otherwise, discards it;
the downlink network interface module 401 detects the sending buffer of the port, and if there is a packet, obtains the ethernet MAC DA of the corresponding terminal according to the destination address DA of the packet, adds the ethernet MAC DA of the terminal, the MAC SA of the ethernet protocol gateway, and the ethernet length or frame type, and sends the packet.
The other modules in the ethernet protocol gateway function similarly to the access switch.
A terminal:
the system mainly comprises a network interface module, a service processing module and a CPU module; for example, the set-top box mainly comprises a network interface module, a video and audio coding and decoding engine module and a CPU module; the coding board mainly comprises a network interface module, a video and audio coding engine module and a CPU module; the memory mainly comprises a network interface module, a CPU module and a disk array module.
1.3 The devices of the metropolitan area network part can be mainly classified into 3 types: node server, node switch, and metropolitan area server. The node switch mainly comprises a network interface module, a switching engine module and a CPU module; the metropolitan area server mainly comprises a network interface module, a switching engine module and a CPU module.
2. Video networking packet definition
2.1 Access network packet definition
The data packet of the access network mainly comprises the following parts: destination Address (DA), Source Address (SA), reserved bytes, payload (pdu), CRC.
As shown in the following table, the data packet of the access network mainly includes the following parts:
DA | SA | Reserved | Payload | CRC
wherein:
the Destination Address (DA) is composed of 8 bytes (byte), the first byte represents the type of the data packet (such as various protocol packets, multicast data packets, unicast data packets, etc.), there are 256 possibilities at most, the second byte to the sixth byte are metropolitan area network addresses, and the seventh byte and the eighth byte are access network addresses;
the Source Address (SA) is also composed of 8 bytes (byte), defined as the same as the Destination Address (DA);
the reserved byte consists of 2 bytes;
the payload part has different lengths according to the type of datagram: it is 64 bytes for various protocol packets and 32+1024 (1056) bytes for unicast packets; of course, the length is not limited to these two cases;
the CRC consists of 4 bytes and is calculated in accordance with the standard ethernet CRC algorithm.
2.2 metropolitan area network packet definition
The topology of a metropolitan area network is a graph, and there may be 2 or even more connections between two devices, for example between a node switch and a node server, or between two node switches. However, the metro network address of each metro network device is unique, so in order to accurately describe the connection relationships between metro network devices, a parameter is introduced in the embodiment of the present invention: a label, to uniquely describe a metropolitan area network device.
In this specification, the definition of the label is similar to that of an MPLS (Multi-Protocol Label Switching) label. Assuming that there are two connections between device A and device B, a packet from device A to device B has 2 labels, and a packet from device B to device A also has 2 labels. Labels are divided into incoming labels and outgoing labels: assuming the label of a packet entering device A (the incoming label) is 0x0000, the label of the packet when it leaves device A (the outgoing label) may become 0x0001. The network access process of the metro network is a process under centralized control, that is, both address allocation and label allocation of the metro network are dominated by the metropolitan area server, and the node switches and node servers execute them passively. This differs from MPLS label allocation, which is the result of mutual negotiation between the switch and the server.
As shown in the following table, the data packet of the metro network mainly includes the following parts:
DA | SA | Reserved | Label | Payload | CRC
Namely Destination Address (DA), Source Address (SA), Reserved byte (Reserved), tag, payload (pdu), CRC. The format of the tag may be defined by reference to the following: the tag is 32 bits with the upper 16 bits reserved and only the lower 16 bits used, and its position is between the reserved bytes and payload of the packet.
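The byte layouts in sections 2.1 and 2.2 can be illustrated with a small parsing sketch. The field lengths follow the counts given above; the helper names and the DataView-based parsing are assumptions, not part of the patent.

```typescript
// Illustrative parsing sketch of the packet layouts above; offsets follow the
// byte counts in the text, everything else (names, DataView use) is assumed.
const DA_LEN = 8;        // destination address: byte 0 = packet type,
                         // bytes 1-5 = metro address, bytes 6-7 = access address
const SA_LEN = 8;        // source address, same layout as the destination address
const RESERVED_LEN = 2;  // reserved bytes
const CRC_LEN = 4;       // CRC, standard Ethernet CRC algorithm

function parseAccessPacket(buf: ArrayBuffer) {
  const view = new DataView(buf);
  const packetType = view.getUint8(0);  // first DA byte: protocol / multicast / unicast, up to 256 types
  const payloadLen = buf.byteLength - DA_LEN - SA_LEN - RESERVED_LEN - CRC_LEN;
  return { packetType, payloadLen };    // 64 bytes for protocol packets, 1056 bytes for unicast packets
}

function parseMetroLabel(buf: ArrayBuffer): number {
  // In a metro packet the 32-bit label sits between the reserved bytes and the payload.
  const view = new DataView(buf);
  const label = view.getUint32(DA_LEN + SA_LEN + RESERVED_LEN);
  return label & 0xffff;                // upper 16 bits are reserved, only the lower 16 bits are used
}
```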
In the related art, the loading and switching of web pages is handled jointly by the front end and the back end: the front end controls page loading and rendering, and the back end controls the sending of page data and the processing of data logic. Existing page loading or switching is performed according to the link address of the page. For example, when a client on a terminal detects the loading or switching of a page, the client sends a request to a background server, the background server returns the corresponding data according to the request, and the client renders and displays the data. However, in the related art, the client needs the background server to return the corresponding data every time a page is switched, and it sends requests to the background server many times to obtain a large amount of data, which reduces rendering efficiency, degrades page performance and harms the user experience.
Referring to fig. 5, an application scenario diagram of a page loading method according to an embodiment of the present invention is shown. In the video network, a server can be in communication connection with a terminal, and a client capable of loading and displaying a network page is installed on the terminal.
Referring to fig. 6, a flowchart illustrating steps of an embodiment of a page loading method according to the embodiment of the present invention is shown, where the page loading method is applied to a client, and specifically includes the following steps:
step S601, sending a page initial loading request to a background server.
The page initial loading request is generated when the client detects an initial loading operation for a page triggered by the user. For example, after the user opens the client, the page initial loading request is automatically generated when the client automatically accesses a preset network address. The page initial loading request may also be generated when the user clicks a network address in the client and the client detects the access operation on that network address.
In this embodiment, the terminal may be an intelligent device (such as a smart phone), a computer, or a tablet computer, and this embodiment does not specifically limit the terminal, and the client installed on the terminal may be a video client, a search client, an information browsing client, or the like.
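A minimal client-side sketch of step S601 follows, assuming the background server exposes an HTTP endpoint; the URL, request fields and use of fetch are illustrative assumptions only.

```typescript
// Minimal sketch of step S601, assuming an HTTP endpoint on the background
// server; the URL, request fields and account handling are illustrative only.
interface InitialLoadRequest {
  pageUrl: string;    // page the client wants to load initially, e.g. the preset network address
  account?: string;   // user account currently logged in on the client, if any
}

async function sendInitialLoadRequest(req: InitialLoadRequest): Promise<Response> {
  // Triggered when the client opens its preset address or the user clicks a network address.
  return fetch('/backend/page/initial-load', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(req),
  });
}
```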
Step S602, receiving a data source and a first status code returned by the background server in response to the page initial loading request.
The background server can analyze the page initial loading request after receiving the page initial loading request, the page initial loading request can include page data request information, state verification information and the like, and the data source and the first state code are returned according to the data request information and the state verification information. The data request information is used for requesting to obtain a data source for page loading; the status verification information is used for identifying the login account type of the current client, and may specifically include whether the current client has a user account to log in, where the logged-in user account is a guest account, a common account, a member account, or an administrator account. The background server can obtain the page control authority corresponding to the user account according to the state of the user account logged in by the current client, so that a first state code for identifying the rendering authority corresponding to the page control authority is returned to the client.
For example, when the login account of the current client is a guest account or an unregistered account, a first status code returned to the first client identifies the lowest rendering authority; when the login account is a member account, a first status code returned to the first client identifies a common rendering authority; and when the login account is the administrator account, identifying the highest rendering authority by the first state code returned to the first client.
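The account-type check described above could look roughly as follows on the background server side; the account-type names and permission labels are placeholders rather than the patent's actual codes.

```typescript
// Sketch of the account-type check above; the account-type names and the
// permission labels are placeholders, not the patent's actual codes.
type AccountType = 'guest' | 'unregistered' | 'member' | 'admin';

function renderingPermissionFor(account: AccountType): 'lowest' | 'common' | 'highest' {
  if (account === 'admin') return 'highest';  // administrator account: highest rendering permission
  if (account === 'member') return 'common';  // member account: common rendering permission
  return 'lowest';                            // guest or unregistered account: lowest rendering permission
}
```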
Step S603, caching the data source.
In this embodiment, the data returned by the background server according to the page initial loading request may include not only the page data required for loading the initial page, but also other page data pointed by the link address in the initial page, and other data that may be required for video playing and audio playing in the initial page. Therefore, after the client receives the data source, the data can be cached firstly, and after the data source is cached locally, when other switched pages are loaded, the needed page data can be directly extracted from the locally cached data source, so that the frequency of requesting the page data with the background server is reduced, and the page loading efficiency is improved.
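A minimal sketch of step S603, assuming the data source arrives as a map from page identifiers to page data and is cached in an in-memory Map; both assumptions are illustrative.

```typescript
// Sketch of step S603, assuming the data source arrives keyed by page
// identifier and is cached in an in-memory Map on the client.
type PageData = unknown;
type DataSource = Record<string, PageData>;   // e.g. 'www.abc.com', 'www.abc.com/1', 'www.abc.com/2'

const pageCache = new Map<string, PageData>();

function cacheDataSource(source: DataSource): void {
  for (const [pageId, data] of Object.entries(source)) {
    pageCache.set(pageId, data);   // keep everything locally so later page switches need no data request
  }
}
```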
Step S604, extracting the first page data corresponding to the initial page loading request from the cached data source according to the first status code, and performing rendering display.
After the client performs the cache operation on the data source, first page data required for loading the initial page can be extracted from the cached data source according to the first state code, and then the first page data is rendered and displayed. The first status code may carry a data identifier for identifying page data required in the page initial loading request and rendering permission required by the client for rendering the page data.
For example, after the user clicks www.abc.com, the client sends an initial page load request for www.abc.com to the background server, where the initial page load request also includes the user account of the current client login. After receiving the page initial loading request, the background server returns all page data in the storage list for www.abc.com network links in the server, for example, page data in child storage lists www.abc.com/1 and www.abc.com/2 in root storage list www.abc.com. Meanwhile, the background server may also return a first status code for identifying that the page data targeted by the page initial loading request is www.abc.com page data, and the client searches for the page data corresponding to www.abc.com in the cached data source containing all the page data according to the first status code, and performs rendering display.
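Continuing the cache sketch above, step S604 could be sketched as follows; the status-code shape (a data identifier plus a rendering permission) is an assumption based on the description.

```typescript
// Sketch of step S604, building on the pageCache sketch above; the shape of
// the status code (data identifier plus rendering permission) is an assumption.
interface StatusCode {
  dataId: string;                // identifies the page data the request targets, e.g. 'www.abc.com'
  renderingPermission: string;   // rendering permission granted to the current login account
}

// Placeholder for the client's actual rendering routine.
declare function renderPage(data: PageData, permission: string): void;

function renderInitialPage(firstStatusCode: StatusCode): void {
  const firstPageData = pageCache.get(firstStatusCode.dataId);  // look up the cached first page data
  if (firstPageData !== undefined) {
    renderPage(firstPageData, firstStatusCode.renderingPermission);
  }
}
```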
Step S605, sending a page switching request to the background server.
Step S606, receiving a second state code returned by the background server in response to the page switching request.
In practice, after the client loads the initial page, the user may also perform page switching during the use process, for example, when the user clicks a certain network link in the currently displayed page, the client jumps from the currently displayed page to a page pointed by the network link.
When the client detects a page switching operation of a user for a current page, such as clicking a certain network link, refreshing the page, or the like, a page switching request may be generated according to a user account of the current client and corresponding parameters in the network link, and then the page switching request is sent to the background server.
After receiving the page switching request, the background server analyzes the page switching request to obtain parameters corresponding to the network link and a user account number logged in by the current client, obtains a user authority corresponding to the current user account number according to the current user account number, then generates a second state code according to the corresponding user account authority and the parameters corresponding to the network link, and sends the second state code to the client. The second state code is used for identifying second page data required by the current page switching request and rendering permission corresponding to the current client login user account.
Step S607, extracting the second page data corresponding to the page switching request from the cached data source according to the second state code, and performing rendering display.
After receiving the second state code, the client extracts the second page data from the data source that was cached locally at initial page loading, according to the data identifier in the second state code that identifies the second page data corresponding to the current page switching request, and then renders and displays the second page data according to the rendering permission identified in the second state code for the user account currently logged in on the client.
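Steps S605 to S607 could be sketched as follows, building on the pageCache, StatusCode and renderPage sketches above; the endpoint and response shape are assumptions.

```typescript
// Sketch of steps S605-S607, building on the pageCache, StatusCode and
// renderPage sketches above; the endpoint and response shape are assumptions.
async function switchPage(targetLink: string, account?: string): Promise<void> {
  // S605: only a status code is requested; the page data is already cached locally.
  const resp = await fetch('/backend/page/switch', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ targetLink, account }),
  });
  // S606: receive the second status code.
  const secondStatusCode: StatusCode = await resp.json();
  // S607: extract the second page data from the local cache and render it.
  const secondPageData = pageCache.get(secondStatusCode.dataId);
  if (secondPageData !== undefined) {
    renderPage(secondPageData, secondStatusCode.renderingPermission);
  }
}
```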
In this embodiment, when the client performs initial page loading, a page initial loading request is first sent to the background server, and then a first state code and a data source returned by the background server are received. After receiving the data source and the first state code, the client caches the data source, extracts first page data for the page initial loading request from the cached data source according to the first state code, and finally renders and displays the first page data according to the first state code to complete rendering and displaying of the initial page.
And then, when the page is switched, the client sends a page switching request to the background server, the background server returns the second state code, and the client extracts the second page data aiming at the page switching request from the cached data source according to the received second state code to perform rendering display. In the invention, when the client side switches the page, the corresponding page data is extracted from the cached data source according to the state code, and the background server returns the corresponding state code when responding to the page switching request, so that the client side can finish the rendering display of the switched page according to the state code, thereby reducing the frequency of data requests between the client side and the background server and improving the rendering efficiency.
With reference to the foregoing embodiment, in an implementation manner, in step S603, caching the data source may specifically include the following steps:
s6031, the data source is queried for the first page data corresponding to the initial page loading request according to the first status code.
S6032, buffer-storing the data source from the first page data.
In practice, the data source returned by the background server in response to the page initial loading request may contain a large amount of data. Therefore, in order to improve the efficiency of rendering the initial page, when caching the data the client may first query the data source for the first page data corresponding to the current page initial loading request according to the data identifier in the first status code, and then start caching from that first page data. Starting the cache from the first page data matches the order of caching to the order of rendering during initial page loading, so the client can extract the page data it needs from the cached data source as soon as it starts reading page data for rendering, which avoids incomplete initial rendering when the data source contains a large amount of data.
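A sketch of steps S6031 and S6032, again building on the pageCache sketch: the entry named by the first status code is cached before the rest of the data source. The shapes are assumptions.

```typescript
// Sketch of steps S6031-S6032, building on the pageCache sketch: the entry
// named by the first status code is cached before the rest of the data source.
function cacheStartingFromFirstPage(source: Record<string, unknown>,
                                    firstStatusCode: { dataId: string }): void {
  const firstPageData = source[firstStatusCode.dataId];   // S6031: query the first page data
  if (firstPageData !== undefined) {
    pageCache.set(firstStatusCode.dataId, firstPageData); // S6032: cache it first...
  }
  for (const [pageId, data] of Object.entries(source)) {
    if (pageId !== firstStatusCode.dataId) {
      pageCache.set(pageId, data);                        // ...then cache the remaining page data
    }
  }
}
```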
With reference to the foregoing embodiment, in an implementation manner, in step S604, extracting, according to the first status code, the first page data corresponding to the initial page loading request from the cached data source for rendering and displaying, may specifically include the following steps:
s6041, determine whether the first page data is cached.
In practice, when the client determines whether the first page data is completely cached, the process of caching the data source is not stopped. Whether the first page data is cached can be judged in the following modes: in one mode, whether the first page data identified by the first status code and corresponding to the initial page loading request still exists in the data source may be queried, and if the currently uncached data source does not already contain the first page data, it may be considered that the first page data is completely cached. In another mode, it may also be determined whether the currently cached data is the first page data, in this embodiment, the client starts caching from the first page data, and caches other page data after the first page data is cached, so that if the currently cached data of the client is other page data, it may be considered that the caching of the first page data is completed.
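The two checks described for step S6041 could be sketched as follows; both helpers are illustrative assumptions layered on the caching sketches above.

```typescript
// Illustrative helpers for the two checks described for step S6041.
function firstPageCachedByAbsence(uncached: Record<string, unknown>, dataId: string): boolean {
  // Mode 1: the not-yet-cached part of the data source no longer contains the first page data.
  return !(dataId in uncached);
}

function firstPageCachedByProgress(currentlyCachingId: string, dataId: string): boolean {
  // Mode 2: caching started from the first page data, so once some other page
  // data is being cached, the first page data must already be complete.
  return currentlyCachingId !== dataId;
}
```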
S6042, when the first page data is cached, extracting the first page data according to the first status code.
After the client determines that the first page data is completely cached, the client can extract the first page data from the cached data source according to the data identification information in the first status code.
S6043, obtain a first rendering permission in the first status code for the page initial loading request.
S6044, performing rendering display on the first page data according to the first rendering authority.
After the client extracts the first page data, rendering and displaying are performed according to a first rendering authority in the first state code aiming at the page initial loading request.
In this embodiment, when rendering is performed on a page, there may be multiple control states, and corresponding states need to be displayed under different rendering permissions. When the page is rendered, the page can be rendered in a module mode, one page is divided into a plurality of different modules, each module can correspond to different states, and then the state corresponding to each module is identified in the rendering authority of the state code. For example, the first page data includes three modules, which are a first module, a second module and a third module, rendering permissions of the three modules are identified in the first rendering permission, respectively, the rendering permission of the first module is 0, the rendering permission of the second module is 1, and the rendering permission of the third module is 2, where 0 represents that the module is displayed, 1 represents that the module is hidden, and 2 represents that the module is readable.
The rules by which the client renders page data are thus driven by the status codes. Because the status codes are standardized in design, clear and concise, the client can respond quickly when rendering page data, which improves rendering efficiency.
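The module-level rendering rule from the example above (0 for display, 1 for hide, 2 for readable) could be applied roughly as follows on the client; the DOM handling and the read-only treatment are assumptions.

```typescript
// Sketch of the module-level rendering rule from the example above
// (0 = display, 1 = hide, 2 = readable); the DOM handling is an assumption.
enum ModulePermission { Display = 0, Hide = 1, Readable = 2 }

function applyModulePermission(el: HTMLElement, permission: ModulePermission): void {
  switch (permission) {
    case ModulePermission.Display:
      el.style.display = '';        // show the module
      break;
    case ModulePermission.Hide:
      el.style.display = 'none';    // hide the module
      break;
    case ModulePermission.Readable:
      el.style.display = '';        // visible but read-only: disable interactive controls
      el.querySelectorAll('input, button, select, textarea')
        .forEach(c => c.setAttribute('disabled', 'true'));
      break;
  }
}
```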
With reference to the foregoing embodiment, in an implementation manner, the page loading method may further include the following steps:
and caching the data source after the first page data is extracted while rendering and displaying the first page data.
In this embodiment, asynchronous caching of data may be adopted to improve the efficiency of loading and displaying the initial page. Asynchronous caching means that rendering does not have to wait until all data in the data source has been cached: once the first page data has been cached, the remaining page data in the data source can be cached at the same time, as long as this does not affect the rendering of the first page data. In this way the initial page can be rendered and displayed completely even when the data volume is large, giving the user a friendly experience.
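A sketch of the asynchronous caching described above, building on the earlier sketches: rendering starts as soon as the first page data is cached, while the remaining data is cached in the background; the use of setTimeout to yield is an assumption.

```typescript
// Sketch of asynchronous caching, building on the earlier sketches: render the
// first page data immediately and cache the rest without blocking that render.
function loadInitialPageAsync(source: Record<string, PageData>,
                              firstStatusCode: StatusCode): void {
  const firstPageData = source[firstStatusCode.dataId];
  if (firstPageData === undefined) return;
  pageCache.set(firstStatusCode.dataId, firstPageData);   // cache the first page data first

  // Start rendering right away; do not wait for the full data source.
  renderPage(firstPageData, firstStatusCode.renderingPermission);

  // Cache the remaining page data after yielding, so rendering is not affected.
  setTimeout(() => {
    for (const [pageId, data] of Object.entries(source)) {
      if (pageId !== firstStatusCode.dataId) pageCache.set(pageId, data);
    }
  }, 0);
}
```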
With reference to the foregoing embodiment, in an implementation manner, in step S607, the extracting, according to the second state code, second page data corresponding to the page switching request from a cached data source for rendering and displaying may specifically include the following steps:
step S6071, acquiring a display area to be switched corresponding to the page switching request.
When the page is switched, whether a local page switch or a global page switch is performed can be determined according to the corresponding parameters in the link address of the page. In order to improve page rendering efficiency, the display area to be switched is obtained from the page switching request, which reduces the rendering burden on the client.
And step S6072, extracting second page data corresponding to the display area to be switched from the cached data source according to the second state code.
After the corresponding display area to be switched is obtained from the page switching request, the corresponding second page data is extracted from the page data in the cached data source according to the second state code.
It should be noted that, in this embodiment, the sequence of step S6071 and step S6072 is not limited, and in actual application, the second page data required by the page switching request may be directly extracted from the cached data source according to the second state code, and then the to-be-switched display area corresponding to the page switching request may be obtained according to the second state code.
Step S6073, obtain a second rendering permission for the page switching request in the second state code.
And S6074, rendering the second page data according to the second rendering authority and replacing the display area to be switched.
Similarly, before rendering the page data, a second rendering authority needs to be obtained from the second state code, and the second page data is rendered according to the second rendering authority.
For example, when the client jumps from the www.abc.com page to the www.abc.com/1 page, the client generates a page switching request according to the corresponding parameters carried in the www.abc.com/1 link and the login user account of the current client, and sends it to the background server. After receiving the page switching request, the background server establishes the correspondence between the page data to be displayed and a status code according to the parameter information carried in the page switching request and the permissions of the user account, and returns the corresponding status code; the client then extracts the page data corresponding to the www.abc.com/1 link from the cached data source according to the status code, and finally renders and displays the page data according to the rendering permission in the status code.
In this embodiment, when the page is initially loaded, the background server already returns other page data related to the initial page, and caches the related page data to the local of the client, so that, when the page is switched, after the client sends a page switching request to the background server, the background server only needs to return a corresponding status code, and the client obtains the page data required by page switching locally according to the corresponding status code, thereby reducing the number of times of requesting data resources with the background server, and improving the page rendering efficiency.
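Steps S6071 to S6074 could be sketched as follows, building on the earlier sketches: the second page data is rendered with the second rendering permission and swapped into the display area to be switched, leaving the rest of the page untouched. The element-id convention and the renderToElement helper are assumptions.

```typescript
// Sketch of steps S6071-S6074: render the second page data with the second
// rendering permission and swap it into the display area to be switched.
// The element-id convention and the renderToElement helper are assumptions.
declare function renderToElement(data: PageData, permission: string): HTMLElement;

function switchDisplayArea(secondStatusCode: StatusCode & { areaId: string }): void {
  const area = document.getElementById(secondStatusCode.areaId);   // S6071: display area to be switched
  const secondPageData = pageCache.get(secondStatusCode.dataId);   // S6072: second page data from the local cache
  if (!area || secondPageData === undefined) return;
  const rendered = renderToElement(secondPageData,
                                   secondStatusCode.renderingPermission); // S6073-S6074: render with the permission
  area.replaceChildren(rendered);   // replace only this area instead of reloading the whole page
}
```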
Optionally, in this embodiment of the present application, the client may further clear the cached data source when detecting a closing operation for the page triggered by the user. When the client is closed, the data source cached in the local part of the client during the initial page loading is removed, so that the caching burden of the client can be reduced, and the efficiency of caching the data source when the client starts to load the initial page next time is improved, thereby reducing the time required by the page loading and rendering process and improving the rendering efficiency.
In this embodiment, when the client performs initial page loading, a page initial loading request is first sent to the background server, and then a first state code and a data source returned by the background server are received. After receiving the data source and the first state code, the client caches the data source, extracts first page data for the page initial loading request from the cached data source according to the first state code, and finally renders and displays the first page data according to the first state code to complete rendering and displaying of the initial page.
And then, when the page is switched, the client sends a page switching request to the background server, the background server returns the second state code, and the client extracts the second page data aiming at the page switching request from the cached data source according to the received second state code to perform rendering display. In the invention, when the client side switches the page, the corresponding page data is extracted from the cached data source according to the state code, and the background server returns the corresponding state code when responding to the page switching request, so that the client side can finish the rendering display of the switched page according to the state code, thereby reducing the frequency of data requests between the client side and the background server and improving the rendering efficiency.
For ease of understanding, the page loading process of the page loading method of the present invention is described below as an example.
Referring to fig. 7, a flowchart of an embodiment of a page loading method according to the embodiment of the present invention is shown.
Step S701, a user clicks a network link, and a client enters a page loading process;
step S702, a client sends a page initial loading request to a background server;
step S703, the background server returns a data source and a status code according to the page initial loading request;
step S704, the client caches the received data source locally;
then, the process proceeds to step S708;
step S708, the client reads the page data in the cached data source according to the status code;
step S709, the client renders and displays the read page data according to the state;
when the client needs to switch pages due to the change of the client requirement, the process proceeds to step S705;
step S705, the client detects an operation requiring a change, such as clicking a link address in a currently displayed page;
step S706, the client sends a page switching request to the background server;
step S707, the background server returns a status code according to the page switching request, and the client receives the status code and proceeds to step S708;
when the client detects a trigger operation of page closing, the method proceeds to step S710;
step S710, the client clears the cached data source and closes the display page.
In practical application, the client and the background server may perform data request interaction through interface calls. Specifically: the client detects a change in requirements and sends a page switching request, via an interface call, to the template management component that handles data logic; the template management component obtains the corresponding status code according to the page switching request, then calls the interface of the client's product set to send the status code to the client, and the client's product set performs rendering and display according to the corresponding rendering permission contained in the status code.
In order to further improve the rendering efficiency, a set of state codes may be stored in the background server in advance, and different rendering permissions, such as 0-display, 1-hiding, 2-readable, etc., may be set for each state code. When the background server acquires the state code according to the page request, the background server only needs to extract the stored state code, so that the number of times of generating the state code is reduced, and the rendering efficiency is further improved.
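A pre-stored status-code table of this kind could be sketched as follows; the keys, values and lookup shape are assumptions (0 for display, 1 for hide, 2 for readable).

```typescript
// Sketch of a pre-stored status-code table; the keys, values and lookup shape
// are assumptions (0 = display, 1 = hide, 2 = readable).
interface StoredStatusCode {
  dataId: string;                               // which cached page data the code points to
  modulePermissions: Record<string, 0 | 1 | 2>;
}

const statusCodeTable = new Map<string, StoredStatusCode>([
  ['home-guest',  { dataId: 'www.abc.com',   modulePermissions: { header: 0, admin: 1, comments: 2 } }],
  ['page1-admin', { dataId: 'www.abc.com/1', modulePermissions: { header: 0, admin: 0, comments: 0 } }],
]);

// On a page request the background server only looks up a stored code instead of generating a new one.
function lookupStatusCode(key: string): StoredStatusCode | undefined {
  return statusCodeTable.get(key);
}
```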
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combinations of acts, but those skilled in the art will recognize that the present invention is not limited by the described order of acts, as some steps may be performed in other orders or concurrently according to the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are preferred embodiments, and that the acts involved are not necessarily required by the present invention.
Referring to fig. 8, a schematic structural diagram of an embodiment of a page loading apparatus according to an embodiment of the present invention is shown. The apparatus is applied to a client and includes:
a first sending module 801, configured to send a page initial loading request to a background server;
a first receiving module 802, configured to receive a data source and a first status code returned by the background server in response to the page initial loading request;
a caching module 803, configured to cache the data source;
a first rendering and displaying module 804, configured to extract, according to the first status code, first page data corresponding to the page initial loading request from a cached data source for rendering and displaying;
a second sending module 805, configured to send a page switching request to the background server;
a second receiving module 806, configured to receive a second state code returned by the background server in response to the page switching request;
a second rendering and displaying module 807, configured to extract, according to the second state code, second page data corresponding to the page switching request from a cached data source for rendering and displaying.
Optionally, the cache module includes:
the query submodule is used for querying first page data corresponding to the page initial loading request in the data source according to the first state code;
and the first cache submodule is used for caching the data source starting from the first page data.
Optionally, the first rendering and displaying module comprises:
the judgment submodule is used for judging whether the first page data is cached completely;
the first extraction submodule is used for extracting the first page data under the condition that the first page data is cached completely;
a first obtaining submodule, configured to obtain a first rendering permission in the first status code, where the first rendering permission is for the page initial loading request;
and the first rendering and displaying submodule is used for rendering and displaying the first page data according to the first rendering authority.
Optionally, the apparatus further comprises:
and the asynchronous cache module is used for caching the data source after the first page data is extracted while rendering and displaying the first page data.
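Purely as an illustration of this asynchronous caching, the sketch below renders the first page data immediately and defers caching of the remaining data source. The split into firstPageData and restOfDataSource is an assumed structure, not one defined by this disclosure.

```typescript
// Assumed split of the data source into the first page's data and the remainder.
interface InitialResponse {
  firstPageData: unknown;
  restOfDataSource: Record<string, unknown>;
}

const pageCache = new Map<string, unknown>();

function renderFirstPage(resp: InitialResponse): void {
  // Render the first page data immediately...
  console.log('render first page', resp.firstPageData);
  // ...and cache the remaining data source without blocking the rendering path.
  setTimeout(() => {
    for (const [key, value] of Object.entries(resp.restOfDataSource)) {
      pageCache.set(key, value);
    }
  }, 0);
}

renderFirstPage({
  firstPageData: { title: 'Home' },
  restOfDataSource: { detail: { title: 'Detail' }, settings: { title: 'Settings' } },
});
```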
Optionally, the second rendering and displaying module comprises:
the second obtaining sub-module is used for obtaining a display area to be switched corresponding to the page switching request;
the second extraction submodule is used for extracting second page data corresponding to the display area to be switched from a cached data source according to the second state code;
a third obtaining sub-module, configured to obtain a second rendering permission for the page switching request in the second state code;
and the second rendering and displaying submodule is used for rendering the second page data according to the second rendering authority and replacing the display area to be switched.
Optionally, the apparatus further comprises:
and the clearing module is used for clearing the cached data source when the closing operation aiming at the page triggered by the user is detected.
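Purely as an illustration of how the modules described above might be composed on the client, the sketch below wires a few of them into one object; the class, method names, and stubbed request are assumptions for illustration and not the apparatus itself.

```typescript
// Hypothetical composition of some of the modules shown in fig. 8.
interface StateCode { pageKey: string; renderPermission: number; }

class PageLoadingApparatus {
  private cachedDataSource: Record<string, unknown> | null = null;

  // First sending / receiving modules (the network request is stubbed here).
  async sendInitialLoadRequest(): Promise<{ dataSource: Record<string, unknown>; stateCode: StateCode }> {
    return { dataSource: { home: { title: 'Home' } }, stateCode: { pageKey: 'home', renderPermission: 0 } };
  }

  // Caching module.
  cacheDataSource(dataSource: Record<string, unknown>): void {
    this.cachedDataSource = dataSource;
  }

  // Rendering and displaying modules: read from the cache according to a state code.
  renderFromCache(stateCode: StateCode): void {
    if (!this.cachedDataSource) return;
    console.log('render', this.cachedDataSource[stateCode.pageKey],
                'with permission', stateCode.renderPermission);
  }

  // Clearing module: drop the cached data source when the page is closed.
  clearCache(): void {
    this.cachedDataSource = null;
  }
}
```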
Since the embodiment of the page loading apparatus is substantially similar to the embodiment of the page loading method, its description is relatively brief; for relevant details, reference may be made to the corresponding parts of the description of the page loading method embodiment.
An embodiment of the present invention further provides an electronic device, including:
one or more processors; and
one or more machine-readable media having instructions stored thereon which, when executed by the one or more processors, cause the electronic device to perform a page loading method according to any one of the embodiments of the invention.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program that causes a processor to execute the page loading method according to the embodiments of the present invention.
The embodiments in this specification are described in a progressive manner, each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to one another.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not only include those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The page loading method, the page loading apparatus, the electronic device, and the computer-readable storage medium provided by the present invention have been described in detail above, and specific examples have been used herein to explain the principles and implementations of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for a person skilled in the art, the specific embodiments and the scope of application may vary according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (14)

1. A page loading method is applied to a client side and comprises the following steps:
sending a page initial loading request to a background server, wherein the page initial loading request comprises page data request information and state verification information;
receiving a data source returned by the background server in response to the page initial loading request, obtaining a page control authority corresponding to a user account according to the state of the user account logged in at the current client, and receiving a first state code which is returned by the background server and identifies a rendering authority corresponding to the page control authority; wherein the first state code is generated by the background server according to the page data request information and the state verification information;
caching the data source;
extracting first page data corresponding to the page initial loading request from a cached data source according to the first state code to perform rendering display;
sending a page switching request to the background server;
receiving a second state code returned by the background server in response to the page switching request;
and extracting second page data corresponding to the page switching request from a cached data source according to the second state code for rendering and displaying.
2. The method of claim 1, wherein caching the data source comprises:
inquiring first page data corresponding to the page initial loading request in the data source according to the first state code;
caching the data source starting from the first page data.
3. The method of claim 2, wherein the extracting, according to the first status code, the first page data corresponding to the initial page load request from the cached data source for rendering presentation comprises:
judging whether the first page data is cached completely;
under the condition that the first page data is cached completely, extracting the first page data according to the first state code;
acquiring a first rendering authority aiming at the page initial loading request in the first state code;
and rendering and displaying the first page data according to the first rendering authority.
4. The method of claim 3, further comprising:
and caching the data source after the first page data is extracted while rendering and displaying the first page data.
5. The method according to claim 1, wherein the extracting, according to the second state code, second page data corresponding to the page switching request from a cached data source for rendering presentation includes:
acquiring a display area to be switched corresponding to the page switching request;
extracting second page data corresponding to the display area to be switched from a cached data source according to the second state code;
acquiring a second rendering permission aiming at the page switching request in the second state code;
and rendering the second page data according to the second rendering permission and replacing the display area to be switched.
6. The method according to any one of claims 1-5, further comprising:
and when a closing operation for the page triggered by the user is detected, clearing the cached data source.
7. A page loading apparatus, wherein the apparatus is applied to a client, the apparatus comprising:
the system comprises a first sending module, a second sending module and a third sending module, wherein the first sending module is used for sending a page initial loading request to a background server, and the page initial loading request comprises page data request information and state verification information;
the first receiving module is used for receiving a data source and a first state code returned by the background server in response to the page initial loading request; obtaining a page control authority corresponding to a user account according to the state of the user account logged in at the current client, and receiving a first state code which is returned by the background server and identifies the rendering authority corresponding to the page control authority; wherein the first state code is generated by the background server according to the page data request information and the state verification information;
the cache module is used for caching the data source;
the first rendering and displaying module is used for extracting first page data corresponding to the page initial loading request from a cached data source according to the first state code to perform rendering and displaying;
the second sending module is used for sending a page switching request to the background server;
a second receiving module, configured to receive a second state code returned by the background server in response to the page switching request;
and the second rendering and displaying module is used for extracting second page data corresponding to the page switching request from a cached data source according to the second state code to perform rendering and displaying.
8. The apparatus of claim 7, wherein the cache module comprises:
the query submodule is used for querying first page data corresponding to the page initial loading request in the data source according to the first state code;
and the first cache submodule is used for caching the data source starting from the first page data.
9. The apparatus of claim 8, wherein the first rendering presentation module comprises:
the judgment submodule is used for judging whether the first page data is cached completely;
the first extraction submodule is used for extracting the first page data according to the first state code under the condition that the first page data is cached completely;
a first obtaining submodule, configured to obtain a first rendering permission in the first status code, where the first rendering permission is for the page initial loading request;
and the first rendering and displaying submodule is used for rendering and displaying the first page data according to the first rendering authority.
10. The apparatus of claim 9, further comprising:
and the asynchronous cache module is used for caching the data source after the first page data is extracted while rendering and displaying the first page data.
11. The apparatus of claim 7, wherein the second rendering presentation module comprises:
the second obtaining sub-module is used for obtaining a display area to be switched corresponding to the page switching request;
the second extraction submodule is used for extracting second page data corresponding to the display area to be switched from a cached data source according to the second state code;
a third obtaining sub-module, configured to obtain a second rendering permission for the page switching request in the second state code;
and the second rendering and displaying submodule is used for rendering the second page data according to the second rendering authority and replacing the display area to be switched.
12. The apparatus of any one of claims 7-11, further comprising:
and the clearing module is used for clearing the cached data source when the closing operation aiming at the page triggered by the user is detected.
13. An electronic device, comprising:
one or more processors; and
one or more machine-readable media having instructions stored thereon that, when executed by the one or more processors, cause the electronic device to perform the page loading method of any of claims 1-6.
14. A computer-readable storage medium storing a computer program for causing a processor to execute the page loading method according to any one of claims 1 to 6.
CN201910741270.0A 2019-08-12 2019-08-12 Page loading method and device, electronic equipment and readable storage medium Active CN110647698B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910741270.0A CN110647698B (en) 2019-08-12 2019-08-12 Page loading method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910741270.0A CN110647698B (en) 2019-08-12 2019-08-12 Page loading method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN110647698A CN110647698A (en) 2020-01-03
CN110647698B true CN110647698B (en) 2022-01-14

Family

ID=69009487

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910741270.0A Active CN110647698B (en) 2019-08-12 2019-08-12 Page loading method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN110647698B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112256363A (en) * 2020-09-21 2021-01-22 北京三快在线科技有限公司 Application page rendering method and device and electronic equipment
CN113778544B (en) * 2020-10-26 2024-05-17 北京沃东天骏信息技术有限公司 Resource loading optimization method, device and system, electronic equipment and storage medium
CN112748843B (en) * 2021-01-29 2022-06-03 腾讯科技(深圳)有限公司 Page switching method and device, computer equipment and storage medium
CN113254819B (en) * 2021-05-27 2022-09-13 广东太平洋互联网信息服务有限公司 Page rendering method, system, equipment and storage medium
CN113434234B (en) * 2021-06-29 2023-06-09 青岛海尔科技有限公司 Page jump method, device, computer readable storage medium and processor
CN114064174B (en) * 2021-11-09 2024-03-08 贝壳找房(北京)科技有限公司 Page control method, device and storage medium
CN114443200B (en) * 2022-01-29 2024-04-09 苏州达家迎信息技术有限公司 Page display method, device and equipment of mobile client and storage medium
CN114265661B (en) * 2022-03-03 2022-06-21 苏州达家迎信息技术有限公司 Page display method, device, equipment and storage medium for mobile client
CN115514650A (en) * 2022-09-21 2022-12-23 杭州网易再顾科技有限公司 Bandwidth management method, device, medium and electronic equipment in current limiting scene

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107038245A (en) * 2017-04-25 2017-08-11 努比亚技术有限公司 Page switching method, mobile terminal and storage medium
CN107451181A (en) * 2017-06-14 2017-12-08 北京小度信息科技有限公司 Page rendering method and apparatus
CN107729516A (en) * 2017-10-26 2018-02-23 北京百度网讯科技有限公司 Single page application methods of exhibiting and device, server, equipment and computer-readable recording medium
CN109213947A (en) * 2018-08-31 2019-01-15 北京京东金融科技控股有限公司 Browser page methods of exhibiting, device, electronic equipment and readable medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9122657B2 (en) * 2013-05-16 2015-09-01 International Business Machines Corporation Webpage display system leveraging OSGI
CN106294594A (en) * 2016-07-29 2017-01-04 东软集团股份有限公司 Page rendering method and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107038245A (en) * 2017-04-25 2017-08-11 努比亚技术有限公司 Page switching method, mobile terminal and storage medium
CN107451181A (en) * 2017-06-14 2017-12-08 北京小度信息科技有限公司 Page rendering method and apparatus
CN107729516A (en) * 2017-10-26 2018-02-23 北京百度网讯科技有限公司 Single page application methods of exhibiting and device, server, equipment and computer-readable recording medium
CN109213947A (en) * 2018-08-31 2019-01-15 北京京东金融科技控股有限公司 Browser page methods of exhibiting, device, electronic equipment and readable medium

Also Published As

Publication number Publication date
CN110647698A (en) 2020-01-03

Similar Documents

Publication Publication Date Title
CN110647698B (en) Page loading method and device, electronic equipment and readable storage medium
CN109617956B (en) Data processing method and device
CN111193788A (en) Audio and video stream load balancing method and device
CN109309806B (en) Video conference management method and system
CN109788247B (en) Method and device for identifying monitoring instruction
CN111092863B (en) Method, client, server, device and medium for accessing internet website
CN110719258B (en) Server access method and system
CN109743550B (en) Method and device for monitoring data flow regulation
CN110602039A (en) Data acquisition method and system
CN111431966A (en) Service request processing method and device, electronic equipment and storage medium
CN110012063B (en) Data packet processing method and system
CN110134892B (en) Loading method and system of monitoring resource list
CN109743585B (en) Method and device for collecting monitoring videos and cloning favorites
CN110418169B (en) Monitoring data carousel method and device
CN108989896B (en) Video-on-demand request processing method and device
CN108965219B (en) Data processing method and device based on video network
CN110557411A (en) video stream processing method and device based on video network
CN111031090B (en) Data processing method and device, electronic equipment and readable storage medium
CN110516141B (en) Data query method and device, electronic equipment and readable storage medium
CN110120937B (en) Resource acquisition method, system, device and computer readable storage medium
CN110267110B (en) Concurrent on-demand processing method and system based on video network
CN110475088B (en) User login method and device
CN109756476B (en) User-defined nickname setting method and system based on video network
CN109413460B (en) Method and system for displaying function menu of video network terminal
CN109379222B (en) Method and system for comparing versions of core servers

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant