CN115334010B - Query information processing method and device, storage medium and electronic device - Google Patents


Publication number: CN115334010B (application CN202210945830.6A)
Authority: CN (China)
Prior art keywords: queue, data, query information, target, query
Legal status: Active
Application number: CN202210945830.6A
Other languages: Chinese (zh)
Other versions: CN115334010A (en)
Inventor: 代沆
Current assignee: Zhejiang Dahua Technology Co Ltd
Original assignee: Zhejiang Dahua Technology Co Ltd
Events: application filed by Zhejiang Dahua Technology Co Ltd; priority to CN202210945830.6A; publication of CN115334010A; application granted; publication of CN115334010B


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00: Traffic control in data switching networks
    • H04L47/50: Queue scheduling
    • H04L47/62: Queue scheduling characterised by scheduling criteria
    • H04L47/622: Queue service order
    • H04L47/625: Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L47/6275: Queue scheduling for service slots or service orders based on priority
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a query information processing method and device, a storage medium, and an electronic device, wherein the method comprises the following steps: determining a first queue in a multi-level feedback queue set configured by an NVR according to queue priority and queue state, wherein the first queue comprises: query information corresponding to front ends connected to the NVR, a first resource share corresponding to the query information, and the amount of data sent by the front end while in the first queue; determining target query information among the multiple pieces of query information in the first queue according to the first resource share, and sending a first data request to the target front end corresponding to the target query information; when the NVR receives all query data sent by the target front end in response to the first data request, determining a second data amount sent by the target front end while in the first queue; and adding the target query information to a second queue when the second data amount is greater than the preset data amount of the first queue.

Description

Query information processing method and device, storage medium and electronic device
Technical Field
The present invention relates to the field of communications, and in particular, to a method and apparatus for processing query information, a storage medium, and an electronic device.
Background
When a network video recorder (Network Video Recorder, NVR for short) is connected to multiple front-end channels, statistical data accumulated while the front ends were offline needs to be retrieved and stored on the NVR side. Typically, when a front end comes online, the NVR initiates a request to it. If the offline data volume is large, multiple data query requests are sent to the same front end, which must continuously transmit data to the NVR; this affects front-end performance to a certain extent. If the NVR can control the query frequency and timing, so that a front end does not transmit data continuously for a period and then sit idle, the pressure on the front end can be reduced and its data transmission load balanced. How to achieve this is the problem to be solved.
For the problem in the related art that, when the data volume on a front-end device is excessively large, multiple data query requests are sent to the same front end and the performance of the front-end device is consequently affected, no effective solution has yet been proposed.
Disclosure of Invention
The embodiments of the present invention provide a query information processing method and device, a storage medium, and an electronic device, to at least solve the problem in the related art that, when the data volume on a front-end device is excessively large, multiple data query requests are sent to the same front end, affecting the performance of the front-end device.
According to an embodiment of the present invention, a query information processing method is provided, comprising: determining a first queue in a multi-level feedback queue set configured by the NVR according to queue priority and queue state, wherein the first queue is the highest-level non-empty queue and comprises: query information corresponding to front ends connected to the NVR, a first resource share corresponding to the query information, and a first data amount sent by the front end while in the first queue; determining target query information among the multiple pieces of query information in the first queue according to the first resource share, and sending a first data request to the target front end corresponding to the target query information; when the NVR receives first query data sent by the target front end in response to the first data request, determining a second data amount sent by the target front end while in the first queue; and adding the target query information to a second queue when the second data amount is greater than the preset data amount of the first queue, wherein the priority of the second queue is lower than that of the first queue.
In an exemplary embodiment, determining the target query information among the multiple pieces of query information in the first queue according to the first resource share comprises: determining the target query information from the multiple pieces of query information according to the first resource shares respectively corresponding to them and a random number generated by a random number generator in the NVR.
In an exemplary embodiment, determining the target query information according to the first resource shares respectively corresponding to the multiple pieces of query information and the random number generated by the random number generator in the NVR comprises: determining the resource share sum of the first queue from the first resource shares respectively corresponding to the multiple pieces of query information; determining the target share interval into which the random number falls, according to the resource share sum and the random number generated by the random number generator, wherein the pieces of query information correspond to distinct share intervals and the width of the share interval corresponding to any piece of query information equals the numerical value of that piece's resource share; and taking the query information corresponding to the target share interval as the target query information.
In an exemplary embodiment, after sending the first data request to the target front end corresponding to the target query information, the method further comprises: receiving first query data sent by the target front end in response to the first data request; determining whether the query data contains a second resource share corresponding to the target front end, wherein the second resource share is a value calculated by the target front end from its own load balancing information, and the second resource share is inversely related to the load balancing information; and, if the query data contains a second resource share corresponding to the target front end, updating the first resource share to the second resource share.
In an exemplary embodiment, after sending the first data request to the target front end corresponding to the target query information, the method further comprises: determining whether the target front end has sent the second query data corresponding to it, wherein the second query data is the entirety of the query data corresponding to the target front end; and, if the target front end has not yet sent all of the second query data, adding the data amount of the first query data to the first data amount to obtain the second data amount.
In an exemplary embodiment, before determining the first queue in the multi-level feedback queue set according to queue priority and queue state, the method further comprises: setting a timer for the multi-level feedback queue set of the NVR; and, when the timer expires, adding the query information of each first front end in the multi-level feedback queue set to the highest-priority queue, wherein a first front end denotes a front end whose query was not completed within the timer's period.
In an exemplary embodiment, before or after determining the first queue in the multi-level feedback queue set configured by the NVR according to queue priority and queue state, the method further comprises: determining whether to send a second data request to a second front end when the second front end is detected to access the NVR; and placing the query information of the second front end in the highest-priority queue of the multi-level feedback queue set if the second data request is sent to the second front end.
In an exemplary embodiment, determining the first queue in the multi-level feedback queue set according to queue priority and queue state comprises: obtaining the queue priority and queue state of each queue in the set, wherein the queue state is at least one of the following: an empty state, a non-empty state; determining, from the queue state of each queue, the queues in the non-empty state; and determining the highest-priority queue among the non-empty queues, and taking that queue as the first queue.
According to another embodiment of the present invention, a query information processing apparatus is also provided, comprising: a first determining module, configured to determine a first queue in the multi-level feedback queue set configured by the NVR according to queue priority and queue state, wherein the first queue is the highest-level non-empty queue and comprises: query information corresponding to front ends connected to the NVR, a first resource share corresponding to the query information, and a first data amount sent by the front end while in the first queue; a sending module, configured to determine target query information among the multiple pieces of query information in the first queue according to the first resource share, and to send a first data request to the target front end corresponding to the target query information; a second determining module, configured to determine a second data amount sent by the target front end while in the first queue, when the NVR receives first query data sent by the target front end in response to the first data request; and an updating module, configured to add the target query information to a second queue when the second data amount is greater than the preset data amount of the first queue, wherein the priority of the second queue is lower than that of the first queue.
According to still another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to execute the above-described query information processing method when run.
According to still another aspect of the embodiments of the present invention, there is further provided an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the method for processing query information according to the computer program.
In the embodiment of the invention, a first queue is determined in the multi-level feedback queue set configured by the NVR according to queue priority and queue state, wherein the first queue is the highest-level non-empty queue and comprises: query information corresponding to front ends connected to the NVR, a first resource share corresponding to the query information, and a first data amount sent by the front end while in the first queue; target query information is determined among the multiple pieces of query information in the first queue according to the first resource share, and a first data request is sent to the target front end corresponding to the target query information; when the NVR receives first query data sent by the target front end in response to the first data request, a second data amount sent by the target front end while in the first queue is determined; and the target query information is added to a second queue when the second data amount is greater than the preset data amount of the first queue, the priority of the second queue being lower than that of the first queue. This technical scheme solves the problem in the related art that, when the data volume of a front-end device is excessively large, multiple data query requests are sent to the same front end and its performance is affected. A resource share is assigned to each entry in the multi-level feedback queue set, the data request is sent to the target front end according to that share, and once the target front end has sent its query data in response to the data request, the target query information is moved to the second queue. Queries thus take the state of the front end into account: the pressure on any one front end of sending query data is spread over time and, when multiple front ends are connected, queries are balanced across them, balancing the query pressure on the front ends and improving front-end device performance.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
fig. 1 is a hardware block diagram of a computer terminal of a method for processing query information according to an embodiment of the present application;
FIG. 2 is a flow chart of a method of processing query information according to an embodiment of the application;
FIG. 3 is a schematic diagram of a method of processing query information according to an embodiment of the present application;
fig. 4 is a block diagram of a query information processing apparatus according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present application, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The method embodiments provided by the embodiments of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking a computer terminal as an example, fig. 1 is a block diagram of the hardware structure of a computer terminal for a query information processing method according to an embodiment of the present application. As shown in fig. 1, the computer terminal may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a microprocessor (MCU), a programmable logic device (FPGA), or another processing device) and a memory 104 for storing data; in an exemplary embodiment, it may also include a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those skilled in the art that the configuration shown in fig. 1 is merely illustrative and does not limit the configuration of the computer terminal described above. For example, a computer terminal may also include more or fewer components than shown in fig. 1, or have a different configuration with functions equivalent to or exceeding those shown in fig. 1.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to a method for processing query information in an embodiment of the present invention, and the processor 102 executes the computer program stored in the memory 104 to perform various functional applications and data processing, that is, implement the above-mentioned method. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the computer terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of a computer terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
In this embodiment, a query information processing method is provided and applied to the above computer terminal. Fig. 2 is a flowchart of a query information processing method according to an embodiment of the present invention, and the flow includes the following steps:
step S202, determining a first queue in a multi-stage feedback queue set by NVR according to the queue priority and the queue state, wherein the first queue is a non-empty highest-level queue, and the first queue comprises: query information corresponding to the front end of the NVR docking, a first resource share corresponding to the query information, and data volume statistical information sent by the front end corresponding to the query information in the first queue;
step S204, determining target query information in a plurality of query information of the first queue according to the first resource share, and sending a first data request to a target front end corresponding to the target query information;
step S206, determining a second data quantity sent by the target front end in the first queue under the condition that the target front end sends all query data according to the first data request;
step S208, adding the target query information to a second queue when the second data size is greater than the preset data size of the first queue, where the priority of the second queue is smaller than that of the first queue.
It should be noted that the second queue is a queue whose priority is lower than that of the first queue.
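The selection steps above (S202 and S204) can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: the data layout (plain dicts, a list ordered from highest to lowest priority) and all names (`schedule_round`, `front_end`, `share`, `sent`) are illustrative.

```python
import random

def schedule_round(queues, rng=random):
    """One selection round: S202 picks the highest-priority non-empty
    queue, S204 picks a target query weighted by resource share.

    queues: list ordered from highest to lowest priority; each element is
    {"threshold": int, "queries": [{"front_end": str, "share": int, "sent": int}, ...]}.
    Returns (queue index, chosen query dict), or None if all queues are empty.
    """
    # S202: first non-empty queue in priority order
    level = next((i for i, q in enumerate(queues) if q["queries"]), None)
    if level is None:
        return None
    entries = queues[level]["queries"]
    # S204: draw a random number in [0, sum of shares) and find the
    # query whose share interval contains it
    total = sum(e["share"] for e in entries)
    r = rng.randrange(total)
    upper = 0
    for e in entries:
        upper += e["share"]
        if r < upper:
            return level, e
```

A caller would then send the first data request to `target["front_end"]` and, once the reply arrives, apply steps S206 and S208.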
Through the above steps, a first queue is determined in the multi-level feedback queue set configured by the NVR according to queue priority and queue state, wherein the first queue is the highest-level non-empty queue and comprises: query information corresponding to front ends connected to the NVR, a first resource share corresponding to the query information, and a first data amount sent by the front end while in the first queue; target query information is determined among the multiple pieces of query information in the first queue according to the first resource share, and a first data request is sent to the target front end corresponding to the target query information; when the NVR receives first query data sent by the target front end in response to the first data request, a second data amount sent by the target front end while in the first queue is determined; and when the target front end has sent all query data in response to the data request, the target query information is added to the second queue. Queries thus take the state of the front end into account, the pressure on a front end of sending query data is spread out, and when multiple front ends are connected the query pressure is balanced across them, improving front-end performance.
It should be noted that the above embodiment may be applied to the following scenarios: 1) People counting in security monitoring: the NVR needs to obtain people-counting details from the front end for a specified period (e.g., one year). When the front end comes online, the NVR sends it a query message, the front end sends the data to the NVR, and the NVR stores it in a local database for convenient user-defined queries. 2) Traffic flow control in security monitoring: the NVR needs to obtain offline video-structuring direction statistics from the front end. In streets, squares, roads, and similar scenes, the movement of people, motor vehicles, and non-motor vehicles is directional, e.g., left to right, right to left, or bidirectional. The video-structuring direction statistics function can count the flow of people, motor vehicles, and non-motor vehicles as required, providing guidance for urban road planning, traffic/pedestrian flow control, route arrangement, and so on.
In an exemplary embodiment, determining the target query information among the multiple pieces of query information in the first queue according to the first resource share comprises: determining the target query information from the multiple pieces of query information according to the first resource shares respectively corresponding to them and a random number generated by a random number generator in the NVR.
Specifically, a multi-level feedback queue set is configured inside the NVR, containing the query information corresponding to the front ends connected to the NVR; at any moment, a given piece of query information exists in only one queue. Each piece of query information in a queue is configured with a resource share, e.g., a value from 1 to 100, initially set to a default value such as 50. When the NVR schedules the first queue, it obtains a random number from a random number generator and determines the corresponding target query information from that random number and the first resource shares.
There are various ways to determine the target query information from the multiple pieces of query information according to their respective first resource shares and the random number generated by the random number generator in the NVR. The embodiment of the present invention provides one implementation, specifically: determining the resource share sum of the first queue from the first resource shares respectively corresponding to the multiple pieces of query information; determining the target share interval into which the random number falls according to the resource share sum and the random number generated by the random number generator, wherein the pieces of query information correspond to distinct share intervals and the width of the share interval corresponding to any piece of query information equals the numerical value of that piece's resource share; and taking the query information corresponding to the target share interval as the target query information.
That is, the resource shares of all query information in the first queue are obtained first and added together to obtain the first queue's resource share sum; a random number between 0 and the resource share sum is then generated by the random number generator, the share interval into which it falls is determined, the query information corresponding to that target share interval is taken as the target query information, and the first data request is sent to the corresponding target front end. This ensures that a front end with a larger resource share has a higher probability of being selected for a query.
For example, after the highest-priority non-empty queue is determined, suppose the queue contains query information corresponding to 5 front ends and the resource share value ranges over [0,100]. The resource share values of the 5 pieces of query information are 30, 60, 40, 30, and 80, so the resource share sum is 30+60+40+30+80=240, and a random number in the range 0-239 is generated, i.e., the random number generator outputs one of the 240 integers 0, 1, 2, ..., 239. The share intervals corresponding to the 5 pieces of query information are [0,29], [30,89], [90,129], [130,159], and [160,239]; the width of each interval equals the corresponding resource share value. Since the random number generator produces each integer from 0 to 239 with equal probability, query information with a larger resource share is more likely to be selected. If the generator produces the random number 113, the interval [90,129] is hit, and the 3rd piece of query information in the queue is selected to query the corresponding front-end data. Determining the target share interval thus determines the target query information.
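The numeric example above can be checked with a short sketch (the function name is illustrative, not from the patent): query i occupies the interval starting at the sum of the shares before it, so the random number 113 lands in [90,129] and selects index 2, the 3rd piece of query information.

```python
def share_interval_index(shares, r):
    """Return the index of the query whose share interval contains r,
    where query i occupies [sum(shares[:i]), sum(shares[:i+1]) - 1]."""
    upper = 0
    for i, share in enumerate(shares):
        upper += share        # upper bound (exclusive) of query i's interval
        if r < upper:
            return i
    raise ValueError("r must be less than sum(shares)")

shares = [30, 60, 40, 30, 80]   # sum = 240, so r ranges over 0..239
```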
In an exemplary embodiment, after the first data request is sent to the target front end corresponding to the target query information, first query data sent by the target front end in response to the first data request is received; whether the query data contains a second resource share corresponding to the target front end is determined, wherein the second resource share is a value calculated by the target front end from its own load balancing information and is inversely related to that information; and, if the query data contains a second resource share corresponding to the target front end, the first resource share is updated to the second resource share.
Specifically, if the front end supports resource share calculation, then when it sends the complete query data to the NVR it also sends a new second resource share; it should be noted that the second resource share is calculated from the front end's load, bandwidth utilization, IO throughput, and so on, and reflects the front end's real-time load. If the front end does not support resource share calculation, it sends the default resource share of 50 when sending complete query data to the NVR, i.e., the front end's resource share is not modified.
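This update rule might be expressed as below. The dict-shaped response and the key name `resource_share` are assumptions for illustration; the default of 50 comes from the description.

```python
DEFAULT_SHARE = 50  # default resource share from the description

def next_share(response, current=DEFAULT_SHARE):
    """If the front end reported a second resource share along with its
    complete query data, adopt it; a front end that does not support
    share calculation reports nothing, leaving the share unchanged."""
    reported = response.get("resource_share")
    return reported if reported is not None else current
```

A front end under heavy load would report a small share, lowering its probability of being picked in the next round; the exact inverse mapping from load to share is left to the front end.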
After the above step S206, it is also necessary to determine the relation between the amount of query data already sent and the total amount of query data; specifically: determining whether the target front end has sent the second query data corresponding to it, wherein the second query data is the entirety of the query data corresponding to the target front end; and, if the target front end has not yet sent all of the second query data, adding the data amount of the first query data to the first data amount to obtain the second data amount.
That is, after the target front end has sent a certain amount of query data, its priority is reduced, i.e., its query information is added to the second queue. This solves the problem in the related art that, when the data volume of a front-end device is too large, multiple data query requests are sent to the same front end and its performance is affected. When the comparison shows that the amount of data sent is less than or equal to the queue's preset amount, the target query information remains in the first queue, so that a front end with a small data volume completes its query as soon as possible and its resources are not occupied for a long time.
In other words, each queue is configured with a data threshold, and each query in the queue carries a sent-data counter whose initial value is 0. After each query round ends, if the target front end has sent all of its query data, the query information corresponding to that front end is deleted from the queue. Otherwise, the size of the data just returned is added to the sent-data counter; that is, at the end of each transmission the sent-data amount stored in the query information is updated. If the updated second data amount exceeds the queue's preset data amount (each queue may specify its own sent-data threshold at initialization), the query information is moved into the next lower queue; if no lower queue exists, no move is performed.
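The per-queue counting and demotion rule described above can be sketched as follows (the queue representation and the counter reset on demotion are assumptions; the text only fixes the per-queue threshold at initialization):

```python
def record_sent(queues, thresholds, level, query, amount):
    """Add `amount` to the query's sent-data counter after one round;
    once the counter exceeds this queue level's preset threshold, move
    the query into the next lower queue (no move from the lowest one).
    Returns the level the query ends up in."""
    query["sent"] += amount
    if query["sent"] > thresholds[level] and level + 1 < len(queues):
        queues[level].remove(query)
        query["sent"] = 0  # assumption: counting restarts in the new queue
        queues[level + 1].append(query)
        return level + 1
    return level
```

Queries in the lowest queue simply keep accumulating their counter, matching "if no lower queue exists, no move is performed".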
In one exemplary embodiment, determining that the target front end has sent all of its query data works as follows: the NVR queries the target front end's data over multiple rounds, each request carrying an offset and a requested record count, and each response from the target front end carrying the record count and the new offset. For example, if 10000 records need to be queried, the target front end returns an offset with each response indicating which record the query has reached (one request may return multiple records). Once the offset reaches 10000, no more data needs to be queried from that target front end.
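A minimal sketch of this offset bookkeeping, where `fetch_page` stands in for one NVR-to-front-end request and is an assumption rather than part of the patent:

```python
def fetch_all(total_count, page_size, fetch_page):
    """Drive a paged query: request from the current offset, let the
    front end answer with (records, new_offset), and stop once the
    offset reaches the announced total record count."""
    offset, records = 0, []
    while offset < total_count:
        batch, offset = fetch_page(offset, page_size)
        records.extend(batch)
    return records
```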
In one exemplary embodiment, before the first queue is determined in the multi-stage feedback queue according to queue priority and queue status, a timer is set for the NVR's multi-stage feedback queue; when the timer expires, the query information of every first front end in the multi-stage feedback queue is added to the highest-priority queue, where a first front end denotes a front end whose query has not completed within the timer period.
That is, after a period of time, namely when the timer expires, the front ends that have not been fully queried and the front ends that have not yet received a data request are all re-added to the highest-priority queue. This rule solves the problem of the NVR not querying a given front end's data for a long time. After the timeout, the front ends that have not been fully queried are placed into the highest-priority queue and are polled together with the other front ends there.
It should be noted that all front ends that have not been fully queried are placed into the highest-priority queue; when the timer expires, every front end's ticket value is left unchanged, while every front end's sent-data counter is reset to 0.
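The timer-expiry rule — unfinished queries back to the top queue, tickets unchanged, sent-data counters reset — can be sketched as follows (queues modeled as lists ordered from highest to lowest priority):

```python
def boost_unfinished(queues):
    """Timer expiry: move every unfinished query into the highest
    priority queue (index 0).  Ticket values stay untouched; every
    query's sent-data counter is reset to 0, as the text specifies."""
    for lower in queues[1:]:
        queues[0].extend(lower)
        lower.clear()
    for query in queues[0]:
        query["sent"] = 0
```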
In an exemplary embodiment, before or after the first queue is determined in the NVR's multi-stage feedback queue according to queue priority and queue status, when a second front end is detected accessing the NVR, it is determined whether a second data request needs to be sent to that second front end; if the second data request is to be sent, the query information of the second front end is placed in the highest-priority queue of the multi-stage feedback queue.
That is, when the second front end has just accessed the NVR and the NVR needs to send a second data request to it, the query information of the second front end is placed in the highest-priority queue of the multi-level feedback queue; when the first queue is the highest-priority queue, the query information of the second front end is thereby added to the first queue.
In one exemplary embodiment, determining the first queue in the multi-stage feedback queue according to queue priority and queue status includes: obtaining the queue priority and queue status of each stage of feedback queue in the multi-stage feedback queue, where the queue status includes at least one of the following: an empty state and a non-empty state; determining, according to the queue status of each stage, the queues whose status is non-empty; and determining the queue with the highest priority among the non-empty queues and taking it as the first queue.
In the embodiment of the invention, the non-empty queues are first identified among the multiple queues through the queue status, and the queue with the highest priority is then determined among those non-empty queues, so that the non-empty highest-priority queue is found. If all queues are empty, the NVR periodically checks whether new query information has arrived.
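The queue-selection step can be sketched as follows (queues modeled as lists ordered from highest to lowest priority; the empty/non-empty status is simply whether a list has elements):

```python
def select_first_queue(queues):
    """Return the highest-priority non-empty queue, or None when every
    queue is empty — in which case the NVR periodically re-checks for
    new query information."""
    for queue in queues:
        if queue:            # non-empty state
            return queue
    return None              # all queues empty
```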
In order to better understand the process of the query information processing method, the implementation method flow of the query information processing is described below in conjunction with the alternative embodiment, but the implementation method flow is not limited to the technical scheme of the embodiment of the present invention.
In the prior art, for example the patent with publication number CN110503284A, the design gist is to provide a statistical method based on queuing data, which includes: receiving a queuing-data statistics instruction that contains the time period and the parameters to be counted; obtaining the queuing data of the video image corresponding to that time period, the queuing data including the total number of queuing personnel and/or the queuing time of the queuing personnel; and performing statistical analysis on the acquired queuing data based on the parameters to be counted to generate a statistical result. By applying this queuing-data-based statistical method, queuing data in a video image can be acquired according to the user's needs and statistically analyzed, providing the user with more intuitive statistical results.
For another example, the patent with publication number CN111585915A discloses a method, system, storage medium and cloud server for balanced transmission of long and short flows: a deep reinforcement learning framework for balanced long/short-flow transmission in a data center is constructed; short flows are optimized in real time, improving interactive short-flow transmission delay through a reinforcement-learning-based multi-level queue threshold optimization method; a transmission strategy is selected using a decision probability, which is initialized and then executed according to that probability; and the decision probability is dynamically adjusted and the transmission strategy iteratively updated to adapt to changes in the data center's traffic types, finally achieving balanced transmission of long and short flows.
The patent with publication number CN110503284A has a major drawback in its design: it does not consider load balancing when a plurality of front ends (devices) are accessed simultaneously to acquire data. The patent with publication number CN111585915A mainly uses a multi-stage feedback queue for long- and short-flow control in cloud storage. Moreover, the prior art suffers from the problem that, when the data amount of a front-end device is too large, multiple data query requests are sent to the same front end, affecting its performance.
In order to solve the above-described problems, a query information processing method based on the following principle is provided in the present embodiment: the Multi-level Feedback Queue (MLFQ) and the proportional-share scheduling idea are applied to the front-end statistical data acquired by the NVR, in order to balance the load when the front ends transmit large amounts of data and to reduce the pressure of front-end data transmission.
MLFQ is used by operating systems for process scheduling: the processes to be executed are placed in priority queues of different levels, and the scheduling rules are as follows:
rule 1: if the priority of A is greater than the priority of B, run A;
rule 2: if the priority of A equals the priority of B, run A and B in rotation;
rule 3: when a process enters the system, it is placed in the highest-priority queue;
rule 4: once a process uses up its quota in a given level of queue, its priority is reduced, i.e., it is moved into the next lower queue;
rule 5: after a period of time S, all processes in the system are re-added to the highest-priority queue.
Proportional-share scheduling: each process is guaranteed a certain proportion of CPU time by being assigned a number of lottery tickets (corresponding to the resource share in this embodiment) that represents the share of a given resource the process occupies; when the resource is allocated, it is distributed in proportion to the ticket counts.
Combining MLFQ with proportional-share scheduling, a query for front-end data is treated by analogy with the execution of a process. The query information processing rules corresponding to the embodiment of the invention are as follows:
A multi-stage feedback queue is set inside the NVR to manage the data queries of the front ends it is docked with, and a given piece of query information can exist in only one queue at any moment. The query information in each queue is configured with a ticket value, for example in the range 1-100, with a default value such as 50 set initially. When it finishes sending the query data for one data request, the front end calculates a ticket value according to its own running condition and feeds it back to the NVR. If its pressure is small, it sets a larger ticket value, so that the NVR queries that front end with higher probability when scheduling; otherwise it sets a smaller ticket value, so the probability of being queried next time is smaller.
The MLFQ rule of the embodiment of the present invention is as follows:
1. if the priority of the query information corresponding to the front end A in the queue is greater than the priority of the query information corresponding to the front end B, sending a data request to the front end A;
2. the front ends in the same priority are scheduled by using a proportional share scheduling method, and the specific method is as follows:
a) When the front end comes online, a default ticket value of 50 is set for it, within the range [1,100]. If the front end supports ticket-value calculation, it sends a new ticket value to the NVR along with the complete query data; the value is calculated from the front end's own load, bandwidth utilization, IO throughput and the like, and reflects its real-time condition.
b) If the front end does not support ticket-value transmission, it sends the default ticket value of 50 to the NVR at the end of the round of data transmission.
c) After the NVR receives the ticket value fed back by the front end, it updates the ticket value of the corresponding query information.
d) When the NVR schedules a queue, it sums all the ticket values of the queries in the queue, then obtains a random number from a random number generator, and queries the front end whose ticket interval the random number falls into. This ensures that a front end with a large ticket value has a higher probability of being selected in the next query.
3. after a front end comes online, it is placed in the highest-priority queue;
4. when a front end has sent a certain amount of data, its priority is reduced, i.e., it is moved into the next lower queue; when a front end has finished sending all the data to be queried, the query information corresponding to that front end is removed from its queue.
Starvation problem: if many front ends hold small amounts of data and one front end holds a very large amount, then when the small-data front ends have higher priority than the large-data front end, or sit in the same priority queue with larger proportional-share values, the NVR will keep querying the small-data front ends and the large-data front end will not be queried (or will be queried last). This situation is mitigated by rule 5.
5. after a period of time S, the front ends that have not been fully queried, across all work queues, are re-added to the highest-priority queue. This rule solves the problem of the NVR not querying a given front end's data for a long time: after time S has elapsed, such a front end is placed into the highest-priority queue, where it is polled together with the other front ends.
Using the MLFQ rules to control the priority of the front ends queried by the NVR means that front ends with small data amounts can have all of their data queried faster, while front ends with large data amounts have their data acquired in a fair and steady way without putting too much pressure on the front end.
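The proportional-share draw of rule 2 d) can be sketched as follows; the queue representation and the `rng` parameter (injected for testability) are assumptions:

```python
import random

def lottery_pick(queue, rng=random):
    """Proportional-share draw: sum the ticket values of the queries in
    the queue, draw a number in [0, ticket_sum), and return the query
    whose ticket interval contains the draw.  A larger ticket value
    means a wider interval, hence a higher chance of being queried."""
    ticket_sum = sum(q["tickets"] for q in queue)
    draw = rng.uniform(0, ticket_sum)
    upper = 0
    for q in queue:
        upper += q["tickets"]
        if draw < upper:
            return q
    return queue[-1]  # guard against floating-point boundary cases
```

With tickets 50/50 the two queries split the interval evenly; a front end reporting a small ticket after heavy load shrinks its interval and is picked less often.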
Fig. 3 is a schematic diagram of a query information processing method according to an embodiment of the present invention, as shown in fig. 3, specifically including the following steps:
step S301: starting;
step S302: creating a multi-stage feedback queue, and starting a timer (after overtime, putting all the query information of incomplete queries into the highest-stage queue);
note that, the NVR determines the MLFQ-related parameters of the query control:
a) The number of queues;
b) The total query-data amount for each queue level (corresponding to the preset data amount in the above embodiment);
c) Default ticket values of the query information corresponding to the front end;
d) The time interval S of putting all queries into the highest priority queue;
step S303: determine whether a second front end has accessed and whether a data request needs to be sent to it; if a second front end has accessed and a query request (corresponding to the data request in the above embodiment) needs to be sent to it, execute step S304, otherwise execute step S305;
step S304: when a second front end comes online and data needs to be queried from it, the second front end is added to the highest-priority queue, and a default ticket value is configured for its query information;
step S305: determine whether a non-empty queue exists; if so, execute step S306, otherwise execute step S316;
step S306: counting the sum ticket_sum of all ticket values in the non-empty highest priority queue, obtaining a random value through a random number generator, determining a first front end (equivalent to the target front end in the embodiment) according to the range of the interval in which the random value falls, and sending a query request to the first front end;
step S307: receive the complete query data returned by the first front end for the query request;
step S308: determine whether the first front end has sent all of its query data; if so, execute step S316; if not, execute step S309;
step S309: count the amount of data the first front end has sent in the current-level queue; if the sent-data threshold of this queue level is reached, execute step S311;
step S310: determine whether the query data carries a ticket value; if so, execute step S312; if not, execute step S313;
step S311: reduce the priority of the query information corresponding to the first front end and move it into the next-level queue;
step S312: update the ticket value of the query information corresponding to the first front end;
step S313: set the ticket value of the query information corresponding to the first front end to the default ticket value;
when the NVR performs the next query, it again sums the ticket_sum of the front ends in the highest-priority queue (ticket values may have been updated, or queries moved to the next queue), generates a random number on [0, ticket_sum], and determines the front end to query according to the interval in which the random number falls.
step S314: determine whether the timer has expired; if so, execute step S315, otherwise execute step S303;
step S315: transfer the query information of all the queues into the highest-priority queue;
step S316: sleep for a preset time period, then execute step S303;
step S317: remove the front-end information from the queue.
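Under the stated assumptions (helper names, the fixed batch size, and the omission of the periodic timer boost are all illustrative choices), the loop of steps S303-S317 can be condensed into a small self-contained simulation:

```python
import random

def run_nvr_simulation(front_ends, thresholds, batch=10, rounds=1000, seed=1):
    """Tiny simulation of the Fig. 3 loop.  `front_ends` maps a name to
    the total number of records that front end must deliver;
    `thresholds` lists the per-queue sent-data limits, highest priority
    first.  Returns the order in which the front ends finish."""
    rng = random.Random(seed)
    queues = [[] for _ in thresholds]
    for name, total in front_ends.items():             # S303/S304: enter top queue
        queues[0].append({"name": name, "left": total, "sent": 0, "tickets": 50})
    finished = []
    for _ in range(rounds):
        level = next((i for i, q in enumerate(queues) if q), None)  # S305
        if level is None:
            break                                      # every query is done
        queue = queues[level]
        ticket_sum = sum(q["tickets"] for q in queue)  # S306: lottery draw
        draw, upper, chosen = rng.uniform(0, ticket_sum), 0, queue[-1]
        for q in queue:
            upper += q["tickets"]
            if draw < upper:
                chosen = q
                break
        got = min(batch, chosen["left"])               # S307: one batch of data
        chosen["left"] -= got
        chosen["sent"] += got
        if chosen["left"] == 0:                        # S308 -> S317: remove
            queue.remove(chosen)
            finished.append(chosen["name"])
        elif chosen["sent"] > thresholds[level] and level + 1 < len(queues):
            queue.remove(chosen)                       # S309 -> S311: demote
            chosen["sent"] = 0
            queues[level + 1].append(chosen)
    return finished
```

Running it with one small and one large front end shows the intended behavior: the small front end finishes quickly in the top queue, while the large one is demoted and drained at lower priority.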
According to the embodiment of the invention, by combining the multi-level feedback queue (MLFQ) rules with the proportional-share scheduling idea and with the front ends' real-time load, the timing of data queries to the front ends is adjusted, so that the load of front-end data transmission is balanced and the pressure on the front ends is reduced. The NVR maintains a multi-level queue, preferentially selects front ends in the higher-level queues for data queries, selects among front ends in the same-level queue according to the proportional-share scheduling idea, and adjusts the elements of each queue according to fixed rules. It first sums the ticket values of all front ends in the queue, then generates a random number and selects the front end to query according to the interval in which the random number falls. A front end sends the NVR a ticket value related to its own state; at the next query the NVR schedules according to the new ticket value, and a front end with a high ticket value is queried with higher probability. When a front end comes online and its data needs to be queried, it enters the highest-priority queue and is configured with a default ticket value. If a front end has sent the specified amount of data in its queue, it is lowered one level and moved into the next queue. To ensure fairness across all front-end data, after a period of time the unfinished queries in all queues are uniformly placed into the highest-priority queue.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method of the various embodiments of the present invention.
In this embodiment, a device for processing query information is further provided, and the device for processing query information is used to implement the foregoing embodiments and preferred embodiments, which have already been described and will not be described in detail. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 4 is a block diagram showing a structure of a device for processing query information according to an embodiment of the present invention; as shown in fig. 4, includes:
a first determining module 42, configured to determine a first queue from among the multiple levels of feedback queues set by the NVR according to the queue priority and the queue status, where the first queue is a non-empty highest level queue, and the first queue includes: query information corresponding to the front end of the NVR docking and a first resource share corresponding to the query information;
a sending module 44, configured to determine target query information from a plurality of query information in the first queue according to the first resource share, and send a first data request to a target front end corresponding to the target query information;
a second determining module 46, configured to determine, when the NVR receives first query data sent by the target front end according to the first data request, a second data amount that has been sent by the target front end in the first queue;
and an updating module 48, configured to add the target query information to a second queue if the second data size is greater than the preset data size of the first queue, where the priority of the second queue is smaller than that of the first queue.
Through the device, the first queue is determined among the multi-stage feedback queues set by the NVR according to queue priority and queue status, where the first queue is the non-empty highest-level queue and includes: the query information corresponding to the front ends the NVR is docked with, the first resource share corresponding to the query information, and the first data amount each front end has sent in the first queue. Target query information is determined among the plurality of query information in the first queue according to the first resource share, and a first data request is sent to the target front end corresponding to the target query information. When the NVR receives the first query data sent by the target front end according to the first data request, the second data amount the target front end has sent in the first queue is determined; when the second data amount is greater than the preset data amount of the first queue, the target query information is added to a second queue whose priority is lower than that of the first queue. This technical scheme solves the problem that, when the data amount of a front-end device is too large, multiple data query requests are continuously sent to the same front end and affect its performance: a resource share is allocated to the multi-stage feedback queue, data requests are sent to the target front end according to the resource share, and the target front end is moved into the second queue after it has sent a certain amount of query data. Because the state of each front end is taken into account when querying, the pressure of sending query data is balanced across front ends, particularly when a plurality of front ends are docked, and the performance of the front-end devices is thereby improved.
In an exemplary embodiment, the sending module is further configured to determine target query information from the plurality of query information according to the first resource shares corresponding to the plurality of query information respectively and the random numbers generated by the random number generator in the NVR.
In an exemplary embodiment, the sending module is further configured to determine a sum of the resource shares of the first queue according to the first resource shares corresponding to the plurality of query information respectively; determining a target share interval corresponding to the random number under the condition that the random number generator generates the random number according to the resource share, wherein the plurality of inquiry information respectively correspond to different share intervals; and taking the query information corresponding to the target share interval as the target query information.
In an exemplary embodiment, the update module is further configured to receive the first query data sent by the target front end according to the first data request; determine whether a second resource share corresponding to the target front end exists in the query data, where the second resource share is a value calculated by the target front end according to its own load-balancing information, and the second resource share is inversely related to the load-balancing information; and, when a second resource share corresponding to the target front end exists in the query data, update the first resource share to the second resource share.
In an exemplary embodiment, the second determining module is further configured to determine whether the target front end has sent second query data corresponding to the target front end, where the second query data is all query data corresponding to the target front end; and under the condition that the target front end does not send the second query data corresponding to the target front end, adding the data volume of the first query data and the first data volume to obtain the second data volume.
In an exemplary embodiment, the first determining module is further configured to set a timer for the multi-stage feedback queue of the NVR; and under the condition that the timer is overtime, adding the query information of the first front end in the multi-stage feedback queue to the highest priority queue in the multi-stage feedback queue, wherein the first front end is used for indicating the front end of the incomplete query in the timing time of the timer.
In an exemplary embodiment, the first determining module is further configured to determine, if a second front end is detected to be connected to the NVR, whether to send a second data request to the second front end; and setting the query information of the second front end in the highest priority queue in the multi-stage feedback queues under the condition that the second data request is sent to the second front end.
In an exemplary embodiment, the first determining module is further configured to obtain a queue priority and a queue status of each stage of feedback queues in the multi-stage feedback queues, where the queue status includes at least one of: an empty state, a non-empty state; determining a queue with a non-empty state in the multi-stage feedback queue according to the queue state of each stage of feedback queue; and determining a queue with highest priority from the queues in the non-empty state, and taking the queue with highest priority as the first queue.
An embodiment of the present invention also provides a storage medium including a stored program, wherein the program executes the method of any one of the above.
Alternatively, in the present embodiment, the above-described storage medium may be configured to store program code for performing the steps of:
s1, determining a first queue in a multi-stage feedback queue set by NVR according to the queue priority and the queue state, wherein the first queue is a non-empty highest-level queue, and the first queue comprises: query information corresponding to the front end of the NVR docking and a first resource share corresponding to the query information;
S2, determining target query information in a plurality of query information of the first queue according to the first resource share, and sending a first data request to a target front end corresponding to the target query information;
s3, under the condition that the target front end sends all query data according to the first data request, determining second data quantity sent by the target front end in the first queue;
and S4, adding the target query information into a second queue under the condition that the second data volume is larger than the preset data volume of the first queue, wherein the priority of the second queue is smaller than that of the first queue.
An embodiment of the invention also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
S1, determining a first queue in a multi-stage feedback queue set by NVR according to the queue priority and the queue state, wherein the first queue is a non-empty highest-level queue, and the first queue comprises: query information corresponding to the front end of the NVR docking and a first resource share corresponding to the query information;
s2, determining target query information in a plurality of query information of the first queue according to the first resource share, and sending a first data request to a target front end corresponding to the target query information;
s3, under the condition that the target front end sends all query data according to the first data request, determining second data quantity sent by the target front end in the first queue;
and S4, adding the target query information into a second queue under the condition that the second data volume is larger than the preset data volume of the first queue, wherein the priority of the second queue is smaller than that of the first queue.
Alternatively, in the present embodiment, the storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
Alternatively, specific examples in this embodiment may refer to examples described in the foregoing embodiments and optional implementations, and this embodiment is not described herein.
It will be appreciated by those skilled in the art that the modules or steps of the invention described above may be implemented in a general purpose computing device, they may be concentrated on a single computing device, or distributed across a network of computing devices, they may alternatively be implemented in program code executable by computing devices, so that they may be stored in a memory device for execution by computing devices, and in some cases, the steps shown or described may be performed in a different order than that shown or described, or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps within them may be fabricated into a single integrated circuit module for implementation. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (11)

1. A method for processing query information, comprising:
determining a first queue in a multi-level feedback queue set by an NVR according to queue priority and queue state, wherein the first queue is the non-empty queue with the highest priority, and the first queue comprises: query information corresponding to front ends connected to the NVR, a first resource share corresponding to each piece of query information, and a first data volume already sent by each front end while in the first queue;
determining target query information among the plurality of query information in the first queue according to the first resource share, and sending a first data request to the target front end corresponding to the target query information;
determining a second data volume sent by the target front end while in the first queue in the case that the NVR receives first query data sent by the target front end in response to the first data request;
and adding the target query information to a second queue in the case that the second data volume is larger than a preset data volume of the first queue, wherein the priority of the second queue is lower than that of the first queue.
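The scheduling loop of claim 1 can be illustrated with a small sketch. This is a hypothetical Python model, not an implementation from the patent: the queue representation, the per-level quotas, the entry fields, and all names (`FeedbackQueues`, `record_data`, and so on) are assumptions made for illustration only.

```python
import collections

class FeedbackQueues:
    """Hypothetical multi-level feedback queue: level 0 is highest priority."""

    def __init__(self, quotas):
        # quotas[i] is the preset data-volume threshold for level i.
        self.quotas = quotas
        self.levels = [collections.deque() for _ in quotas]

    def enqueue(self, level, query_info, share):
        # Each entry tracks its query, its resource share, and the data
        # volume already sent while the entry sits in this queue.
        self.levels[level].append({"query": query_info, "share": share, "sent": 0})

    def first_nonempty(self):
        # The "first queue": the highest-priority non-empty level.
        for i, q in enumerate(self.levels):
            if q:
                return i
        return None

    def record_data(self, level, entry, nbytes):
        # Accumulate the data the target front end has sent in this queue;
        # when it exceeds the level's quota, demote the entry to the next
        # lower-priority level (the "second queue") and reset the count.
        entry["sent"] += nbytes
        if entry["sent"] > self.quotas[level] and level + 1 < len(self.levels):
            self.levels[level].remove(entry)
            entry["sent"] = 0
            self.levels[level + 1].append(entry)
            return level + 1
        return level
```

Under this sketch, a front end that keeps sending large replies is gradually pushed to lower-priority levels, so it cannot monopolize the NVR's polling bandwidth.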
2. The method for processing query information according to claim 1, wherein determining the target query information among the plurality of query information in the first queue according to the first resource share comprises:
determining the target query information from the plurality of query information according to the first resource shares respectively corresponding to the plurality of query information and a random number generated by a random number generator in the NVR.
3. The method for processing query information according to claim 2, wherein determining the target query information from the plurality of query information according to the first resource shares respectively corresponding to the plurality of query information and the random number generated by the random number generator in the NVR comprises:
determining a resource-share sum of the first queue according to the first resource shares respectively corresponding to the plurality of query information;
determining a target share interval corresponding to the random number according to the resource-share sum and the random number generated by the random number generator, wherein the plurality of query information respectively correspond to different share intervals, and the width of the share interval corresponding to any one of the plurality of query information equals the value of the resource share of that query information;
and taking the query information corresponding to the target share interval as the target query information.
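The share-interval selection of claims 2 and 3 resembles lottery scheduling: each query's resource share maps to an interval whose width equals the share, and a random number drawn over the resource-share sum selects the target. Below is a minimal hypothetical sketch; the function name and the list-of-pairs representation are my assumptions, not from the patent.

```python
import random

def pick_by_share(queries, rng=random.random):
    # queries: list of (query_info, resource_share) pairs, shares > 0.
    total = sum(share for _, share in queries)   # resource-share sum
    point = rng() * total                        # random number in [0, total)
    cursor = 0.0
    for info, share in queries:
        cursor += share                          # end of this query's interval
        if point < cursor:                       # point falls in this interval
            return info
    return queries[-1][0]                        # guard for float edge cases
```

A query with twice the share occupies an interval twice as wide, so it is chosen roughly twice as often, which is how the first resource share biases selection toward lightly loaded front ends.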
4. The method for processing query information according to claim 1, wherein after sending the first data request to the target front end corresponding to the target query information, the method further comprises:
receiving first query data sent by the target front end in response to the first data request;
determining whether a second resource share corresponding to the target front end exists in the query data, wherein the second resource share is a value calculated by the target front end according to its own load balancing information, and the second resource share is inversely related to the load balancing information;
and updating the first resource share to the second resource share in the case that a second resource share corresponding to the target front end exists in the query data.
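Claim 4's share update can be sketched as follows. The inverse mapping from load to share (the `scale` and `floor` constants) and the reply field name `resource_share` are illustrative assumptions; the patent only requires that the second share be inversely related to the front end's load balancing information.

```python
def share_from_load(load, scale=100.0, floor=1.0):
    # Hypothetical front-end side: higher load -> smaller share, so a busy
    # front end is polled less often. The formula is an assumption.
    return max(floor, scale / (1.0 + load))

def update_share(entry, reply):
    # NVR side: replace the stored first resource share only when the
    # reply actually carries a second resource share.
    if "resource_share" in reply:
        entry["share"] = reply["resource_share"]
    return entry["share"]
```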
5. The method for processing query information according to claim 1, wherein determining the second data volume sent by the target front end while in the first queue comprises:
determining whether the target front end has sent all second query data corresponding to the target front end, wherein the second query data is the entirety of the query data corresponding to the target front end;
and adding the data volume of the first query data to the first data volume to obtain the second data volume in the case that the target front end has not yet sent all of the second query data.
6. The method for processing query information according to claim 1, wherein before determining the first queue in the multi-level feedback queue according to the queue priority and the queue state, the method further comprises:
setting a timer for the multi-level feedback queue of the NVR;
and adding, in the case that the timer times out, the query information of a first front end in the multi-level feedback queue to the highest-priority queue of the multi-level feedback queue, wherein the first front end is a front end whose query has not completed within the timer period.
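Claim 6 describes the classic anti-starvation boost of a multi-level feedback queue: when the timer fires, unfinished queries are moved back to the top level. A hypothetical sketch, assuming the levels are plain lists of entry dicts as in the earlier representation (names are mine):

```python
def boost_unfinished(levels):
    # levels[0] is the highest-priority queue. On timer expiry, move every
    # entry still waiting in a lower-priority level back to the top,
    # preserving order, so long-running queries cannot starve.
    top = levels[0]
    for lower in levels[1:]:
        while lower:
            entry = lower.pop(0)
            entry["sent"] = 0   # start with a fresh data count at the top
            top.append(entry)
    return levels
```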
7. The method for processing query information according to claim 1, wherein before or after determining the first queue in the multi-level feedback queue set by the NVR according to the queue priority and the queue state, the method further comprises:
determining whether to send a second data request to a second front end in the case that the second front end is detected to have accessed the NVR;
and placing the query information of the second front end in the highest-priority queue of the multi-level feedback queue in the case that the second data request is sent to the second front end.
8. The method for processing query information according to claim 1, wherein determining the first queue in the multi-level feedback queue according to the queue priority and the queue state comprises:
obtaining the queue priority and the queue state of each level of feedback queue in the multi-level feedback queue, wherein the queue state comprises at least one of the following: an empty state and a non-empty state;
determining, according to the queue state of each level of feedback queue, the queues whose state is non-empty in the multi-level feedback queue;
and determining the queue with the highest priority among the queues in the non-empty state, and taking the queue with the highest priority as the first queue.
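The selection steps of claim 8 reduce to filtering the non-empty queues and taking the maximum priority. A hypothetical sketch in which each queue is an explicit (priority, entries) pair — larger number meaning higher priority; this representation is an assumption, not from the patent:

```python
def select_first_queue(queues):
    # queues: list of (priority, entries) pairs, in any order.
    # Step 1-2: keep only queues whose state is non-empty.
    nonempty = [(prio, entries) for prio, entries in queues if entries]
    if not nonempty:
        return None  # every queue is empty; nothing to schedule
    # Step 3: the first queue is the non-empty queue with highest priority.
    return max(nonempty, key=lambda q: q[0])
```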
9. A query information processing apparatus, comprising:
a first determining module, configured to determine a first queue in a multi-level feedback queue set by an NVR according to queue priority and queue state, wherein the first queue is the non-empty queue with the highest priority, and the first queue comprises: query information corresponding to front ends connected to the NVR, a first resource share corresponding to each piece of query information, and a first data volume already sent by each front end while in the first queue;
a sending module, configured to determine target query information among the plurality of query information in the first queue according to the first resource share, and to send a first data request to the target front end corresponding to the target query information;
a second determining module, configured to determine a second data volume sent by the target front end while in the first queue in the case that the NVR receives first query data sent by the target front end in response to the first data request;
and an updating module, configured to add the target query information to a second queue in the case that the second data volume is larger than a preset data volume of the first queue, wherein the priority of the second queue is lower than that of the first queue.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein, when run, the program performs the method of any one of claims 1 to 8.
11. An electronic device comprising a memory and a processor, characterized in that a computer program is stored in the memory, and the processor is arranged to execute the method of any one of claims 1 to 8 by means of the computer program.
CN202210945830.6A 2022-08-08 2022-08-08 Query information processing method and device, storage medium and electronic device Active CN115334010B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210945830.6A CN115334010B (en) 2022-08-08 2022-08-08 Query information processing method and device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN115334010A CN115334010A (en) 2022-11-11
CN115334010B true CN115334010B (en) 2023-08-29

Family

ID=83922555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210945830.6A Active CN115334010B (en) 2022-08-08 2022-08-08 Query information processing method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN115334010B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117098191A (en) * 2023-07-06 2023-11-21 佰路威科技(上海)有限公司 Data stream scheduling control method and related equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020060231A1 (en) * 2018-09-19 2020-03-26 주식회사 맥데이타 Network security monitoring method, network security monitoring device, and system
CN111459651A (en) * 2019-01-21 2020-07-28 珠海格力电器股份有限公司 Load balancing method, device, storage medium and scheduling system
CN113111083A (en) * 2021-03-31 2021-07-13 北京沃东天骏信息技术有限公司 Method, device, equipment, storage medium and program product for data query
CN113596188A (en) * 2021-07-12 2021-11-02 浙江大华技术股份有限公司 Multi-device management method and device
CN114363260A (en) * 2021-11-09 2022-04-15 天津大学 Data flow scheduling method for data center network

Also Published As

Publication number Publication date
CN115334010A (en) 2022-11-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant