CN115334010A - Query information processing method and device, storage medium and electronic device - Google Patents


Info

Publication number
CN115334010A
CN115334010A · CN202210945830.6A · CN202210945830A · CN115334010B
Authority
CN
China
Prior art keywords
queue
query information
data
target
query
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210945830.6A
Other languages
Chinese (zh)
Other versions
CN115334010B (en
Inventor
代沆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202210945830.6A priority Critical patent/CN115334010B/en
Publication of CN115334010A publication Critical patent/CN115334010A/en
Application granted granted Critical
Publication of CN115334010B publication Critical patent/CN115334010B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00: Traffic control in data switching networks
    • H04L47/50: Queue scheduling
    • H04L47/62: Queue scheduling characterised by scheduling criteria
    • H04L47/622: Queue service order
    • H04L47/625: Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L47/6275: Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a query information processing method and device, a storage medium, and an electronic device. The method includes: determining a first queue in a multi-level feedback queue set by an NVR according to queue priority and queue state, where the first queue contains query information corresponding to front ends docked with the NVR, a first resource share corresponding to each piece of query information, and a first data volume already sent by the corresponding front end while in the first queue; determining target query information among the pieces of query information in the first queue according to the first resource shares, and sending a first data request to the target front end corresponding to the target query information; when the NVR has received all the query data sent by the target front end in response to the first data request, determining a second data volume sent by the target front end while in the first queue; and adding the target query information to a second queue when the second data volume is larger than the preset data volume of the first queue.

Description

Query information processing method and device, storage medium and electronic device
Technical Field
The present invention relates to the field of communications, and in particular, to a method and an apparatus for processing query information, a storage medium, and an electronic apparatus.
Background
When a Network Video Recorder (NVR) is docked with multiple front ends, it needs to fetch the statistical data each front end accumulated while offline and store it in the NVR. A query request is typically initiated when a front end comes online. If the offline data volume is large, multiple data query requests may be sent to the same front end, which must then transmit data continuously, degrading its performance to some extent. If the NVR can control query frequency and timing, a front end can be kept from being saturated with transmissions for a stretch of time and then left idle, reducing the pressure on it. Balancing the data transmission load across front ends is therefore the problem to be solved.
No effective solution has been proposed in the related art for the problem that, when a front-end device holds an excessive data volume, multiple data query requests may be sent to the same front end and degrade the performance of that front-end device.
Disclosure of Invention
Embodiments of the present invention provide a query information processing method and device, a storage medium, and an electronic device, to at least solve the problem in the related art that, when the data volume of a front-end device is excessive, multiple data query requests may be sent to the same front end and degrade its performance.
According to an embodiment of the present invention, a query information processing method is provided, including: determining a first queue in a multi-level feedback queue set by an NVR (Network Video Recorder) according to queue priority and queue state, where the first queue is the non-empty queue of the highest level and contains: query information corresponding to front ends docked with the NVR, a first resource share corresponding to the query information, and a first data volume sent by the front end while in the first queue; determining target query information among the plurality of pieces of query information in the first queue according to the first resource share, and sending a first data request to the target front end corresponding to the target query information; when the NVR receives first query data sent by the target front end in response to the first data request, determining a second data volume sent by the target front end while in the first queue; and adding the target query information to a second queue when the second data volume is larger than the preset data volume of the first queue, where the priority of the second queue is lower than that of the first queue.
In one exemplary embodiment, determining target query information among the plurality of query information in the first queue based on the first share of resources comprises: and determining target query information from the plurality of query information according to the first resource shares corresponding to the plurality of query information respectively and the random numbers generated by the random number generator in the NVR.
In an exemplary embodiment, determining the target query information from the plurality of query information according to the first resource share corresponding to the plurality of query information respectively and the random number generated by the random number generator in the NVR includes: determining the sum of the resource shares of the first queue according to the first resource shares corresponding to the plurality of query information respectively; under the condition that the random number generator generates the random number according to the resource share, determining a target share interval corresponding to the random number, wherein the query information corresponds to different share intervals respectively, and the range size of the share interval corresponding to any query information in the query information is consistent with the numerical value of the resource share of any query information; and taking the query information corresponding to the target share interval as the target query information.
In an exemplary embodiment, after sending the first data request to the target front end corresponding to the target query information, the method further includes: receiving first query data sent by the target front end according to the first data query request; determining whether a second resource share corresponding to the target front end exists in the query data, wherein the second resource share is a numerical value calculated by the target front end according to load balancing information of the target front end, and the second resource share and the load balancing information are in an inverse relation; in the case that there is a second share of resources in the query data corresponding to the target front end, updating the first resource share to the second resource share.
In an exemplary embodiment, after sending the first data request to the target front end corresponding to the target query information, the method further includes: determining whether the target front end finishes sending second query data corresponding to the target front end, wherein the second query data are all query data corresponding to the target front end; and under the condition that the target front end does not finish sending the second query data corresponding to the target front end, adding the data volume of the first query data and the first data volume to obtain a second data volume.
In one exemplary embodiment, before determining the first queue in the multi-stage feedback queue according to the queue priority and the queue status, the method further comprises: setting a timer for the multi-stage feedback queue of the NVR; and under the condition that the timer is overtime, adding query information of a first front end in the multi-stage feedback queues to a highest priority queue in the multi-stage feedback queues, wherein the first front end is used for indicating a front end which does not complete the query within the timing time of the timer.
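The timer-driven promotion just described can be sketched as follows. This is a minimal illustration, not the patent's implementation; the list-of-dicts layout and the `done` flag are assumptions made for the example.

```python
def promote_unfinished(queues):
    """queues: list of queues ordered from highest to lowest priority; each
    entry is a dict with a 'done' flag. When the timer fires, unfinished
    entries in lower queues are moved back to the highest-priority queue,
    mirroring the timeout rule described above."""
    top = queues[0]
    for q in queues[1:]:
        # iterate over a copy so removing entries is safe
        for entry in list(q):
            if not entry.get("done"):
                q.remove(entry)
                top.append(entry)
    return queues
```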
In an exemplary embodiment, before or after determining the first queue in the multi-stage feedback queue set by NVR according to the queue priority and the queue status, the method further includes: under the condition that a second front end is detected to be accessed to the NVR, determining whether to send a second data request to the second front end; and under the condition that the second data request is determined to be sent to the second front end, setting the query information of the second front end in a highest priority queue in the multi-stage feedback queues.
In one exemplary embodiment, determining the first queue in the multi-level feedback queue according to queue priority and queue state includes: acquiring the queue priority and queue state of each level of the multi-level feedback queue, where the queue state is at least one of the following: empty, non-empty; determining, from the per-level queue states, the queues whose state is non-empty; and determining the highest-priority queue among the non-empty queues and taking that queue as the first queue.
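The queue-selection step above can be sketched as follows; the `(priority, entries)` representation is an assumption made for illustration.

```python
def select_first_queue(queues):
    """queues: iterable of (priority, entries) pairs, where a larger priority
    value means a higher-level queue. Returns the non-empty queue with the
    highest priority, i.e. the 'first queue', or None if every queue is empty."""
    non_empty = [q for q in queues if q[1]]
    return max(non_empty, key=lambda q: q[0]) if non_empty else None
```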
According to another embodiment of the present invention, there is also provided a query information processing device, including: a first determining module, configured to determine a first queue in a multi-level feedback queue set by an NVR according to queue priority and queue state, where the first queue is the non-empty queue of the highest level and contains: query information corresponding to front ends docked with the NVR, a first resource share corresponding to the query information, and a first data volume sent by the front end while in the first queue; a sending module, configured to determine target query information among the plurality of pieces of query information in the first queue according to the first resource share, and send a first data request to the target front end corresponding to the target query information; a second determining module, configured to determine, when the NVR receives first query data sent by the target front end in response to the first data request, a second data volume already sent by the target front end while in the first queue; and an updating module, configured to add the target query information to a second queue when the second data volume is larger than the preset data volume of the first queue, where the priority of the second queue is lower than that of the first queue.
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, in which a computer program is stored, where the computer program is configured to execute the above processing method for querying information when running.
According to another aspect of the embodiments of the present invention, there is also provided an electronic apparatus, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the processing method of the query information through the computer program.
In the embodiment of the present invention, a first queue is determined in a multi-level feedback queue set by an NVR according to queue priority and queue state, where the first queue is the non-empty queue of the highest level and contains: query information corresponding to front ends docked with the NVR, a first resource share corresponding to the query information, and a first data volume sent by the front end while in the first queue. Target query information is determined among the pieces of query information in the first queue according to the first resource share, and a first data request is sent to the target front end corresponding to the target query information. When the NVR receives first query data sent by the target front end in response to the first data request, a second data volume sent by the target front end while in the first queue is determined; when the second data volume is larger than the preset data volume of the first queue, the target query information is added to a second queue whose priority is lower than that of the first queue. This scheme solves the problem in the related art that, when a front-end device holds an excessive data volume, multiple data query requests may be sent to the same front end and degrade its performance: resource shares are allocated across the multi-level feedback queue, data requests are sent to the target front end according to its resource share, and once the target front end has sent all query data for a request, its query information is demoted to the second queue. Because queries are scheduled according to the state of each front end, the pressure of sending query data is balanced across the docked front ends, and the performance of the front-end equipment is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a computer terminal of a processing method for query information according to an embodiment of the present invention;
FIG. 2 is a flow diagram of a method of processing query information according to an embodiment of the invention;
FIG. 3 is a diagram illustrating a method for processing query information according to an embodiment of the invention;
fig. 4 is a block diagram of a device for processing query information according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The method provided by the embodiment of the application can be executed in a mobile terminal, a computer terminal or a similar operation device. Taking the example of running on a computer terminal, fig. 1 is a block diagram of a hardware structure of the computer terminal of a processing method for querying information according to an embodiment of the present invention. As shown in fig. 1, the computer terminal may include one or more (only one shown in fig. 1) processors 102 (the processors 102 may include, but are not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA) and a memory 104 for storing data, and in an exemplary embodiment, may also include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration and is not intended to limit the structure of the computer terminal. For example, the computer terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration with equivalent functionality to that shown in FIG. 1 or with more functionality than that shown in FIG. 1.
The memory 104 may be used to store a computer program, for example, a software program and a module of application software, such as a computer program corresponding to the processing method of query information in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the above-mentioned method. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to a computer terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
In this embodiment, a method for processing query information is provided, which is applied to the above-mentioned computer terminal, and fig. 2 is a flowchart of a method for processing query information according to an embodiment of the present invention, where the flowchart includes the following steps:
step S202, determining a first queue in a multi-level feedback queue set by the NVR according to queue priority and queue state, where the first queue is the non-empty queue of the highest level and contains: query information corresponding to front ends docked with the NVR, a first resource share corresponding to the query information, and, for each piece of query information, the running total of the data volume its front end has already sent while in the first queue;
step S204, determining target query information in the plurality of query information of the first queue according to the first resource share, and sending a first data request to a target front end corresponding to the target query information;
step S206, when the target front end has finished sending all the query data requested by the first data request, determining a second data volume sent by the target front end while in the first queue;
step S208, adding the target query information into a second queue when the second data size is larger than the preset data size of the first queue, where the priority of the second queue is smaller than that of the first queue.
It should be noted that the second queue is the queue whose priority is immediately below that of the first queue.
Through the above steps, a first queue is determined in a multi-level feedback queue set by the NVR according to queue priority and queue state, where the first queue is the non-empty queue of the highest level and contains: query information corresponding to front ends docked with the NVR, a first resource share corresponding to the query information, and a first data volume sent by the front end while in the first queue. Target query information is determined among the pieces of query information in the first queue according to the first resource share, and a first data request is sent to the target front end corresponding to the target query information. When the NVR receives first query data sent by the target front end in response to the first data request, a second data volume sent by the target front end while in the first queue is determined, and when that second data volume is larger than the preset data volume of the first queue, the target query information is added to the second queue. This solves the problem in the related art that, when the data volume of a front-end device is excessive, multiple data query requests may be sent to the same front end and degrade its performance: resource shares are allocated across the multi-level feedback queue, data requests are sent to the target front end according to its resource share, and once the target front end has sent all query data for a request, its query information is demoted to the second queue. Because queries are scheduled according to the state of each front end, the pressure of sending query data is balanced across the docked front ends, and the performance of the front-end equipment is improved.
It should be noted that the above embodiment may be applied in scenarios such as the following. 1) People counting in security monitoring: the NVR needs to obtain people-counting details for a specified period (e.g., one year) from the front end. When the front end comes online, the NVR sends a query message, the front end returns the data, and the NVR stores it in a local database so that users can run custom queries. 2) Traffic control in security monitoring: the NVR needs to obtain offline video-structured direction statistics from the front end. In street, square, and road scenes, the movement of people, motor vehicles, and non-motor vehicles is directional, e.g., left to right, right to left, or bidirectional. The video-structured direction counting function counts the flow of people, motor vehicles, and non-motor vehicles as needed, providing guidance for urban road planning, traffic and pedestrian flow control, route arrangement, and the like.
In one exemplary embodiment, determining target query information among the plurality of query information in the first queue based on the first share of resources comprises: and determining target query information from the plurality of pieces of query information according to the first resource shares corresponding to the plurality of pieces of query information respectively and the random numbers generated by the random number generator in the NVR.
Specifically, a multi-level feedback queue is maintained inside the NVR. It holds the query information corresponding to the front ends docked with the NVR, and any piece of query information can exist in only one queue at a time. A resource share is allocated to each query message in the queue, for example a value in the range 1-100, with an initial default such as 50. When the NVR schedules the first queue, it obtains a random number from the random number generator and determines the corresponding target query information from that random number and the first resource shares.
There are various ways to determine the target query information from the plurality of pieces of query information according to their first resource shares and the random number generated by the NVR's random number generator. One implementation provided by an embodiment of the present invention is: determine the sum of the resource shares of the first queue from the first resource shares of the individual pieces of query information; when the random number generator generates a random number over that resource-share sum, determine the target share interval into which the random number falls, where each piece of query information corresponds to a different share interval whose width equals that piece's resource share value; and take the query information corresponding to the target share interval as the target query information.
That is, the resource shares of all query information in the first queue are obtained first and summed to get the resource-share total of the first queue. A random number in the interval from 0 up to that total is then generated by the random number generator, and the share interval into which the random number falls identifies a piece of query information, which is taken as the target query information; a first data request is then sent to the corresponding target front end. This ensures that a front end with a large resource share has a greater probability of being selected for the query.
For example, after the highest non-empty priority queue is determined, assume the queue holds query messages for 5 front ends and resource share values range over [0,100]. If the 5 resource share values are 30, 60, 40, 30, and 80, the sum of the resource shares is 30+60+40+30+80=240, so a random number in the range 0-239 is generated; that is, the random number generator returns one of the 240 integers 0, 1, 2, ..., 239. The share intervals corresponding to the 5 query messages are [0,29], [30,89], [90,129], [130,159], and [160,239], and the width of each interval equals the corresponding resource share value. Since every integer from 0 to 239 is generated with equal probability, query messages with a large resource share are more likely to be selected. If the random number generator produces 113, the interval [90,129] is hit, the 3rd query message in the queue is selected, and the corresponding front end's data is queried. Determining the target share interval thus determines the target query information.
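The share-interval selection in the example above can be sketched as follows. The function name and the injected random source are illustrative; a real implementation would use the NVR's own generator.

```python
import random

def pick_query_index(shares, rng=random):
    """Select an index with probability proportional to its resource share.

    Conceptually builds the contiguous intervals [0, s0), [s0, s0+s1), ...
    and draws a uniform integer in [0, sum(shares) - 1], as in the 0-239
    example: shares [30, 60, 40, 30, 80] give intervals [0,29], [30,89],
    [90,129], [130,159], [160,239]."""
    total = sum(shares)
    r = rng.randrange(total)       # uniform integer in 0 .. total-1
    upper = 0
    for i, share in enumerate(shares):
        upper += share             # exclusive upper bound of interval i
        if r < upper:
            return i
    raise AssertionError("randrange returned a value outside [0, total)")
```

With the shares from the example, a draw of 113 falls in [90,129] and selects the third query message (index 2).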
In an exemplary embodiment, after the first data request is sent to the target front end corresponding to the target query information, first query data sent by the target front end in response to the first data request is received; it is determined whether the query data carries a second resource share for the target front end, where the second resource share is a value the target front end calculates from its own load-balancing information, to which it is inversely related; and, if the query data does carry a second resource share for the target front end, the first resource share is updated to the second resource share.
Specifically, if the front end supports resource share calculation, then when it finishes sending the complete query data to the NVR it also sends a new second resource share. It should be noted that the second resource share is calculated from the front end's load, bandwidth utilization, IO throughput, and the like, and reflects the front end's real-time load. If the front end does not support resource share calculation, it sends the default resource share (50) when it finishes sending the complete query data, i.e., the front end's resource share is not modified.
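The text only states that the second resource share is inversely related to the front end's load; the linear mapping below is an illustrative assumption, not the patent's formula.

```python
def compute_resource_share(load_pct, lo=1, hi=100):
    """Map a load figure in 0-100% to a resource share in [lo, hi], inversely:
    a heavily loaded front end reports a small share so the NVR queries it
    less often. The linear form is a sketch; only the inverse relation and
    the 1-100 range (default 50) come from the text."""
    share = hi - (hi - lo) * load_pct / 100.0
    return max(lo, min(hi, round(share)))
```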
After step S206, it is further necessary to: determine the relation between the amount of query data already sent and the preset total data amount of the first queue; determine whether the target front end has finished sending the second query data corresponding to it, where the second query data is all query data corresponding to the target front end; and, when the target front end has not finished sending the second query data, add the data amount of the first query data to the first data amount to obtain the second data amount.
That is to say, after the target front end has sent a certain amount of query data, its priority is reduced, i.e. its query information is moved into the second queue. This solves the problem in the related art that, when the data amount of a front-end device is too large, multiple data requests are sent to the same front end in succession and the performance of that device suffers.
In other words, each queue has a preset data amount (a threshold, specified at initialization, on the amount of data sent from that queue), and each piece of query information in the queue carries a sent-data counter whose initial value is 0. After each query finishes, if the target front end has sent all of its query data, the corresponding query information is deleted from the queue. Otherwise, the amount of data returned by the front end is added to the sent-data counter, i.e. the counter stored in the query information is updated at the end of each transmission; if the updated second data amount exceeds the queue's preset data amount, the query information is moved into the next lower-level queue. If no lower-level queue exists, no move is performed.
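The accounting and demotion logic above can be sketched as below. The data structures are illustrative; whether the sent-data counter restarts in the lower queue is an assumption (marked in a comment), since the text only says counters reset on the timer boost.

```python
def account_and_maybe_demote(queues, level, info, batch_size, finished):
    """Update a query entry's sent-data counter after one batch, then either
    delete it (all data sent) or demote it past the queue's threshold.

    `queues` is a list of dicts, one per priority level (index 0 = highest),
    each holding a `threshold` and an `entries` list; returns the entry's
    new level. Illustrative structures, not the patent's.
    """
    if finished:
        queues[level]["entries"].remove(info)  # all data sent: drop entry
        return level
    info["sent"] += batch_size                 # accumulate sent amount
    if info["sent"] > queues[level]["threshold"] and level + 1 < len(queues):
        queues[level]["entries"].remove(info)
        info["sent"] = 0                       # assumed: counter restarts per queue
        queues[level + 1]["entries"].append(info)
        return level + 1
    return level                               # no lower-level queue: stay put

queues = [{"threshold": 100, "entries": []},
          {"threshold": 200, "entries": []}]
info = {"sent": 0}
queues[0]["entries"].append(info)
assert account_and_maybe_demote(queues, 0, info, 60, False) == 0  # 60 <= 100
assert account_and_maybe_demote(queues, 0, info, 60, False) == 1  # 120 > 100
assert info in queues[1]["entries"]
```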
In an exemplary embodiment, determining that the target front end has sent all of its query data works as follows: the data of the target front end is queried in multiple rounds, each NVR request specifies an offset and a number of items to query, and the front end's reply carries the number of items returned and the new offset value. For example, if 10000 records need to be queried, the offset returned by the target front end after each round indicates how far the query has progressed (one round may return multiple records). Once the offset reaches 10000, no further queries need to be sent to the target front end.
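A minimal sketch of this offset-based pagination, assuming a fixed page size (the function name and page size are invented for the example):

```python
def query_finished(returned_offset, total_count):
    """The query is complete once the front end's returned offset reaches
    the total number of records to be queried."""
    return returned_offset >= total_count

# e.g. 10000 records in total, fetched in rounds of up to 500 items
offset = 0
rounds = 0
while not query_finished(offset, 10000):
    returned = min(500, 10000 - offset)  # items the front end returns this round
    offset += returned                   # reply carries the new offset value
    rounds += 1
assert rounds == 20 and offset == 10000
```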
In one exemplary embodiment, a timer is set for the multi-stage feedback queue of the NVR before determining a first queue in the multi-stage feedback queue according to the queue priority and the queue status; and under the condition that the timer is overtime, adding query information of a first front end in the multi-stage feedback queues to a highest priority queue in the multi-stage feedback queues, wherein the first front end is used for indicating a front end which does not complete the query within the timing time of the timer.
After a period of time, i.e. when the timer expires, the front ends whose queries have not completed and the front ends that have not yet received a data request are added back into the highest priority queue. This rule prevents the NVR from leaving some front end unqueried for a long time. After the timer expires, an unfinished front end is placed in the highest priority queue and polled together with the other front ends there.
It should be noted that all front ends whose queries have not finished are put into the highest priority queue. When the timer expires, all query information is therefore in the highest priority queue; the lottery value of each front end is left unchanged, and its sent-data counter is reset to 0.
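This timer boost can be sketched as follows; the queue/entry structures and field names are illustrative:

```python
def boost_all(queues):
    """On timer expiry, move every unfinished query entry back into the
    highest priority queue (index 0). Lottery values are left unchanged;
    sent-data counters are reset to 0."""
    for lower in queues[1:]:
        queues[0].extend(lower)
        lower.clear()
    for info in queues[0]:
        info["sent"] = 0

queues = [[{"name": "cam1", "ticket": 30, "sent": 10}],
          [{"name": "cam2", "ticket": 60, "sent": 70}]]
boost_all(queues)
assert len(queues[0]) == 2 and queues[1] == []
assert queues[0][1]["ticket"] == 60   # lottery value unchanged
assert all(info["sent"] == 0 for info in queues[0])
```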
In an exemplary embodiment, before or after determining a first queue in a multi-level feedback queue set by an NVR according to a queue priority and a queue state, in case that a second front end is detected to be accessed to the NVR, determining whether to send a second data request to the second front end; and under the condition that the second data request is determined to be sent to the second front end, setting the query information of the second front end in a highest priority queue in the multi-stage feedback queues.
That is, when the second front end has just been connected to the NVR and a second data request needs to be sent to it, the query information of the second front end is placed in the highest priority queue of the multi-stage feedback queues; when the first queue is the highest priority queue, the query information of the second front end is thereby added to the first queue.
In one exemplary embodiment, determining a first queue in a multi-level feedback queue based on queue priority and queue status comprises: acquiring queue priority and queue state of each level of feedback queues in a multi-level feedback queue, wherein the queue state at least comprises one of the following: an empty state, a non-empty state; determining a queue with a non-empty queue state in the multi-stage feedback queues according to the queue state of each stage of feedback queues; and determining the queue with the highest priority in the queues in the non-empty state, and taking the queue with the highest priority as the first queue.
In other words, in the embodiment of the present invention, the queues in a non-empty state are first identified among the plurality of queues according to their queue states, and the highest-priority queue among them is then selected, thereby determining the highest-priority non-empty queue. If all queues are empty, the NVR periodically checks whether new query information has arrived.
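A sketch of this queue-selection step (illustrative function name; queues are ordered so that index 0 is the highest priority):

```python
def pick_first_queue(queues):
    """Return the index of the highest-priority non-empty queue, or None
    when every queue is empty (the caller then periodically re-checks
    for newly arrived query information)."""
    for index, queue in enumerate(queues):  # index 0 = highest priority
        if queue:                           # non-empty state
            return index
    return None

assert pick_first_queue([[], ["a"], ["b"]]) == 1
assert pick_first_queue([[], [], []]) is None
```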
In order to better understand the process of the query information processing method, the following describes an implementation method flow of the query information processing with reference to an optional embodiment, but the technical solution of the embodiment of the present invention is not limited thereto.
In the prior art, for example, the patent with publication number CN110503284a provides a statistical method based on queuing data, which includes: receiving a statistical instruction based on queuing data, where the instruction includes a time period to be counted and a parameter to be counted; acquiring the queuing data of the video image corresponding to the time period, where the queuing data includes the total number of queuing persons and/or their queuing time; and statistically analyzing the acquired queuing data based on the parameter to be counted to generate a statistical result. By applying this method, the queuing data in a video image can be acquired according to the user's requirements and statistically analyzed, providing the user with a more intuitive statistical result.
For another example, the patent with publication number CN111585915a discloses a balanced transmission method for long and short flows, together with a system, a storage medium and a cloud server. It constructs a deep reinforcement learning architecture for balanced long/short-flow transmission in a data center; optimizes short-flow real-time performance, improving interactive short-flow transmission delay with a reinforcement-learning-based multi-stage queue threshold optimization method; selects a transmission strategy by decision probability, initializing the probability and executing the selected strategy accordingly; and dynamically adjusts the decision probability to iteratively update the transmission strategy, adapting to changes in data-center traffic types and finally achieving balanced transmission of long and short flows.
The main drawback of the design in CN110503284a is that it does not consider load balancing when multiple front ends (devices) are accessed to obtain data simultaneously; the patent CN111585915a mainly designs a multi-stage feedback queue for long/short flow control in cloud storage. Moreover, the prior art has the problem that, when the data amount of a front-end device is too large, multiple data requests may be sent to the same front end, affecting that device's performance.
To solve the above problems, this embodiment provides a query information processing method based on the following principle: the Multi-level Feedback Queue (MLFQ) and proportional share scheduling ideas are applied to the NVR's retrieval of front-end statistical data, so as to balance the load when front ends send large amounts of data and to reduce the pressure of front-end data transmission.
MLFQ is mostly used by operating systems for process scheduling: processes to be executed are placed into priority queues of different levels, with the following scheduling rules:
rule 1: if the priority of A is greater than that of B, running A;
rule 2: if the priority of A equals the priority of B, A and B are run in rotation;
rule 3: when entering the system, the process is placed in the highest priority queue;
rule 4: once the process has run out of its quota on a certain level of queue, the priority of the process is reduced, i.e. the process is moved into a lower level of queue;
rule 5: after a period of time S, all processes in the system are rejoined to the highest priority queue.
Proportional share scheduling ensures that each process obtains a certain proportion of CPU time: each process is assigned a number of lottery tickets (equivalent to the resource share in this embodiment) representing its share of a resource, and when the resource is allocated, it is allocated according to these lottery shares.
Combining MLFQ with proportional share scheduling, a query of front-end data is treated analogously to the execution of a process. The query information processing rules of the embodiment of the present invention are as follows:
a multi-stage feedback queue is set up in the NVR to manage its data queries to the front ends, and a given piece of query information can exist in only one queue at any time. The query information in each queue is configured with a lottery value, e.g. in the range 1-100, initially set to a default such as 50. Each time a front end finishes sending the query data for a data request, it calculates a lottery value from its own running condition and feeds it back to the NVR. If its pressure is low, it sets a larger lottery value, so the NVR queries it with a higher probability during scheduling; otherwise it sets a smaller lottery value, making it less likely to be queried next time.
The MLFQ rule of the embodiment of the present invention is as follows:
1. if the priority of the query information corresponding to the front end A in the queue is greater than that of the query information corresponding to the front end B, sending a data request to the front end A;
2. front ends within the same priority are scheduled with the proportional share method, specifically:
a) When a front end comes online, it is given a default lottery value of 50, in the range [1,100]. If the front end supports lottery value calculation, it sends a new lottery value to the NVR along with the complete query data; the value is computed from its own load, bandwidth utilization, IO throughput and the like, and thus reflects the front end's real-time condition.
b) If the front end does not support lottery value transmission, the default lottery value of 50 is sent to the NVR when the round of data transmission ends.
c) After receiving the lottery value fed back by the front end, the NVR updates the lottery value of the corresponding query information.
d) When the NVR schedules a queue, it sums all the lottery values of the query information in the queue, obtains a random number from a random number generator, and queries the data of the front end into whose lottery interval the random number falls. This ensures that a front end with a large lottery value has a higher probability of being selected for the next query.
3. After the front end is on line, the front end is put in a highest priority queue;
4. after a front end has sent a certain amount of data, its priority is reduced, i.e. it is moved into a lower-level queue; once it has finished sending all the data to be queried, its query information is removed from whichever queue holds it.
Starvation problem: suppose there are many front ends with little data and one front end with a very large amount of data. If the low-data front ends sit at a higher priority than the high-data front end, or share its priority queue but hold larger proportional share values, the NVR will keep querying the low-data front ends while the high-data front end is never queried (or only queried last). Rule 5 mitigates this situation.
5. after a period of time S, the front ends whose queries have not finished (across all work queues) are added back into the highest priority queue. This rule prevents the NVR from leaving some front end unqueried for a long time: after time S, the front end is placed in the highest priority queue, where it is polled together with the other front ends.
The advantage of using the MLFQ rules to control the NVR's querying of front ends is this: a front end with little data can have its complete data queried quickly, while a front end with a large amount of data can be queried fairly and steadily without too much pressure being put on it.
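Rules 1-3 above can be combined into a compact scheduler sketch. This is an illustrative reduction, not the patent's implementation; the class, field names, and the fixed random generator used for the assertion are all invented for the example.

```python
import random

class Scheduler:
    """Minimal sketch of the combined MLFQ + lottery-scheduling rules."""

    def __init__(self, levels):
        self.queues = [[] for _ in range(levels)]  # index 0 = highest priority

    def on_front_end_online(self, name):
        # Rule 3: a newly online front end enters the highest priority queue
        # with the default lottery value of 50.
        self.queues[0].append({"name": name, "ticket": 50, "sent": 0})

    def next_front_end(self, rng=random):
        # Rule 1: take the highest-priority non-empty queue;
        # Rule 2: within it, draw a front end proportionally to lottery values.
        for queue in self.queues:
            if not queue:
                continue
            total = sum(entry["ticket"] for entry in queue)
            draw = rng.randrange(total)
            acc = 0
            for entry in queue:
                acc += entry["ticket"]
                if draw < acc:
                    return entry
        return None  # all queues empty

sched = Scheduler(3)
sched.on_front_end_online("cam1")
sched.on_front_end_online("cam2")

class Fixed:
    def randrange(self, n):
        return 75  # tickets are 50+50=100; 75 falls in cam2's interval [50,99]

assert sched.next_front_end(rng=Fixed())["name"] == "cam2"
```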
Fig. 3 is a schematic diagram of a query information processing method according to an embodiment of the present invention, as shown in fig. 3, specifically including the following steps:
step S301: starting;
step S302: establishing a multi-stage feedback queue, and starting a timer (after timeout, all the inquiry information which is not inquired is put into a highest-level queue);
it should be noted that NVR determines MLFQ-related parameters of query control:
a) The number of queues;
b) The total amount of query data (corresponding to the preset amount of data in the above embodiment) for each layer of queue;
c) The default lottery value of the inquiry information corresponding to the front end;
d) Time interval S where all queries are placed in the highest priority queue;
step S303: determining whether a second front end is accessed and whether a data request needs to be sent to the second front end, and executing step S304 if it is determined that the second front end is accessed and a query request (corresponding to the data request in the above embodiment) needs to be sent to the second front end, otherwise executing step S305;
step S304: when a second front end is on line and data needs to be inquired from the second front end, adding the second front end into the highest priority queue, and configuring a default lottery value for inquiry information corresponding to the second front end;
step S305: determining whether a non-empty queue exists, and executing a step S306 under the condition that the non-empty queue exists, or executing a step S315;
step S306: summing all lottery values in the non-empty highest priority queue into ticket_sum, obtaining a random value from the random number generator, determining the first front end (equivalent to the target front end in the above embodiment) according to the interval in which the random value falls, and sending a query request to that front end;
step S307: receiving complete query data of the query request sent by the first front end;
step S308: determining whether the first front end has finished sending all query data; if the target front end has finished sending all the query data, executing step S316; if the target front end does not finish sending all the query data, executing step S309;
step S309: counting the amount of data the first front end has sent in this level's queue; if it reaches the queue's transmission threshold, executing step S311;
step S310: determining whether the inquiry data has a lottery value, if so, executing step S312, and if not, executing step S313;
step S311: reducing the priority of the query information corresponding to the first front end, and moving to the next-level queue;
step S312: updating the lottery value of the query information corresponding to the first front end;
step S313: setting the lottery value of the query information corresponding to the first front end as a default lottery value;
when the NVR performs the next query, the lottery values of the front ends in the highest priority queue are summed into ticket_sum (a lottery value may have been updated, or an entry moved into the next-level queue), a random number in [0, ticket_sum) is generated, and the front end to query is determined by the interval in which the random number falls.
Step S314: determining whether the timer has expired; if so, executing step S315, otherwise executing step S303;
step S315: transferring the query information of all queues into a highest priority queue;
step S316: sleeping for a preset duration, then executing step S303;
step S317: the front-end information is removed from the queue.
According to the embodiment of the present invention, by combining the multi-level feedback queue (MLFQ) rules with the proportional share scheduling idea and with the front ends' real-time load, the timing of data queries to the front ends is adjusted, load balancing of front-end data transmission is achieved, and the pressure of front-end data transmission is reduced. The NVR maintains a multi-level queue: front ends in higher-level queues are selected preferentially for data queries, front ends within the same level are selected according to proportional share scheduling, and the elements of each queue are adjusted by fixed rules. First the lottery values of all front ends in the queue are summed, then a random number is generated, and the front end to query is selected according to the interval in which the random number falls. Each front end sends a lottery value reflecting its own state to the NVR; at the next query the NVR schedules according to the new value, so a front end with a high lottery value is queried with higher probability. When a front end comes online and its data needs to be queried, it enters the highest priority queue with a default lottery value. If a front end has already sent the specified amount of data for its queue, it is lowered by one level and moved into the next-level queue. To keep data queries balanced across all front ends, after a period of time the unfinished queries in all queues are uniformly placed into the highest priority queue.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a device for processing query information is further provided. The device is used to implement the foregoing embodiments and preferred embodiments, and what has already been described is not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
FIG. 4 is a block diagram of a query processing device according to an embodiment of the present invention; as shown in fig. 4, includes:
a first determining module 42, configured to determine a first queue in multiple levels of feedback queues set in the NVR according to a queue priority and a queue status, where the first queue is a non-empty highest-level queue, and the first queue includes: query information corresponding to a front end docked with the NVR, and a first resource share corresponding to the query information;
a sending module 44, configured to determine target query information from the plurality of query information in the first queue according to the first resource share, and send a first data request to a target front end corresponding to the target query information;
a second determining module 46, configured to determine, when the NVR receives first query data sent by the target front end according to the first data request, a second data amount sent by the target front end in the first queue;
and an updating module 48, configured to add the target query information to a second queue when the second data amount is greater than the preset data amount of the first queue, where a priority of the second queue is smaller than that of the first queue.
Through the device, a first queue is determined in the multi-level feedback queues set by the NVR according to queue priority and queue state, where the first queue is the non-empty highest-level queue and includes: query information corresponding to a front end docked with the NVR, a first resource share corresponding to the query information, and a first data amount sent by the front end in the first queue. Target query information is determined among the plurality of query information of the first queue according to the first resource share, and a first data request is sent to the target front end corresponding to the target query information. When the NVR receives first query data sent by the target front end according to the first data request, a second data amount sent by the target front end in the first queue is determined. When the second data amount is larger than the preset data amount of the first queue, the target query information is added into a second queue whose priority is lower than that of the first queue. This technical scheme solves the problem that, when the data amount of a front-end device is too large, multiple data requests may be sent continuously to the same front end and that device's performance suffers: resource shares are assigned within the multi-stage feedback queues, data requests are sent to the target front end according to its resource share, and once the target front end has sent the corresponding query data it is added into the second queue. Querying according to the front ends' states balances the pressure of sending query data, balances the queries across multiple docked front ends, and thus improves the performance of the front-end equipment.
In an exemplary embodiment, the sending module is further configured to determine the target query information from the plurality of query information according to the first resource shares corresponding to the plurality of query information respectively and the random number generated by the random number generator in the NVR.
In an exemplary embodiment, the sending module is further configured to determine a sum of resource shares of the first queue according to first resource shares corresponding to the plurality of query information, respectively; under the condition that the random number generator generates the random number according to the resource share, determining a target share interval corresponding to the random number, wherein the plurality of inquiry information respectively correspond to different share intervals; and taking the query information corresponding to the target share interval as the target query information.
In an exemplary embodiment, the updating module is further configured to receive first query data sent by the target front end according to the first data request; determine whether a second resource share corresponding to the target front end exists in the query data, wherein the second resource share is a value calculated by the target front end from its own load-balancing information and is inversely related to that load-balancing information; and update the first resource share to the second resource share corresponding to the target front end when the second resource share exists in the query data.
In an exemplary embodiment, the second determining module is further configured to determine whether the target front end has finished sending the second query data corresponding to the target front end, where the second query data is all query data corresponding to the target front end; and under the condition that the target front end does not send second query data corresponding to the target front end, adding the data volume of the first query data and the first data volume to obtain a second data volume.
In an exemplary embodiment, the first determining module is further configured to set a timer for the multi-stage feedback queue of the NVR; and under the condition that the timer is overtime, adding query information of a first front end in the multi-stage feedback queues to a highest priority queue in the multi-stage feedback queues, wherein the first front end is used for indicating a front end which does not complete the query within the timing time of the timer.
In an exemplary embodiment, the first determining module is further configured to determine whether to send a second data request to a second front end in a case that the second front end is detected to be accessed to the NVR; and under the condition that the second data request is determined to be sent to the second front end, setting the query information of the second front end in a highest priority queue in the multi-stage feedback queues.
In an exemplary embodiment, the first determining module is further configured to obtain a queue priority and a queue status of each stage of the feedback queues in the multiple stages of feedback queues, where the queue status includes at least one of: an empty state, a non-empty state; determining a queue with a non-empty queue state in the multi-stage feedback queues according to the queue state of each stage of feedback queues; and determining the queue with the highest priority in the queues in the non-empty state, and taking the queue with the highest priority as the first queue.
An embodiment of the present invention further provides a storage medium including a stored program, wherein the program executes any one of the methods described above.
Alternatively, in the present embodiment, the storage medium may be configured to store program codes for performing the following steps:
s1, determining a first queue in a multi-level feedback queue set by an NVR (network video recorder) according to the priority and the queue state of the queue, wherein the first queue is a non-empty highest-level queue, and the first queue comprises: query information corresponding to a front end of the NVR docking, and a first resource share corresponding to the query information;
s2, determining target query information in the plurality of query information of the first queue according to the first resource share, and sending a first data request to a target front end corresponding to the target query information;
s3, under the condition that the target front end sends all query data according to the first data request, determining a second data volume sent by the target front end in the first queue;
and S4, adding the target query information into a second queue under the condition that the second data volume is larger than the preset data volume of the first queue, wherein the priority of the second queue is smaller than that of the first queue.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, determining a first queue in a multi-level feedback queue set by NVR according to the queue priority and the queue state, wherein the first queue is a non-empty highest-level queue, and the first queue comprises: query information corresponding to a front end of the NVR docking, and a first resource share corresponding to the query information;
s2, determining target query information in the plurality of query information of the first queue according to the first resource share, and sending a first data request to a target front end corresponding to the target query information;
s3, under the condition that the target front end sends all query data according to the first data request, determining a second data volume sent by the target front end in the first queue;
and S4, adding the target query information into a second queue under the condition that the second data volume is larger than the preset data volume of the first queue, wherein the priority of the second queue is smaller than that of the first queue.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing program codes, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general purpose computing device, they may be centralized in a single computing device or distributed across a network of multiple computing devices, and alternatively, they may be implemented by program code executable by a computing device, such that they may be stored in a memory device and executed by a computing device, and in some cases, the steps shown or described may be executed out of order, or separately as individual integrated circuit modules, or multiple modules or steps thereof may be implemented as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description covers only preferred embodiments of the present invention and is not intended to limit the present invention; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention shall fall within the protection scope of the present invention.

Claims (11)

1. A query information processing method is characterized by comprising the following steps:
determining a first queue in multi-level feedback queues set by an NVR according to queue priority and queue state, wherein the first queue is the non-empty queue with the highest priority, and the first queue comprises: query information corresponding to front ends connected to the NVR, a first resource share corresponding to the query information, and a first data volume sent by the front ends in the first queue;
determining target query information among the plurality of query information in the first queue according to the first resource share, and sending a first data request to a target front end corresponding to the target query information;
determining, in a case where the NVR receives first query data sent by the target front end according to the first data request, a second data volume already sent by the target front end in the first queue;
and adding the target query information to a second queue in a case where the second data volume is greater than a preset data volume of the first queue, wherein the priority of the second queue is lower than the priority of the first queue.
2. The method of processing query information according to claim 1, wherein determining target query information among the plurality of query information in the first queue according to the first resource share comprises:
and determining the target query information from the plurality of query information according to first resource shares respectively corresponding to the plurality of query information and a random number generated by a random number generator in the NVR.
3. The method for processing query information according to claim 2, wherein determining the target query information from the plurality of query information according to the first resource shares respectively corresponding to the plurality of query information and the random number generated by the random number generator in the NVR comprises:
determining a sum of the resource shares of the first queue according to the first resource shares respectively corresponding to the plurality of query information;
determining, in a case where the random number generator generates the random number according to the sum of the resource shares, a target share interval in which the random number falls, wherein the plurality of query information respectively correspond to different share intervals, and the width of the share interval corresponding to any one of the query information equals the value of the resource share of that query information;
and taking the query information corresponding to the target share interval as the target query information.
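The share-interval selection of claim 3 amounts to lottery-style weighted random choice: each query owns an interval whose width equals its share, and a draw over the total selects one. A minimal sketch under that reading; the function name `pick_target`, the `(query_id, share)` pair layout, and the unit-interval `rng` callable are all illustrative assumptions.

```python
import random

def pick_target(queries, rng=random.random):
    """Pick a query id with probability proportional to its resource share.
    queries: list of (query_id, share) pairs with positive shares."""
    total = sum(share for _, share in queries)   # sum of the resource shares
    draw = rng() * total                         # random number in [0, total)
    upper = 0.0
    for query_id, share in queries:
        upper += share                           # interval width == share value
        if draw < upper:                         # the draw fell in this interval
            return query_id
    return queries[-1][0]                        # guard against rounding error
```

A query holding three quarters of the total shares is then selected about three times as often as one holding a quarter, which is how the first resource share steers bandwidth between front ends.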
4. The method for processing query information according to claim 1, wherein after sending the first data request to the target front end corresponding to the target query information, the method further comprises:
receiving the first query data sent by the target front end according to the first data request;
determining whether a second resource share corresponding to the target front end exists in the query data, wherein the second resource share is a value calculated by the target front end according to load balancing information of the target front end, and the second resource share is inversely related to the load balancing information;
and updating the first resource share to the second resource share corresponding to the target front end in a case where the second resource share exists in the query data.
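Claim 4 only fixes that the share reported by the front end falls as its load rises; no formula is given. The sketch below uses one hypothetical inverse mapping (the constant `base`, the field name `"resource_share"`, and both function names are assumptions introduced here for illustration).

```python
def share_from_load(load, base=100.0):
    """Map a non-negative load figure to a resource share that shrinks as
    the load grows (hypothetical inverse relation; base is illustrative)."""
    return base / (1.0 + load)

def update_share(entry, query_data):
    """Replace the stored first resource share with the reported second
    share, but only when the front end actually included one."""
    new_share = query_data.get("resource_share")  # assumed field name
    if new_share is not None:
        entry["share"] = new_share
```

The design point is that the feedback is optional: a front end that reports nothing keeps its current share, while a heavily loaded front end can shrink its own share and thereby receive fewer data requests.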
5. The method for processing query information according to claim 1, wherein determining the second data volume sent by the target front end in the first queue comprises:
determining whether the target front end has finished sending second query data corresponding to the target front end, wherein the second query data is all query data corresponding to the target front end;
and in a case where the target front end has not finished sending the second query data, adding the data volume of the first query data to the first data volume to obtain the second data volume.
6. The method of processing query information according to claim 1, wherein before determining the first queue in the multi-level feedback queues according to the queue priority and the queue state, the method further comprises:
setting a timer for the multi-level feedback queues of the NVR;
and in a case where the timer times out, adding query information of a first front end in the multi-level feedback queues to a highest-priority queue in the multi-level feedback queues, wherein the first front end indicates a front end that has not completed its query within the timing period of the timer.
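The timer of claim 6 is an anti-starvation device: queries demoted to lower queues but still unfinished when the timer fires are pulled back up to the top queue. A minimal sketch, assuming the list-of-lists queue model and a per-entry `done` flag and `sent` counter (all illustrative, not terms from the claims):

```python
def promote_unfinished(queues):
    """On timer expiry: move every unfinished entry from the lower queues
    into the highest-priority queue (queues[0])."""
    top = queues[0]
    for q in queues[1:]:
        for entry in [e for e in q if not e.get("done")]:
            q.remove(entry)
            entry["sent"] = 0     # start fresh at the top level (assumption)
            top.append(entry)
```

Without such a reset, a front end with a very large query could be demoted repeatedly and wait indefinitely behind newer, higher-priority queries.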
7. The method for processing query information according to claim 1, wherein before or after determining the first queue in the multi-level feedback queues set by the NVR according to the queue priority and the queue state, the method further comprises:
in a case where a second front end is detected to have accessed the NVR, determining whether to send a second data request to the second front end;
and in a case where it is determined to send the second data request to the second front end, placing the query information of the second front end in a highest-priority queue in the multi-level feedback queues.
8. The method for processing query information according to claim 1, wherein determining the first queue in the multi-level feedback queues according to the queue priority and the queue state comprises:
acquiring a queue priority and a queue state of each level of feedback queue in the multi-level feedback queues, wherein the queue state comprises at least one of the following: an empty state and a non-empty state;
determining queues whose queue state is non-empty in the multi-level feedback queues according to the queue state of each level of feedback queue;
and determining the queue with the highest priority among the queues in the non-empty state, and taking that queue as the first queue.
9. A device for processing query information, comprising:
a first determining module, configured to determine a first queue in multi-level feedback queues set by an NVR according to queue priority and queue state, wherein the first queue is the non-empty queue with the highest priority, and the first queue comprises: query information corresponding to front ends connected to the NVR, a first resource share corresponding to the query information, and a first data volume sent by the front ends in the first queue;
a sending module, configured to determine target query information from the plurality of query information in the first queue according to the first resource share, and send a first data request to a target front end corresponding to the target query information;
a second determining module, configured to determine, when the NVR receives first query data sent by the target front end according to the first data request, a second data amount sent by the target front end in the first queue;
and an updating module, configured to add the target query information to a second queue in a case where the second data volume is greater than a preset data volume of the first queue, wherein the priority of the second queue is lower than the priority of the first queue.
10. A computer-readable storage medium, comprising a stored program, wherein the program is operable to perform the method of any one of claims 1 to 8.
11. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to execute the method of any of claims 1 to 8 by means of the computer program.
CN202210945830.6A 2022-08-08 2022-08-08 Query information processing method and device, storage medium and electronic device Active CN115334010B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210945830.6A CN115334010B (en) 2022-08-08 2022-08-08 Query information processing method and device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN115334010A true CN115334010A (en) 2022-11-11
CN115334010B CN115334010B (en) 2023-08-29

Family

ID=83922555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210945830.6A Active CN115334010B (en) 2022-08-08 2022-08-08 Query information processing method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN115334010B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117098191A (en) * 2023-07-06 2023-11-21 佰路威科技(上海)有限公司 Data stream scheduling control method and related equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020060231A1 (en) * 2018-09-19 2020-03-26 주식회사 맥데이타 Network security monitoring method, network security monitoring device, and system
CN111459651A (en) * 2019-01-21 2020-07-28 珠海格力电器股份有限公司 Load balancing method, device, storage medium and scheduling system
CN113111083A (en) * 2021-03-31 2021-07-13 北京沃东天骏信息技术有限公司 Method, device, equipment, storage medium and program product for data query
CN113596188A (en) * 2021-07-12 2021-11-02 浙江大华技术股份有限公司 Multi-device management method and device
CN114363260A (en) * 2021-11-09 2022-04-15 天津大学 Data flow scheduling method for data center network


Also Published As

Publication number Publication date
CN115334010B (en) 2023-08-29

Similar Documents

Publication Publication Date Title
Ranadheera et al. Computation offloading and activation of mobile edge computing servers: A minority game
CN112165691B (en) Content delivery network scheduling method, device, server and medium
CN110267214A (en) A kind of note transmission method, server and storage medium
CN114286413B (en) TSN network joint routing and stream distribution method and related equipment
CN108268318A (en) A kind of method and apparatus of distributed system task distribution
CN108566242B (en) Spatial information network resource scheduling system for remote sensing data transmission service
CN112019581B (en) Method and device for scheduling task processing entities
CN111506398B (en) Task scheduling method and device, storage medium and electronic device
CN110673948A (en) Cloud game resource scheduling method, server and storage medium
CN111176840B (en) Distribution optimization method and device for distributed tasks, storage medium and electronic device
CN110633143A (en) Cloud game resource scheduling method, server and storage medium
CN111143036A (en) Virtual machine resource scheduling method based on reinforcement learning
Maia et al. A multi-objective service placement and load distribution in edge computing
CN114867065A (en) Base station computing force load balancing method, equipment and storage medium
CN111127154A (en) Order processing method, device, server and nonvolatile storage medium
CN115334010A (en) Query information processing method and device, storage medium and electronic device
CN107154915A (en) The method of defending distributed refusal service DDoS attack, apparatus and system
Glazebrook et al. On the optimal allocation of service to impatient tasks
CN113849302A (en) Task execution method and device, storage medium and electronic device
CN113242149B (en) Long connection configuration method, apparatus, device, storage medium, and program product
CN116896550B (en) Software updating method, system and storage medium for reducing server pressure
EP3625986A1 (en) Devices, systems, and methods for resource allocation of shared spectrum
CN111831452A (en) Task execution method and device, storage medium and electronic device
Zhao et al. Optimizing allocation and scheduling of connected vehicle service requests in cloud/edge computing
CN111353712A (en) Distribution task scheduling method and device, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant