CN113572704A - Information processing method, production end, consumption end and server - Google Patents

Information processing method, production end, consumption end and server

Info

Publication number
CN113572704A
Authority
CN
China
Prior art keywords
information
processing
processing task
shared queue
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010357675.7A
Other languages
Chinese (zh)
Inventor
刘吉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Wodong Tianjun Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN202010357675.7A
Publication of CN113572704A
Legal status: Pending (current)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00: Traffic control in data switching networks
    • H04L47/50: Queue scheduling
    • H04L47/62: Queue scheduling characterised by scheduling criteria
    • H04L47/625: Queue scheduling characterised by scheduling criteria for service slots or service orders

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses an information processing method, a production end, a consumption end and a server, and relates to the field of computer technology. One embodiment of the method comprises: generating a processing task according to first information uploaded by a device; and putting the processing task into a target shared queue, so that a consumption end of the server acquires the processing task from the target shared queue, determines second information of the device according to the processing task, and stores the second information into a database, wherein the target shared queue is selected from at least two preset shared queues. This embodiment can improve the efficiency of processing device information.

Description

Information processing method, production end, consumption end and server
Technical Field
The invention relates to the technical field of computers, in particular to an information processing method, a production end, a consumption end and a server.
Background
When the server re-releases an application, it disconnects its communication connection with the devices. Once a device determines that the connection to the server has been lost, it reconnects to the server within a very short time (for example, 5 seconds) and reports information such as whether it is networked. The server therefore has to process the information reported by a large number of devices within an extremely short time.
In the prior art, only one shared queue is used to process the information reported by devices, and while one production end is operating the shared queue, no other production end or consumption end can operate it.
As a result, the prior art processes device information inefficiently, and processing tasks easily accumulate in the shared queue.
Disclosure of Invention
In view of this, embodiments of the present invention provide an information processing method, a production end, a consumption end, and a server, which can improve efficiency of processing information of a device.
In a first aspect, an embodiment of the present invention provides an information processing method, applied to a production end of a server, including:
generating a processing task according to the first information uploaded by the equipment;
putting the processing task into a target shared queue so that a consumption end of the server acquires the processing task from the target shared queue, determining second information of the equipment according to the processing task, and storing the second information into a database;
the target shared queue is selected from at least two preset shared queues.
Optionally,
the first information includes: any one or more of an encrypted device number, an encrypted device version number, and an encrypted device networking status.
Optionally,
the second information includes: any one or more of a device number, a device version number, and a device networking status.
In a second aspect, an embodiment of the present invention provides an information processing method, applied to a consuming side of a server, including:
determining a target shared queue in at least two preset shared queues;
acquiring a processing task from the target shared queue;
determining second information of the equipment according to the processing task;
and storing the second information into a database.
Optionally,
the target shared queue is a shared queue bound with the consumption end in advance.
Optionally,
the determining second information of the device according to the processing task includes:
adding the processing task into a preset task list;
when the processing tasks in the task list meet a preset trigger condition, aiming at each processing task in the task list: and analyzing the processing task to obtain the second information.
Optionally,
the trigger condition comprises: the number of processing tasks in the task list is equal to a number threshold, and/or the time from the last processing of the processing tasks in the task list exceeds a time threshold.
Optionally,
the storing the second information into a database includes:
storing the parsed second information of at least two devices into the database through a single remote input/output (I/O) operation, wherein the task list comprises at least two of the processing tasks.
In a third aspect, an embodiment of the present invention provides a production end of a server, including:
the generating module is configured to generate a processing task according to the first information uploaded by the equipment;
the execution module is configured to place the processing task into a target shared queue, so that a consuming end of the server obtains the processing task from the target shared queue, determines second information of the device according to the processing task, and stores the second information into a database; the target shared queue is selected from at least two preset shared queues.
In a fourth aspect, an embodiment of the present invention provides a consuming side of a server, including:
a determining module, configured to determine a target shared queue from at least two preset shared queues;
an obtaining module configured to obtain a processing task from the target shared queue;
the analysis module is configured to determine second information of the equipment according to the processing task;
a storage module configured to store the second information in a database.
In a fifth aspect, an embodiment of the present invention provides a server, including: a production end as described in the above embodiments and a consumer end as described in the above embodiments.
In a sixth aspect, an embodiment of the present invention provides an electronic device, including:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of the embodiments described above.
In a seventh aspect, an embodiment of the present invention provides a computer-readable medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method described in any of the above embodiments.
One embodiment of the above invention has the following advantage or benefit: because at least two shared queues are preset, while the current production end is operating the target shared queue, other production ends or consumption ends can operate the other shared queues. The embodiment of the invention can therefore improve the processing efficiency of the first information of devices and avoid the accumulation of processing tasks in a single shared queue. Further effects of the above optional implementations are described below in connection with the specific embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a flow chart of a method for remotely controlling a device according to an embodiment of the present invention;
FIG. 2 is a flow chart of an information processing method provided by the prior art;
FIG. 3 is a flowchart of an information processing method applied to a production side according to an embodiment of the present invention;
FIG. 4 is a flow chart of an information processing method implemented in a thread manner according to an embodiment of the present invention;
fig. 5 is a flowchart of an information processing method applied to a consuming side according to an embodiment of the present invention;
fig. 6 is a flowchart of an information processing method applied to a server according to an embodiment of the present invention;
fig. 7 is a flowchart of another information processing method applied to a server according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a production end provided by an embodiment of the present invention;
FIG. 9 is a schematic diagram of a consumer end provided by an embodiment of the invention;
FIG. 10 is an exemplary system architecture diagram in which embodiments of the present invention may be employed;
fig. 11 is a schematic structural diagram of a computer system suitable for implementing a terminal device or a server according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In an actual application scenario, a user may remotely control a device through a mobile terminal, for example, check whether the device is networked or issue an upgrade command to the device. Information such as whether the device is networked or upgradeable needs to be reported by the device to the server, and the server stores the information in a database, as shown in fig. 1. Through an application installed on the mobile terminal, the user can obtain from the server information such as whether the devices (devices 1-3) are networked and whether they can be upgraded, and this information is displayed to the user.
The devices can be smart home appliances, automatic teller machines and other intelligent devices. The server can be a cloud server or a local server.
After the server adds a new function, the application needs to be re-released. When the server re-releases the application, it disconnects the communication connection with the devices; when a large number of devices determine that the connection to the server has been lost, they reconnect to the server within a very short time and report their related information, such as whether they are networked and their current version numbers. The server therefore needs to process a large amount of information reported by the devices in a very short time and store it in the database. In view of this, how to efficiently process the information reported by devices is an urgent problem to be solved.
In the prior art, only one shared queue is used when processing device information. As shown in fig. 2, the production end and the consumption end may process the information reported by the intelligent devices asynchronously. However, only one party can operate the shared queue at any given time, that is, while the current production end is operating the shared queue, other production ends or consumption ends cannot operate it. The prior art therefore processes the information uploaded by devices inefficiently, and processing tasks easily accumulate in the shared queue.
In view of the above situation, an embodiment of the present invention provides an information processing method applied to a production side of a server, as shown in fig. 3, including the following steps:
step 301: and receiving the first information uploaded by the equipment.
The first information includes: encrypted device number, encrypted device version number, encrypted device networking status, and the like.
Step 302: and generating a processing task according to the first information.
Specifically, the first information is decrypted to obtain a data model, and a processing task is generated according to the data model.
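As an illustration of step 302, the following sketch shows how decrypted first information could be turned into a data model and wrapped in a processing task. It is written in Java, which the embodiment does not prescribe; the pipe-separated payload format, the Base64 decoding that stands in for the unspecified decryption step, and all class names are assumptions of this example only.

    import java.util.Base64;

    public class ProcessingTaskFactory {

        /** Data model recovered from the decrypted first information. */
        public record DeviceDataModel(String deviceNo, String versionNo, boolean networked) {}

        /** A processing task that simply carries the data model to the consumption end. */
        public record ProcessingTask(DeviceDataModel model) {}

        public static ProcessingTask fromFirstInformation(String encryptedFirstInfo) {
            // Base64 decoding stands in for the decryption step, whose cipher is not specified here.
            String plain = new String(Base64.getDecoder().decode(encryptedFirstInfo));
            String[] fields = plain.split("\\|");   // assumed payload: deviceNo|versionNo|networked
            DeviceDataModel model =
                    new DeviceDataModel(fields[0], fields[1], Boolean.parseBoolean(fields[2]));
            return new ProcessingTask(model);
        }
    }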
Step 303: and selecting a target shared queue from at least two preset shared queues.
In the embodiment of the present invention, two or more shared queues may be preset, and the production end may select one of them as the target shared queue. Other production ends or consumption ends may operate the other shared queues, so as to increase the processing speed of the first information.
Step 304: and putting the processing tasks into the target shared queue so that the consumption end acquires the processing tasks from the target shared queue, determining second information of the equipment according to the processing tasks, and storing the second information into the database.
The target shared queue is selected from at least two preset shared queues.
Because at least two sharing queues are preset, when the current production end operates the target sharing queue, other production ends or consumption ends can operate other sharing queues. The embodiment of the invention can improve the processing efficiency of the first information of the equipment and avoid the accumulation of processing tasks in a shared queue.
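A minimal sketch of steps 303-304 is given below, assuming each shared queue is a bounded java.util.concurrent.BlockingQueue and approximating the selection of a free queue by a non-blocking offer() that falls through to the next queue when one cannot accept the task; the class name and the String task type are placeholders for this example.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class ProductionEnd {
        private final List<BlockingQueue<String>> sharedQueues = new ArrayList<>();

        public ProductionEnd(int queueCount, int capacity) {
            // Preset at least two shared queues.
            for (int i = 0; i < queueCount; i++) {
                sharedQueues.add(new ArrayBlockingQueue<>(capacity));
            }
        }

        /** Step 304: put the processing task into the first shared queue that accepts it. */
        public boolean enqueue(String processingTask) {
            for (BlockingQueue<String> queue : sharedQueues) {
                if (queue.offer(processingTask)) {  // returns false immediately if this queue is full
                    return true;
                }
            }
            return false;  // no queue accepted the task; the caller may retry after an interval
        }
    }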
In an actual application scenario, as shown in fig. 4, the method may be implemented by a production thread, that is, the method of the embodiment of the present invention can be implemented when the production thread is executed by a processor.
In an embodiment of the present invention, selecting a target shared queue from at least two preset shared queues includes:
determining whether a plurality of free shared queues exist in at least two shared queues, and if so, selecting a target shared queue from the plurality of free shared queues.
A free shared queue refers to a shared queue that has no producer or consumer operations. In the embodiment of the present invention, any one shared queue can be selected from the free shared queues as the target shared queue.
In an actual application scenario, the priority order of the shared queues may be preset, and the target shared queue may be selected according to this order. For example, if the priority order of shared queues A, B, C and D is B, C, D, A, and the queues that are currently free are C, D and A, then C, the free shared queue that ranks highest in the priority order, is selected as the target shared queue.
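The priority-based selection described above could look like the sketch below; the SharedQueue interface and its isFree() predicate are assumptions introduced for the example, where "free" means that no production end or consumption end is currently operating the queue.

    import java.util.List;
    import java.util.Optional;

    public class PrioritySelector {

        public interface SharedQueue {
            boolean isFree();
            String name();
        }

        /** Queues are passed in priority order (e.g. B, C, D, A); the first free one is chosen. */
        public static Optional<SharedQueue> selectTarget(List<SharedQueue> queuesByPriority) {
            return queuesByPriority.stream()
                    .filter(SharedQueue::isFree)
                    .findFirst();
        }
    }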
In an embodiment of the present invention, a production end may also be bound to a shared queue in advance, for example, production end 1 is bound to shared queue A, production end 2 is bound to shared queue B, and production end 3 is bound to shared queue C. Production end 1 then puts its processing tasks into shared queue A according to this binding relationship.
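The pre-binding variant can be realized with a simple lookup table, as sketched below; the map from production-end identifier to queue is an assumption of the example, configured once at start-up.

    import java.util.Map;
    import java.util.concurrent.BlockingQueue;

    public class ProducerQueueBinding {
        private final Map<String, BlockingQueue<String>> bindings;

        public ProducerQueueBinding(Map<String, BlockingQueue<String>> bindings) {
            this.bindings = bindings;  // e.g. "producer-1" -> shared queue A, "producer-2" -> shared queue B
        }

        /** Each production end always puts its tasks into the queue it was bound to. */
        public void enqueue(String producerId, String processingTask) throws InterruptedException {
            bindings.get(producerId).put(processingTask);
        }
    }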
As shown in fig. 5, an embodiment of the present invention provides an information processing method, applied to a consuming side, including:
step 501: and determining a target shared queue in at least two preset shared queues.
Similar to the producer, the consumer may select a target shared queue from the free shared queues.
Step 502: and acquiring the processing task from the target shared queue.
Step 503: and determining second information of the equipment according to the processing task.
Specifically, the consuming side analyzes the processing task to obtain second information of the device, where the second information includes: device number, device version number, device networking status, etc.
Step 504: and storing the second information into a database.
The consumer may store the second information in the database via a database remote I/O operation.
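Steps 501-504 could be implemented roughly as follows, assuming the consumption end is bound to one shared queue, that tasks carry the pipe-separated payload assumed earlier, and that SecondInfoDao.save() performs the remote database I/O; all of these names are placeholders rather than parts of the application.

    import java.util.concurrent.BlockingQueue;

    public class ConsumptionEnd implements Runnable {

        public interface SecondInfoDao {
            void save(String deviceNo, String versionNo, boolean networked);
        }

        private final BlockingQueue<String> boundQueue;  // target shared queue (step 501)
        private final SecondInfoDao dao;

        public ConsumptionEnd(BlockingQueue<String> boundQueue, SecondInfoDao dao) {
            this.boundQueue = boundQueue;
            this.dao = dao;
        }

        @Override
        public void run() {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    String task = boundQueue.take();                  // step 502: obtain a processing task
                    String[] second = task.split("\\|");              // step 503: parse the second information
                    dao.save(second[0], second[1],
                            Boolean.parseBoolean(second[2]));         // step 504: store it in the database
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }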
In the embodiment of the present invention, since at least two shared queues are preset, when the current production end operates the target shared queue, other production ends or consumption ends may operate other shared queues. The embodiment of the invention can improve the processing efficiency of the first information of the equipment and avoid the accumulation of processing tasks in a shared queue.
As shown in fig. 4, in a practical application scenario, the method may be implemented by a consuming thread, that is, the consuming thread can implement the method of the embodiment of the present invention when being executed by a processor.
In one embodiment of the invention, the target shared queue is a shared queue bound with the consumer in advance.
In the embodiment of the invention, the consumption end acquires processing tasks from the target shared queue bound to it. Because each consumption end is bound to its own shared queue, different consumption ends are prevented from operating the same shared queue, which improves the execution efficiency of the processing tasks.
In one embodiment of the present invention, determining the second information of the device according to the processing task includes:
adding a processing task into a preset task list;
when the processing tasks in the task list meet the preset triggering conditions, aiming at each processing task in the task list: and analyzing the processing task to obtain second information.
In order to reduce the number of times of performing remote I/O operations on the database, the embodiment of the present invention adds the processing task to the task list instead of immediately processing the processing task, and performs unified processing on the processing tasks in the task list when the processing tasks in the task list satisfy the trigger condition.
In one embodiment of the invention, the analyzed second information of at least two devices is stored in a database through one remote I/O operation; wherein, the task list includes: at least two processing tasks.
The embodiment of the invention can store the second information of a plurality of devices through one-time remote I/O operation, thereby greatly improving the efficiency of processing the second information by the consumption end.
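One possible realization of the single remote I/O operation is a JDBC batch insert, sketched below; the device_status table and column names are hypothetical, and the embodiment does not fix the storage technology. How many network round trips the batch actually costs depends on the database driver.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    public class SecondInfoBatchWriter {

        public record SecondInfo(String deviceNo, String versionNo, boolean networked) {}

        /** Persist the second information of several devices with one batched statement. */
        public static void writeAll(Connection connection, List<SecondInfo> infos) throws SQLException {
            String sql = "INSERT INTO device_status (device_no, version_no, networked) VALUES (?, ?, ?)";
            try (PreparedStatement stmt = connection.prepareStatement(sql)) {
                for (SecondInfo info : infos) {
                    stmt.setString(1, info.deviceNo());
                    stmt.setString(2, info.versionNo());
                    stmt.setBoolean(3, info.networked());
                    stmt.addBatch();       // buffer the row locally
                }
                stmt.executeBatch();       // send all buffered rows in one batched call
            }
        }
    }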
In one embodiment of the present invention, the trigger condition may be: the number of processing tasks in the task list is equal to the number threshold.
In one embodiment of the present invention, the trigger condition may also be: the time elapsed since the processing tasks in the task list were last processed exceeds a time threshold.
In one embodiment of the present invention, the trigger condition may also be: the number of processing tasks in the task list is equal to the number threshold, and the time elapsed since the processing tasks in the task list were last processed exceeds the time threshold.
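The task-list buffering and its trigger conditions can be sketched as below; countThreshold and timeThresholdMillis are assumed configuration values, the sketch uses the "or" combination of the two conditions, and BatchWriter.writeAll() stands in for the single remote I/O described above. A production implementation would typically also flush on a timer, since this sketch only evaluates the time condition when a new task arrives.

    import java.util.ArrayList;
    import java.util.List;

    public class BatchingTaskList {

        public interface BatchWriter {
            void writeAll(List<String> secondInfos);
        }

        private final List<String> taskList = new ArrayList<>();  // preset task list
        private final int countThreshold;
        private final long timeThresholdMillis;
        private final BatchWriter writer;
        private long lastFlushMillis = System.currentTimeMillis();

        public BatchingTaskList(int countThreshold, long timeThresholdMillis, BatchWriter writer) {
            this.countThreshold = countThreshold;
            this.timeThresholdMillis = timeThresholdMillis;
            this.writer = writer;
        }

        /** Add a processing task and flush when a trigger condition is met. */
        public synchronized void add(String processingTask) {
            taskList.add(processingTask);
            if (triggered()) {
                flush();
            }
        }

        private boolean triggered() {
            boolean countHit = taskList.size() >= countThreshold;
            boolean timeHit = System.currentTimeMillis() - lastFlushMillis >= timeThresholdMillis;
            return countHit || timeHit;
        }

        private void flush() {
            // In this sketch the task strings themselves stand in for the parsed second information.
            writer.writeAll(new ArrayList<>(taskList));  // one remote I/O for all buffered tasks
            taskList.clear();
            lastFlushMillis = System.currentTimeMillis();
        }
    }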
As shown in fig. 6, the embodiment of the present invention takes a server formed by a production end and a consumption end as an example to describe in detail an information processing method, where the method includes:
step 601: the production end receives first information uploaded by the intelligent equipment.
As shown in fig. 7, production end 1 receives the first information uploaded by intelligent device 1, production end 2 receives the first information uploaded by intelligent device 2, and production end 3 receives the first information uploaded by intelligent device 3.
step 602: and the production end decrypts the first information to obtain the data model.
Step 603: and the production end generates a processing task according to the data model.
The production side 1 generates a processing task 1, the production side 2 generates a processing task 2, and the production side 3 generates a processing task 3.
Step 604: the production end determines whether any of the three preset shared queues are idle; if so, step 605 is executed; otherwise, step 604 is executed again after a specified time interval.
Step 605: the production end selects a target shared queue from a plurality of idle shared queues.
Step 606: and the production end puts the processing task into a target sharing queue.
The shared queues 1 and 2 are idle, the production end 1 puts the processing task 1 into the shared queue 1, and the production end 2 puts the processing task 2 into the shared queue 2. Since the consumer 3 is operating the shared queue 3, the producer 3 re-executes step 604 after a specified time interval.
Step 607: the consumer determines a target shared queue to which the consumer is bound from among the three shared queues.
The consumption end 1 is bound with the shared queue 1, the consumption end 2 is bound with the shared queue 2, and the consumption end 3 is bound with the shared queue 3.
Step 608: and when the target shared queue bound with the consumption end is idle, the consumption end acquires the processing task from the target shared queue.
Step 609: and the consumption end adds the processing task into a preset task list.
The consumption end acquires processing tasks from its bound shared queue by polling and adds them to the task list.
Step 610: when the number of processing tasks in the task list is equal to the number threshold, the consuming side, for each processing task in the task list: and analyzing the processing task to obtain second information.
Step 611: the consumption end stores the analyzed second information of the three intelligent devices into a database through one-time remote I/O operation; wherein, the task list includes: three processing tasks.
In fig. 7, consumption end 1 is persistently storing the second information of the three intelligent devices into the database through the remote I/O operation. Since shared queue 2, to which consumption end 2 is bound, is being operated by production end 2, consumption end 2 can try again to obtain processing tasks from shared queue 2 after a certain time interval. Consumption end 3 is obtaining processing tasks from shared queue 3.
In the embodiment of the present invention, three different parties may operate the shared queues at the same time; taking fig. 7 as an example, two production ends and one consumption end operate the three shared queues respectively. By adding shared queues, different production ends can concurrently process the first information of different intelligent devices, and different consumption ends can concurrently execute processing tasks or remote I/O operations. The embodiment of the invention can therefore improve information processing efficiency and avoid the accumulation of processing tasks in a shared queue.
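For illustration, the runnable sketch below wires three bounded queues, three production threads and three consumption threads in the spirit of fig. 7; the queue capacity, the number of tasks and the printed message are arbitrary choices of the example, and each consumption thread exits after its bound queue has been quiet for two seconds.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class SharedQueueDemo {
        public static void main(String[] args) {
            int queueCount = 3;
            List<BlockingQueue<String>> queues = new ArrayList<>();
            for (int i = 0; i < queueCount; i++) {
                queues.add(new ArrayBlockingQueue<>(1024));
            }

            ExecutorService pool = Executors.newFixedThreadPool(queueCount * 2);

            // Consumption threads: each is bound to exactly one shared queue.
            for (BlockingQueue<String> queue : queues) {
                pool.submit(() -> {
                    try {
                        String task;
                        while ((task = queue.poll(2, TimeUnit.SECONDS)) != null) {
                            System.out.println("stored second information for " + task);
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }

            // Production threads: each puts a few processing tasks into "its" queue,
            // so all three queues are operated concurrently.
            for (int i = 0; i < queueCount; i++) {
                BlockingQueue<String> queue = queues.get(i);
                int producerId = i + 1;
                pool.submit(() -> {
                    try {
                        for (int n = 0; n < 5; n++) {
                            queue.put("device-" + producerId + "-report-" + n);
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }

            pool.shutdown();
        }
    }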
As shown in fig. 8, an embodiment of the present invention provides a production end of a server, including:
a generating module 801 configured to generate a processing task according to the first information uploaded by the device;
an execution module 802, configured to place the processing task in a target shared queue, so that a consuming end of the server obtains the processing task from the target shared queue, determines second information of the device according to the processing task, and stores the second information in a database; the target shared queue is selected from at least two preset shared queues.
In an embodiment of the present invention, the first information includes: any one or more of an encrypted device number, an encrypted device version number, and an encrypted device networking status.
In an embodiment of the present invention, the second information includes: any one or more of a device number, a device version number, and a device networking status.
As shown in fig. 9, an embodiment of the present invention provides a consuming end, including:
a determining module 901, configured to determine a target shared queue in at least two preset shared queues;
an obtaining module 902 configured to obtain a processing task from a target shared queue;
the analysis module 903 is configured to determine second information of the device according to the processing task;
a storage module 904 configured to store the second information in a database.
In one embodiment of the invention, the target shared queue is a shared queue bound with the consumer in advance.
In an embodiment of the present invention, the parsing module 903 is configured to add a processing task to a preset task list; when the processing tasks in the task list meet the preset triggering conditions, aiming at each processing task in the task list: and analyzing the processing task to obtain second information.
In one embodiment of the invention, the trigger condition comprises: the number of processing tasks in the task list equals a number threshold and/or the time to most recently process a processing task in the task list exceeds a time threshold.
In an embodiment of the present invention, the storage module 904 is configured to store the parsed second information of the at least two devices in a database through one remote I/O operation; wherein, the task list includes: at least two processing tasks.
An embodiment of the present invention provides an electronic device, including:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method of any of the embodiments described above.
Embodiments of the present invention provide a computer-readable medium, on which a computer program is stored, which when executed by a processor implements the method of any of the above embodiments.
Fig. 10 shows an exemplary system architecture 1000 of an information processing method or a production side or a consumption side to which an embodiment of the present invention can be applied.
As shown in fig. 10, the system architecture 1000 may include terminal devices 1001, 1002, 1003, a network 1004, and a server 1005. The network 1004 is used to provide a medium for communication links between the terminal devices 1001, 1002, 1003 and the server 1005. Network 1004 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A user may use the terminal devices 1001, 1002, 1003 to interact with the server 1005 via the network 1004 to receive or send messages and the like. Various client applications, such as shopping applications, web browser applications, search applications, instant messaging tools, mailbox clients and social platform software (by way of example only), may be installed on the terminal devices 1001, 1002, 1003.
The terminal devices 1001, 1002, 1003 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 1005 may be a server that provides various services, such as a backend management server (for example only) that supports shopping websites browsed by users using the terminal devices 1001, 1002, 1003. The backend management server may analyze and perform other processing on the received data such as the product information query request, and feed back a processing result (for example, target push information, product information — just an example) to the terminal device.
It should be noted that the information processing method provided by the embodiment of the present invention is generally executed by the server 1005.
It should be understood that the number of terminal devices, networks, and servers in fig. 10 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 11, shown is a block diagram of a computer system 1100 suitable for use with a terminal device implementing an embodiment of the present invention. The terminal device shown in fig. 11 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 11, the computer system 1100 includes a Central Processing Unit (CPU)1101, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)1102 or a program loaded from a storage section 1108 into a Random Access Memory (RAM) 1103. In the RAM 1103, various programs and data necessary for the operation of the system 1100 are also stored. The CPU 1101, ROM 1102, and RAM 1103 are connected to each other by a bus 1104. An input/output (I/O) interface 1105 is also connected to bus 1104.
The following components are connected to the I/O interface 1105: an input portion 1106 including a keyboard, mouse, and the like; an output portion 1107 including a signal output unit such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage section 1108 including a hard disk and the like; and a communication section 1109 including a network interface card such as a LAN card, a modem, or the like. The communication section 1109 performs communication processing via a network such as the internet. A driver 1110 is also connected to the I/O interface 1105 as necessary. A removable medium 1111 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 1110 as necessary, so that a computer program read out therefrom is mounted into the storage section 1108 as necessary.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 1109 and/or installed from the removable medium 1111. The above-described functions defined in the system of the present invention are executed when the computer program is executed by a Central Processing Unit (CPU) 1101.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor includes a sending module, an obtaining module, a determining module, and a first processing module. The names of these modules do not form a limitation on the modules themselves in some cases, and for example, the sending module may also be described as a "module sending a picture acquisition request to a connected server".
As another aspect, the present invention also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device.
The computer readable medium carries one or more programs which, when executed by a device, cause the device to:
generating a processing task according to the first information uploaded by the equipment;
putting the processing task into a target shared queue so that a consumption end of the server acquires the processing task from the target shared queue, determining second information of the equipment according to the processing task, and storing the second information into a database;
the target shared queue is selected from at least two preset shared queues.
The computer readable medium carries one or more programs which, when executed by a device, cause the device to:
determining a target shared queue in at least two preset shared queues;
acquiring a processing task from the target shared queue;
determining second information of the equipment according to the processing task;
and storing the second information into a database.
According to the technical scheme of the embodiment of the invention, at least two shared queues are preset, so that while the current production end is operating the target shared queue, other production ends or consumption ends can operate the other shared queues. The embodiment of the invention can therefore improve the processing efficiency of the first information of the terminal devices and avoid the accumulation of processing tasks in a single shared queue.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (12)

1. An information processing method is applied to a production end of a server, and comprises the following steps:
generating a processing task according to the first information uploaded by the equipment;
putting the processing task into a target shared queue so that a consumption end of the server acquires the processing task from the target shared queue, determining second information of the equipment according to the processing task, and storing the second information into a database;
the target shared queue is selected from at least two preset shared queues.
2. The method of claim 1,
the first information includes: any one or more of encrypted device number, encrypted device version number, and encrypted device networking status;
and/or,
the second information includes: any one or more of a device number, a device version number, and a device networking status.
3. An information processing method, applied to a consumption end of a server, includes:
determining a target shared queue in at least two preset shared queues;
acquiring a processing task from the target shared queue;
determining second information of the equipment according to the processing task;
and storing the second information into a database.
4. The method of claim 3,
the target shared queue is a shared queue bound with the consumption end in advance.
5. The method of claim 3,
the determining second information of the device according to the processing task includes:
adding the processing task into a preset task list;
when the processing tasks in the task list meet a preset trigger condition, aiming at each processing task in the task list: and analyzing the processing task to obtain the second information.
6. The method of claim 5,
the trigger condition comprises: the number of processing tasks in the task list is equal to a number threshold, and/or the time from the last processing of the processing tasks in the task list exceeds a time threshold.
7. The method of claim 5 or 6,
the storing the second information into a database includes:
storing the analyzed second information of the at least two devices into the database through one-time remote input/output (I/O) operation; wherein, the task list comprises: at least two of the processing tasks.
8. A production side of a server, comprising:
the generating module is configured to generate a processing task according to the first information uploaded by the equipment;
the execution module is configured to place the processing task into a target shared queue, so that a consuming end of the server obtains the processing task from the target shared queue, determines second information of the device according to the processing task, and stores the second information into a database; the target shared queue is selected from at least two preset shared queues.
9. A consuming side of a server, comprising:
a determining module, configured to determine a target shared queue from at least two preset shared queues;
an obtaining module configured to obtain a processing task from the target shared queue;
the analysis module is configured to determine second information of the equipment according to the processing task;
a storage module configured to store the second information in a database.
10. A server, comprising: the production end of claim 8 and the consumption end of claim 9.
11. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
12. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-7.
CN202010357675.7A 2020-04-29 2020-04-29 Information processing method, production end, consumption end and server Pending CN113572704A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010357675.7A CN113572704A (en) 2020-04-29 2020-04-29 Information processing method, production end, consumption end and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010357675.7A CN113572704A (en) 2020-04-29 2020-04-29 Information processing method, production end, consumption end and server

Publications (1)

Publication Number Publication Date
CN113572704A true CN113572704A (en) 2021-10-29

Family

ID=78157779

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010357675.7A Pending CN113572704A (en) 2020-04-29 2020-04-29 Information processing method, production end, consumption end and server

Country Status (1)

Country Link
CN (1) CN113572704A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105978968A (en) * 2016-05-11 2016-09-28 山东合天智汇信息技术有限公司 Real-time transmission processing method, server and system of mass data
CN106201676A (en) * 2016-06-28 2016-12-07 浪潮软件集团有限公司 Task allocation method and device
CN107515795A (en) * 2017-09-08 2017-12-26 北京京东尚科信息技术有限公司 Multi-task parallel data processing method, device, medium and equipment based on queue
CN110765167A (en) * 2019-10-23 2020-02-07 泰康保险集团股份有限公司 Policy data processing method, device and equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114979100A (en) * 2022-04-15 2022-08-30 深信服科技股份有限公司 Cloud resource checking method and related device
CN114979100B (en) * 2022-04-15 2024-02-23 深信服科技股份有限公司 Cloud resource inspection method and related device

Similar Documents

Publication Publication Date Title
CN107844324B (en) Client page jump processing method and device
US20130326502A1 (en) Installing applications remotely
CN109995801B (en) Message transmission method and device
CN110321252B (en) Skill service resource scheduling method and device
CN111427701A (en) Workflow engine system and business processing method
CN112653614A (en) Request processing method and device based on message middleware
CN111831461A (en) Method and device for processing business process
CN113672357A (en) Task scheduling method, device and system
CN112084042A (en) Message processing method and device
CN112685481B (en) Data processing method and device
CN110795328A (en) Interface testing method and device
CN113572704A (en) Information processing method, production end, consumption end and server
CN112948138A (en) Method and device for processing message
CN106933449B (en) Icon processing method and device
CN113779122B (en) Method and device for exporting data
CN114896244A (en) Method, device and equipment for configuring database table and computer readable medium
CN114417318A (en) Third-party page jumping method and device and electronic equipment
CN112306791B (en) Performance monitoring method and device
CN113141403A (en) Log transmission method and device
CN108811036B (en) Method and apparatus for displaying wireless access point information
CN113282455A (en) Monitoring processing method and device
CN113779018A (en) Data processing method and device
CN109213815B (en) Method, device, server terminal and readable medium for controlling execution times
CN112749204A (en) Method and device for reading data
CN113626176A (en) Service request processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination