CN114900565A - Method for improving stability and concurrent processing capability of Socket server of embedded platform - Google Patents


Info

Publication number
CN114900565A
CN114900565A (application CN202210454573.6A)
Authority
CN
China
Prior art keywords
service
request
client
sub
concurrent processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210454573.6A
Other languages
Chinese (zh)
Inventor
文冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Zhongke Shangyuan Technology Co Ltd
Original Assignee
Nanjing Zhongke Shangyuan Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Zhongke Shangyuan Technology Co Ltd filed Critical Nanjing Zhongke Shangyuan Technology Co Ltd
Priority to CN202210454573.6A priority Critical patent/CN114900565A/en
Publication of CN114900565A publication Critical patent/CN114900565A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/16: Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H04L69/161: Implementation details of TCP/IP or UDP/IP stack architecture; Specification of modified or new header fields
    • H04L69/162: Implementation details of TCP/IP or UDP/IP stack architecture; Specification of modified or new header fields involving adaptations of sockets based mechanisms
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/54: Interprogram communication
    • G06F9/546: Message passing systems or structures, e.g. queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses a concurrent processing method for an embedded-platform server, which calls the Linux system APIs socket to create a socket, bind to bind the service port number, accept to receive a request, and fork to create a sub-process that handles the request; a redirection handler runs inside each forked sub-process, and several forked sub-processes run simultaneously to achieve concurrent processing; once a forked sub-process finishes its redirection handler, the sub-process is released, completing one full request-processing flow. The method presets the sub-service processes, avoiding frequent memory copies and process creation and release while the program is running; the code logic is simple, with no extra thread-management or event-dispatch code; and it remains compatible with the original service: the extended sub-services and the original service coexist independently, so even if an extended service fails, the original service still runs normally.

Description

Method for improving stability and concurrent processing capability of Socket server of embedded platform
Technical Field
The invention relates to the technical field of back-end concurrent service processing, and in particular to a concurrent processing method for an embedded-platform server.
Background
At present, common software architectures include the C/S (client/server) and B/S (browser/server) architectures, both implemented over sockets. A typical server must support many C-side or B-side access requests at the same time, so it needs concurrent processing capability. Common server-side concurrency schemes are: sub-processes, sub-threads, pooling (process pools or thread pools), and asynchronous event handling.
The disadvantages of the above solutions:
A child process copies the parent process's memory; frequent copying drives up CPU utilization, and creating and releasing processes also consumes resources. CPU resources on an embedded platform are relatively scarce, so the high CPU usage caused by this scheme can disturb normal service.
Multiple threads share the parent process's memory space, so CPU usage is lower than with child processes, but the applicable scenarios are limited (any scenario requiring memory isolation is excluded), an error in one thread often causes errors in other threads, and frequent thread creation and release still consumes extra resources.
Pooling mainly solves the frequent creation and release of threads, but it increases code complexity and lowers reliability: frequent thread switching can cause thread jitter, and too many threads make the server unstable.
Asynchronous event handling is efficient, but scheduling among events is complex, so it is best suited to projects built from scratch.
Disclosure of Invention
This section is for the purpose of summarizing some aspects of embodiments of the invention and to briefly introduce some preferred embodiments. In this section, as well as in the abstract and the title of the invention of this application, simplifications or omissions may be made to avoid obscuring the purpose of the section, the abstract and the title, and such simplifications or omissions are not intended to limit the scope of the invention.
The present invention has been made in view of the above-mentioned conventional problems.
In order to solve these technical problems, the invention provides the following technical scheme: call the Linux system API socket to create a socket, bind to bind the service port number, accept to receive a request, and fork to create a sub-process to process the request; run a redirection handler inside the forked sub-process, with several forked sub-processes running simultaneously to achieve concurrent processing; and after the forked sub-process finishes the redirection handler, release the sub-process, completing one full request-processing flow.
As a preferred scheme of the concurrent processing method based on the embedded-platform server described in the present invention, the method further comprises:
performing loop iteration, wherein an API accept receives a request, and an API fork creates a sub-process to process the request;
continuously receiving requests from the client;
when the request interval time of the client is less than the time required by service processing, a plurality of fork sub-processes exist, and the client is in a concurrent processing state.
As a preferred scheme of the concurrent processing method based on the embedded-platform server described in the present invention: adding a virtual network-card driver, namely the virtual network card ed0;
when the program starts, several sub-processes are preset, each corresponding to one listening port on network card ed0, while the original service remains bound to port 3990 on network card eth1.
As a preferred scheme of the concurrent processing method based on the embedded-platform server described in the present invention: setting up a port-mapping function module that captures client access-request packets using the Netfilter hook mechanism;
replacing the original port number 3990 with an extended service port and, as the TCP/UDP protocol requires, recomputing the TCP or UDP header checksum after modifying the port;
and replacing the value of skb->dev, changing it from eth1 to the ed0 interface, then resubmitting the skb to the system protocol stack so that the preset sub-service process bound to ed0 receives the request.
As a preferred scheme of the concurrent processing method based on the embedded-platform server described in the present invention: presetting a service process that receives the client's request packet from ed0; the service subroutine responds to the request according to the business-processing logic and sends the generated response packet out through ed0;
and after capturing the response packet addressed to the client, the port-mapping module restores the extended port number in the packet to 3990 and calls the eth1 send interface to deliver the packet to the client.
As a preferred scheme of the concurrent processing method based on the embedded-platform server described in the present invention: the port-mapping module is a kernel module; compiling its code yields a .ko kernel driver, which is loaded with insmod.
As a preferred scheme of the concurrent processing method based on the embedded-platform server described in the present invention: after the .ko module starts, it uses the Netfilter hook mechanism of the Linux system to capture packets from clients accessing the Web Portal service (service port number 3990), and the port mapping is derived from the client's MAC address.
As a preferred scheme of the concurrent processing method based on the embedded-platform server described in the present invention: when the captured client request packet's destination port number is 3990, the last byte of the client MAC address is taken modulo the number of extended ports; adding 3990 to the result gives the replacement port, which determines which preset service process handles the request.
As a preferred scheme of the concurrent processing method based on the embedded-platform server described in the present invention: if 32 service ports are extended and the currently captured client MAC address is 48-51-C5-70-9B-49, the replacement service port number is 3999 and the client request will be handled by the preset sub-service bound to 3999, calculated as follows:
0x49 % 32 + 3990 = 3999
As a preferred scheme of the concurrent processing method based on the embedded-platform server described in the present invention: an skb represents a packet in the Linux kernel code, and skb->dev records which network card the packet came from; without the replacement, the application cannot receive the packet.
The beneficial effects of the invention are: first, presetting sub-service processes avoids frequent memory copies and process creation and release while the program is running; second, the code logic is simple, with no extra thread-management or event-dispatch code; third, the original service remains compatible: the extended sub-services and the original service coexist independently, and even if an extended service fails, the original service can still run normally.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise. Wherein:
fig. 1 is a schematic flowchart of a concurrent processing method based on an embedded platform server according to a first embodiment of the present invention;
fig. 2 is a schematic view of a Web Portal service processing flow of the concurrent processing method based on the embedded platform server according to the first embodiment of the present invention;
fig. 3 is a schematic diagram of a flow of a sub-process for calling a concurrent processing method based on an embedded platform server according to a first embodiment of the present invention;
fig. 4 is a diagram illustrating preset service processing of a concurrent processing method based on an embedded platform server according to a second embodiment of the present invention;
fig. 5 is a schematic diagram of a test architecture of a concurrent processing method based on an embedded platform server according to a second embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, specific embodiments accompanied with figures are described in detail below, and it is apparent that the described embodiments are a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making creative efforts based on the embodiments of the present invention, shall fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described and will be readily apparent to those of ordinary skill in the art without departing from the spirit of the present invention, and therefore the present invention is not limited to the specific embodiments disclosed below.
Furthermore, reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
The present invention will be described in detail with reference to the drawings, wherein the cross-sectional views illustrating the structure of the device are not enlarged partially in general scale for convenience of illustration, and the drawings are only exemplary and should not be construed as limiting the scope of the present invention. In addition, the three-dimensional dimensions of length, width and depth should be included in the actual fabrication.
Meanwhile, in the description of the present invention, it should be noted that the terms "upper, lower, inner and outer" and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of describing the present invention and simplifying the description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation and operate, and thus, cannot be construed as limiting the present invention. Furthermore, the terms first, second, or third are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The terms "mounted, connected and connected" in the present invention are to be understood broadly, unless otherwise explicitly specified or limited, for example: can be fixedly connected, detachably connected or integrally connected; they may be mechanically, electrically, or directly connected, or indirectly connected through intervening media, or may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Example 1
Referring to fig. 1, fig. 2, and fig. 3, a first embodiment of the present invention provides a concurrent processing method based on an embedded platform server, which specifically includes:
s1: calling the Linux system API socket to create a socket.
S2: and calling the Linux system API bind to bind the service port number.
S3: and calling the Linux system API accept to receive the request.
S4: calling Linux system API fork to create a sub-process to process the request.
S5: the redirection processing program is operated in the sub-process in the fork, and a plurality of fork sub-processes can be operated simultaneously, so that the purpose of concurrent processing is achieved.
S6: after the subprocess in the Fork runs the redirection processing program, the subprocess is released, and a complete request processing flow is completed.
S3 to S6 are in a loop, continuously receive requests from the client, and give the requests to S4 for processing, when the interval time of the client request is less than the time required by the service processing, there will be multiple fork sub-processes, and the client is in a concurrent processing state at this time.
The invocation code is reproduced as an image in the publication.
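Since the publication shows that code only as an image, the following is a hedged reconstruction: a minimal user-space sketch of the S1 to S6 flow. The helper names and the error handling are my own, not taken from the patent.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* S1 + S2: create a TCP socket and bind it to the given port.
 * Port 0 asks the kernel for any free port, which keeps tests hermetic. */
static int create_listener(uint16_t port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);            /* S1: API socket */
    if (fd < 0) return -1;
    int one = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||  /* S2: API bind */
        listen(fd, 16) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

/* S3 to S6: accept requests in a loop; each request is handled inside a
 * fork()ed child, so several children can run concurrently. */
static void serve(int listen_fd) {
    for (;;) {
        int conn = accept(listen_fd, NULL, NULL);        /* S3: API accept */
        if (conn < 0) continue;
        pid_t pid = fork();                              /* S4: API fork */
        if (pid == 0) {
            close(listen_fd);
            /* S5: the redirection handler would run on `conn` here */
            close(conn);
            _exit(0);                                    /* S6: child released */
        }
        close(conn);                                     /* parent loops back to S3 */
        while (waitpid(-1, NULL, WNOHANG) > 0)           /* reap finished children */
            ;
    }
}
```

When the interval between client requests is shorter than one request's service time, several forked children exist at once, which is the concurrent state the patent describes.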
Referring to fig. 1, each accepted request corresponds to one service-handling process, achieving concurrent processing.
Referring to fig. 2, the present embodiment further provides a Web Portal service processing unit, which is responsible for processing a client request and sending a response packet, where a 3990 port is a default Web Portal service port, a 3991-xxxxx port is an extended port number in the present embodiment and is also used for the Web Portal service port, and a virtual interface ed0, where the present embodiment desires to improve concurrency performance, and at the same time, an original software architecture is not changed, so that the extended service port is bound to the virtual interface ed 0.
Port mapping then maps the original port 3990 to an extended service port number (3991-xxxx) according to a fixed rule.
Specifically, the method comprises the following steps:
(1) Add a virtual network-card driver, namely the virtual network card ed0. (The driver code is reproduced as an image in the publication.)
(2) When the program starts, several sub-processes are preset (the exact number can be chosen according to the service scale); each sub-process corresponds to one listening port on network card ed0, while the original service remains bound to port 3990 on network card eth1.
Further, to ensure that the scheme of this embodiment never performs worse than the baseline, the service sub-processes in fig. 2 retain the ability to fork sub-processes of their own: although requests from different clients are distributed to different service sub-processes, in theory many requests could still concentrate on one sub-process. Statistics over client MAC addresses and actual tests show the probability of this situation is small.
(3) Set up a port-mapping function module and capture client access-request packets using the Netfilter hook mechanism.
Specifically, the port-mapping module is a kernel module; compiling the code yields a .ko kernel driver, which is loaded with insmod. Once the module starts, it uses the Netfilter hook mechanism of the Linux system to capture packets from clients accessing the Web Portal service (service port number 3990), and the port mapping is derived from the client's MAC address.
Suppose the module captures a client request packet whose destination port number is 3990. The last byte of the client MAC address is taken modulo the number of extended ports (the remainder operation spreads different MAC addresses discretely; the invention uses it to map requests from different MAC addresses to different port numbers and avoid concentrating many requests on one port). Adding 3990 to the result gives the replacement port, which determines which preset service process handles the request.
Still further, if 32 service ports are extended and the currently captured client MAC address is 48-51-C5-70-9B-49, the replacement service port number is 3999, and the client request is handled by the preset sub-service bound to 3999, calculated as follows:
0x49 % 32 + 3990 = 3999
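The remainder mapping just described is small enough to state directly in code; a sketch, with a function name of my own choosing:

```c
#include <stdint.h>

/* Choose a preset service port for a client: take the last byte of the
 * client's MAC address modulo the number of extended ports and offset it
 * from the original Web Portal port 3990, reproducing the text's example
 * 0x49 % 32 + 3990 = 3999. */
static uint16_t map_port(uint8_t mac_last_byte, unsigned nports) {
    return (uint16_t)(mac_last_byte % nports + 3990u);
}
```

Because the last MAC byte is close to uniformly distributed across clients, the modulo spreads requests roughly evenly over the preset sub-services.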
(4) Replace the original port number 3990 with the extended service port computed in step (3); per the TCP/UDP protocol requirements, only the TCP or UDP header checksum needs to be recomputed after modifying the port, so performance is still guaranteed.
(5) Replace the value of skb->dev from step (4), changing it from eth1 to the ed0 interface, and resubmit the skb to the system protocol stack so that the preset sub-service process bound to ed0 can receive the request.
Specifically, an skb represents a packet in the Linux kernel code, and skb->dev records which network card the packet came from; without this replacement, the application cannot receive the packet.
(6) After step (5), the preset service process can receive the client's request packet from ed0; the service subroutine responds to the request according to the business logic and sends the response packet generated by the service-logic module out through ed0.
(7) After capturing the response packet addressed to the client, the port-mapping module restores the extended port number in the packet to 3990 (the restore operation is the same as step (4), only mapping 3999 back to 3990) and then calls the eth1 send interface to deliver the packet to the client.
These steps complete one request/response interaction, and the whole process is transparent and invisible to the client.
Preferably, it should be further noted that pre-starting several service sub-processes with business functions gives the server concurrent processing capability; compared with common concurrency techniques, the method has a simpler code structure and is compatible with existing code, and when a preset sub-service fails, the original service still runs normally, so stability is higher.
Preferably, the invention presets several sub-service processes and binds each to its own service port during server initialization, mapping the original service port number to a sub-service port number according to a client identifier such as the MAC address. Compared with a process pool, the method needs no dynamic scheduling or management of sub-processes, since they are completely static; in addition, since every service sub-process is fully independent (including its port number), a failing sub-process does not affect the service as a whole.
Example 2
Referring to fig. 4 and 5, a second embodiment of the present invention is different from the first embodiment in that an experimental test based on a concurrent processing method of an embedded platform server is provided, which specifically includes:
if the traditional concurrent processing method of the server side has relatively limited CPU, memory and flash resources when running on an embedded platform, the design idea of the invention is to make the server side program static, replace the sub-process of dynamic application with the static preset service sub-process, each preset service process is bound to an extension port, each sub-process is an independent server side, and the server side distributes a plurality of client side requests discretely to each sub-process for processing through client side identification.
Preferably, each piece of server software exposes one port number to the outside, such as 80 for HTTP, 3306 for MySQL, 21 for FTP, and 22 for SSH. Suppose one piece of server software exposes port 8000: the invention extends several service ports on that basis, and these extended ports are transparent to the client; from the client's point of view, the service port number is still 8000.
Referring to fig. 4, the port-mapping module uses a discrete algorithm to distribute client requests across the preset service processes according to a client identifier (such as the MAC address); priority control and similar functions can easily be implemented in this module.
In this embodiment, the open-source CoovaChilli software is tested on an embedded development platform. CoovaChilli is a widely used Web Portal authentication (Captive Portal, UMA) and gateway solution; it can run as an embedded program in a router or as standalone service software, and it is currently integrated into the OpenWrt operating system.
To verify the effectiveness of the invention, the CPU occupancy of the CoovaChilli software with and without the invention was compared while responding to the same number of QPS on the same hardware platform.
Table 1: test environment.
Hardware platform: BCM4908
Operating system: Linux 3.6.7
WEB server: CoovaChilli
WEB client: IE
The testing steps are as follows:
SS 1: the gitubb downloads a cova-chilli source code and a source code address;
SS 2: compiling and installing cova-chili software;
SS 3: three PCs are connected with a BCM4908 platform router LAN port;
SS 4: when the PC browser accesses an IP address, such as 1.2.3.4, the covachilli Web Portal authentication function redirects the page request to a specified page, such as: 192.168.0.1/index. htm;
SS 5: setting an automatic refreshing function of an IE browser of the PC at a time interval of 100 ms;
SS 6: the TOP command counts BCM4908 CPU usage.
Table 2: and (6) testing results.
Tables 2-1, 2-2, and 2-3: (the measured results are reproduced as images in the publication).
Referring to the tables: table 2-1 shows intuitively that the CoovaChilli scheme without the invention occupies a large share of CPU resources, while the results with the invention in table 2-2 show a markedly lower CPU occupancy; and table 2-3 shows that the method of the invention lets the same hardware platform accept and process more service requests.
It should be recognized that embodiments of the present invention can be realized and implemented in computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer-readable storage medium configured with the computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, according to the methods and figures described in the detailed description. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, the operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) collectively executed on one or more processors, by hardware, or combinations thereof. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable interface, including but not limited to a personal computer, mini computer, mainframe, workstation, networked or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and the like. Aspects of the invention may be embodied in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optically read and/or write storage medium, RAM, ROM, or the like, such that it may be read by a programmable computer, which when read by the storage medium or device, is operative to configure and operate the computer to perform the procedures described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described herein includes these and other different types of non-transitory computer-readable storage media when such media include instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computer itself when programmed according to the methods and techniques described herein. A computer program can be applied to input data to perform the functions described herein to transform the input data to generate output data that is stored to non-volatile memory. The output information may also be applied to one or more output devices, such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including particular visual depictions of physical and tangible objects produced on a display.
As used in this application, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being: a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of example, both an application running on a computing device and the computing device can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer readable media having various data structures thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
It should be noted that the above-mentioned embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope, and such modifications and substitutions are intended to be covered by the claims of the present invention.

Claims (10)

1. A concurrent processing method based on an embedded platform server, characterized in that it comprises the following steps:
calling the Linux system API socket() to create a socket, binding a service port number with bind(), receiving a request with accept(), and creating a sub-process with fork() to process the request;
running a redirected handler program in the forked sub-process, with a plurality of forked sub-processes running simultaneously to achieve concurrent processing;
and after the forked sub-process finishes running the handler program, releasing the sub-process to complete a full request processing flow.
2. The embedded platform server-based concurrent processing method according to claim 1, characterized in that it further comprises:
performing loop iteration, wherein accept() receives a request and fork() creates a sub-process to process it;
continuously receiving requests from the client;
when the request interval of the client is shorter than the time required for service processing, multiple forked sub-processes exist at once, and the service is in a concurrent processing state.
3. The embedded platform server-based concurrent processing method according to claim 1 or 2, characterized in that: a virtual network card driver, namely the virtual network card ed0, is added;
when the program starts, a plurality of sub-processes are preset, each bound to its own listening port on the network card ed0, while the original service remains bound to port 3990 on the network card eth1.
4. The embedded platform server-based concurrent processing method according to claim 3, characterized in that: a port mapping function module is provided, which captures the client access request message using the Hook mechanism of Netfilter;
replacing the original port number 3990 with an extension service port, and, after modifying the port, recalculating the TCP or UDP header checksum as required by the TCP/UDP protocol;
and replacing the value of skb->dev from eth1 with the ed0 interface, then resubmitting the skb to the system protocol stack, so that the preset sub-service process bound to ed0 receives the request.
5. The embedded platform server-based concurrent processing method according to claim 4, characterized in that: a service process is preset to receive the client request message from ed0; the service subroutine responds to the request according to the service processing logic, and the response message it generates is sent out through ed0;
and after capturing the response message addressed to the client, the port mapping module restores the extended port number in the message to 3990 and calls the sending interface of eth1 to transmit the message to the client.
6. The embedded platform server-based concurrent processing method according to claim 5, characterized in that: the port mapping module is a kernel module; compiling its code yields a .ko kernel driver, and the .ko module is loaded using insmod.
7. The embedded platform server-based concurrent processing method according to claim 6, characterized in that: after the .ko module starts, the Hook mechanism of the Linux system's Netfilter is used to capture messages from clients accessing the Web Portal service, whose service port number is 3990, and the port mapping is derived from the client's MAC address.
8. The embedded platform server-based concurrent processing method according to claim 7, characterized in that: the destination port number of the captured client request message is 3990; the last byte of the client MAC address is taken modulo the number of extension ports, and 3990 is added to the result to obtain the replacement port, which determines which preset service process handles the request.
9. The embedded platform server-based concurrent processing method according to claim 8, characterized in that: if 32 service ports are extended and the currently captured client MAC address is 48-51-C5-70-9B-49, the replacement service port number is 3999, and the client request will be processed by the preset sub-service bound to 3999, according to the formula:
0x49 % 32 + 3990 = 3999.
10. The embedded platform server-based concurrent processing method according to claim 4, characterized in that: skb represents a data packet in the Linux kernel code, and skb->dev indicates which network card the packet came from; without the replacement, the application program cannot receive the packet.
CN202210454573.6A 2022-04-24 2022-04-24 Method for improving stability and concurrent processing capability of Socket server of embedded platform Pending CN114900565A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210454573.6A CN114900565A (en) 2022-04-24 2022-04-24 Method for improving stability and concurrent processing capability of Socket server of embedded platform


Publications (1)

Publication Number Publication Date
CN114900565A true CN114900565A (en) 2022-08-12

Family

ID=82719603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210454573.6A Pending CN114900565A (en) 2022-04-24 2022-04-24 Method for improving stability and concurrent processing capability of Socket server of embedded platform

Country Status (1)

Country Link
CN (1) CN114900565A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007144891A1 (en) * 2006-06-15 2007-12-21 Xoreax Ltd. A method for the distribution of software processes to a plurality of computers
CN105337755A (en) * 2014-08-08 2016-02-17 阿里巴巴集团控股有限公司 Master-slave architecture server, service processing method thereof and service processing system thereof
US10805275B1 (en) * 2005-08-23 2020-10-13 Trend Micro Incorporated Multi-process architecture for implementing a secure internet service
WO2020242474A1 (en) * 2019-05-30 2020-12-03 Hewlett Packard Enterprise Development Lp Routing nvme-over-fabric packets
CN114020621A (en) * 2021-11-03 2022-02-08 展讯通信(天津)有限公司 Debugging method, electronic device and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MORALIN_: "TCP Programming (Multi-process Version)", 《HTTPS://BLOG.CSDN.NET/MORALIN_/ARTICLE/DETAILS/80296564》, 13 May 2018 (2018-05-13) *
He Jiaqiang; Liu Yanlong: "Socket Programming of a Pre-forked Sub-process Server Based on TCP under Linux", Electronic Design Engineering, no. 03, 5 February 2011 (2011-02-05) *

Similar Documents

Publication Publication Date Title
US11210148B2 (en) Reception according to a data transfer protocol of data directed to any of a plurality of destination entities
US11146665B2 (en) Methods and apparatus for sharing and arbitration of host stack information with user space communication stacks
EP1861778B1 (en) Data processing system
US11048569B1 (en) Adaptive timeout mechanism
US11829303B2 (en) Methods and apparatus for device driver operation in non-kernel space
US20030050990A1 (en) PCI migration semantic storage I/O
WO2019179026A1 (en) Electronic device, method for automatically generating cluster access domain name, and storage medium
KR20150024845A (en) Offloading virtual machine flows to physical queues
US20150370582A1 (en) At least one user space resident interface between at least one user space resident virtual appliance and at least one virtual data plane
US8972989B2 (en) Computer system having a virtualization mechanism that executes a judgment upon receiving a request for activation of a virtual computer
US8006252B2 (en) Data processing system with intercepting instructions
EP2618257A2 (en) Scalable sockets
US20110173319A1 (en) Apparatus and method for operating server using virtualization technique
CN113709131B (en) Network data transmission method, device, computer equipment and readable medium
US20140068165A1 (en) Splitting a real-time thread between the user and kernel space
KR20100008363A (en) Physical network interface selection
US10862616B2 (en) Communication processing apparatus and communication processing method
JP2017207834A (en) Program, information processing apparatus, information processing system, and information processing method
US20030046474A1 (en) Mixed semantic storage I/O
CN114900565A (en) Method for improving stability and concurrent processing capability of Socket server of embedded platform
CN116939054A (en) Protocol stack implementation method and device and electronic equipment
JP6653786B2 (en) I / O control method and I / O control system
CN114443280A (en) Memory resource management method and device for cloud firewall, computer equipment and medium
CN117762618A (en) Data message storage method, device, equipment and storage medium
CN110955533A (en) Techniques for multi-connection messaging endpoints

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination