CN104394183A - File uploading system and method and Nginx server - Google Patents

File uploading system and method and Nginx server

Info

Publication number
CN104394183A
Authority
CN
China
Prior art keywords
file
server
nginx
client
nginx server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410308845.7A
Other languages
Chinese (zh)
Inventor
袁孟全
罗辉
傅强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guiyang Longmaster Information and Technology Co ltd
Original Assignee
Guiyang Longmaster Information and Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guiyang Longmaster Information and Technology Co ltd filed Critical Guiyang Longmaster Information and Technology Co ltd
Priority to CN201410308845.7A priority Critical patent/CN104394183A/en
Publication of CN104394183A publication Critical patent/CN104394183A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/06 Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a file uploading system and method, and an Nginx server. The file uploading system comprises an Nginx server, which receives a file uploaded from a client and triggers a PHP back-end server after the file upload is complete, and the PHP back-end server, which is connected to the Nginx server and moves the file to a designated publishing directory. According to the technical solution provided by the invention, the efficient processing capability of the Nginx server makes the client more stable when uploading large files; meanwhile, based on the Nginx server's monitoring of the back-end PHP processing port, Nginx actively triggers the back-end PHP program to perform the server's business logic after the file upload is complete, so that back-end server development is more flexible and the processing logic is clearer.

Description

File uploading system, method and Nginx server
Technical field
The present invention relates to the field of communications, and in particular to a file uploading system, a file uploading method, and an Nginx server.
Background technology
At present, audio and video social applications need to upload a large number of files, such as picture posts, video posts, avatars, profile photos, and notes.
In the related art, file uploads are usually handled by the stable Apache file upload module. The Apache HyperText Transfer Protocol (HTTP) server (Apache for short) is an open-source web server that can run on most computer operating systems. Because it is cross-platform and secure, it is widely used and is one of the most popular web server programs. It is fast, reliable, and extensible through a simple application programming interface (API), allowing interpreters such as Perl and Python to be compiled into the server.
However, file uploading is not a strength of Apache: its insufficient processing capability when handling file uploads causes the server to accumulate a large number of waiting threads, which in turn wastes server resources.
Summary of the invention
The main purpose of the present invention is to disclose a file uploading system, a file uploading method, and an Nginx server, so as to at least solve the problem in the related art that Apache's insufficient processing capability when handling file uploads causes the server to accumulate a large number of waiting threads and thereby wastes server resources.
According to one aspect of the present invention, a file uploading system is provided.
The file uploading system according to the present invention comprises: an Nginx server, configured to receive a file uploaded from a client and to trigger a PHP back-end server after the file upload is complete; and the PHP back-end server, connected to the Nginx server and configured to move the file to a designated publishing directory.
According to another aspect of the present invention, an Nginx server is provided.
The Nginx server according to the present invention comprises: a receiving module, configured to receive a file uploaded from a client; an upload module, configured to save the received file to a designated location; and a trigger module, configured to trigger a PHP back-end server, after the file upload is complete, to move the file from the designated location to a designated publishing directory.
According to yet another aspect of the present invention, a file uploading method is provided.
The file uploading method according to the present invention comprises: an Nginx server receives a file uploaded from a client; the Nginx server saves the received file to a designated location; and after the file upload is complete, the Nginx server triggers a PHP back-end server to move the file from the designated location to a designated publishing directory.
Through the present invention, the efficient processing capability of the Nginx server makes the client more stable when uploading large files. Meanwhile, based on the Nginx server's monitoring of the back-end PHP processing port, Nginx actively triggers the back-end PHP program to perform the server's business logic after the file upload is complete, which makes back-end server development more flexible and the processing logic clearer.
Brief description of the drawings
Fig. 1 is a system architecture diagram of a file uploading system according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the information interaction of a file uploading system according to a preferred embodiment of the present invention;
Fig. 3 is a structural block diagram of an Nginx server according to an embodiment of the present invention;
Fig. 4 is a structural block diagram of an Nginx server according to a preferred embodiment of the present invention; and
Fig. 5 is a flowchart of a file uploading method according to an embodiment of the present invention.
Detailed description of the embodiments
The specific implementation of the present invention is described in detail below with reference to the accompanying drawings.
Fig. 1 is a system architecture diagram of the file uploading system according to an embodiment of the present invention. As shown in Fig. 1, the file uploading system comprises: an Nginx server 10, configured to receive a file uploaded from a client and to trigger a PHP back-end server after the file upload is complete; and the PHP back-end server 12, connected to the Nginx server and configured to move the file to a designated publishing directory.
In the related art, file uploads are usually handled by the stable Apache file upload module; because Apache's processing capability when handling file uploads is insufficient, the server accumulates a large number of waiting threads and server resources are wasted. With the system shown in Fig. 1, the efficient processing capability of the Nginx server makes the client more stable when uploading large files. Meanwhile, based on the Nginx server's monitoring of the back-end PHP processing port, Nginx actively triggers the back-end PHP program to perform the server's business logic after the file upload is complete, which makes back-end server development more flexible and the processing logic clearer.
The file may be, for example, a picture post, a video post, an avatar, a profile photo, or a note.
In a preferred implementation, the client calls the configured address of the Nginx upload component and at the same time passes custom parameters to the Nginx server's upload module via the GET method. The custom parameters are listed in Table 1.
Table 1
The client uploads the file to the directory specified by the upload_store directive. The Nginx server compares the received file size with the file length in the custom parameters uploaded by the client (for example, f_length) to determine whether the upload is complete; when it determines that the upload is not complete, it may also perform a message digest algorithm (MD5) check on the file. If it determines that the file has been fully uploaded, the file is moved to the location specified by the upload_pass directive and the PHP back-end server is triggered to move the file to the designated publishing directory; at the same time, the Nginx server's upload module may submit the custom parameters to the PHP back-end server.
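For illustration, the configuration sketch below shows how such an upload location might be wired up, assuming the third-party Nginx upload module; only the upload_store and upload_pass directive names come from the description above, while the paths, the /upload_done handler, and the handling of GET parameters such as f_length are illustrative assumptions, not the patent's actual configuration.

```nginx
# Illustrative sketch only: an upload location that stores incoming files
# and hands the completed request to a PHP back end (paths are examples).
location /upload {
    upload_pass   /upload_done;        # back-end location triggered after the upload
    upload_store  /data/upload_tmp;    # temporary directory for received files
    upload_store_access user:rw;

    # Forward per-file metadata so the back end can verify the upload
    upload_set_form_field       "$upload_field_name.name" "$upload_file_name";
    upload_set_form_field       "$upload_field_name.path" "$upload_tmp_path";
    upload_aggregate_form_field "$upload_field_name.size" "$upload_file_size";
    upload_aggregate_form_field "$upload_field_name.md5"  "$upload_file_md5";

    upload_pass_args on;               # keep the client's GET parameters (e.g. f_length)
}

location /upload_done {
    # PHP back end that moves the file to the publishing directory
    fastcgi_pass  127.0.0.1:9000;
    include       fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /data/www/upload_done.php;
}
```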
Preferably, the system may further comprise: a client 14, connected to the Nginx server, configured to send a first query instruction to the Nginx server asking whether the file has been uploaded when it determines that a local upload has failed, and to receive file upload status information that is returned by the PHP back-end server and forwarded via the Nginx server; the Nginx server 10 is configured to send, after receiving the first query instruction, a second query instruction to the PHP back-end server asking whether the file exists, and to forward the file upload status information from the PHP back-end server to the client; and the PHP back-end server 12 is configured to return the file upload status information to the Nginx server after receiving the second query instruction. The upload status information is listed in Table 2.
Table 2
In a preferred implementation, when the client's local upload of a file is in a failed state, it first calls the "has the file been uploaded" interface to check whether the file has already been uploaded successfully. If the server returns that the file does not exist, the client invokes the file upload flow to upload the file; if the server finds that the file exists, it returns an upload-success status to the client and the client enters the posting flow.
Preferably, the PHP back-end server is further configured to return publishing status information after the file has been moved to the publishing directory, and the Nginx server is further configured to forward this publishing status information from the PHP back-end server to the client.
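As a minimal sketch of the "has the file been uploaded" interface, the PHP below checks for the file in the publishing directory and returns a JSON status. The f_md5 parameter, the directory path, and the status field names are assumptions made for illustration; the patent's actual status values are those of Table 2.

```php
<?php
// Illustrative sketch: report whether a previously uploaded file already
// exists in the publishing directory. Parameter and field names are assumed.
$publishDir = '/data/publish';
$fileMd5    = isset($_GET['f_md5']) ? basename($_GET['f_md5']) : '';
$target     = $publishDir . '/' . $fileMd5;

if ($fileMd5 !== '' && is_file($target)) {
    // File already uploaded: the client can skip uploading and post directly
    $response = array('status' => 'uploaded', 'size' => filesize($target));
} else {
    // File not found: the client should invoke the file upload flow
    $response = array('status' => 'not_exist');
}

header('Content-Type: application/json');
echo json_encode($response);
```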
In a preferred implementation, after the PHP back-end server moves the file to the publishing directory, the publishing status information returned by the PHP back-end server may be sent to the client via the Nginx server.
Preferably, the client 14 is further configured to split the file into multiple file packets and upload the packets separately; and the PHP back-end server 12 is further configured to merge the packets to obtain the file.
In a preferred implementation, the client may split a file and upload it in segments, while the server enables support for resumable (breakpoint) uploads; that is, a large file is uploaded in segments after being split on the client, and the segmented files (multiple file packets) are merged by the PHP back-end server on the server side.
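A minimal sketch of this server-side merge is shown below; the chunk naming scheme (md5.partN), the f_md5 and f_chunks parameters, and the directory paths are assumptions for illustration, since the patent does not specify how the segmented packets are named.

```php
<?php
// Illustrative sketch: merge client-side chunks back into one file on the
// PHP back end. Chunk naming and parameter names are assumptions.
$tmpDir  = '/data/upload_tmp';
$fileMd5 = basename($_GET['f_md5']);
$chunks  = (int) $_GET['f_chunks'];
$merged  = '/data/publish/' . $fileMd5;

$out = fopen($merged, 'wb');
for ($i = 0; $i < $chunks; $i++) {
    $part = sprintf('%s/%s.part%d', $tmpDir, $fileMd5, $i);
    fwrite($out, file_get_contents($part));  // append the chunk in order
    unlink($part);                           // discard the chunk once merged
}
fclose($out);

header('Content-Type: application/json');
echo json_encode(array('status' => 'merged', 'size' => filesize($merged)));
```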
The above preferred implementation is further described below with reference to Fig. 2.
Fig. 2 is a schematic diagram of the information interaction of the file uploading system according to the preferred embodiment of the present invention. As shown in Fig. 2, the information interaction of the file uploading system mainly comprises the following steps:
Step S201: the client determines whether its local upload has failed; if it has failed, step S203 is performed.
Step S203: the client sends to the Nginx server a query instruction asking whether the file has been uploaded (i.e., the first query instruction described above).
Step S205: the Nginx server sends to the PHP back-end server a query instruction asking whether the file exists (i.e., the second query instruction described above).
Step S207: the PHP back-end server determines, through a query, the upload state of the file.
Step S209: the PHP back-end server returns the file upload status information to the Nginx server.
Step S211: the Nginx server forwards the file upload status information to the client. If the server finds that the file exists, the status information indicates that the file upload has succeeded, and step S225 is performed, i.e., the client enters the posting flow. If the status information indicates that the file does not exist, step S213 is performed, and the client invokes the file upload flow to upload the file.
Step S213: the client sends an upload-file request to the Nginx server and performs the file upload.
Step S215: the Nginx server compares the received file size with the file size in the custom parameters to determine whether the file upload is complete.
Step S217: the Nginx server submits a back-end file handling instruction to the PHP back-end server.
Step S219: the PHP back-end server moves the uploaded file to the designated publishing directory.
Step S221: the PHP back-end server returns a publishing-status JSON string to the Nginx server (an illustrative sketch of such a string is given after these steps).
Step S223: the Nginx server forwards the publishing-status JSON string to the client.
Step S225: the client requests file publishing from the application logic processing server;
Step S227: the application logic processing server responds to the client and returns the file publishing status to the client.
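For illustration, a publishing-status JSON string of the kind returned in step S221 might be produced as follows; the field names (code, file, size, url) are assumptions, as the patent does not define the JSON format.

```php
<?php
// Illustrative sketch: build the publishing-status JSON string returned by
// the PHP back end after moving the file. Field names are assumptions.
$publishedFile = '/data/publish/' . basename($_GET['f_md5']);

$state = array(
    'code' => 0,                                // 0 = published successfully
    'file' => basename($publishedFile),
    'size' => filesize($publishedFile),
    'url'  => '/publish/' . basename($publishedFile),
);

header('Content-Type: application/json');
echo json_encode($state);  // e.g. {"code":0,"file":"...","size":12345,"url":"/publish/..."}
```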
It can be seen that the efficient processing capability of the Nginx server makes the client more stable when uploading large files. Meanwhile, based on Nginx's monitoring of the back-end PHP processing port, Nginx actively triggers the back-end PHP program to perform the server's business logic after the file upload is complete, which makes back-end server development more flexible and the processing logic clearer. After the back-end processing is complete, the Nginx server forwards the result to the client, so that the client and server together complete the full file upload and business processing logic, which greatly improves the operating efficiency of the server.
Fig. 3 is a structural block diagram of the Nginx server according to an embodiment of the present invention. As shown in Fig. 3, the Nginx server mainly comprises: a receiving module 30, configured to receive a file uploaded from a client; an upload module 32, configured to save the received file to a designated location; and a trigger module 34, configured to trigger a PHP back-end server, after the file upload is complete, to move the file from the designated location to a designated publishing directory.
With the Nginx server shown in Fig. 3, the client is more stable when uploading large files. Meanwhile, based on the Nginx server's monitoring of the back-end PHP processing port, Nginx actively triggers the back-end PHP program to perform the server's business logic after the file upload is complete, which makes back-end server development more flexible and the processing logic clearer.
Preferably, as shown in Fig. 4, the upload module 32 is further configured to receive custom parameters from the client and transfer them to the PHP back-end server; the Nginx server may further comprise a comparison module 36, connected to the receiving module 30 and the upload module 32 respectively, configured to compare the received file size with the file size in the custom parameters to determine whether the file upload is complete.
Preferably, as shown in Fig. 4, the Nginx server may further comprise an inspection module 38, configured to perform a message digest algorithm (MD5) check on the file when it is determined that the file upload is not complete.
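On the PHP side, the comparison and inspection steps could look like the sketch below, assuming the f_length custom parameter and the form fields forwarded by the upload module; this is an illustrative reading of the description, not the patent's implementation.

```php
<?php
// Illustrative sketch: decide completeness by size comparison, and run an
// MD5 check on the received data when the upload is not yet complete.
// Parameter and form-field names are assumptions.
$tmpPath      = $_POST['file_path'];        // temporary path written by the upload module
$receivedSize = (int) $_POST['file_size'];  // size reported by the Nginx server
$expectedSize = (int) $_GET['f_length'];    // size declared by the client

if ($receivedSize >= $expectedSize) {
    $response = array('status' => 'complete');
} else {
    // Upload not complete: also return an MD5 of the received data so the
    // client can verify the chunks it has already sent before resuming.
    $response = array(
        'status'   => 'incomplete',
        'received' => $receivedSize,
        'md5'      => md5_file($tmpPath),
    );
}

header('Content-Type: application/json');
echo json_encode($response);
```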
Fig. 5 is a flowchart of the file uploading method according to an embodiment of the present invention. As shown in Fig. 5, the file uploading method comprises:
Step S501: an Nginx server receives a file uploaded from a client;
Step S503: the Nginx server saves the received file to a designated location;
Step S505: after the file upload is complete, the Nginx server triggers a PHP back-end server to move the file from the designated location to a designated publishing directory.
With the method shown in Fig. 5, the efficient processing capability of the Nginx server makes the client more stable when uploading large files. Meanwhile, based on the Nginx server's monitoring of the back-end PHP processing port, Nginx actively triggers the back-end PHP program to perform the server's business logic after the file upload is complete, which makes back-end server development more flexible and the processing logic clearer.
Preferably, the method may further comprise: when the client determines that a local upload has failed, the Nginx server receives from the client a first query instruction asking whether the file has been uploaded; the Nginx server sends to the PHP back-end server a second query instruction asking whether the file exists; and the Nginx server forwards the file upload status information from the PHP back-end server to the client.
Preferably, before the Nginx server triggers the PHP back-end server to move the file from the designated location to the designated publishing directory, the method may further comprise: the Nginx server receives the custom parameters transmitted from the client via the GET method; and the Nginx server compares the received file size with the file size in the custom parameters to determine whether the file upload is complete.
In a preferred implementation, the client may split a file and upload it in segments, while the server enables support for resumable (breakpoint) uploads; that is, a large file is uploaded in segments after being split on the client, and the segmented files (multiple file packets) are merged by the PHP back-end server on the server side.
In summary, through the above embodiments provided by the present invention, the efficient processing capability of the Nginx server makes the client more stable when uploading large files. In view of the client's need to track upload progress and the instability of the network, the server additionally provides functions such as upload progress monitoring and resumable uploads, which greatly reduces mobile network traffic. Meanwhile, based on Nginx's monitoring of the back-end PHP processing port, Nginx actively triggers the back-end PHP program to perform the server's business logic after the file upload is complete, which makes back-end server development more flexible and the processing logic clearer. After the back-end processing is complete, Nginx forwards the result to the client, so that the client and server together complete the full file upload and business processing logic, greatly improving the operating efficiency of the server compared with the related art, in which Apache's insufficient processing capability when handling file uploads causes the server to accumulate a large number of waiting threads and wastes server resources.
The above are only several specific embodiments of the present invention, but the present invention is not limited thereto; any changes that a person skilled in the art can conceive of shall fall within the protection scope of the present invention.

Claims (10)

1. A file uploading system, characterized by comprising:
an Nginx server, configured to receive a file uploaded from a client and to trigger a PHP back-end server after the file upload is complete; and
the PHP back-end server, connected to the Nginx server and configured to move the file to a designated publishing directory.
2. The system according to claim 1, characterized in that
the system further comprises: the client, connected to the Nginx server, configured to send a first query instruction to the Nginx server asking whether the file has been uploaded when it determines that a local upload has failed, and to receive file upload status information that is returned by the PHP back-end server via the Nginx server;
the Nginx server is configured to send, after receiving the first query instruction, a second query instruction to the PHP back-end server asking whether the file exists, and to forward the file upload status information from the PHP back-end server to the client; and
the PHP back-end server is configured to return the file upload status information to the Nginx server after receiving the second query instruction.
3. The system according to claim 1, characterized in that
the PHP back-end server is further configured to return publishing status information after the file has been moved to the publishing directory; and
the Nginx server is further configured to forward the publishing status information from the PHP back-end server to the client.
4. The system according to claim 1, characterized in that
the client is further configured to split the file into multiple file packets and upload the packets separately; and
the PHP back-end server is further configured to merge the packets to obtain the file.
5. An Nginx server, characterized by comprising:
a receiving module, configured to receive a file uploaded from a client;
an upload module, configured to save the received file to a designated location; and
a trigger module, configured to trigger a PHP back-end server, after the file upload is complete, to move the file from the designated location to a designated publishing directory.
6. The server according to claim 5, characterized in that
the upload module is further configured to receive custom parameters from the client and transfer them to the PHP back-end server; and
the Nginx server further comprises:
a comparison module, configured to compare the received file size with the file size in the custom parameters to determine whether the file upload is complete.
7. The server according to claim 6, characterized in that the Nginx server further comprises: an inspection module, configured to perform a message digest algorithm check on the file when it is determined that the file upload is not complete.
8. A file uploading method, characterized by comprising:
receiving, by an Nginx server, a file uploaded from a client;
saving, by the Nginx server, the received file to a designated location; and
triggering, by the Nginx server, a PHP back-end server, after the file upload is complete, to move the file from the designated location to a designated publishing directory.
9. The method according to claim 8, characterized by further comprising:
when the client determines that a local upload has failed, receiving, by the Nginx server, a first query instruction from the client asking whether the file has been uploaded;
sending, by the Nginx server, a second query instruction to the PHP back-end server asking whether the file exists; and
forwarding, by the Nginx server, file upload status information from the PHP back-end server to the client.
10. The method according to claim 8, characterized in that, before the Nginx server triggers the PHP back-end server to move the file from the designated location to the designated publishing directory, the method further comprises:
receiving, by the Nginx server, custom parameters transmitted from the client via the GET method; and
comparing, by the Nginx server, the received file size with the file size in the custom parameters to determine whether the file upload is complete.
CN201410308845.7A 2014-07-01 2014-07-01 File uploading system and method and Nginx server Pending CN104394183A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410308845.7A CN104394183A (en) 2014-07-01 2014-07-01 File uploading system and method and Nginx server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410308845.7A CN104394183A (en) 2014-07-01 2014-07-01 File uploading system and method and Nginx server

Publications (1)

Publication Number Publication Date
CN104394183A true CN104394183A (en) 2015-03-04

Family

ID=52612018

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410308845.7A Pending CN104394183A (en) 2014-07-01 2014-07-01 File uploading system and method and Nginx server

Country Status (1)

Country Link
CN (1) CN104394183A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103067505A (en) * 2012-12-30 2013-04-24 乐视网信息技术(北京)股份有限公司 Method for uploading files to server
CN103248711A (en) * 2013-05-23 2013-08-14 华为技术有限公司 File uploading method and server
CN103401892A (en) * 2013-06-26 2013-11-20 中国科学院声学研究所 HTTP POST based data upload accelerating method and server
CN103533073A (en) * 2013-10-23 2014-01-22 北京网秦天下科技有限公司 File management system and method for mobile equipment
CN103795809A (en) * 2014-03-03 2014-05-14 深圳市华曦达科技股份有限公司 File uploading method and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106487858A (en) * 2015-09-01 2017-03-08 北京大学 Information method for uploading and device
CN106487858B (en) * 2015-09-01 2019-11-08 北京大学 Information method for uploading and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20150304