CN102790783A - Design method of intelligent cloud computing system of test paper background processing - Google Patents

Design method of intelligent cloud computing system of test paper background processing

Info

Publication number
CN102790783A
CN102790783A CN2011101279189A CN201110127918A
Authority
CN
China
Prior art keywords
layer
server
database
paper
item
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011101279189A
Other languages
Chinese (zh)
Inventor
谢晓尧
张安钰
喻国军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guizhou Education University
Original Assignee
Guizhou Education University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guizhou Education University filed Critical Guizhou Education University
Priority to CN2011101279189A priority Critical patent/CN102790783A/en
Publication of CN102790783A publication Critical patent/CN102790783A/en
Pending legal-status Critical Current

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a design method for an intelligent cloud computing system for test paper background processing. The system is divided into a convergence layer, a whole-paper processing layer, a cutting layer, a backup cutting layer, a subjective-item service layer, an objective-item judging layer, an objective-item debugging layer and a core layer. The core layer consists of a database and a dispatch server; the remaining layers are server groups, and the dispatch server distributes the server tasks of each layer in a unified manner. By dividing the system into layers with distinct functions, the method improves the safety, reliability, confidentiality and ease of deployment of the system, makes the early-stage test paper image processing intelligent and automated, meets the stringent data-security and confidentiality requirements of paperless marking, and lowers the requirements on server performance and on the professional skills of data administrators.

Description

Design method for an intelligent cloud computing platform for test paper background processing
Technical field
The present invention relates to cloud computing platform design techniques, and in particular to a design method for an intelligent cloud computing platform for test paper background processing.
Background technology
In online marking, test papers are scanned by subject and examination hall, and the red printing on each paper, including the red option boxes of the objective-item section, is filtered out. The system comprises two parts: subjective-item marking and intelligent objective-item recognition. Subjective items are scored by markers in the subjective-item marking Web client; objective items are handled by computer image recognition: a paper template is defined first, the position of each fill-in box of the objective-item section is located precisely from the positioning holes, and the candidate's filled-dot information is computed. During subjective-item marking there are generally 100 to 600 markers. Because of this high concurrency, the real-time image-reading speed required by subjective marking exceeds what a low- or mid-range server platform can deliver; if the cut paper images are stored on a single server, the system cannot run normally. Therefore, without raising the server class, the papers must be distributed across multiple servers.
Large-scale domestic examinations have long established a set of very strict confidentiality systems and procedures, and online marking is gradually replacing manual marking, all to ensure that the whole marking process is fair, just and confidential. However, in current online marking systems the early background processing involves a great deal of manual work by data administrators, whose authority is excessive; because the marking cycle is long, this makes administrators hard to supervise and puts data security at risk. In addition, the whole process places harsh demands on server performance and on the administrators' professional skills, which hinders adoption.
Summary of the invention
The technical problem to be solved by the present invention is to provide a design method for an intelligent cloud computing platform for test paper background processing that makes the early-stage test paper image processing intelligent and automated, meets the stringent data-security and confidentiality requirements of paperless marking, and lowers the requirements on server performance and on the professional skills of data administrators.
To solve the above technical problems, the present invention adopts the following technical scheme:
In the design method of the intelligent cloud computing system for test paper background processing of the present invention, the system is decomposed into a convergence layer, a whole-paper processing layer, a cutting layer, a backup cutting layer, a subjective-item service layer, an objective-item judging layer, an objective-item debugging layer and a core layer. The core layer consists of a database and a dispatch server; the remaining layers are server groups, and the dispatch server distributes the server tasks of each layer in a unified manner. The system is divided into layers with distinct functions; the computing tasks of each layer are completed cooperatively by the servers of that layer under the scheduling of the dispatch server, which balances the tasks among the servers, raises the running speed of the system and lowers the hardware requirements. Without opening the database, the administrator cannot learn where a paper is stored, which increases the confidentiality of the papers and reduces the administrator's authority.
Preferably, in the above design method, the dispatch server of the core layer uses an improved roulette-wheel algorithm to distribute the server tasks of each layer. The improved roulette-wheel algorithm is as follows:
The dispatch server obtains the waiting task amount Wi (i ≤ 3) of the next-layer servers; if any server has a waiting task amount of 0, one such server is picked at random as the next-layer candidate server;
If only one server has a waiting task amount below the rated waiting task amount of 100, it is set as the next-layer candidate server;
If n (2 ≤ n ≤ 3) servers have waiting task amounts Wi (1 ≤ i ≤ n) below the rated waiting task amount, then from the number of tasks fi (1 ≤ i ≤ n) each processed in the previous minute compute the strategy ratios ωi = fi/Wi (1 ≤ i ≤ n), set ω0 = 0, and form the cumulative sums Ik = Σ(i=0..k) ωi (0 ≤ k ≤ n); generate a random number r between 0 and 1; if I(i-1)/In < r ≤ Ii/In, server i is chosen as the candidate server;
If no server has a waiting task amount below the rated value of 100, the system enters a waiting state.
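For illustration only, the following minimal sketch (in Python) shows one way the improved roulette-wheel selection described above could be implemented. The function and field names are assumptions introduced here, not part of the patent; the rated waiting task amount of 100 is taken from the embodiment below.

```python
import random

RATED_WAITING = 100  # rated waiting task amount used in the embodiment


def pick_candidate(servers):
    """servers: list of dicts like {"id": ..., "waiting": Wi, "per_minute": fi}.
    Returns the chosen candidate server, or None if every server is saturated."""
    # Rule 1: any idle server (waiting task amount 0) is picked at random.
    idle = [s for s in servers if s["waiting"] == 0]
    if idle:
        return random.choice(idle)

    # Only servers below the rated waiting task amount are eligible.
    eligible = [s for s in servers if s["waiting"] < RATED_WAITING]
    if not eligible:
        return None                # the system would enter a waiting state
    if len(eligible) == 1:
        return eligible[0]

    # Rule 3: strategy ratio w_i = f_i / W_i, cumulative sums I_k, then
    # roulette-wheel selection with a random r between 0 and 1.
    weights = [s["per_minute"] / s["waiting"] for s in eligible]
    total = sum(weights)
    r = random.random()
    cumulative = 0.0
    for s, w in zip(eligible, weights):
        cumulative += w
        if r <= cumulative / total:
            return s
    return eligible[-1]            # guard against floating-point rounding
```

Calling pick_candidate with the latest reports of a layer's servers returns the candidate the dispatch server would name as the next-layer transmission target, or None when all servers are saturated and the sender should wait.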
Specifically, in the above design method, the function performed by each layer of the system and the connections between the layers (i.e. the network topology of the system, as shown in Figure 1) are as follows:
Convergence layer: connected to the scanning clients, the whole-paper processing layer, the cutting layer, the backup cutting layer, the database and the dispatch server; receives the images sent by the scanning clients, updates the paper state in the core-layer database to converged and records the storage location of the whole paper; according to the assignment of the core-layer dispatch server, it simultaneously sends the whole-paper image to servers of the whole-paper processing layer, the cutting layer and the backup cutting layer;
Whole-paper processing layer: connected to the convergence layer, the database and the dispatch server; hides the candidate's name, candidate number and examination-hall number on the paper sent from the convergence layer, sets the corresponding paper state in the database to encrypted and records the storage location of the encrypted paper;
Cutting layer: connected to the convergence layer, the subjective-item service layer, the database and the dispatch server; cuts the paper sent from the convergence layer according to the cutting template information, sets the corresponding paper state in the database to cut and records the storage location of the cut paper, and, according to the assignment of the dispatch server, makes the cut images available to the subjective-item service layer;
Backup cutting layer: connected to the convergence layer, the subjective-item service layer, the objective-item judging layer, the database and the dispatch server; cuts the paper sent from the convergence layer according to the cutting template information, sets the corresponding paper state in the database to backup-cut and records the storage location of the backup cut paper, and makes it available to the objective-item judging layer;
Subjective-item service layer: connected to the cutting layer, the backup cutting layer and the database; fetches the subjective-item images from the cutting layer so that the subjective items can be marked online, and submits the subjective-item scores of the corresponding papers to the database;
Objective-item judging layer: connected to the backup cutting layer, the objective-item debugging layer, the database and the dispatch server; receives the objective-item images sent from the backup cutting layer, obtains the processed objective-item images after recognition and judges them, sets the corresponding paper state in the database to judged and records the judging result, and sends the processed objective-item images to the objective-item debugging layer according to the assignment of the dispatch server;
Objective-item debugging layer: connected to the objective-item judging layer, the database and the dispatch server; receives the processed objective-item images sent from the objective-item judging layer, restores the raw information of the original images, sets the state of the corresponding paper in the database to restored and records the corresponding storage location;
Core layer: the database records the paper data states, storage locations and objective-item judging results; the dispatch server schedules and distributes the tasks of each layer. When recording the storage location of a paper, the database records the IP address of the storing server and the file directory.
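As an illustration of the kind of per-paper record the core-layer database keeps (data state, storage server IP address and file directory, objective-item judging result), the sketch below uses hypothetical field and state names derived from the layer descriptions above; the patent does not prescribe a concrete schema.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class PaperState(Enum):
    CONVERGED = "converged"    # convergence layer has received and stored the scan
    ENCRYPTED = "encrypted"    # whole-paper layer has hidden the sensitive fields
    CUT = "cut"                # cutting layer has split the paper by template
    BACKUP_CUT = "backup_cut"  # backup cutting layer holds the redundant copy
    JUDGED = "judged"          # objective-item judging layer has scored the filled dots
    RESTORED = "restored"      # objective-item debugging layer restored the raw info


@dataclass
class PaperRecord:
    paper_id: str
    state: PaperState
    storage_ip: str            # IP address of the server currently holding the image
    storage_dir: str           # file directory on that server
    objective_score: Optional[float] = None  # written by the judging layer
```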
Preferably, except for the subjective-item service layer and the core layer, every server in each layer has a rated processing task amount and periodically sends its current waiting task amount and average processing speed to the dispatch server. Each server consists of one listening thread, m processing threads, n sending threads, k database-update threads, one pending queue, one to-be-stored queue and one to-be-sent queue.
These servers operate as follows: the listening thread monitors the network connections from the upper-layer servers; when a connection arrives, it temporarily starts a client-handling thread paired with the sending thread of the upper-layer server, and the client-handling thread adds the incoming tasks to the pending queue; a processing thread takes a task from the pending queue and, after processing it, adds it to the to-be-stored queue; a database-update thread works through the to-be-stored queue and, after updating the database, adds the task to the to-be-sent queue; the sending thread takes tasks from the to-be-sent queue and, according to the instructions of the dispatch server, either waits or sends them to the designated next-layer server.
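A minimal sketch, assuming Python threads and in-memory queues, of the thread-and-queue structure just described: tasks arriving from the upper layer feed a pending queue, and processing threads, a database-update thread and a sending thread are chained through the three queues. Class and method names are illustrative, and the layer-specific work, database update, dispatcher query and network transmission are left as placeholders.

```python
import queue
import threading


class LayerServer:
    def __init__(self, num_workers=3):
        self.pending = queue.Queue()    # tasks received from the upper layer
        self.to_store = queue.Queue()   # processed tasks awaiting the database update
        self.to_send = queue.Queue()    # tasks ready to go to the next layer
        self.workers = [threading.Thread(target=self._process, daemon=True)
                        for _ in range(num_workers)]
        self.storer = threading.Thread(target=self._store, daemon=True)
        self.sender = threading.Thread(target=self._send, daemon=True)

    def start(self):
        for t in [*self.workers, self.storer, self.sender]:
            t.start()

    def on_connection(self, task):
        # Called by the listening side: an upper-layer connection delivers a task.
        self.pending.put(task)

    def _process(self):
        while True:
            task = self.pending.get()
            task = self.handle(task)           # layer-specific work (hide, cut, judge...)
            self.to_store.put(task)

    def _store(self):
        while True:
            task = self.to_store.get()
            self.update_database(task)         # record state and storage location
            self.to_send.put(task)

    def _send(self):
        while True:
            task = self.to_send.get()
            target = self.ask_dispatcher(task)  # candidate chosen by the dispatch server
            if target is not None:
                self.transmit(task, target)

    # Placeholders for the layer-specific behaviour.
    def handle(self, task): return task
    def update_database(self, task): pass
    def ask_dispatcher(self, task): return None
    def transmit(self, task, target): pass
```

In the embodiment described below, each server runs this structure with three processing threads and one thread of each other kind.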
Compared with the prior art, the present invention divides the system into different layers, each performing a different function, which improves the safety, reliability, confidentiality and ease of deployment of the system in the following respects:
(1) Subjective-item markers access the core-layer database only indirectly, through the subjective-item service layer, and never connect to the core layer directly, which improves the safety and confidentiality of the system;
(2) Without opening the database, the administrator cannot know which servers handled a task or where a paper is stored, which increases the confidentiality of paper processing and reduces the administrator's authority;
(3) The sensitive information on the papers is hidden, which improves the confidentiality of the system and lowers the professional requirements on the administrator;
(4) Multiple cutting servers increase the concurrent image-reading throughput during subjective-item marking and lower the class of server required for any single machine;
(5) When a single cutting-layer server fails abnormally, the backup cutting-layer server group provides real-time redundant backup for marking;
(6) The design method is flexible: processing layers can be merged or added according to actual conditions; for example, the convergence layer can be merged with the whole-paper processing layer, and the cutting layer can be split into several cutting layers, further reducing the concurrency demand placed on a single node during subjective-item marking;
(7) On a single server, multithreading controlled by critical-section queues makes the fullest use of the multi-core technology of current servers and PCs, which markedly improves computing performance;
(8) Task scheduling is handled by the dispatch server, which balances the tasks among the servers while making the operation of the system pipelined and automated.
Description of drawings
Fig. 1 is the network topology diagram of the cloud computing platform of the present invention;
Fig. 2 is a schematic diagram of the marking system of the intelligent cloud computing platform for test paper background processing of the present invention.
Embodiment
The cloud computing platform system is divided into a convergence layer 1, a whole-paper processing layer 2, a cutting layer 3, a backup cutting layer 4, a subjective-item service layer 5, an objective-item judging layer 6, an objective-item debugging layer 7 and a core layer 8; the core layer 8 consists of a database 81 and a dispatch server 82, and its network topology is shown in Figure 1. The intelligent cloud computing platform of the present invention is described in detail below with a paper-marking example.
As shown in Figure 2, apart from the core layer, which has one database 81 and one dispatch server 82, the server group of each of the other layers contains 3 servers, and there are 2 sets of scanning clients. The rated waiting task amount of every server is 100. Each server of the whole-paper processing layer, cutting layer, backup cutting layer, objective-item judging layer and objective-item debugging layer sends data to the dispatch server once a minute, including its waiting task amount and the number of tasks processed in the previous minute. Each server is configured with 1 listening thread, 3 processing threads, 1 sending thread, 1 database-update thread, 1 pending queue, 1 to-be-stored queue and 1 to-be-sent queue. The workflow of the system is as follows:
(1) The dispatch server screens out the next-layer candidate server with the improved roulette-wheel algorithm
The dispatch server obtains the waiting task amount Wi (i ≤ 3) of the next-layer servers; if any server has a waiting task amount of 0, one such server is picked at random as the next-layer candidate server. If only one server has a waiting task amount below the rated waiting task amount of 100, it is set as the next-layer candidate server. If n (2 ≤ n ≤ 3) servers have waiting task amounts Wi (1 ≤ i ≤ n) below the rated waiting task amount, then from the number of tasks fi (1 ≤ i ≤ n) each processed in the previous minute compute the strategy ratios ωi = fi/Wi (1 ≤ i ≤ n), set ω0 = 0, and form the cumulative sums Ik = Σ(i=0..k) ωi (0 ≤ k ≤ n); generate a random number r between 0 and 1; if I(i-1)/In < r ≤ Ii/In, server i is chosen as the candidate server. If no server has a waiting task amount below the rated value of 100, the system enters a waiting state.
(2) A scanning client 91 sends the scanned whole-paper images, randomly and evenly distributed, to a file server of the convergence layer 1, for example file server 11; file server 11 receives and stores the images, updates the paper state in database server 81 to converged and records the storage location (the IP address of file server 11 and the corresponding directory); scanning client 91 records the location and sending state of each paper in its local Access database.
(3) File server 11 requests the transmission targets of the next-stage tasks from dispatch server 82; following step (1), the dispatch server computes the candidate-server IP addresses of the cutting layer 3 (e.g. cutting server 32), the backup cutting layer 4 (e.g. backup cutting server 41) and the whole-paper processing layer 2 (e.g. whole-paper processing server 23) needed for the next stage and sends them to file server 11; file server 11 adds the transmission tasks to its to-be-sent queue, and the background sending thread completes the transmission.
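A sketch of the exchange this step implies between a convergence-layer file server and the dispatch server, reusing the pick_candidate selection sketched earlier; the per-minute report format and the method names are assumptions for illustration.

```python
class Dispatcher:
    """Hypothetical dispatcher-side bookkeeping: each layer server reports its
    waiting task amount and previous-minute throughput once per minute, and the
    dispatcher answers "where do I send the next paper?" requests."""

    def __init__(self):
        # layer name -> {server id -> latest report}
        self.reports = {"whole_paper": {}, "cutting": {}, "backup": {}}

    def on_report(self, layer, server_id, waiting, per_minute):
        self.reports[layer][server_id] = {
            "id": server_id, "waiting": waiting, "per_minute": per_minute}

    def next_target(self, layer):
        candidates = list(self.reports[layer].values())
        return pick_candidate(candidates)   # improved roulette selection above


# A file server would ask for one target per downstream layer before queuing
# its send tasks, e.g.:
# targets = {layer: dispatcher.next_target(layer)
#            for layer in ("whole_paper", "cutting", "backup")}
```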
(4) The cutting layer 3, backup cutting layer 4 and whole-paper processing layer 2 receive and complete the distributed tasks:
(a) Whole-paper processing server 23 receives the whole-paper image sent by file server 11, hides the candidate's sensitive information such as name, candidate number and examination-hall number before storing it, sets the state of the corresponding paper in database 81 to encrypted and records the storage location of the encrypted paper (the IP address of whole-paper processing server 23 and the corresponding directory), so that abnormal papers, such as identical or non-standard answers found during subjective-item marking, can be dealt with;
(b) Cutting server 32 receives the whole-paper image sent by file server 11, cuts the paper according to the predefined template information, submits the storage location of the corresponding paper (the IP address of cutting server 32 and the corresponding directory) to database server 81, and updates the corresponding paper to the cut state;
(c) Backup cutting server 41 receives the whole-paper image sent by file server 11, cuts the paper according to the predefined template information, and submits the storage location of the corresponding paper (the IP address of backup cutting server 41 and the corresponding directory) to database server 81 as the redundant backup of the cutting layer 3; backup cutting server 41 then requests the transmission target of the next-stage task from dispatch server 82; following step (1), the dispatch server computes the candidate-server IP address of the objective-item judging layer 6 (e.g. objective-item judging server 62) needed for the next stage and sends it to backup cutting server 41; backup cutting server 41 adds the transmission task to its to-be-sent queue, the background sending thread sends the objective-item image to objective-item judging server 62, and the corresponding paper is updated to the backup-cut state.
(5) After objective-item judging server 62 obtains the objective-item image, it locates the answer regions precisely, computes the image origin, skew and degree of deformation, performs image processing such as rotation, translation and stretching, completes the intelligent recognition of the candidate's filled dots, submits the recognition result to database server 81, and updates the state of the corresponding paper to judged.
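The patent does not name an image-processing library; purely as an illustration of the geometric normalisation and filled-dot check this step implies, the sketch below assumes three positioning marks have already been located and uses an OpenCV affine warp, with the template coordinates, the fill-in box list and the fill threshold as hypothetical inputs.

```python
import cv2
import numpy as np


def normalise_and_read(img, found_marks, template_marks, boxes, fill_threshold=0.4):
    """img: grayscale objective-item image.
    found_marks / template_marks: three (x, y) positioning-mark coordinates,
    detected versus expected, as float32 arrays of shape (3, 2).
    boxes: dict of option code -> (x, y, w, h) fill-in box in template coordinates.
    Returns the set of option codes judged as filled."""
    # The affine transform corrects origin shift, rotation (skew) and stretching.
    M = cv2.getAffineTransform(np.float32(found_marks), np.float32(template_marks))
    h, w = img.shape[:2]
    warped = cv2.warpAffine(img, M, (w, h))

    # Dark pencil marks become white pixels in the inverted binary image.
    _, binary = cv2.threshold(warped, 128, 255, cv2.THRESH_BINARY_INV)

    filled = set()
    for code, (x, y, bw, bh) in boxes.items():
        roi = binary[y:y + bh, x:x + bw]
        if roi.size and np.count_nonzero(roi) / roi.size >= fill_threshold:
            filled.add(code)
    return filled
```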
(6) Objective-item judging server 62 requests the transmission target of the next-stage task from dispatch server 82; following step (1), the dispatch server computes the candidate-server IP address of the objective-item debugging layer 7 (e.g. objective-item debugging server 73) needed for the next stage and sends it to objective-item judging server 62; objective-item judging server 62 adds the transmission task to its to-be-sent queue, and the background sending thread completes the transmission.
(7) Objective-item debugging server 73 restores the raw objective-item information, including the bounding rectangle of each fill-in box and the code of the fill-in box itself, updates the corresponding paper state to restored, records the corresponding storage location (the IP address of objective-item debugging server 73 and the corresponding directory) in database server 81, and provides the images for manual checking.
(8) Steps (2) to (7) are repeated until all objective items of the examination have been marked.
(9) The subjective-item service layer 5 mainly serves the on-site markers. For example, Web server 51 obtains from database server 81 the papers whose subjective items have not yet been marked and loads the images of the corresponding papers from cutting server 32 into the Web client through the Web browser; the marker enters the score of the paper in the Web client; the servers of the subjective-item service layer 5 then derive the candidate's final score according to the marking flow-control rules such as error control, until the subjective-item marking of the examination is finished.

Claims (6)

1. A design method for an intelligent cloud computing system for test paper background processing, characterized in that: the system is decomposed into a convergence layer, a whole-paper processing layer, a cutting layer, a backup cutting layer, a subjective-item service layer, an objective-item judging layer, an objective-item debugging layer and a core layer; the core layer consists of a database and a dispatch server; the remaining layers are server groups, and the dispatch server distributes the server tasks of each layer in a unified manner.
2. The design method for an intelligent cloud computing system for test paper background processing according to claim 1, characterized in that the dispatch server uses an improved roulette-wheel algorithm to distribute the server tasks of each layer, the improved roulette-wheel algorithm being as follows:
The dispatch server obtains the waiting task amount Wi (i ≤ 3) of the next-layer servers; if any server has a waiting task amount of 0, one such server is picked at random as the next-layer candidate server;
If only one server has a waiting task amount below the rated waiting task amount of 100, it is set as the next-layer candidate server;
If n (2 ≤ n ≤ 3) servers have waiting task amounts Wi (1 ≤ i ≤ n) below the rated waiting task amount, then from the number of tasks fi (1 ≤ i ≤ n) each processed in the previous minute compute the strategy ratios ωi = fi/Wi (1 ≤ i ≤ n), set ω0 = 0, and form the cumulative sums Ik = Σ(i=0..k) ωi (0 ≤ k ≤ n); generate a random number r between 0 and 1; if I(i-1)/In < r ≤ Ii/In, server i is chosen as the candidate server;
If no server has a waiting task amount below the rated value of 100, the system enters a waiting state.
3. The design method for an intelligent cloud computing system for test paper background processing according to claim 1, characterized in that:
Convergence layer: connected to the scanning clients, the whole-paper processing layer, the cutting layer, the backup cutting layer, the database and the dispatch server; receives the images sent by the scanning clients, updates the paper state in the core-layer database to converged and records the storage location of the whole paper; according to the assignment of the core-layer dispatch server, it simultaneously sends the whole-paper image to servers of the whole-paper processing layer, the cutting layer and the backup cutting layer;
Whole-paper processing layer: connected to the convergence layer, the database and the dispatch server; hides the candidate's name, candidate number and examination-hall number on the paper sent from the convergence layer, sets the corresponding paper state in the database to encrypted and records the storage location of the encrypted paper;
Cutting layer: connected to the convergence layer, the subjective-item service layer, the database and the dispatch server; cuts the paper sent from the convergence layer according to the cutting template information, sets the corresponding paper state in the database to cut and records the storage location of the cut paper, and, according to the assignment of the dispatch server, makes the cut images available to the subjective-item service layer;
Backup cutting layer: connected to the convergence layer, the subjective-item service layer, the objective-item judging layer, the database and the dispatch server; cuts the paper sent from the convergence layer according to the cutting template information, sets the corresponding paper state in the database to backup-cut and records the storage location of the backup cut paper, and makes it available to the objective-item judging layer;
Subjective-item service layer: connected to the cutting layer, the backup cutting layer and the database; fetches the subjective-item images from the cutting layer so that the subjective items can be marked online, and submits the subjective-item scores of the corresponding papers to the database;
Objective-item judging layer: connected to the backup cutting layer, the objective-item debugging layer, the database and the dispatch server; receives the objective-item images sent from the backup cutting layer, obtains the processed objective-item images after recognition and judges them, sets the corresponding paper state in the database to judged and records the judging result, and sends the processed objective-item images to the objective-item debugging layer according to the assignment of the dispatch server;
Objective-item debugging layer: connected to the objective-item judging layer, the database and the dispatch server; receives the processed objective-item images sent from the objective-item judging layer, restores the raw information of the original images, sets the state of the corresponding paper in the database to restored and records the corresponding storage location;
Core layer: the database records the paper data states, storage locations and objective-item judging results; the dispatch server schedules and distributes the tasks of each layer.
4. The design method for an intelligent cloud computing platform for test paper background processing according to claim 3, characterized in that: the record of a paper's storage location in the database comprises the IP address of the paper storage server and the file directory.
5. The design method for an intelligent cloud computing platform for test paper background processing according to claim 1 or 2, characterized in that: except for the subjective-item service layer and the core layer, every server in each layer has a rated processing task amount and periodically sends its current waiting task amount and average processing speed to the dispatch server, and each server consists of one listening thread, m processing threads, n sending threads, k database-update threads, one pending queue, one to-be-stored queue and one to-be-sent queue.
6. The design method for an intelligent cloud computing platform for test paper background processing according to claim 5, characterized in that: the listening thread monitors the network connections from the upper-layer servers; when a connection arrives, it temporarily starts a client-handling thread paired with the sending thread of the upper-layer server, and the client-handling thread adds the incoming tasks to the pending queue; a processing thread takes a task from the pending queue and, after processing it, adds it to the to-be-stored queue; a database-update thread works through the to-be-stored queue and, after updating the database, adds the task to the to-be-sent queue; the sending thread takes tasks from the to-be-sent queue and, according to the instructions of the dispatch server, either waits or sends them to the designated next-layer server.
CN2011101279189A 2011-05-18 2011-05-18 Design method of intelligent cloud computing system of test paper background processing Pending CN102790783A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011101279189A CN102790783A (en) 2011-05-18 2011-05-18 Design method of intelligent cloud computing system of test paper background processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011101279189A CN102790783A (en) 2011-05-18 2011-05-18 Design method of intelligent cloud computing system of test paper background processing

Publications (1)

Publication Number Publication Date
CN102790783A true CN102790783A (en) 2012-11-21

Family

ID=47156083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011101279189A Pending CN102790783A (en) 2011-05-18 2011-05-18 Design method of intelligent cloud computing system of test paper background processing

Country Status (1)

Country Link
CN (1) CN102790783A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN200969110Y (en) * 2006-04-12 2007-10-31 刘文平 English network examination system based internet and having test paper secrecy system
CN201689422U (en) * 2009-08-24 2010-12-29 深圳市海云天科技股份有限公司 Public network-based computer marking device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张安钰: "基于互联网上的无纸化阅卷系统性能分析与形式化理论", 《中国优秀硕士学位论文》 *
王军亮: "LVS集群中IP负载均衡技术的研究", 《贵州科学》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105516356A (en) * 2016-01-13 2016-04-20 广东小天才科技有限公司 Test question correcting method, communication terminal and server
CN105516356B (en) * 2016-01-13 2019-03-29 广东小天才科技有限公司 Examination question corrects method, communication terminal and server
CN110134736A (en) * 2019-05-14 2019-08-16 德马吉国际展览有限公司 A kind of back-end services management system of internet cloud platform
CN113703948A (en) * 2021-09-03 2021-11-26 四川宇德中创信息科技有限公司 Test paper splitting system and splitting method thereof

Similar Documents

Publication Publication Date Title
CN103856774B (en) A kind of video monitoring intelligent checking system and method
CN103824340B (en) Unmanned plane power transmission line intelligent cruising inspection system and method for inspecting
JP2020529056A (en) Display quality detection methods, devices, electronic devices and storage media
DE112019000744T5 (en) Unsupervised cross-domain distance metric adjustment with feature transmission network
US11704189B1 (en) System and method for autonomous data center operation and healing
CN107423181A (en) The automated testing method and device of a kind of uniform storage device
WO2020038138A1 (en) Sample labeling method and device, and damage category identification method and device
CN111695744B (en) Maintenance equipment demand prediction analysis system based on big data
CN104599064A (en) Operation maintenance management system of data center
CN110458678A (en) A kind of financial data method of calibration and system based on hadoop verification
CN108346153A (en) The machine learning of defects in timber and restorative procedure, device, system, electronic equipment
CN106934507A (en) A kind of new cruising inspection system and method for oil field petrochemical field
CN102790783A (en) Design method of intelligent cloud computing system of test paper background processing
CN114817739B (en) Industrial big data processing system based on artificial intelligence algorithm
CN104504505A (en) Data acquisition system based on process
CN112506612A (en) Cluster inspection method, device and equipment and readable storage medium
KR101888637B1 (en) Analysis methodology and platform architecture system for big data based on manufacturing specialized algorithm template
CN117193088B (en) Industrial equipment monitoring method and device and server
CN105741023A (en) Informationized steel plate quality testing system
CN105681070A (en) Method and system for automatically collecting and analyzing computer cluster node information
CN108416562A (en) Logistics receipt verification method and device
CN112085221A (en) Digital pole and tower intelligent operation and detection method and system
CN115022339B (en) Block chain-based supply chain tracing method and system and electronic equipment
Flotzinger et al. Building inspection toolkit: Unified evaluation and strong baselines for damage recognition
CN115660610A (en) Decentralized cooperative office system and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20121121