CN108762818A - Optimized-design server and maintenance method - Google Patents

Optimized-design server and maintenance method

Info

Publication number
CN108762818A
CN108762818A
Authority
CN
China
Prior art keywords
server
gpu
optimization design
traditional
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810538595.4A
Other languages
Chinese (zh)
Inventor
李艳
白云峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou Yunhai Information Technology Co Ltd
Original Assignee
Zhengzhou Yunhai Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou Yunhai Information Technology Co Ltd
Priority to CN201810538595.4A
Publication of CN108762818A
Legal status: Pending

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/4401Bootstrapping
    • G06F9/4411Configuring for operating with peripheral devices; Loading of device drivers
    • G06F9/4413Plug-and-play [PnP]
    • HELECTRICITY
    • H05ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05KPRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
    • H05K5/00Casings, cabinets or drawers for electric apparatus
    • H05K5/02Details
    • H05K5/0217Mechanical details of casings

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Power Sources (AREA)

Abstract

The invention discloses an optimized-design server and a maintenance method. In the optimized server, the traditional server system and the GPU server system are arranged in layers according to function module, and the number of layers is more than one, so that individual layers can be freely pulled out and pushed back while the whole machine keeps running without interruption. A side of every layer except the top layer is provided with a tank chain (cable drag chain) for pulling out the traditional server system or the GPU server system, which keeps the cabling between the layers continuous and prevents parts from coming loose when a layer is pulled out, and the other side of each layer is provided with a fixing buckle, which strengthens the structural integrity of the optimized server.

Description

Optimized-design server and maintenance method
Technical field
The present invention relates to the field of artificial-intelligence servers, and more particularly to an optimized-design server and a maintenance method.
Background art
Artificial intelligence plays an increasingly important role in today's online services; inside companies such as Google, Facebook, Microsoft and Baidu, GPUs have a huge effect in the field of deep learning. A GPU is a large-scale SIMD array with a large number of vector processing units; when performing repetitive streaming computations it can process large amounts of fine-grained data in parallel, needing only a continuous feed of input data to sustain its performance. A CPU is very strong for control-intensive workloads, but for data-intensive workloads, for example deciding what colour every pixel on the screen should show, the CPU's execution mechanism is comparatively constrained and comes under very heavy pressure.
With the flourishing of artificial intelligence, GPUs have come into very wide use, and large data centers have begun to deploy pooled GPU servers in large numbers. Structurally, this kind of traditional server does not support hot maintenance of the GPUs: once the server needs an overhaul or a GPU upgrade, the whole server has to be powered off before the work can be done, and the GPUs cannot be swapped as conveniently as hard disks, fans or power supplies. This brings the entire service to a halt and poses a great challenge to the whole system. There is therefore an urgent need to design and develop a server whose GPUs can be hot-maintained and easily upgraded, so as to meet market demand and keep the whole service running.
Summary of the invention
To solve the above problems, the present invention provides an optimized-design server. In the optimized server the GPUs support hot maintenance, so individual parts can be serviced on their own, and the maintenance method is convenient and efficient.
Based on this, the technical solution of the present invention is as follows:
An optimized-design server comprises a traditional server system, a GPU server system, tank chains (cable drag chains), cables and fixing buckles. The traditional server system and the GPU server system are arranged in layers according to function module, and the number of layers is more than one. A first side of every layer except the top layer is provided with a tank chain for pulling out the traditional server system or the GPU server system, the cables are routed through the tank chains, and a second side of each layer is provided with a fixing buckle.
Further, the traditional server system includes a CPU processor module, a memory module, a network module, a power supply and fans.
Further, the GPU server system is equipped with multiple GPU processor modules.
Further, the number of GPU processor modules is six.
Further, the traditional server system and the GPU server system are placed independently in different layers.
Further, the cables comprise power-supply cables and data-service cables.
In addition, the present invention also provides a maintenance method for the optimized-design server. Using the server described above, when the server is to be upgraded or a part is to be replaced during maintenance, the following steps are carried out (an illustrative software-side sketch of step A is given after the steps):
A. enter the traditional server system and disable the corresponding GPU server system;
B. release the fixing buckle, pull out the GPU server system that needs replacement or maintenance, and push the GPU server system back in after the replacement has succeeded;
C. refit the fixing buckle, re-enter the traditional server system, update the driver and recognize the corresponding GPU server system again; the server then returns to normal service mode.
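The patent itself leaves open how step A disables the GPU server system in software. The sketch below is purely illustrative and not part of the disclosed method: it assumes a Linux host in the traditional server system, GPUs exposed as ordinary PCIe devices, and the standard sysfs hot-plug interface; the PCI address is a placeholder.

```python
# Hypothetical sketch only: logically detach a GPU from the running host
# (step A) before its layer is pulled out (step B). Assumes a Linux host
# and the standard sysfs PCI hot-plug interface; requires root privileges.
from pathlib import Path

def detach_gpu(pci_addr: str) -> None:
    """Ask the kernel to remove the PCI device so it is no longer in use."""
    remove_node = Path(f"/sys/bus/pci/devices/{pci_addr}/remove")
    if not remove_node.exists():
        raise FileNotFoundError(f"PCI device {pci_addr} not found")
    remove_node.write_text("1")  # kernel unbinds the driver and drops the device

if __name__ == "__main__":
    detach_gpu("0000:3b:00.0")  # placeholder bus/device/function of one GPU
```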
Implementing the embodiments of the present invention has the following beneficial effects:
1. In the optimized server of the present invention, the traditional server system and the GPU server system are arranged in layers according to function module, the number of layers is more than one, and individual layers can be freely pulled out and pushed back while the whole machine keeps running without interruption. A side of every layer except the top layer is provided with a tank chain for pulling out the traditional server system or the GPU server system, which keeps the cabling between the layers continuous and prevents parts from coming loose when a layer is pulled out, and the other side of each layer is provided with a fixing buckle, which strengthens the structural integrity of the optimized server.
2. The GPU server system of the present invention is equipped with multiple GPU processor modules, preferably six. When performing artificial-intelligence, deep-learning and neural-network inference workloads, working efficiency is greatly improved, rivalling that of multiple traditional servers.
3. The traditional server system and the GPU server system are placed independently in different layers, and this independent arrangement makes later maintenance easier. In addition, the cables comprise power-supply cables and data-service cables, which are routed through the tank chains and connected to the traditional server system or the GPU server system, so that the server's CPU can access the GPU resources.
4. The method for maintaining the server of the present invention is clear and well organized; when the server undergoes maintenance or an iterative upgrade, uninterrupted operation of the server is achieved and data loss is avoided.
Description of the drawings
Fig. 1 is a schematic diagram of the overall structure of the optimized-design server described in this embodiment.
Fig. 2 is a flow chart of the maintenance method described in this embodiment, carried out with the above optimized-design server.
Reference signs:
1 - traditional server system; 2 - GPU server system; 3 - tank chain; 4 - fixing buckle.
Detailed description of the embodiments
The technical solution of the present invention will be described below clearly and completely with reference to the drawings and embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
As shown in Fig. 1, an optimized-design server according to an embodiment of the present invention comprises a traditional server system 1, a GPU server system 2, tank chains 3, cables and fixing buckles 4. The traditional server system 1 and the GPU server system 2 are arranged in layers according to function module, and the number of layers is more than one; a first side of every layer except the top layer is provided with a tank chain 3 for pulling out the traditional server system 1 or the GPU server system 2, the cables are routed through the tank chains 3, and a second side of each layer is provided with a fixing buckle 4. In the optimized server of this embodiment, which particular layer holds the traditional server system 1 and which holds the GPU server system 2 is not fixed. Individual layers can be freely pulled out and pushed back while the whole machine keeps running without interruption; the tank chain 3 on a side of every layer except the top layer, used for pulling out the traditional server system 1 or the GPU server system 2, keeps the cabling between the layers continuous and prevents parts from coming loose when a layer is pulled out, and the fixing buckle 4 on the other side of each layer strengthens the structural integrity of the optimized server.
The traditional server system 1 includes a CPU processor module, a memory module, a network module, a power supply and fans, and the GPU server system 2 is equipped with multiple GPU processor modules, preferably six; when performing artificial-intelligence, deep-learning and neural-network inference workloads, working efficiency is greatly improved, rivalling that of multiple traditional servers. The traditional server system 1 and the GPU server system 2 are placed independently in different layers, and this independent arrangement makes later maintenance easier. In addition, the cables comprise power-supply cables and data-service cables, which are routed through the tank chains 3 and connected to the traditional server system 1 or the GPU server system 2, so that the server's CPU can access the GPU resources.
In addition, a tank chain 3 is used on a side of each layer except the top layer. The tank chain 3 is suited to applications involving back-and-forth movement and provides both guidance and protection for the cables routed inside it. Each segment of the tank chain 3 consists of left and right link plates and upper and lower cover plates; every segment can be opened and the links rotate freely relative to one another, which makes installation and maintenance easy: the chain is convenient to mount and dismount, no threading is needed, and once the cover plates are opened the cables can simply be laid inside, greatly improving the convenience of cabling. The tank chain 3 withstands high compressive and tensile loads, has good toughness, high elasticity and wear resistance, is flame-retardant, performs stably at high and low temperatures, can be used outdoors, and runs smoothly.
As shown in Fig. 2, the present invention also provides a maintenance method for the optimized-design server. Using the server described above, when the server is to be upgraded or a part is to be replaced during maintenance, the following steps are carried out (an illustrative software-side sketch of step C is given after the list):
A. enter the traditional server system 1 and disable the corresponding GPU server system 2;
B. release the fixing buckle 4, pull out the GPU server system 2 that needs replacement or maintenance, and push the GPU server system 2 back in after the replacement has succeeded;
C. refit the fixing buckle, re-enter the traditional server system 1, update the driver and recognize the corresponding GPU server system 2 again; the server then returns to normal service mode.
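Step C likewise leaves the software side unspecified. As a minimal, purely illustrative sketch, again assuming a Linux host and the standard sysfs PCI interface (the rescan mechanism and the PCI address are assumptions, not part of the patent), the re-inserted GPU server system 2 could be brought back into view and verified as follows; updating or reloading the vendor driver is left to the vendor's own tooling.

```python
# Hypothetical sketch only: after the layer is pushed back in (step C),
# rescan the PCI bus and wait until the GPU device node is visible again.
# Assumes a Linux host and sysfs PCI hot-plug support; requires root.
import time
from pathlib import Path

def rescan_and_verify(pci_addr: str, timeout_s: float = 10.0) -> bool:
    """Trigger a PCI bus rescan and wait for the given device to reappear."""
    Path("/sys/bus/pci/rescan").write_text("1")  # kernel re-enumerates the bus
    device_node = Path(f"/sys/bus/pci/devices/{pci_addr}")
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if device_node.exists():
            return True  # GPU is recognized by the host again
        time.sleep(0.5)
    return False

if __name__ == "__main__":
    ok = rescan_and_verify("0000:3b:00.0")  # placeholder PCI address of the GPU
    print("GPU recognized again" if ok else "GPU not found after rescan")
```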
The method for maintaining the server of the present invention is clear and well organized; when the server undergoes maintenance or an iterative upgrade, uninterrupted operation of the server is achieved and data loss is avoided.
The above is only a preferred embodiment of the present invention. It should be pointed out that those skilled in the art can make several improvements and substitutions without departing from the technical principles of the present invention, and such improvements and substitutions shall also be regarded as falling within the protection scope of the present invention.

Claims (7)

1. An optimized-design server, characterized by comprising a traditional server system, a GPU server system, tank chains, cables and fixing buckles, wherein the traditional server system and the GPU server system are arranged in layers according to function module and the number of layers is more than one, a first side of every layer except the top layer is provided with a tank chain for pulling out the traditional server system or the GPU server system, the cables are routed through the tank chains, and a second side of each layer is provided with a fixing buckle.
2. The optimized-design server of claim 1, characterized in that the traditional server system comprises a CPU processor module, a memory module, a network module, a power supply and fans.
3. The optimized-design server of claim 1, characterized in that the GPU server system is equipped with multiple GPU processor modules.
4. The optimized-design server of claim 3, characterized in that the number of GPU processor modules is six.
5. The optimized-design server of claim 1, characterized in that the traditional server system and the GPU server system are placed independently in different layers.
6. The optimized-design server of claim 1, characterized in that the cables comprise power-supply cables and data-service cables.
7. A maintenance method for an optimized-design server, using the optimized-design server of any one of claims 1 to 6, characterized in that, when the server is to be upgraded or a part is to be replaced during maintenance:
A. enter the traditional server system and disable the corresponding GPU server system;
B. release the fixing buckle, pull out the GPU server system that needs upgrading or maintenance, and push the GPU server system back in after the replacement has succeeded;
C. refit the fixing buckle, re-enter the traditional server system, update the driver and recognize the corresponding GPU server system again; the server then returns to normal service mode.
CN201810538595.4A 2018-05-30 2018-05-30 A kind of optimization design server and maintaining method Pending CN108762818A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810538595.4A CN108762818A (en) 2018-05-30 2018-05-30 A kind of optimization design server and maintaining method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810538595.4A CN108762818A (en) 2018-05-30 2018-05-30 A kind of optimization design server and maintaining method

Publications (1)

Publication Number Publication Date
CN108762818A 2018-11-06

Family

ID=64004001

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810538595.4A Pending CN108762818A (en) 2018-05-30 2018-05-30 A kind of optimization design server and maintaining method

Country Status (1)

Country Link
CN (1) CN108762818A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101896940A (en) * 2007-10-10 2010-11-24 苹果公司 Framework for dynamic configuration of hardware resources
CN203338221U (en) * 2013-05-15 2013-12-11 汉柏科技有限公司 Server case
CN103389777A (en) * 2013-06-26 2013-11-13 汉柏科技有限公司 Storage server
CN104125165A (en) * 2014-08-18 2014-10-29 浪潮电子信息产业股份有限公司 Job scheduling system and method based on heterogeneous cluster
CN106249818A (en) * 2016-08-01 2016-12-21 浪潮电子信息产业股份有限公司 A kind of server node and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘纪红, 潘学俊, 梅栴 et al. (eds.): 《物联网技术及应用》 [Internet of Things Technology and Applications], National Defense Industry Press, 31 December 2011 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112214295A (en) * 2020-09-23 2021-01-12 桂林理工大学 Low-energy-consumption job scheduling method for multi-CPU/GPU heterogeneous server cluster
CN112214295B (en) * 2020-09-23 2024-02-06 桂林理工大学 Low-energy-consumption job scheduling method for multi-CPU/GPU heterogeneous server cluster


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181106