CN108639882A - Processing chip based on LSTM network models and the arithmetic unit comprising it - Google Patents
- Publication number
- CN108639882A CN108639882A CN201810413796.1A CN201810413796A CN108639882A CN 108639882 A CN108639882 A CN 108639882A CN 201810413796 A CN201810413796 A CN 201810413796A CN 108639882 A CN108639882 A CN 108639882A
- Authority
- CN
- China
- Prior art keywords
- user
- elevator
- elevator dispatching
- artificial intelligence
- request
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B66—HOISTING; LIFTING; HAULING
- B66B—ELEVATORS; ESCALATORS OR MOVING WALKWAYS
- B66B1/00—Control systems of elevators in general
- B66B1/34—Details, e.g. call counting devices, data transmission from car to control system, devices giving information to the control system
- B66B1/3476—Load weighing or car passenger counting devices
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B66—HOISTING; LIFTING; HAULING
- B66B—ELEVATORS; ESCALATORS OR MOVING WALKWAYS
- B66B1/00—Control systems of elevators in general
- B66B1/02—Control systems without regulation, i.e. without retroactive action
- B66B1/06—Control systems without regulation, i.e. without retroactive action electric
- B66B1/14—Control systems without regulation, i.e. without retroactive action electric with devices, e.g. push-buttons, for indirect control of movements
- B66B1/18—Control systems without regulation, i.e. without retroactive action electric with devices, e.g. push-buttons, for indirect control of movements with means for storing pulses controlling the movements of several cars or cages
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B66—HOISTING; LIFTING; HAULING
- B66B—ELEVATORS; ESCALATORS OR MOVING WALKWAYS
- B66B5/00—Applications of checking, fault-correcting, or safety devices in elevators
- B66B5/0006—Monitoring devices or performance analysers
- B66B5/0012—Devices monitoring the users of the elevator system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B66—HOISTING; LIFTING; HAULING
- B66B—ELEVATORS; ESCALATORS OR MOVING WALKWAYS
- B66B2201/00—Aspects of control systems of elevators
- B66B2201/20—Details of the evaluation method for the allocation of a call to an elevator car
- B66B2201/222—Taking into account the number of passengers present in the elevator car to be allocated
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B66—HOISTING; LIFTING; HAULING
- B66B—ELEVATORS; ESCALATORS OR MOVING WALKWAYS
- B66B2201/00—Aspects of control systems of elevators
- B66B2201/40—Details of the change of control mode
- B66B2201/403—Details of the change of control mode by real-time traffic data
Abstract
The disclosure provides an artificial intelligence elevator dispatching device that responds to user requests from calling floors, receives user request data from at least one floor, and determines an elevator dispatching scheme. It comprises: a processing chip for receiving the user request data and performing a neural network operation on it, where the output neurons after the operation include the execution queue of the current user requests, and the user request data includes the waiting passenger count of the requesting floor; and an arithmetic unit that determines the elevator dispatching scheme according to the execution queues of at least one user request. The artificial intelligence elevator dispatching device of the disclosure can comprehensively analyze user request data, including the number of waiting passengers, making elevator regulation more accurate and efficient.
Description
Technical field
This disclosure relates to the technical field of information processing, and in particular to an artificial intelligence elevator dispatching device.
Background technology
Existing camera technology works as follows: various optical signals are converted into electrical signals, which are input to and stored in a storage medium or saved directly as tangible storage media; computer software then reads the storage-medium information, performs image recognition, and outputs the desired signal result. The problems with existing smart camera technology are, first, that images cannot be recognized directly in the camera: the whole system needs a wired or wireless connection for data transmission, so the system is bulky and execution efficiency is low; and second, that image recognition is performed mainly in software, so power consumption is high, efficiency is low, and good real-time performance cannot be achieved.
Existing elevator dispatching systems use the sequential pick-up method (referring to the LOOK scheduling algorithm of disk scheduling, known academically as the SCAN algorithm in operating systems), the nearest-car response method, or neural-network balanced dispatching methods that do not take the number of waiting passengers into account.
The problems with existing elevator dispatching technology are: first, when multiple elevators are dispatched jointly, it is difficult to optimize the scheduling so that car resources are used rationally and fully; second, it is difficult to realize an optimization algorithm that minimizes the average response time of boarding requests; in addition, the dispatching mechanism is relatively fixed and cannot be optimized in real time, lacking learning ability and adaptability in practical application scenarios; furthermore, real-time passenger-flow data cannot be detected, so a suitable elevator resource scheduling mechanism cannot be adjusted accordingly.
Invention content
(1) technical problems to be solved
In view of this, the disclosure aims to provide an artificial intelligence elevator dispatching device, so as to at least partly solve the technical problems discussed above.
(2) technical solution
To achieve the above object, the disclosure provides an artificial intelligence elevator dispatching device for responding to user requests from calling floors, receiving user request data from at least one floor, and determining an elevator dispatching scheme, comprising:
a processing chip for receiving the user request data and performing a neural network operation on it, where the output neurons after the operation include the execution queue of the current user requests, and the user request data includes the waiting passenger count of the requesting floor;
an arithmetic unit that determines the elevator dispatching scheme according to the execution queues of at least one user request.
In a further embodiment, the model used in the processing chip for the neural network operation is an LSTM neural network model.
In a further embodiment, the neural network operation in the processing chip includes: initializing the parameters of the LSTM model, obtaining the scheduling cost of the loss function from the user request data, computing the gradient direction of the minimum cost of adding a user to the execution queue, and outputting the execution queue of the current user request group.
In a further embodiment, the scheduling cost of the loss function is the weighted average of the waiting times of the users on each floor in the elevator execution queue.
In a further embodiment, the scheduling cost of the loss function is the total number of floors traveled by the elevator going up and down.
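The two candidate scheduling costs above can be sketched in code. This is an illustrative reading of the claims, not the chip's actual implementation; the floor numbers, waiting times, and importance weights in the example are made up.

```python
def weighted_average_wait(wait_times, weights):
    """Weighted average of per-floor user waiting times in an execution queue."""
    total_weight = sum(weights)
    return sum(w * t for w, t in zip(weights, wait_times)) / total_weight

def total_travel_floors(stops):
    """Total number of floors the elevator travels to serve the stops in order."""
    return sum(abs(b - a) for a, b in zip(stops, stops[1:]))

# Example: three floors waiting 30 s, 60 s, 45 s with importance weights 1, 2, 1
cost1 = weighted_average_wait([30, 60, 45], [1, 2, 1])   # -> 48.75
# Example: car at floor 1 serving floors 5, 3, 8 in that order
cost2 = total_travel_floors([1, 5, 3, 8])                # -> 11
```

Either quantity could serve as the loss whose gradient the chip descends; the first targets waiting time, the second targets travel distance (energy).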
In a further embodiment, the user request data of the processing chip is obtained from a monitoring camera, mobile phone, computer, notebook, or tablet computer.
In a further embodiment, the device further includes a request signal encoder for encoding the user request data so that the processing chip can use it.
In a further embodiment, the device further includes a memory for storing the execution queues of the user requests output by the processing chip.
In a further embodiment, determining the elevator dispatching scheme in the arithmetic unit according to the execution queues of at least one user request includes: counting the types of execution queues in the memory and their numbers, calculating the total busyness of the elevators and the concentration of the passenger flow, and determining an overall elevator dispatching scheme.
In a further embodiment, the device further includes a digital-to-analog converter for converting the digital signal of the elevator dispatching scheme into an analog signal that controls the operation of the elevator motors.
In a further embodiment, the device further includes an input/output unit for receiving the signal of the request signal encoder and passing it to the processing chip as input; it is also used to receive the elevator execution queue output by the processing chip and store it in the memory; and it is further used to read the task queue from the memory, input it to the arithmetic unit, and input the output result of the arithmetic unit to the digital-to-analog signal converter.
(3) advantageous effect
The artificial intelligence elevator dispatching device of the disclosure can comprehensively analyze user request data, including the number of waiting passengers, making elevator regulation more accurate and efficient.
Description of the drawings
Fig. 1 is a schematic cross-section of the artificial intelligence camera of an embodiment of the disclosure.
Fig. 2 is a block diagram of the processor of one embodiment in Fig. 1.
Fig. 3 is a block diagram of the processor of another embodiment in Fig. 1.
Fig. 4 is an application scenario diagram of an elevator dispatching system of an embodiment of the disclosure.
Fig. 5 is a schematic diagram of the elevator dispatching system of an embodiment of the disclosure.
Fig. 6 is a block diagram of the intelligent elevator control device of an embodiment of the disclosure.
Fig. 7 is a block diagram of the processing chip of one embodiment in Fig. 6.
Fig. 8 is a block diagram of the processing chip of another embodiment in Fig. 6.
Fig. 9 is a schematic diagram of the neural network operation of an embodiment of the intelligent elevator control device in Fig. 6.
Fig. 10 is a work flow diagram of the intelligent elevator control device of an embodiment of the disclosure.
Specific implementation mode
The technical solutions in the embodiments of the disclosure are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the disclosure, not all of them. Based on the embodiments of the disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the disclosure.
A first group of schemes of the embodiments of the disclosure provides a camera that can perform artificial neural network operations, and a whole elevator dispatching system containing the camera. With the artificial intelligence camera of the embodiments of the disclosure, the number of people in the captured pictures and/or videos can be analyzed at the camera itself, providing a basis for subsequent analysis.
Fig. 1 is a schematic cross-section of the artificial intelligence camera of an embodiment of the disclosure. As shown in Fig. 1, the artificial intelligence camera 100 of the embodiment includes an imaging part 101 and a processor 102, where the imaging part 101 is used to capture external images and/or video, and the processor 102 is used to convert the images and/or video into face recognition results and to perform a neural network operation with the face recognition results as at least part of the input data; the output neurons after the operation include the number of people in the images and/or video.
The imaging part 101 can be any of the various existing cameras capable of recording images and/or video, obtaining external information through electromagnetic, optical, or other signal sources; its structure can follow the various cameras of the prior art, but prior-art cameras do not include a functional unit or module that counts people by performing neural network operations on image data.
In the embodiments of the disclosure, the role of the processor 102 is to process the images or video frames captured by the imaging part 101, performing artificial neural network operations on its hardware circuits to obtain the people-counting result in the image or video frame. Preferably, the processor is an artificial neural network chip that can perform neural network operations.
Converting the images and/or video into face recognition results can be understood as converting them into data that conforms to the neural network input format, which can then serve as the input neuron data of the input layer. In the neural network operation, the network model used can be any of the various existing models of the prior art, including but not limited to an RNN (recurrent neural network, such as an LSTM long short-term memory network), a CNN (convolutional neural network), or a DNN (deep neural network), and the neurons of the output layer of the neural network include the people-counting result data of the image or video frame.
Fig. 2 is a block diagram of the processor of one embodiment in Fig. 1. As shown in Fig. 2, in some embodiments, the processor includes a storage unit, a control unit, and an arithmetic unit, where the storage unit is used to store the input data (which can serve as input neurons), the neural network parameters, and the instructions; the control unit is used to read dedicated instructions from the storage unit, decode them into arithmetic unit instructions, and input them to the arithmetic unit; and the arithmetic unit is used to execute the corresponding neural network operation on the data according to the arithmetic unit instructions to obtain the output neurons. The storage unit can also store the output neurons obtained after the operation of the arithmetic unit. The neural network parameters here include but are not limited to weights, biases, and activation functions. Preferably, the initialization weights in the parameters are trained face recognition weights, so the artificial neural network operation (namely face recognition inference) can be performed directly, saving the process of training the neural network.
In some embodiments, executing the corresponding neural network operation in the arithmetic unit includes: multiplying the input neurons by the weight data to obtain multiplication results; performing an adder tree operation that adds the multiplication results stage by stage through the adder tree to obtain a weighted sum, which is biased or left as is; and performing an activation function operation on the biased or unbiased weighted sum to obtain the output neurons. Preferably, the activation function can be the sigmoid, tanh, ReLU, or softmax function.
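The multiply / adder-tree / bias / activation pipeline above can be sketched as a minimal software analogue. This is pure Python for illustration; the function names are assumptions, not the chip's interface, and the pairwise summation only mimics the dataflow of a hardware adder tree.

```python
import math

def adder_tree(values):
    """Sum values pairwise, stage by stage, as an adder tree would."""
    while len(values) > 1:
        values = [values[i] + values[i + 1] if i + 1 < len(values) else values[i]
                  for i in range(0, len(values), 2)]
    return values[0] if values else 0.0

def activate(x, kind):
    if kind == "sigmoid":
        return 1.0 / (1.0 + math.exp(-x))
    if kind == "relu":
        return max(0.0, x)
    if kind == "tanh":
        return math.tanh(x)
    return x

def layer_forward(inputs, weights, biases=None, activation="sigmoid"):
    """One layer: multiply, adder-tree sum, optional bias, activation."""
    outputs = []
    for j, row in enumerate(weights):              # one weight row per output neuron
        products = [x * w for x, w in zip(inputs, row)]   # multiply step
        s = adder_tree(products)                          # adder-tree step
        if biases is not None:
            s += biases[j]                                # optional bias
        outputs.append(activate(s, activation))           # activation step
    return outputs

# Two inputs, one output neuron with ReLU:
print(layer_forward([1.0, 2.0], [[0.5, -0.25]], biases=[0.1], activation="relu"))
# -> [0.1]
```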
In some embodiments, as shown in Fig. 2, the processor can also include a DMA (Direct Memory Access) unit for the input data, neural network parameters, and instructions stored in the storage unit, to be called by the control unit and the arithmetic unit; it is further used to write the output neurons back to the storage unit after the arithmetic unit has calculated them.
In some embodiments, as shown in Fig. 2, the processor further includes an instruction cache for caching instructions from the DMA, to be called by the control unit. The instruction cache can be an on-chip cache integrated on the processor during fabrication; it can increase processing speed when instructions are fetched and save overall operation time.
In some embodiments, the processor further includes: an input neuron cache for caching input neurons from the DMA, to be called by the arithmetic unit; a weight cache for caching weights from the DMA, to be called by the arithmetic unit; and an output neuron cache for storing the output neurons obtained after the operation of the arithmetic unit, to be output to the DMA. The above input neuron cache, weight cache, and output neuron cache can also be on-chip caches integrated on the processor by semiconductor technology; they can increase processing speed when being read and written by the arithmetic unit and save overall operation time.
Fig. 3 is a block diagram of the processor of another embodiment in Fig. 1. As shown in Fig. 3, the processor in this embodiment may include a preprocessing unit for preprocessing the images and/or video data captured by the camera and converting them into face recognition results, the face recognition results being data that conforms to the neural network input format. Preferably, the preprocessing includes cutting, Gaussian filtering, binarization, regularization, and/or normalization of the images and/or video data captured by the camera, to obtain data that conforms to the neural network input format. The effect of the preprocessing is to improve the accuracy of the subsequent neural network operation, so as to obtain an accurate people-count judgment.
In some embodiments, the processor can also include a transmission unit for transmitting the people-count data after the operation to external equipment wirelessly and/or by wire. The operation here refers to the operation performed by the arithmetic unit through the neural network; it produces the people-count judgment data of the image or video frame, which can be stored in the DMA or the storage unit and transmitted to the external equipment through the transmission unit for further data analysis and application.
The working process of the above artificial intelligence camera can be:
Step S1, the imaging part obtains the elevator-entrance image signal in real time and converts it into an electrical image signal;
Step S2, the electrical image signal output by the imaging part is input to the input terminal of the processor;
Step S3, after the electrical image signal is processed by the preprocessing unit, the face recognition result, which is data conforming to the neural network input format, is formed and passed into the storage unit, or the electrical image signal is passed directly into the storage unit;
Step S4, the DMA (Direct Memory Access) unit passes the instructions, input neurons (containing the above data conforming to the neural network input format), and weights stored in the storage unit in batches into the instruction cache, the input neuron cache, and the weight cache respectively;
Step S5, the control unit reads instructions from the instruction cache, decodes them into arithmetic unit instructions, and passes them to the arithmetic unit;
Step S6, according to the arithmetic unit instructions, the arithmetic unit executes the corresponding operation. In each layer of the neural network, the operation can be divided into three steps: Step S6.1, the corresponding input neurons are multiplied by the weights; Step S6.2, the adder tree operation is executed, i.e., the results of step S6.1 are added stage by stage through the adder tree to obtain a weighted sum, which is biased as needed or left as is; Step S6.3, the activation function operation is executed on the result obtained in step S6.2 to obtain the output neurons, which are passed into the output neuron cache;
Step S7, steps S4-S6 are repeated, and the final output image people-count judgment result is stored by the DMA into the corresponding judgment-result storage address in the storage unit.
The above artificial intelligence camera can be placed in various occasions known in the art (including but not limited to houses, office areas, inside and outside elevators, shopping malls, workshops, and schools) and paired with various central control systems, providing real-time people-counting data support for the corresponding systems.
Based on the same inventive concept, the embodiments of the disclosure also provide an elevator dispatching system, including: a plurality of artificial intelligence cameras as described in the above embodiments, configured to be installed on different floors of a building, for capturing the people waiting outside the elevator and outputting the people count; and an elevator dispatching device that responds to the user requests of the calling floors, receives input data including the people counts from the artificial intelligence cameras of the calling floors, and determines the elevator dispatching scheme according to the input data.
Fig. 4 is an application scenario diagram of an elevator dispatching system of an embodiment of the disclosure. Referring to Fig. 4, the building shown has at least one elevator 402 (only one is shown in the figure, but it is obviously not limited to one), and the artificial intelligence camera 401 described in the above embodiments is installed outside the elevator of each floor (each floor can have several, preferably one per floor) to capture video outside the elevator. The captured video may include waiting people 403, and the smart camera 401 of each floor can perform neural network operations on each captured video frame to determine the corresponding number of waiting passengers.
Fig. 5 is a schematic diagram of the elevator dispatching system of an embodiment of the disclosure. As shown in Fig. 5, the elevator dispatching device 502 and the smart cameras 5011-501n (n is a natural number greater than 1) can communicate with each other wirelessly and/or by wire; for example, the elevator dispatching device 502 sends control signals to the smart cameras, and the smart cameras send waiting-passenger-count data to the elevator dispatching device 502. The cameras can be labeled to indicate their positions; for example, smart camera 5011 indicates that it is located on the first floor and captures the video of the first floor, while 501n indicates that the corresponding camera is located on the n-th floor and captures the video of the n-th floor.
After the elevator dispatching device 502 receives a boarding request (for example, a waiting passenger presses the elevator up or down button in the corridor), the elevator dispatching system calculates according to the set dispatching algorithm; the factors considered in the calculation include the number of waiting passengers on the requesting floor, and after the calculation an elevator dispatching scheme is formed to control the operation of the elevator group motors.
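One way such a calculation could weigh waiting counts is sketched below. The disclosure leaves the dispatching algorithm open, so the scoring rule here (waiting count traded off against travel distance) is purely an assumption for illustration, as are all names and numbers.

```python
def next_call(car_floor, calls):
    """calls: list of (floor, waiting_count) hall calls. Serve the call with
    the best ratio of waiting passengers to travel distance."""
    def score(call):
        floor, count = call
        return count / (abs(floor - car_floor) + 1)   # +1 avoids division by zero
    return max(calls, key=score)[0]

# Car at floor 3; 2 people waiting at floor 4, 9 people waiting at floor 10:
print(next_call(3, [(4, 2), (10, 9)]))  # -> 10 (the larger crowd wins)
```

With a conventional nearest-car rule, floor 4 would be served first; folding the camera's head count into the score is what changes the answer.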
Hereinafter, another group of schemes of the disclosure is introduced, providing an artificial intelligence elevator dispatching device (so named, and distinguished from conventional elevator dispatching devices, because it can perform neural network operations through internal circuits). The device includes a processing unit that can perform artificial neural network operations; by taking the number of waiting passengers into account in the artificial neural network operation, the dispatching scheme formed is more efficient and accurate. An elevator dispatching system comprising the artificial intelligence elevator control device is also provided.
Fig. 6 is a block diagram of the artificial intelligence elevator dispatching device of an embodiment of the disclosure. As shown in Fig. 6, the artificial intelligence elevator dispatching device 600 may include a processing chip 601 and an arithmetic unit 604. The processing chip 601 is used to receive user request data and perform a neural network operation on it; the output neurons after the operation include the execution queue of the current user requests, and the user request data includes the waiting passenger count of the requesting floor. The arithmetic unit 604 determines the elevator dispatching scheme according to the execution queues of multiple user requests. In the operation, the calculation is carried out by the set neural network model; the network model used can be any of the various existing models of the prior art, including but not limited to an RNN (recurrent neural network), a CNN (convolutional neural network), or a DNN (deep neural network); preferably, the LSTM long short-term memory network model of the RNN family is used for the operation.
The user request data includes but is not limited to a group of digital codes containing one or more requested floors and up/down directions; the calling floor of the elevator and the up or down direction are also digitized through encoding. The user request data can be original input data, or data already processed by the intelligent elevator control device 600 of the embodiments of the disclosure.
The processing chip 601 of the embodiments of the disclosure may include several functional modules. As shown in Fig. 7, the processing chip 601 may include a storage module, a control module, and a computing module, where the storage module is used to store the user request data (which can serve as input neurons), the neural network parameters, and the instructions; the control module is used to read dedicated instructions from the storage module, decode them into computing module instructions, and input them to the computing module; and the computing module is used to execute the corresponding neural network operation on the data according to the computing module instructions to obtain the output neurons. The storage module can also store the output neurons obtained after the operation of the computing module. The neural network parameters here include but are not limited to weights, biases, and activation functions. Preferably, the initialization weights in the parameters are pre-trained weights, or weights previously trained from user request data.
In some embodiments, executing the corresponding neural network operation in the computing module includes: multiplying the input neurons by the weight data to obtain multiplication results; performing an adder tree operation that adds the multiplication results stage by stage through the adder tree to obtain a weighted sum, which is biased or left as is; and performing an activation function operation on the biased or unbiased weighted sum to obtain the output neurons. Preferably, the activation function can be the sigmoid, tanh, ReLU, or softmax function.
In some embodiments, as shown in Fig. 7, the processing chip 601 can also include a direct memory access module for the input data, neural network parameters, and instructions stored in the storage module, to be called by the control module and the computing module; it is further used to write the output neurons back to the storage module after the computing module has calculated them.
In some embodiments, as shown in Fig. 7, the processing chip 601 further includes an instruction cache module for caching instructions from the direct memory access module, to be called by the control module. The instruction cache module can be an on-chip cache integrated in the processing chip 601 during fabrication; it can increase processing speed when instructions are fetched and save overall operation time.
In some embodiments, the processing chip 601 further includes: an input neuron cache module for caching input neurons from the direct memory access module, to be called by the computing module; a weight cache module for caching weights from the direct memory access module, to be called by the computing module; and an output neuron cache module for storing the output neurons obtained after the operation of the computing module, to be output to the direct memory access module. The above input neuron cache module, weight cache module, and output neuron cache module can also be on-chip caches integrated in the processing chip 601 by semiconductor technology; they can increase processing speed when being read and written by the computing module and save overall operation time.
Fig. 8 is a block diagram of the processing chip 601 of another embodiment in Fig. 6. As shown in Fig. 8, the processing chip 601 in this embodiment may include a preprocessing module for preprocessing the user request data and converting it into data that conforms to the neural network input format. The effect of the preprocessing is to improve the accuracy of the subsequent neural network operation, so as to obtain an accurate elevator dispatching scheme.
Fig. 9 is a schematic diagram of the neural network operation of an embodiment of the intelligent elevator control device in Fig. 6. A representative network model with which the processing chip of the embodiments of the disclosure performs the neural network operation is shown in Fig. 9. The neural network includes an input layer, several middle layers, and an output layer; in the figure, circles represent neurons. The neurons of the input layer receive the user request data (the data includes the digital information of one or more user request queues, for example user request queues formed after various encodings); the user request data here includes the number of waiting passengers. After the operation of several hidden layers, the data of the output layer includes the execution queue of the current user request group.
Preferably, an LSTM model may be used as the above neural network model. The LSTM model is composed of multiple multi-layer perceptron models; each perceptron model represents the LSTM state at one moment, and the state at the current moment is codetermined by the states of the previous n-1 moments and the current input. When the user request data xi is continuously input into the network, the output ht is the user request queue at each moment. Moreover, the LSTM model parameters can be initialized and the scheduling cost of the loss function obtained from the relevant user request data; the gradient direction of the minimum cost of adding a user to the execution queue is computed, and the execution queue of the current user request group is output.
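The recurrence described above, where the current state depends on the previous state and the current input, can be sketched as a single scalar LSTM cell. The gate weights below are tiny illustrative constants, not a trained dispatch model, and the variable names are assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, p):
    """One LSTM cell step for scalar input/state; p holds the gate weights."""
    f = sigmoid(p["wf"] * x + p["uf"] * h_prev + p["bf"])    # forget gate
    i = sigmoid(p["wi"] * x + p["ui"] * h_prev + p["bi"])    # input gate
    o = sigmoid(p["wo"] * x + p["uo"] * h_prev + p["bo"])    # output gate
    g = math.tanh(p["wg"] * x + p["ug"] * h_prev + p["bg"])  # candidate state
    c = f * c_prev + i * g                                   # new cell state
    h = o * math.tanh(c)                                     # new hidden state
    return h, c

params = {k: 0.5 for k in
          ("wf", "uf", "bf", "wi", "ui", "bi", "wo", "uo", "bo", "wg", "ug", "bg")}
h, c = 0.0, 0.0
for x in [1.0, 0.0, 2.0]:        # x_t: encoded user requests fed in sequence
    h, c = lstm_step(x, h, c, params)
    # h is the output at each moment (the user request queue in the disclosure)
```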
Moreover, preferably, the parameters of the above LSTM (long short-term memory network), such as weights and biases, can be trained adaptively, generating new LSTM model parameters. Preferably, the above adaptive training process is performed in real time.
Preferably, the scheduling cost of the above loss function can be the weighted average of the waiting times of the users on each floor in the elevator execution queue; that is, the optimization target is a statistic of the average waiting time of all users requesting elevators (shortest waiting time), where the assigned weights are the users' importance levels. The scheduling cost of the loss function can also be the total number of floors traveled by the elevator going up and down; that is, the optimization target is the total distance the elevator runs (most energy-saving), etc.
Preferably, the scheduling cost of the above loss function may use the head count identified by the smart camera as the weighting parameter of each elevator request, so that the priority of a request is weighted by the number of people, which is also reasonable in practical applications.
In some embodiments, the user request data of the processing chip 601 (including the number of waiting passengers) may come from image-capturing devices such as surveillance cameras, mobile phones, computers, laptops, and tablet computers.
As shown in fig. 6, in some embodiments, the artificial intelligence elevator dispatching equipment 600 may include a request signal encoder 602 for encoding the user request data, e.g., encoding the electrical signal input of an elevator button into digital information that the neural network can process. For example, a user request in the input user request group may represent up travel by 1 and down travel by 0 (or, of course, the reverse: 0 for up and 1 for down); together with the binary coding of the floor number and the number of people requesting the elevator on each floor identified by the smart camera, this constitutes the binary request of the input coding. Other coding schemes are certainly possible, constituting as input a request comprising the request direction, the requested floor, or other information.
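A minimal sketch of this coding scheme, assuming illustrative bit widths for the floor number and the camera head count (the disclosure does not fix these widths):

```python
def encode_request(direction_up, floor, people, floor_bits=5, people_bits=4):
    """Encode one request as [direction bit | floor in binary | head count in binary].
    direction: 1 for up, 0 for down (the reverse convention also works)."""
    d = '1' if direction_up else '0'
    return d + format(floor, f'0{floor_bits}b') + format(people, f'0{people_bits}b')
```

For instance, an up request from floor 3 with 2 people waiting encodes as `1` + `00011` + `0010`, a 10-bit binary request string.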
As shown in fig. 6, in some embodiments, the artificial intelligence elevator dispatching equipment 600 may also include a memory 603 for storing the execution queues of user requests output by the processing chip 601. There may be multiple execution queues here (e.g., multiple floors may have elevator demands), and further analysis is needed later to give an integrated dispatching scheme.
As shown in fig. 6, in some embodiments, the arithmetic unit 604 serves to count the types and numbers of execution queues in the memory 603, calculate the overall elevator busyness (number of instructions) and the crowd concentration (stop situation of each floor), and compute the overall elevator dispatching scheme (such as the number of elevators that need to be assembled and the specific stop floors of each elevator). For example, the arithmetic unit 604 may perform the following operations: count the number of occurrences of each floor in the request queue over a recent period of time Δt, taking this as the busyness of each floor (which can be normalized to a percentage by multiplying or dividing by a constant); at the same time, count the total number of requests over the recent period Δt, taking this as the overall elevator busyness (likewise normalizable to a percentage by a constant). The overall busyness determines the number of operating cars, for example: 0-25% busyness, one car operating; 25%-50%, two cars; 50%-75%, three; 75%-100%, four. The thresholds of 25%, 50%, and 75% can be adjusted according to actual conditions. The busyness of each floor serves as that floor's weight in the elevator dispatching loss function; intuitively, the busier a floor, the larger its weight and the larger its influence in the loss function, so it is prioritized when the loss function is optimized.
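The per-floor busyness statistic and the threshold-based car count above can be sketched as follows (a sketch under the stated assumptions: the request log is a flat list of floor numbers, and the thresholds are the adjustable defaults named in the text):

```python
from collections import Counter

def floor_busyness(recent_requests):
    """Per-floor busyness over the last period: occurrence counts of each floor
    in the request queue, normalized to fractions of all requests."""
    counts = Counter(recent_requests)
    total = sum(counts.values())
    return {floor: n / total for floor, n in counts.items()}

def cars_to_run(total_busyness, thresholds=(0.25, 0.50, 0.75)):
    """Map overall busyness in [0, 1] to a number of operating cars:
    one car below the first threshold, one more per threshold crossed."""
    return 1 + sum(total_busyness > t for t in thresholds)
```

The per-floor fractions then serve as the weights of each floor in the dispatching loss function, while the overall busyness selects how many cars operate.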
As shown in fig. 6, in some embodiments, the artificial intelligence elevator dispatching equipment 600 may also include a digital-to-analog converter 606 for converting the digital signal of the elevator dispatching scheme into the analog signal used to control elevator motor operation.
As shown in fig. 6, in some embodiments, the artificial intelligence elevator dispatching equipment 600 may include an input/output unit 605, which receives the signal of the request signal encoder 602 and passes it to the processing chip 601 as input; it also receives the output of the processing chip (the output elevator execution queue) and stores the execution queue in the memory 603; it also reads the stored task queue from the memory 603 and inputs it to the arithmetic unit 604, which counts the elevator busyness and crowd concentration, and inputs the output result to the digital-to-analog signal converter 606 to control the number of operating elevators and the stop floors, so as to control elevator operation intelligently in real time.
As shown in Figure 10, the operation method of the above artificial intelligence elevator dispatching equipment 600 may include:
Step S101: the user request data is preprocessed and passed into the storage module, or passed into the storage module directly;
Step S102: the data is passed in batches, via the direct memory access module, into the instruction cache module, the input neuron cache module, and the weight cache module;
Step S103: the control module reads instructions from the instruction cache module, decodes them, and passes them to the computing module;
Step S104: according to the instructions, the computing module executes the corresponding operations. In each layer of the neural network, the operation is mainly divided into three steps: Step S104.1, the corresponding input neurons are multiplied by the weights; Step S104.2, an add-tree operation is executed, i.e., the results of step S104.1 are added stage by stage through the add tree to obtain the weighted sum, which is biased as needed or left unprocessed; Step S104.3, an activation function operation is executed on the result of step S104.2 to obtain the output neurons, which are passed into the output neuron cache.
Step S105: steps S102 to S104 are repeated until all data operations are finished;
Step S106: the finished result, as the elevator execution ordered sequence, is stored by the direct memory access module at the corresponding result storage address and output to the memory 603;
Step S107: the artificial intelligence elevator dispatching equipment 600 buffers the elevator control queue, counts the elevator busyness and crowd concentration for the current period, determines the number of operating elevators according to the threshold classification of busyness, determines which elevators to deactivate or enable according to the crowd concentration, and then sends out elevator control electrical signals to control elevator operation.
Another set of embodiments of the present disclosure provides an artificial intelligence elevator dispatching system, including multiple artificial intelligence cameras of the above embodiments and the artificial intelligence elevator dispatching equipment of the above embodiments. Through artificial intelligence head-count identification and artificial intelligence analysis of the dispatching scheme, the accuracy and timeliness of elevator dispatching can be improved holistically.
The artificial intelligence camera of the embodiments of the present disclosure may have a built-in processor for neural network computation, and can identify in real time, from the camera footage and through a neural network, the number of people waiting to board at the elevator entrance. Finally, the user's requested floor, the up/down marking, and the number of waiting passengers are encoded as input to the artificial intelligence elevator dispatching equipment of the elevator dispatching system, converted into a signal the processing chip can recognize, and passed into the processing chip to build an elevator dispatching instruction queue; the multiple instruction queues produced are transmitted to the artificial intelligence elevator control equipment for decoding, and converted by the digital-to-analog converter into analog electrical signals to control elevator operation.
The artificial intelligence camera of the embodiments of the present disclosure may be configured, and its corresponding functions realized, with reference to the embodiments described above in conjunction with Figs. 1-4; the artificial intelligence elevator dispatching equipment of the embodiments of the present disclosure may be configured, and its corresponding functions realized, with reference to the artificial intelligence elevator control equipment described above in conjunction with Figs. 6-10, which will not be repeated here.
In the embodiments provided by the present disclosure, it should be noted that the disclosed related apparatus and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely exemplary; for instance, the division into parts or modules is only a division by logical function, and other divisions are possible in actual implementation: multiple parts or modules may be combined or integrated into one system, or some features may be ignored or not executed.
In the present disclosure, the term "and/or" may be used. As used herein, the term "and/or" means one or the other or both (for example, A and/or B means A, or B, or both A and B).
In the above description, for purposes of explanation, numerous specific details are set forth in order to provide a comprehensive understanding of the embodiments of the present disclosure. However, it will be apparent to those skilled in the art that one or more other embodiments may be implemented without some of these specific details. The specific embodiments described are meant not to limit the disclosure but to illustrate it. The scope of the present disclosure is not determined by the specific examples provided above but only by the following claims. In other cases, known circuits, structures, devices, and operations are shown in block diagram form rather than in detail so as not to obscure the understanding of the description. Where deemed appropriate, reference numerals or the endings of reference numerals are repeated across the drawings to indicate optionally corresponding or analogous elements with similar or identical features, unless otherwise specified or obvious.
Various operations and methods have been described. Some methods have been described in a relatively basic manner in flowchart form, but operations may optionally be added to and/or removed from these methods. In addition, although the flowcharts show particular orders of operations according to example embodiments, it should be understood that those particular orders are exemplary. Alternative embodiments may optionally execute the operations in different ways, combine certain operations, interleave certain operations, and so on. The components, features, and specific optional details of the equipment described herein may also optionally apply to the methods described herein, which in various embodiments may be executed by and/or within such equipment.
Each functional unit/subunit/module/submodule in the present disclosure may be hardware; for example, the hardware may be a circuit, including digital circuits, analog circuits, and so on. Physical implementations of hardware structures include but are not limited to physical devices, which include but are not limited to transistors, memristors, and the like. The computing module in the computing device may be any appropriate hardware processor, such as a CPU, GPU, FPGA, DSP, or ASIC. The storage unit may be any appropriate storage medium, such as RRAM, DRAM, SRAM, EDRAM, HBM, or HMC.
It is apparent to those skilled in the art that, for convenience and brevity of description, only the division into the above functional modules is used as an example; in practical applications, the above functions may be allocated to different functional modules as needed, i.e., the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
The specific embodiments described above further explain the purpose, technical solutions, and beneficial effects of the present disclosure in detail. It should be understood that the above are merely specific embodiments of the present disclosure and are not intended to limit it; any modification, equivalent substitution, improvement, etc., made within the spirit and principles of the present disclosure shall be included within its scope of protection.
Claims (11)
1. An artificial intelligence elevator dispatching equipment for responding to user requests from calling floors, receiving user request data of at least one floor, and determining an elevator dispatching scheme, characterized by comprising:
a processing chip, for receiving the user request data and performing neural network computation with the user request data, wherein the output neurons after computation include the execution queue of the current user requests, and the user request data include the floors requested by users and the number of waiting passengers;
an arithmetic unit, for determining the elevator dispatching scheme according to the execution queue of at least one user request.
2. The artificial intelligence elevator dispatching equipment according to claim 1, characterized in that, in the processing chip, the model performing the neural network computation is an LSTM neural network model.
3. The artificial intelligence elevator dispatching equipment according to claim 2, characterized in that the neural network computation in the processing chip comprises:
initializing the LSTM model parameters, obtaining the scheduling cost of the loss function according to the user request data, calculating the gradient direction of the minimum cost of adding a user to the execution queue, and outputting the execution queue of the current user request group.
4. The artificial intelligence elevator dispatching equipment according to claim 3, characterized in that the scheduling cost of the loss function refers to the weighted average of the waiting times of the people on each floor in the elevator's execution queue.
5. The artificial intelligence elevator dispatching equipment according to claim 3, characterized in that the scheduling cost of the loss function is the total number of floors the elevator travels up and down.
6. The artificial intelligence elevator dispatching equipment according to claim 1, characterized in that the user request data of the processing chip come from a surveillance camera, a mobile phone, a computer, a laptop, or a tablet computer.
7. The artificial intelligence elevator dispatching equipment according to claim 1, characterized by further comprising:
a request signal encoder, for encoding the user request data for the processing chip to call.
8. The artificial intelligence elevator dispatching equipment according to claim 7, characterized by further comprising:
a memory, for storing the execution queues of user requests output by the processing chip.
9. The artificial intelligence elevator dispatching equipment according to claim 8, characterized in that determining, in the arithmetic unit, the elevator dispatching scheme according to the execution queue of at least one user request comprises:
counting the types and numbers of execution queues in the memory, calculating the overall elevator busyness and the crowd concentration, and determining the overall elevator dispatching scheme.
10. The artificial intelligence elevator dispatching equipment according to claim 9, characterized by further comprising a digital-to-analog converter for converting the digital signal of the elevator dispatching scheme into the analog signal used to control elevator motor operation.
11. The artificial intelligence elevator dispatching equipment according to claim 10, characterized by further comprising:
an input/output unit, for receiving the signal of the request signal encoder and passing it to the processing chip as input; also for receiving the elevator execution queue output by the processing chip and storing the execution queue in the memory; and also for reading the stored task queue from the memory, inputting it to the arithmetic unit, and inputting the output result of the arithmetic unit to the digital-to-analog signal converter.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810413796.1A CN108639882B (en) | 2018-05-03 | 2018-05-03 | Processing chip based on LSTM network model and arithmetic device comprising same |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108639882A true CN108639882A (en) | 2018-10-12 |
CN108639882B CN108639882B (en) | 2020-02-04 |
Family
ID=63748544
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810413796.1A Active CN108639882B (en) | 2018-05-03 | 2018-05-03 | Processing chip based on LSTM network model and arithmetic device comprising same |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108639882B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110127464A (en) * | 2019-05-16 | 2019-08-16 | 永大电梯设备(中国)有限公司 | A kind of multiple target elevator dispatching system and method based on dynamic optimization |
CN110127475A (en) * | 2019-03-27 | 2019-08-16 | 浙江新再灵科技股份有限公司 | A kind of method and system of elevator riding personnel classification and its boarding law-analysing |
CN110288510A (en) * | 2019-06-11 | 2019-09-27 | 清华大学 | A kind of nearly sensor vision perception processing chip and Internet of Things sensing device |
CN112396171A (en) * | 2019-08-15 | 2021-02-23 | 杭州智芯科微电子科技有限公司 | Artificial intelligence computing chip and signal processing system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1058759A (en) * | 1990-05-29 | 1992-02-19 | 三菱电机株式会社 | Elevator control gear |
JPH08165063A (en) * | 1994-12-13 | 1996-06-25 | Fujitec Co Ltd | Group supervisory operation control device for elevator |
CN202245575U (en) * | 2011-08-23 | 2012-05-30 | 江苏跨域信息科技发展有限公司 | LonWorks technology-based lift monitoring system |
CN105488565A (en) * | 2015-11-17 | 2016-04-13 | 中国科学院计算技术研究所 | Calculation apparatus and method for accelerator chip accelerating deep neural network algorithm |
CN107578014A (en) * | 2017-09-06 | 2018-01-12 | 上海寒武纪信息科技有限公司 | Information processor and method |
Also Published As
Publication number | Publication date |
---|---|
CN108639882B (en) | 2020-02-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108764468A (en) | Artificial neural network processor for intelligent recognition | |
CN108639882A (en) | Processing chip based on LSTM network models and the arithmetic unit comprising it | |
Yu et al. | Intelligent edge: Leveraging deep imitation learning for mobile edge computation offloading | |
Li et al. | A deep learning method based on an attention mechanism for wireless network traffic prediction | |
CN106315319B (en) | A kind of elevator intelligent pre-scheduling method and system | |
CN110347500A (en) | For the task discharging method towards deep learning application in edge calculations environment | |
CN108675071A (en) | High in the clouds cooperative intelligent chip based on artificial neural network processor | |
CN104961009B (en) | Many elevator in parallel operation control method for coordinating based on machine vision and system | |
CN108545556B (en) | Information processing unit neural network based and method | |
CN110390246A (en) | A kind of video analysis method in side cloud environment | |
CN113391607A (en) | Hydropower station gate control method and system based on deep learning | |
CN112906859B (en) | Federal learning method for bearing fault diagnosis | |
CN112668694A (en) | Regional flow prediction method based on deep learning | |
CN116957698A (en) | Electricity price prediction method based on improved time sequence mode attention mechanism | |
CN115469627B (en) | Intelligent factory operation management system based on Internet of things | |
CN115314343A (en) | Source-load-storage resource aggregation control gateway device and load and output prediction method | |
CN117668563B (en) | Text recognition method, text recognition device, electronic equipment and readable storage medium | |
CN215813842U (en) | Hydropower station gate control system based on deep learning | |
CN114169506A (en) | Deep learning edge computing system framework based on industrial Internet of things platform | |
CN113676357A (en) | Decision method for edge data processing in power internet of things and application thereof | |
CN109542513B (en) | Convolutional neural network instruction data storage system and method | |
Lu et al. | Dynamic offloading on a hybrid edge–cloud architecture for multiobject tracking | |
CN102156408B (en) | System and method for tracking and controlling maximum power point in dynamically self-adaptive evolvement process | |
CN116720132A (en) | Power service identification system, method, device, medium and product | |
CN116109058A (en) | Substation inspection management method and device based on deep reinforcement learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||