CN113391899A - Method and device for allocating cache way

Info

Publication number
CN113391899A
Authority
CN
China
Prior art keywords
target
cache way
logic area
task list
cache
Prior art date
Legal status
Granted
Application number
CN202110663104.0A
Other languages
Chinese (zh)
Other versions
CN113391899B (en)
Inventor
向谷春
Current Assignee
Beijing Jingdong Zhenshi Information Technology Co Ltd
Original Assignee
Beijing Jingdong Zhenshi Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Zhenshi Information Technology Co Ltd
Priority to CN202110663104.0A
Publication of CN113391899A
Application granted
Publication of CN113391899B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 - Task transfer initiation or dispatching
    • G06F 9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 - Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resources being hardware resources other than CPUs, Servers and Terminals
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 - Administration; Management
    • G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q 10/063 - Operations research, analysis or management
    • G06Q 10/0631 - Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 - Administration; Management
    • G06Q 10/08 - Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q 10/087 - Inventory or stock management, e.g. order filling, procurement or balancing against orders
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 - Indexing scheme relating to G06F9/00
    • G06F 2209/50 - Indexing scheme relating to G06F9/50
    • G06F 2209/5021 - Priority
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Strategic Management (AREA)
  • Operations Research (AREA)
  • Development Economics (AREA)
  • Marketing (AREA)
  • General Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Game Theory and Decision Science (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a method and a device for allocating a cache way, and relates to the field of warehouse logistics. One embodiment of the method comprises: acquiring a target task sheet and the logic areas corresponding to the target task sheet, and judging whether a cache way is to be allocated to the target task sheet, wherein the logic areas corresponding to the target task sheet are the logic areas corresponding to the target collection sheet to which the target task sheet belongs; if so, determining, based on preconfigured logic area combinations, the target logic area combination corresponding to the target task sheet according to the logic areas corresponding to the target task sheet; and determining the target cache way corresponding to the target task sheet according to the pre-established correspondence between logic area combinations and cache ways and the target logic area combination, and allocating the target cache way to the target task sheet. This embodiment allocates cache ways according to the warehouse layout, so that cache ways can be allocated more flexibly.

Description

Method and device for allocating cache way
Technical Field
The invention relates to the field of warehouse logistics, and in particular to a method and a device for allocating a cache way.
Background
In intelligent warehouse logistics, cache ways are allocated according to order-dimension characteristics such as order type and order time. In a specific implementation, when a cache way is allocated at the recheck stage, a collection sheet is assigned to a designated cache way according to the type and the ordering time of the orders it contains. However, items of different types are stored in different logic areas of the warehouse and have different recheck and packing requirements, so it is desirable to allocate cache ways at the recheck stage in combination with the warehouse layout. In the prior art, cache ways are allocated only according to order type or order time, and allocation in combination with the warehouse layout cannot be achieved.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method and an apparatus for allocating a cache way, so that cache ways can be allocated according to the warehouse layout and therefore more flexibly.
To achieve the above object, according to one aspect of the embodiments of the present invention, a method for allocating a cache way is provided.
The method for allocating a cache way of the embodiment of the invention comprises: acquiring a target task sheet and the logic areas corresponding to the target task sheet, and judging whether a cache way is to be allocated to the target task sheet, wherein the logic areas corresponding to the target task sheet are the logic areas corresponding to the target collection sheet to which the target task sheet belongs; if yes, determining, based on preconfigured logic area combinations, the target logic area combination corresponding to the target task sheet according to the logic areas corresponding to the target task sheet; and determining the target cache way corresponding to the target task sheet according to the pre-established correspondence between logic area combinations and cache ways and the target logic area combination, and allocating the target cache way to the target task sheet.
Optionally, determining the target cache way corresponding to the target task sheet according to the pre-established correspondence between logic area combinations and cache ways and the target logic area combination, and allocating the target cache way to the target task sheet, comprises: if the target task sheet allocates a cache way by task-sheet dimension, acquiring the first marked cache ways corresponding to the target task sheet and the priority order of the first marked cache ways according to the correspondence and the target logic area combination; and selecting the target cache way from the first marked cache ways according to their priority order and their states, and allocating the target cache way to the target task sheet.
Optionally, if the target task sheet allocates a cache way by task-sheet dimension, the method further comprises: if the target logic area combination is empty, the first marked cache ways are empty, or the first marked cache ways are in a non-idle state, allocating a first unmarked cache way to the target task sheet according to the order-dimension information corresponding to the target collection sheet.
Optionally, determining the target cache way corresponding to the target task sheet according to the pre-established correspondence between logic area combinations and cache ways and the target logic area combination, and allocating the target cache way to the target task sheet, comprises: if the target task sheet allocates a cache way by collection-sheet dimension, acquiring the second marked cache ways corresponding to the target collection sheet according to the correspondence and the target logic area combination; and if a second marked cache way is in an idle state, allocating the target cache way to the target collection sheet according to the priority order of the logic area combinations corresponding to that second marked cache way, so as to allocate the target cache way to the target task sheet, wherein the target cache way is one of the second marked cache ways, and the priority order of the logic area combinations corresponding to the second marked cache way is obtained from the correspondence.
Optionally, if the target task sheet allocates a cache way by collection-sheet dimension, the method further comprises: if a second unmarked cache way is in an idle state, allocating the second unmarked cache way to the target task sheet according to the order-dimension information corresponding to the target collection sheet.
Optionally, the number of logic area combinations is at least one; and determining, based on the preconfigured logic area combinations, the target logic area combination corresponding to the target task sheet according to the logic areas corresponding to the target task sheet comprises: judging, one by one in the priority order of the logic area combinations, whether a logic area combination includes the logic areas corresponding to the target task sheet; if yes, determining that logic area combination as the target logic area combination corresponding to the target task sheet; and if no combination does, determining that the target logic area combination corresponding to the target task sheet is empty.
Optionally, the logic area combinations are determined according to the following process: acquiring at least one logic area; grouping the at least one logic area according to the type information of the items stored in each logic area; and determining the logic area combinations corresponding to the at least one logic area according to the grouping result.
Optionally, the correspondence between logic area combinations and cache ways is established according to the following process: obtaining marked cache ways, wherein a marked cache way is a cache way marked with a logic area combination; obtaining, according to the logic area combinations corresponding to the marked cache ways, the cache ways corresponding to each logic area combination, and setting the priority order of the cache ways corresponding to each logic area combination; and establishing the relation among each logic area combination, the cache ways corresponding to it, and the priority order of those cache ways.
Optionally, judging whether a cache way is to be allocated to the target task sheet comprises: judging whether the target task sheet allocates a cache way by task-sheet dimension; if yes, determining that a cache way is to be allocated to the target task sheet; and if not, determining that a cache way is to be allocated to the target task sheet once the target collection sheet to which the target task sheet belongs has finished picking.
Optionally, after judging whether the target task sheet allocates a cache way by task-sheet dimension, the method further comprises: if the target task sheet allocates a cache way by task-sheet dimension, judging whether a cache way has already been allocated to another task sheet contained in the target collection sheet; if yes, determining the cache way allocated to the other task sheet as the target cache way, and allocating the target cache way to the target task sheet; and if not, determining that a cache way is to be allocated to the target task sheet.
To achieve the above object, according to still another aspect of the embodiments of the present invention, an apparatus for allocating a cache way is provided.
The apparatus for allocating a cache way of the embodiment of the invention comprises: an obtaining module, configured to acquire a target task sheet and the logic areas corresponding to the target task sheet, and judge whether a cache way is to be allocated to the target task sheet, wherein the logic areas corresponding to the target task sheet are the logic areas corresponding to the target collection sheet to which the target task sheet belongs; a determining module, configured to determine, if a cache way is to be allocated, the target logic area combination corresponding to the target task sheet according to the logic areas corresponding to the target task sheet based on preconfigured logic area combinations; and an allocating module, configured to determine the target cache way corresponding to the target task sheet according to the pre-established correspondence between logic area combinations and cache ways and the target logic area combination, and to allocate the target cache way to the target task sheet.
Optionally, the allocating module is further configured to: if the target task sheet allocates a cache way by task-sheet dimension, acquire the first marked cache ways corresponding to the target task sheet and the priority order of the first marked cache ways according to the correspondence and the target logic area combination; and select the target cache way from the first marked cache ways according to their priority order and their states, and allocate the target cache way to the target task sheet.
Optionally, the allocating module is further configured to: if the target logic area combination is empty, the first marked cache ways are empty, or the first marked cache ways are in a non-idle state, allocate a first unmarked cache way to the target task sheet according to the order-dimension information corresponding to the target collection sheet.
Optionally, the allocating module is further configured to: if the target task sheet allocates a cache way by collection-sheet dimension, acquire the second marked cache ways corresponding to the target collection sheet according to the correspondence and the target logic area combination; and if a second marked cache way is in an idle state, allocate the target cache way to the target collection sheet according to the priority order of the logic area combinations corresponding to that second marked cache way, so as to allocate the target cache way to the target task sheet, wherein the target cache way is one of the second marked cache ways, and the priority order of the logic area combinations corresponding to the second marked cache way is obtained from the correspondence.
Optionally, the allocating module is further configured to: if a second unmarked cache way is in an idle state, allocate the second unmarked cache way to the target task sheet according to the order-dimension information corresponding to the target collection sheet.
Optionally, the number of logic area combinations is at least one; and the determining module is further configured to: judge, one by one in the priority order of the logic area combinations, whether a logic area combination includes the logic areas corresponding to the target task sheet; if yes, determine that logic area combination as the target logic area combination corresponding to the target task sheet; and if no combination does, determine that the target logic area combination corresponding to the target task sheet is empty.
Optionally, the apparatus further comprises a configuration module, configured to determine the logic area combinations according to the following process: acquiring at least one logic area; grouping the at least one logic area according to the type information of the items stored in each logic area; and determining the logic area combinations corresponding to the at least one logic area according to the grouping result.
Optionally, the configuration module is further configured to establish the correspondence between logic area combinations and cache ways according to the following process: obtaining marked cache ways, wherein a marked cache way is a cache way marked with a logic area combination; obtaining, according to the logic area combinations corresponding to the marked cache ways, the cache ways corresponding to each logic area combination, and setting the priority order of the cache ways corresponding to each logic area combination; and establishing the relation among each logic area combination, the cache ways corresponding to it, and the priority order of those cache ways.
Optionally, the obtaining module is further configured to: judge whether the target task sheet allocates a cache way by task-sheet dimension; if yes, determine that a cache way is to be allocated to the target task sheet; and if not, determine that a cache way is to be allocated to the target task sheet once the target collection sheet to which the target task sheet belongs has finished picking.
Optionally, the obtaining module is further configured to: if the target task sheet allocates a cache way by task-sheet dimension, judge whether a cache way has already been allocated to another task sheet contained in the target collection sheet; if yes, determine the cache way allocated to the other task sheet as the target cache way, and allocate the target cache way to the target task sheet; and if not, determine that a cache way is to be allocated to the target task sheet.
To achieve the above object, according to still another aspect of an embodiment of the present invention, there is provided an electronic apparatus.
An electronic device of an embodiment of the present invention comprises: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method for allocating a cache way of the embodiment of the invention.
To achieve the above object, according to still another aspect of an embodiment of the present invention, there is provided a computer-readable medium.
A computer-readable medium of an embodiment of the present invention has a computer program stored thereon which, when executed by a processor, implements the method for allocating a cache way of the embodiment of the invention.
One embodiment of the above invention has the following advantage or beneficial effect: in the method for allocating a cache way, the logic areas corresponding to the target collection sheet to which the target task sheet belongs are obtained first, the target logic area combination corresponding to the target task sheet is then determined based on the preconfigured logic area combinations, and the target cache way can then be allocated to the target task sheet according to the established correspondence between logic area combinations and cache ways in combination with the target logic area combination, so that cache ways are allocated according to the warehouse layout and can be allocated more flexibly.
Further effects of the above optional implementations are described below in connection with the specific embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic diagram of the main steps of a method for allocating a cache way according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the main process of judging whether a cache way is to be allocated to a target task sheet according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the main process of determining logic area combinations according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the main process of establishing the correspondence between logic area combinations and cache ways according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the main modules of an apparatus for allocating a cache way according to an embodiment of the present invention;
FIG. 6 is an exemplary system architecture diagram to which embodiments of the present invention may be applied;
FIG. 7 is a schematic structural diagram of a computer system suitable for implementing a terminal device or a server of an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention are described below with reference to the accompanying drawings, in which various details of embodiments of the invention are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The following terms are used in the embodiments of the present invention:
WMS: Warehouse Management System.
WCS: Warehouse Control System, a management and control system located between the WMS and the PLC. It exchanges information with the WMS, receives instructions from the WMS, and sends instructions to the PLC to drive the Shuttle and the conveying equipment to perform the corresponding mechanical actions.
Logic area: an area divided in the warehouse-layout dimension. A warehouse can be divided into several logic areas in its layout, and one logic area is equivalent to a set of storage areas. In the embodiments of the present invention it refers to a picking logic area.
Collection sheet: a set of orders combined according to preset rules. The preset rules may be combining orders of the same order type, combining orders placed within a certain time range, combining orders with the same or similar item types, combining orders with the same or similar logic areas, and so on.
Task sheet: a collection sheet is split into several task sheets according to information such as the logic areas where the items are located, the volume of the items, the weight of the items, and the number of orders. In the embodiments of the present invention it refers to a picking task sheet.
Shuttle: a storage device.
Cache way: a buffering device built from a conveying line, which can hold picked bins for a recheck operator to take.
After receiving orders, the WMS acquires the item information contained in the orders and combines the orders according to the preset rules to generate collection sheets. The WMS then splits each collection sheet into several task sheets according to information such as the logic areas where the items are located, the volume and weight of the items, and the number of orders, so that items are picked in the logic areas according to the task sheets. After a task sheet has been picked, its picking task bin (i.e., the bin holding the picked items) is placed on the picking-area conveying line. The WMS also sends the task sheet to the WCS, and the WCS allocates a cache way for the task sheet so that the picking-area conveying line can deliver the picking task bin to that cache way. Finally, a recheck operator takes the bin from the cache way for rechecking, and the rechecked bin is handed to a packer for packing, which completes the warehouse production flow. In the prior art, cache ways can only be allocated according to the order type or order time of the orders under a collection sheet, and cannot be allocated according to the warehouse layout. To solve this problem, an embodiment of the present invention provides a method for allocating a cache way. FIG. 1 is a schematic diagram of the main steps of the method for allocating a cache way according to an embodiment of the present invention. As shown in FIG. 1, the main steps of the method may include:
Step S101, acquiring a target task sheet and the logic areas corresponding to the target task sheet, and judging whether a cache way is to be allocated to the target task sheet;
Step S102, if yes, determining, based on preconfigured logic area combinations, the target logic area combination corresponding to the target task sheet according to the logic areas corresponding to the target task sheet;
Step S103, determining the target cache way corresponding to the target task sheet according to the pre-established correspondence between logic area combinations and cache ways and the target logic area combination, and allocating the target cache way to the target task sheet.
The target task sheet is a task sheet that has finished picking, i.e., a task sheet sent by the WMS to the WCS. The logic areas corresponding to the target task sheet are the logic areas corresponding to the target collection sheet to which the target task sheet belongs. To facilitate rechecking, in the actual production flow all task sheets contained in one collection sheet can be allocated to the same cache way. The target cache way corresponding to the target task sheet, i.e., the cache way corresponding to the target collection sheet, can therefore be obtained by analyzing the logic area combination corresponding to the target collection sheet. For example, collection sheet D contains task sheets d1 and d2; the items of d1 are distributed in logic areas 1, 2, and 3, and the items of d2 in logic areas 3 and 4, so collection sheet D spans logic areas 1, 2, 3, and 4. That is, the logic areas corresponding to collection sheet D are 1, 2, 3, and 4, and the logic areas corresponding to its task sheets d1 and d2 are likewise 1, 2, 3, and 4.
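To make the relationship between collection sheets, task sheets, and logic areas concrete, the sketch below models the example of collection sheet D. The patent does not define any data structures, so the class names and fields here are assumptions used purely for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class TaskSheet:
    sheet_id: str
    logic_areas: set[str]  # logic areas spanned by the items on this task sheet

@dataclass
class CollectionSheet:
    sheet_id: str
    task_sheets: list[TaskSheet] = field(default_factory=list)

    def logic_areas(self) -> set[str]:
        """Logic areas of a collection sheet: the union of its task sheets' areas."""
        areas: set[str] = set()
        for task in self.task_sheets:
            areas |= task.logic_areas
        return areas

# Collection sheet D contains d1 (areas 1, 2, 3) and d2 (areas 3, 4), so D -- and
# every task sheet under it -- corresponds to logic areas {1, 2, 3, 4}.
d1 = TaskSheet("d1", {"1", "2", "3"})
d2 = TaskSheet("d2", {"3", "4"})
D = CollectionSheet("D", [d1, d2])
assert D.logic_areas() == {"1", "2", "3", "4"}
```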
In step S101, after the target task sheet is acquired, it is necessary to judge whether a cache way is to be allocated to the target task sheet. A specific judging method may be: judging whether the target task sheet allocates a cache way by task-sheet dimension; if yes, determining that a cache way is to be allocated to the target task sheet; if not, determining that a cache way is to be allocated to the target task sheet once the target collection sheet to which the target task sheet belongs has finished picking.
Cache ways are allocated differently for collection sheets with different structures. Specifically, the data structure of a collection sheet can be divided into single-piece, multi-piece non-merge, and multi-piece merge. A single-piece collection sheet contains only one order; a multi-piece non-merge collection sheet contains multiple orders located in the same logic area, so no merging is needed; a multi-piece merge collection sheet contains multiple orders located in different logic areas that need to be merged. Correspondingly, a cache way that handles single-piece collection sheets can be called a single-piece cache way, a cache way that handles multi-piece non-merge collection sheets a multi-piece non-merge cache way, and a cache way that handles multi-piece merge collection sheets a multi-merge cache way. According to the merging mode, multi-merge cache ways can be further divided into multi-merge into-Shuttle cache ways, multi-merge non-Shuttle cache ways, and manual-merge cache ways. For a multi-merge into-Shuttle cache way, merging is completed in the Shuttle before the goods are delivered to the cache way; for a multi-merge non-Shuttle cache way, merging is performed on the cache way itself without entering the Shuttle; a manual-merge cache way is one that can serve several multi-piece merge collection sheets at the same time. Single-piece cache ways, multi-piece non-merge cache ways, multi-merge non-Shuttle cache ways, and manual-merge cache ways can be allocated directly; a multi-merge into-Shuttle cache way cannot be allocated directly, because merging must first be completed in the Shuttle.
Therefore, after the target task sheet is obtained, it can be judged whether the target task sheet allocates a cache way by task-sheet dimension. (I) If the target task sheet allocates a cache way by task-sheet dimension, a cache way can be allocated to the target task sheet as soon as that task sheet finishes picking; that is, the cache way corresponding to the target collection sheet to which the target task sheet belongs is a single-piece cache way, a multi-piece non-merge cache way, a multi-merge non-Shuttle cache way, or a manual-merge cache way. (II) If the target task sheet allocates a cache way by collection-sheet dimension, a cache way can only be allocated to the target collection sheet after the whole target collection sheet to which the target task sheet belongs has finished picking, and the cache way is thereby allocated to the target task sheet; that is, the cache way corresponding to the target collection sheet is a multi-merge into-Shuttle cache way.
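A compact sketch of this taxonomy; the enum names are invented here for illustration, since the patent only names the categories in prose.

```python
from enum import Enum, auto

class CollectionSheetStructure(Enum):
    SINGLE_PIECE = auto()          # contains only one order
    MULTI_PIECE_NO_MERGE = auto()  # several orders, all in one logic area, no merging
    MULTI_PIECE_MERGE = auto()     # several orders across logic areas, merging required

class CacheWayType(Enum):
    SINGLE_PIECE = auto()
    MULTI_PIECE_NO_MERGE = auto()
    MULTI_MERGE_NON_SHUTTLE = auto()   # merging done on the cache way, not in the Shuttle
    MULTI_MERGE_INTO_SHUTTLE = auto()  # merging done in the Shuttle first, then cached
    MANUAL_MERGE = auto()              # one cache way serves several merged collection sheets

# Only the multi-merge into-Shuttle type requires merging before a cache way can be
# assigned, so only that type is allocated by collection-sheet dimension; the others
# are allocated by task-sheet dimension as soon as a task sheet finishes picking.
ALLOCATED_BY_COLLECTION_SHEET = {CacheWayType.MULTI_MERGE_INTO_SHUTTLE}
```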
As an embodiment of the present invention, after it is judged whether the target task sheet allocates a cache way by task-sheet dimension, if it does, it is further judged whether a cache way has already been allocated to another task sheet contained in the target collection sheet; if yes, the cache way allocated to that other task sheet is determined as the target cache way and is allocated to the target task sheet; if not, it is determined that a cache way is to be allocated to the target task sheet.
As explained above, to facilitate rechecking, all task sheets contained in one collection sheet are allocated to the same cache way in the actual production flow. Besides the target task sheet, the target collection sheet may contain other task sheets, and the target task sheet must be allocated the same cache way as those other task sheets. Therefore, when the target task sheet allocates a cache way by task-sheet dimension, it is necessary to judge whether a cache way has already been allocated to another task sheet; if so, that cache way is directly determined as the target cache way and allocated to the target task sheet. For example, collection sheet D contains task sheets d1 and d2 and allocates cache ways by task-sheet dimension. After d1 finishes picking, it must be judged whether a cache way has already been allocated to d2: if yes, d1 is allocated to the same cache way as d2; if not, the target cache way corresponding to d1 must be determined. After d2 finishes picking, d2 is allocated to the target cache way corresponding to d1.
FIG. 2 is a schematic diagram of the main process of judging whether a cache way is to be allocated to the target task sheet according to an embodiment of the present invention. As shown in FIG. 2, the main process may include:
Step S201, judging whether the target task sheet allocates a cache way by task-sheet dimension; if yes, executing step S202, and if not, executing step S205;
Step S202, judging whether a cache way has already been allocated to another task sheet contained in the target collection sheet; if yes, executing step S203, and if not, executing step S204;
Step S203, determining the cache way allocated to the other task sheet as the target cache way, and directly allocating the target cache way to the target task sheet;
Step S204, determining that a cache way is to be allocated to the target task sheet;
Step S205, judging whether the target collection sheet to which the target task sheet belongs has finished picking; if yes, executing step S204.
It should be noted that if the target collection sheet contains no other task sheet, it is determined that no cache way has been allocated to another task sheet of the target collection sheet, and step S204 is executed to determine that a cache way is to be allocated to the target task sheet. In addition, if it is determined in step S205 that the target collection sheet has not finished picking, a cache way can only be allocated to the target collection sheet, and thereby to the target task sheet, after the target collection sheet finishes picking.
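A minimal sketch of this decision flow (steps S201 to S205). The helper callables and the task_sheets attribute are assumptions; none of these names appear in the patent.

```python
from typing import Callable, Optional

def decide_allocation(
    target_task_sheet,
    target_collection_sheet,
    allocates_by_task_sheet_dimension: Callable[[object], bool],
    cache_way_of: Callable[[object], Optional[str]],
    picking_finished: Callable[[object], bool],
) -> Optional[str]:
    """Return a cache way id already assigned to a sibling task sheet (to reuse it),
    the sentinel 'ALLOCATE' when the allocation procedure should run now,
    or None when allocation must wait."""
    if allocates_by_task_sheet_dimension(target_task_sheet):             # S201
        for other in target_collection_sheet.task_sheets:                # S202
            if other is not target_task_sheet:
                way = cache_way_of(other)
                if way is not None:
                    return way                                           # S203: reuse it
        return "ALLOCATE"                                                # S204
    # Collection-sheet dimension: wait until the whole collection sheet has been picked.
    if picking_finished(target_collection_sheet):                        # S205
        return "ALLOCATE"                                                # S204
    return None                                                          # keep waiting
```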
In step S102, if it is determined that a cache way is to be allocated to the target task sheet, the target logic area combination corresponding to the target task sheet is determined, based on the preconfigured logic area combinations, according to the logic areas corresponding to the target task sheet.
The logic area combinations are obtained by grouping the logic areas contained in the warehouse. FIG. 3 is a schematic diagram of the main process of determining the logic area combinations according to an embodiment of the present invention. As shown in FIG. 3, the logic area combinations are determined according to the following process:
Step S301, acquiring at least one logic area;
Step S302, grouping the at least one logic area according to the type information of the items stored in each logic area;
Step S303, determining the logic area combinations corresponding to the at least one logic area according to the grouping result.
For example, a warehouse contains logic areas 1 to 8, where logic areas 1 to 4 store electronic items and logic areas 5 to 8 store personal items. Logic areas 1 to 4 can then be defined as a first logic area combination, logic areas 5 to 8 as a second logic area combination, and logic areas 1 to 8 as a third logic area combination. In addition, because different types of items use different packing consumables and require different recheck and packing skills, the logic areas can also be grouped according to recheck and packing requirements, or according to business requirements. Clearly, the number of logic area combinations is at least one, and it can be expanded as actually needed; in the example above, logic areas 1, 3, and 6 could also be defined as a fourth logic area combination according to actual requirements.
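A sketch of this grouping step (FIG. 3, steps S301 to S303), assuming each logic area carries an item-type label; the function and variable names are illustrative only.

```python
from collections import defaultdict

def build_logic_area_combinations(area_item_types: dict[str, str]) -> list[set[str]]:
    """Group logic areas by the type of item stored in them (S302) and return one
    combination per group, plus a combination covering all areas (S303)."""
    groups: dict[str, set[str]] = defaultdict(set)
    for area, item_type in area_item_types.items():    # S301 / S302
        groups[item_type].add(area)
    combinations = list(groups.values())
    combinations.append(set(area_item_types))           # e.g. logic areas 1-8 as a whole
    return combinations

# Example from the text: areas 1-4 hold electronic items, areas 5-8 hold personal items.
areas = {str(i): ("electronic" if i <= 4 else "personal") for i in range(1, 9)}
combos = build_logic_area_combinations(areas)
# combos -> [{'1','2','3','4'}, {'5','6','7','8'}, {'1','2',...,'8'}]
```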
In step S102, the target logic area combination corresponding to the target task sheet can be determined from the logic areas corresponding to the target task sheet by using the preconfigured logic area combinations. A specific implementation is: judging, one by one in the priority order of the logic area combinations, whether a logic area combination includes the logic areas corresponding to the target task sheet; if yes, determining that logic area combination as the target logic area combination corresponding to the target task sheet; and if no combination does, determining that the target logic area combination corresponding to the target task sheet is empty.
That is, in addition to determining the logic area combinations, a priority order of the logic area combinations can be set. For example, the priority order from high to low is set as: the first logic area combination, the second logic area combination, the third logic area combination, the fourth logic area combination. In the process of determining the target logic area combination corresponding to the target task sheet, it is then judged in this order whether each combination covers the logic areas corresponding to the target task sheet. For example, it is first judged whether the first logic area combination includes all the logic areas corresponding to the target task sheet; if yes, the target logic area combination is the first logic area combination; if not, it is judged whether the second logic area combination includes all the logic areas corresponding to the target task sheet; if yes, the target logic area combination is the second logic area combination; if not, it is judged whether the third logic area combination includes them, and so on. If none of the logic area combinations includes all the logic areas corresponding to the target task sheet, the target logic area combination corresponding to the target task sheet is considered to be empty.
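A sketch of this lookup, assuming the preconfigured combinations are given as sets in priority order; returning None stands for the "empty" target combination.

```python
from typing import Optional

def find_target_combination(task_sheet_areas: set[str],
                            combinations_by_priority: list[set[str]]) -> Optional[set[str]]:
    """Walk the combinations from highest to lowest priority and return the first
    one that covers every logic area of the task sheet, or None if none does."""
    for combination in combinations_by_priority:
        if task_sheet_areas <= combination:       # the combination covers all areas
            return combination
    return None                                    # target logic area combination is "empty"

# With combinations [{'1'..'4'}, {'5'..'8'}, {'1'..'8'}] and a task sheet spanning
# {'1', '2', '3', '4'}, the first (highest-priority) combination is returned.
```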
The target logic area combination corresponding to the target task sheet is determined in step S102; in step S103 the cache way can then be allocated according to the target logic area combination by using the correspondence between logic area combinations and cache ways. FIG. 4 is a schematic diagram of the main process of establishing the correspondence between logic area combinations and cache ways according to an embodiment of the present invention. As shown in FIG. 4, the correspondence may be established according to the following process:
Step S401, obtaining marked cache ways, where a marked cache way is a cache way that has been marked with a logic area combination.
If a cache way is marked with a certain logic area combination, that cache way is used to cache the items stored in that logic area combination. It should be noted that a cache way may be marked with one or more logic area combinations. A cache way may also carry no logic area combination mark; such a cache way is defined as an unmarked cache way.
Step S402, obtaining, according to the logic area combinations corresponding to the marked cache ways, the cache ways corresponding to each logic area combination, and setting the priority order of the cache ways corresponding to each logic area combination.
After the marked cache ways are obtained, the logic area combinations corresponding to them, i.e., the marked logic area combinations, can be obtained. The cache ways corresponding to each marked logic area combination are then obtained, and the priority order of those cache ways is set, i.e., the first-priority cache way, the second-priority cache way, and so on for each logic area combination.
For example, cache ways A, B, and C are marked cache ways: the logic area combinations corresponding to cache way A are the first and third logic area combinations, those corresponding to cache way B are the second and third logic area combinations, and those corresponding to cache way C are the first, second, and third logic area combinations. Thus the first logic area combination corresponds to cache ways A and C, the second to cache ways B and C, and the third to cache ways A, B, and C. For the first logic area combination the cache way priority order is set to A, C; for the second logic area combination, to B, C; and for the third logic area combination, to C, A, B.
Step S403, establishing the relation among each logic area combination, the cache ways corresponding to it, and the priority order of those cache ways.
Table 1 shows an established correspondence between logic area combinations and cache ways. As shown in Table 1, the first-priority cache way corresponding to the first logic area combination is A and the second-priority cache way is C; for the second logic area combination the first-priority cache way is B and the second-priority cache way is C; and for the third logic area combination the first-priority cache way is C, the second-priority cache way is A, and the third-priority cache way is B. Clearly, the established correspondence is a correspondence between logic area combinations and marked cache ways, and each priority cache way corresponding to a logic area combination can be obtained from it.
TABLE 1 Correspondence between logic area combinations and cache ways

Logic area combination | First-priority cache way | Second-priority cache way | Third-priority cache way
First logic area combination | Cache way A | Cache way C | -
Second logic area combination | Cache way B | Cache way C | -
Third logic area combination | Cache way C | Cache way A | Cache way B
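A sketch of how the FIG. 4 process (steps S401 to S403) could produce Table 1. Each marked cache way carries, per logic area combination, an operator-assigned priority; treating that priority as configuration input is an assumption, since the patent only states that the priority order is set.

```python
from collections import defaultdict

# S401: marked cache ways -> {logic area combination: priority} (1 = highest)
marks = {
    "A": {"combo1": 1, "combo3": 2},
    "B": {"combo2": 1, "combo3": 3},
    "C": {"combo1": 2, "combo2": 2, "combo3": 1},
}

def build_correspondence(marks: dict[str, dict[str, int]]) -> dict[str, list[str]]:
    """S402/S403: invert the marking into combination -> cache ways sorted by priority."""
    table: dict[str, list[tuple[int, str]]] = defaultdict(list)
    for way, combos in marks.items():
        for combo, priority in combos.items():
            table[combo].append((priority, way))
    return {combo: [way for _, way in sorted(entries)] for combo, entries in table.items()}

# Reproduces Table 1: combo1 -> [A, C], combo2 -> [B, C], combo3 -> [C, A, B].
assert build_correspondence(marks) == {
    "combo1": ["A", "C"], "combo2": ["B", "C"], "combo3": ["C", "A", "B"],
}
```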
In step S103, the target cache way is determined according to the established correspondence and the target logic area combination, and the target cache way is allocated to the target task sheet.
(I) If the target task sheet allocates a cache way by task-sheet dimension, determining the target cache way corresponding to the target task sheet according to the pre-established correspondence between logic area combinations and cache ways and the target logic area combination, and allocating the target cache way to the target task sheet, may include: acquiring, according to the correspondence and the target logic area combination, the first marked cache ways corresponding to the target task sheet and the priority order of the first marked cache ways; and selecting the target cache way from the first marked cache ways according to their priority order and their states, and allocating the target cache way to the target task sheet. The first marked cache ways may be single-piece cache ways, multi-piece non-merge cache ways, multi-merge non-Shuttle cache ways, or manual-merge cache ways.
If the target task sheet allocates a cache way by task-sheet dimension, the first marked cache ways corresponding to the target logic area combination and their priority order, i.e., the first marked cache ways corresponding to the target task sheet and their priority order, are first queried according to the established correspondence between logic area combinations and cache ways. It is then judged whether the first marked cache way with the highest priority is in an idle state. If yes, that cache way is allocated to the target task sheet. If not, it is judged whether the first marked cache way with the second priority is in an idle state; if yes, it is allocated to the target task sheet; if not, the first marked cache way with the third priority is checked, and so on.
It should be noted that if there are several first marked cache ways of the same priority and all of them are in an idle state, the cache way allocated to the target task sheet may be selected from among them in combination with the order-dimension information corresponding to the target collection sheet.
Further, if the target logic area combination is empty, the first marked cache ways are empty, or the first marked cache ways are all in a non-idle state, a first unmarked cache way is allocated to the target task sheet according to the order-dimension information corresponding to the target collection sheet. The first unmarked cache ways may likewise be single-piece cache ways, multi-piece non-merge cache ways, multi-merge non-Shuttle cache ways, or manual-merge cache ways. That is, if the target logic area combination is empty, no first marked cache way corresponding to the target task sheet can be queried from the established correspondence, and a first unmarked cache way can be allocated to the target task sheet according to the order-dimension information corresponding to the target collection sheet. If the target logic area combination is not empty but no first marked cache way corresponding to it can be queried from the established correspondence, a first unmarked cache way can likewise be allocated according to that order-dimension information. If the first marked cache ways are all in a non-idle state, a first unmarked cache way is also allocated to the target task sheet according to the order-dimension information corresponding to the target collection sheet. The order-dimension information may be order type information, order time information, and the like.
Continuing the example above, the cache way corresponding to target task sheet d1 is allocated by task-sheet dimension, and the logic areas corresponding to d1 are 1, 2, 3, and 4, so the target logic area combination corresponding to d1 is the first logic area combination. According to the correspondence between logic area combinations and cache ways shown in Table 1, the cache ways corresponding to the first logic area combination are, in priority order, cache way A and cache way C. It is therefore first judged whether cache way A is in an idle state; if yes, cache way A is allocated to d1; if not, it is judged whether cache way C is in an idle state. If cache way C is in an idle state, cache way C is allocated to d1; if cache way C is also in a non-idle state, a first unmarked cache way is allocated to d1 according to the order-dimension characteristics of the collection sheet D to which d1 belongs.
To sum up, if the cache way corresponding to the target collection sheet to which the target task sheet belongs is a single-piece cache way, a multi-piece non-merge cache way, a multi-merge non-Shuttle cache way, or a manual-merge cache way, the allocation logic is as follows (a code sketch is given after this list):
(1) if the target logic area combination corresponding to the target task sheet is empty, a first unmarked cache way is allocated to the target task sheet according to the order-dimension information corresponding to the target collection sheet;
(2) if no first marked cache way corresponding to the target logic area combination can be found, a first unmarked cache way is allocated to the target task sheet according to the order-dimension information corresponding to the target collection sheet;
(3) if first marked cache ways corresponding to the target logic area combination are found, the first marked cache way with the highest priority that is in an idle state is allocated preferentially;
(4) if several first marked cache ways of the same priority are in an idle state, the target cache way is selected from among them in combination with the order-dimension information corresponding to the target collection sheet;
(5) if the first marked cache ways corresponding to the target logic area combination are all in a non-idle state, a first unmarked cache way is allocated to the target task sheet according to the order-dimension information corresponding to the target collection sheet.
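A sketch of this task-sheet-dimension allocation logic. The correspondence table shape matches the earlier sketch; is_idle and allocate_by_order_dimension are assumed helpers standing in for the cache-way status query and the order-dimension fallback, neither of which is specified in code form in the patent.

```python
from typing import Callable, Optional

def allocate_by_task_sheet_dimension(
    target_combo: Optional[str],
    correspondence: dict[str, list[str]],            # combination -> cache ways by priority
    is_idle: Callable[[str], bool],                  # cache way id -> idle?
    allocate_by_order_dimension: Callable[[], str],  # fallback: pick an unmarked cache way
) -> str:
    # (1)/(2): no target combination, or no marked cache way recorded for it.
    if target_combo is None or not correspondence.get(target_combo):
        return allocate_by_order_dimension()
    # (3)/(4): walk the marked cache ways in priority order and take the first idle one
    # (a tie-break on order-dimension information would apply between equal priorities).
    for way in correspondence[target_combo]:
        if is_idle(way):
            return way
    # (5): every marked cache way for this combination is busy.
    return allocate_by_order_dimension()
```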
(II) if the target task list allocates the cache way according to the set single dimension, determining the target cache way corresponding to the target task list according to the corresponding relation between the pre-established logic area combination and the cache way and the target logic area combination, and allocating the target cache way to the target task list, which may include: according to the corresponding relation and the combination of the target logic areas, acquiring a second marked cache way corresponding to the target collection list; if the second marked cache way is in an idle state, distributing a target cache way for the target set list according to the priority sequence of the logic area combination corresponding to the second marked cache way so as to realize the distribution of the target cache way for the target task list; the target cache way is a cache way in the second marked cache way, and the priority order of the logic area combination corresponding to the second marked cache way is obtained according to the corresponding relation. The second marked cache way may be a multi-join shutdown cache way.
If the cache way corresponding to the target task list is distributed according to the set single dimension, firstly, according to the established corresponding relation between the logic area combination and the cache way, inquiring the priority order of the second marked cache way corresponding to the target logic area combination and the inquired second marked cache way, namely the second marked cache way corresponding to the target set list. It should be noted that if the cache way corresponding to the target task list is allocated according to the set single dimension, the cache way is allocated to the target set list. Since the cache way corresponding to the target collection list is the same as the cache way corresponding to the task list under the target collection list, it may also be considered that the cache way is allocated to the target task list.
In addition, if the target task list allocates the cache way according to the set single dimension, it indicates that the cache way corresponding to the target set list is a multi-entry shutdown cache way. And aiming at the multi-in Shuttle cache way, the cache way is distributed in a way of pulling a box or a collection list. That is, when the cache way is in an idle state, the aggregate sheet may be pulled for caching, and the bin may also be pulled for caching. The drawing of the collection list for caching can be understood as drawing the bins contained in the whole collection list for caching. Pulling a bin for caching may be understood as pulling each bin contained in the aggregate sheet in turn for caching. Since the aggregate sheet needs to be allocated to the same cache way, pulling the bin for caching mainly analyzes pulling the first bin contained in the aggregate sheet.
After the second marked cache way corresponding to the way target collection list is obtained, the target collection list waits to be pulled by the cache way in the shutdown. And if the second marked cache way is in an idle state, pulling the target set list to the target cache way by the second marked cache way according to the priority sequence of the corresponding logic area combination, namely allocating the target cache way for the target set list. And the priority order of the logic area combination corresponding to the second marked cache way is obtained according to the corresponding relation.
For example, as shown in table 1, the logical zones corresponding to the cache way a are combined into a first logical zone combination and a third logical zone combination, and the priority order of the first logical zone combination is higher than that of the third logical zone combination. And if the second marked cache way corresponding to the target set single K is A, combining the target logic areas corresponding to the target set single K into a third logic area combination. If A is in idle state, A judges whether there is target logic area combination as the first logic area combination collection list waiting for buffer storage. If yes, the collection list waiting for caching is pulled to the cache way A, and if not, the target collection list K is pulled to the cache way A. In addition, if there is a certain collection sheet P, the corresponding target logical area combination is the second logical area combination, but the logical area combination corresponding to the cache way a has no second logical area combination, so that the collection sheet P is not pulled even if the cache way a is in an idle state.
It should be noted that the number of the second marked cache way may be multiple. If the number of the second marked cache ways is multiple, the second marked cache ways in the idle state are allocated preferentially. For example, the second marked cache ways corresponding to the target collection list K are A and B, and the priority of A is higher than that of B. However, when B is currently in the idle state and a is currently in the non-idle state, the target set list K may be pulled to the cache way B according to the priority order of the logical area combination corresponding to B.
In addition, if a second unmarked cache way is in an idle state, the second unmarked cache way is allocated to the target task list according to the order dimension information corresponding to the target collection list. The second unmarked cache way may also be a multi-entry cache way. That is, if the second unmarked cache way is idle, it may pull any type of collection list. For example, suppose collection list M and collection list N are both waiting for caching, the second marked cache way corresponding to M is A, and the second marked cache way corresponding to N is empty. If a second unmarked cache way is in an idle state, whether M or N is pulled to the second unmarked cache way can be decided according to the order dimension information corresponding to M and N.
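As a sketch of this fallback path, the snippet below assumes, purely for illustration, that the order dimension information is an order cut-off time, and lets an idle second unmarked cache way pull the waiting collection list with the earliest cut-off. The document does not fix what the order dimension information contains, so the tie-break rule here is our assumption.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import List, Optional

    @dataclass
    class WaitingCollection:
        name: str
        order_cutoff: datetime     # stand-in for the "order dimension information"

    def pull_for_unmarked_way(waiting: List[WaitingCollection]) -> Optional[WaitingCollection]:
        """An idle unmarked cache way may pull any waiting collection list;
        here the most urgent order cut-off wins (assumed rule)."""
        return min(waiting, key=lambda c: c.order_cutoff, default=None)

    m = WaitingCollection("M", datetime(2021, 6, 15, 10, 0))
    n = WaitingCollection("N", datetime(2021, 6, 15, 9, 0))
    print(pull_for_unmarked_way([m, n]).name)   # -> "N"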
In the prior art, cache ways are allocated according to order type or order time and cannot be allocated according to the warehouse layout. In contrast, with the method for allocating cache ways according to the embodiments of the present invention, the logic area corresponding to the target collection list to which the target task list belongs is obtained first, the target logic area combination corresponding to the target task list is then determined based on the preconfigured logic area combinations, and the target cache way is finally allocated to the target task list according to the pre-established correspondence between logic area combinations and cache ways, combined with the target logic area combination. The cache ways are thus allocated according to the warehouse layout and can be allocated more flexibly.
FIG. 5 is a diagram illustrating the main modules of an apparatus for allocating cache ways according to an embodiment of the present invention. As shown in FIG. 5, the main modules of the apparatus 500 for allocating cache ways may include: an acquisition module 501, a determination module 502 and an allocation module 503.
The acquisition module 501 may be configured to: acquire a target task list and the logic area corresponding to the target task list, and determine whether a cache way is to be allocated to the target task list, wherein the logic area corresponding to the target task list is the logic area corresponding to the target collection list to which the target task list belongs. The determination module 502 may be configured to: if so, determine, based on the preconfigured logic area combinations, the target logic area combination corresponding to the target task list according to the logic area corresponding to the target task list. The allocation module 503 may be configured to: determine, according to the pre-established correspondence between logic area combinations and cache ways and the target logic area combination, the target cache way corresponding to the target task list, and allocate the target cache way to the target task list.
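A minimal structural sketch of how the three modules could be wired together is given below. The class name AllocateCacheWayDevice, the callable signatures and the stub data are our illustrative assumptions, not the actual apparatus.

    class AllocateCacheWayDevice:
        """Illustrative composition of the acquisition / determination / allocation modules."""

        def __init__(self, obtain, determine, allocate):
            self.obtain = obtain          # task_list_id -> (task_list, logic_areas, should_allocate)
            self.determine = determine    # logic_areas -> target logic area combination (or None)
            self.allocate = allocate      # (task_list, combination) -> cache way (or None)

        def handle(self, task_list_id):
            task, areas, should_allocate = self.obtain(task_list_id)
            if not should_allocate:
                return None               # postpone, e.g. until the collection list is picked
            return self.allocate(task, self.determine(areas))

    # Tiny stubs standing in for the real module behaviour.
    device = AllocateCacheWayDevice(
        obtain=lambda t: (t, {"Z1"}, True),
        determine=lambda areas: "combo1" if "Z1" in areas else None,
        allocate=lambda task, combo: "A" if combo == "combo1" else None,
    )
    print(device.handle("task-42"))   # -> A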
As an embodiment of the present invention, the allocation module 503 may further be configured to: if the cache way for the target task list is allocated according to the task list dimension, acquire, according to the correspondence and the target logic area combination, the first marked cache way corresponding to the target task list and the priority order of the first marked cache way; and select the target cache way from the first marked cache ways according to the priority order of the first marked cache ways and the state of the first marked cache ways, and allocate the target cache way to the target task list.
As an embodiment of the present invention, the allocation module 503 may further be configured to: if the target logic area combination is empty, the first marked cache way is empty, or the first marked cache way is in a non-idle state, allocate a first unmarked cache way to the target task list according to the order dimension information corresponding to the target collection list.
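The task-list-dimension branch implemented by the allocation module can be sketched as below. The function and argument names are ours, is_idle is simplified, and which competing task list actually gets an idle unmarked way would in practice be decided by the order dimension information of their collection lists, which we do not model here.

    from typing import Callable, List, Optional

    def allocate_for_task_list(marked_ways: List[str],
                               unmarked_ways: List[str],
                               is_idle: Callable[[str], bool]) -> Optional[str]:
        """marked_ways: first marked cache ways for the target logic area combination,
        already in priority order (may be empty if the combination is empty).
        Falls back to an idle unmarked way when no marked way is available."""
        for way in marked_ways:          # priority order from the correspondence
            if is_idle(way):
                return way
        for way in unmarked_ways:        # fallback: combination empty, no marked way, or none idle
            if is_idle(way):
                return way
        return None                      # nothing free yet; the task list keeps waiting

    # A and B are marked for the combination but A is busy; C is unmarked.
    busy = {"A"}
    print(allocate_for_task_list(["A", "B"], ["C"], lambda w: w not in busy))   # -> B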
As an embodiment of the present invention, the allocation module 503 may further be configured to: if the cache way for the target task list is allocated according to the collection list dimension, acquire, according to the correspondence and the target logic area combination, the second marked cache way corresponding to the target collection list; if the second marked cache way is in an idle state, allocate the target cache way to the target collection list according to the priority order of the logic area combinations corresponding to the second marked cache way, so as to allocate the target cache way to the target task list; wherein the target cache way is a cache way among the second marked cache ways, and the priority order of the logic area combinations corresponding to the second marked cache way is obtained according to the correspondence.
As an embodiment of the present invention, the allocation module 503 may further be configured to: if a second unmarked cache way is in an idle state, allocate the second unmarked cache way to the target task list according to the order dimension information corresponding to the target collection list.
As an embodiment of the present invention, the number of logic area combinations is at least one; and the determination module 502 may further be configured to: sequentially determine, according to the priority order of the logic area combinations, whether a logic area combination includes the logic area corresponding to the target task list; if yes, determine that logic area combination as the target logic area combination corresponding to the target task list; and if not, determine that the target logic area combination corresponding to the target task list is empty.
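A small sketch of this determination step follows. We read "includes the logic area corresponding to the target task list" as requiring the combination to contain all of the task list's logic areas; if only an overlap were required, the issubset test would become an intersection test. Names and data are illustrative.

    from typing import Iterable, List, Optional, Set, Tuple

    def determine_target_combination(task_areas: Iterable[str],
                                     combinations: List[Tuple[str, Set[str]]]) -> Optional[str]:
        """combinations: (name, logic areas) pairs in priority order.
        Returns the first combination containing the task list's logic areas,
        or None when the target logic area combination is empty."""
        required = set(task_areas)
        for name, zones in combinations:
            if required.issubset(zones):       # "includes" read as full containment
                return name
        return None

    combos = [("combo1", {"Z1", "Z2"}), ("combo2", {"Z3"}), ("combo3", {"Z1", "Z4"})]
    print(determine_target_combination({"Z1", "Z4"}, combos))   # -> combo3
    print(determine_target_combination({"Z9"}, combos))         # -> None (empty)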
As can be seen in FIG. 5, the apparatus 500 for allocating cache ways may also include a configuration module 504. The configuration module 504 may be configured to determine the logical zone combinations as follows: acquiring at least one logic area; grouping at least one logic area according to the article type information stored in each logic area; and determining the logic area combination corresponding to at least one logic area according to the grouping result.
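As a sketch of this grouping step, the snippet below groups logic areas that store the same article type into one combination; the exact grouping rule is left to configuration in this document, so treating "same article type" as the rule, and the naming scheme, are our simplifications.

    from collections import defaultdict
    from typing import Dict, List

    def build_logic_area_combinations(area_item_types: Dict[str, str]) -> Dict[str, List[str]]:
        """area_item_types maps each logic area to the article type stored in it."""
        groups = defaultdict(list)
        for area, item_type in area_item_types.items():
            groups[item_type].append(area)
        # One combination per article type group (illustrative naming scheme).
        return {f"combo_{item_type}": sorted(areas) for item_type, areas in groups.items()}

    print(build_logic_area_combinations(
        {"Z1": "small", "Z2": "small", "Z3": "bulky", "Z4": "cold-chain"}))
    # -> {'combo_small': ['Z1', 'Z2'], 'combo_bulky': ['Z3'], 'combo_cold-chain': ['Z4']}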
As an embodiment of the present invention, the configuration module 504 may further be configured to establish the correspondence between logic area combinations and cache ways according to the following procedure: obtain marked cache ways, wherein a marked cache way is a cache way marked with a logic area combination; obtain, according to the logic area combinations corresponding to the marked cache ways, the cache ways corresponding to each logic area combination, and set the priority order of the cache ways corresponding to the logic area combination; and establish the relation among the logic area combination, the cache ways corresponding to the logic area combination, and the priority order of the cache ways corresponding to the logic area combination.
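The correspondence itself could be built by inverting the per-way markings, as in the sketch below. Deriving a combination's way priority from the position of the combination on each way is an assumption of ours; the document only says that the priority order is set.

    from typing import Dict, List

    def build_combination_to_ways(marked_ways: Dict[str, List[str]]) -> Dict[str, List[str]]:
        """marked_ways maps each marked cache way to its logic area combinations,
        highest priority first. Returns combination -> priority-ordered cache ways."""
        mapping: Dict[str, List] = {}
        for way, combos in marked_ways.items():
            for rank, combo in enumerate(combos):
                mapping.setdefault(combo, []).append((rank, way))
        # Lower rank means the way lists this combination earlier, i.e. with higher priority.
        return {combo: [w for _, w in sorted(entries)] for combo, entries in mapping.items()}

    print(build_combination_to_ways({"A": ["combo1", "combo3"], "B": ["combo3"]}))
    # -> {'combo1': ['A'], 'combo3': ['B', 'A']}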
As an embodiment of the present invention, the acquisition module 501 may further be configured to: determine whether the cache way for the target task list is allocated according to the task list dimension; if yes, determine that a cache way is to be allocated to the target task list; if not, determine that a cache way is to be allocated to the target task list only after the target collection list to which the target task list belongs has finished picking.
As an embodiment of the present invention, the acquisition module 501 may further be configured to: if the cache way for the target task list is allocated according to the task list dimension, determine whether a cache way has already been allocated to another task list contained in the target collection list; if yes, determine that the cache way allocated to the other task list is the target cache way, and allocate the target cache way to the target task list; if not, determine that a cache way is to be allocated to the target task list.
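The two checks performed by the acquisition module, namely when to allocate at all and whether to reuse a sibling task list's cache way, can be sketched as follows; the function and argument names are stand-ins for state the module would look up.

    from typing import Optional

    def decide_allocation(by_task_list_dimension: bool,
                          picking_done: bool,
                          sibling_way: Optional[str]) -> str:
        """Returns the sibling's cache way to reuse, 'allocate' to run the
        allocation flow now, or 'wait' to postpone allocation."""
        if by_task_list_dimension:
            if sibling_way is not None:          # another task list of the same collection
                return sibling_way               # list already has a cache way: reuse it
            return "allocate"
        # Collection list dimension: allocate only once the collection list is picked.
        return "allocate" if picking_done else "wait"

    print(decide_allocation(True,  picking_done=False, sibling_way=None))   # -> allocate
    print(decide_allocation(True,  picking_done=False, sibling_way="A"))    # -> A
    print(decide_allocation(False, picking_done=False, sibling_way=None))   # -> wait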
According to the apparatus for allocating cache ways provided by the embodiment of the present invention, the logic area corresponding to the target collection list to which the target task list belongs is obtained first, the target logic area combination corresponding to the target task list is then determined based on the preconfigured logic area combinations, and the target cache way can then be allocated to the target task list according to the pre-established correspondence between logic area combinations and cache ways, combined with the target logic area combination, so that the cache ways are allocated according to the warehouse layout and can be allocated more flexibly.
FIG. 6 illustrates an exemplary system architecture 600 in which the method of allocating cache ways or the apparatus for allocating cache ways of embodiments of the invention may be applied.
As shown in fig. 6, the system architecture 600 may include terminal devices 601, 602, 603, a network 604, and a server 605. The network 604 serves to provide a medium for communication links between the terminal devices 601, 602, 603 and the server 605. Network 604 may include various types of connections, such as wire, wireless communication links, or fiber optic cables, to name a few.
A user may use the terminal devices 601, 602, 603 to interact with the server 605 via the network 604 to receive or send messages or the like. The terminal devices 601, 602, 603 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 605 may be a server providing various services, for example, a background management server (for example only) providing support in the process in which a user allocates cache ways by using the terminal devices 601, 602, and 603; as another example, the server 605 may perform the method for allocating cache ways of the embodiments of the present invention.
It should be noted that the method for allocating cache ways provided by the embodiment of the present invention is generally executed by the server 605, and accordingly, the apparatus for allocating cache ways is generally disposed in the server 605.
It should be understood that the number of terminal devices, networks, and servers in fig. 6 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 7, shown is a block diagram of a computer system 700 suitable for use with a terminal device implementing an embodiment of the present invention. The terminal device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU)701, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the system 700 are also stored. The CPU 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 709, and/or installed from the removable medium 711. The computer program performs the above-described functions defined in the system of the present invention when executed by the Central Processing Unit (CPU) 701.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present invention may be implemented by software or hardware. The described modules may also be provided in a processor, which may be described as: a processor includes an acquisition module, a determination module, and an allocation module. For example, the acquisition module may be further described as a module that acquires the target task list and the logic area corresponding to the target task list, and determines whether a cache way is to be allocated to the target task list.
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire a target task list and a logic area corresponding to the target task list, and determine whether to allocate a cache way to the target task list, wherein the logic area corresponding to the target task list is a logic area corresponding to a target collection list to which the target task list belongs; if so, determine, based on a preconfigured logic area combination, a target logic area combination corresponding to the target task list according to the logic area corresponding to the target task list; and determine, according to a pre-established correspondence between the logic area combination and the cache way and according to the target logic area combination, a target cache way corresponding to the target task list, and allocate the target cache way to the target task list.
According to the technical scheme of the embodiment of the invention, the logical area corresponding to the target collection list to which the target task list belongs is firstly obtained, then the target logical area combination corresponding to the target task list is determined based on the preset logical area combination, and then the target cache way can be allocated to the target task list according to the established corresponding relation between the logical area combination and the cache way and by combining the target logical area combination, so that the cache way can be allocated according to the warehouse layout, and the cache way can be allocated more flexibly.
The above-described embodiments should not be construed as limiting the scope of the invention. Those skilled in the art will appreciate that various modifications, combinations, sub-combinations, and substitutions can occur, depending on design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (13)

1. A method for allocating cache ways, comprising:
acquiring a target task list and a logic area corresponding to the target task list, and determining whether to allocate a cache way to the target task list, wherein the logic area corresponding to the target task list is a logic area corresponding to a target collection list to which the target task list belongs;
if yes, determining a target logic area combination corresponding to the target task list according to the logic area corresponding to the target task list based on a preconfigured logic area combination;
and determining, according to a pre-established correspondence between the logic area combination and the cache way and according to the target logic area combination, a target cache way corresponding to the target task list, and allocating the target cache way to the target task list.
2. The method according to claim 1, wherein the determining a target cache way corresponding to the target task list according to a pre-established correspondence between a logical area combination and a cache way and according to the target logical area combination, and allocating the target cache way to the target task list comprises:
if the cache way for the target task list is allocated according to the task list dimension, acquiring, according to the corresponding relation and the target logic area combination, a first marked cache way corresponding to the target task list and the priority order of the first marked cache way;
and selecting the target cache way from the first marked cache ways according to the priority order of the first marked cache ways and the state of the first marked cache ways, and allocating the target cache way to the target task list.
3. The method of claim 2, wherein if the cache way for the target task list is allocated according to the task list dimension, the method further comprises:
and if the target logic area combination is empty, the first marked cache way is empty, or the first marked cache way is in a non-idle state, allocating a first unmarked cache way to the target task list according to the order dimension information corresponding to the target collection list.
4. The method according to claim 1, wherein the determining a target cache way corresponding to the target task list according to a pre-established correspondence between a logical area combination and a cache way and according to the target logical area combination, and allocating the target cache way to the target task list comprises:
if the cache way for the target task list is allocated according to the collection list dimension, acquiring, according to the corresponding relation and the target logic area combination, a second marked cache way corresponding to the target collection list;
if the second marked cache way is in an idle state, allocating the target cache way to the target collection list according to the priority order of the logic area combination corresponding to the second marked cache way, so as to allocate the target cache way to the target task list; wherein,
the target cache way is a cache way in the second marked cache way, and the priority order of the logic area combination corresponding to the second marked cache way is obtained according to the corresponding relation.
5. The method of claim 4, wherein if the cache way for the target task list is allocated according to the collection list dimension, the method further comprises:
and if the second unmarked cache way is in an idle state, allocating the second unmarked cache way to the target task list according to the order dimension information corresponding to the target collection list.
6. The method of claim 1, wherein the number of logic area combinations is at least one; and
the determining a target logic area combination corresponding to the target task list according to the logic area corresponding to the target task list based on the preconfigured logic area combination comprises:
sequentially judging whether the logic area combination comprises a logic area corresponding to the target task list according to the priority order of the logic area combination;
if yes, determining the logic area combination as the target logic area combination corresponding to the target task list;
and if not, determining that the target logic area combination corresponding to the target task list is empty.
7. The method according to any of claims 1 to 6, wherein the logic area combinations are determined according to the following procedure:
acquiring at least one logic area;
grouping the at least one logic area according to the article type information stored in each logic area;
and determining the logic area combination corresponding to the at least one logic area according to the grouping result.
8. The method according to any one of claims 1 to 6, wherein the correspondence between the logical area combination and the cache way is established according to the following procedures:
obtaining a marked cache way, wherein the marked cache way is a cache way marked with a logic area combination;
obtaining, according to the logic area combination corresponding to the marked cache way, the cache way corresponding to the logic area combination, and setting the priority order of the cache way corresponding to the logic area combination;
and establishing a relation among the logic area combination, the cache way corresponding to the logic area combination and the priority order of the cache way corresponding to the logic area combination.
9. The method of claim 1, wherein the determining whether to allocate a cache way for the target task list comprises:
determining whether the cache way for the target task list is allocated according to the task list dimension;
if yes, determining to allocate a cache way to the target task list;
if not, determining to allocate a cache way to the target task list under the condition that the target collection list to which the target task list belongs has finished picking.
10. The method of claim 9, wherein after determining whether the cache way for the target task list is allocated according to the task list dimension, the method further comprises:
if the cache way for the target task list is allocated according to the task list dimension, determining whether a cache way has already been allocated to another task list contained in the target collection list;
if yes, determining that the cache way allocated to the other task list is the target cache way, and allocating the target cache way to the target task list;
if not, determining to allocate a cache way to the target task list.
11. An apparatus for allocating cache ways, comprising:
an acquisition module, configured to acquire a target task list and a logic area corresponding to the target task list, and determine whether to allocate a cache way to the target task list, wherein the logic area corresponding to the target task list is a logic area corresponding to a target collection list to which the target task list belongs;
a determination module, configured to, if it is determined that a cache way is to be allocated to the target task list, determine, based on a preconfigured logic area combination, a target logic area combination corresponding to the target task list according to the logic area corresponding to the target task list;
and an allocation module, configured to determine, according to a pre-established correspondence between the logic area combination and the cache way and according to the target logic area combination, a target cache way corresponding to the target task list, and allocate the target cache way to the target task list.
12. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-10.
13. A computer-readable medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-10.
CN202110663104.0A 2021-06-15 2021-06-15 Method and device for allocating cache ways Active CN113391899B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110663104.0A CN113391899B (en) 2021-06-15 2021-06-15 Method and device for allocating cache ways


Publications (2)

Publication Number Publication Date
CN113391899A true CN113391899A (en) 2021-09-14
CN113391899B CN113391899B (en) 2023-09-05

Family

ID=77621198



Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10338315A (en) * 1997-06-11 1998-12-22 Hitachi Ltd Parts layout system and parts layout method
US20070067200A1 (en) * 2005-09-19 2007-03-22 Oracle International Corporation Access point triangulation for task assignment of warehouse employees
CN110516986A (en) * 2018-05-21 2019-11-29 北京京东振世信息技术有限公司 Wrap up the set single group construction method and device under production model
CN110796400A (en) * 2018-08-01 2020-02-14 北京京东振世信息技术有限公司 Method and device for caching goods
CN111580952A (en) * 2019-02-18 2020-08-25 北京京东尚科信息技术有限公司 Method and apparatus for assigning a multi-tasking set to cache ways
CN110775496A (en) * 2019-10-15 2020-02-11 北京极智嘉科技有限公司 Aggregate order converging processing system, method and device
CN112700180A (en) * 2019-10-23 2021-04-23 北京京东振世信息技术有限公司 Goods picking method and goods picking device
CN111126926A (en) * 2020-01-20 2020-05-08 安吉智能物联技术有限公司 Warehouse management method
CN111476413A (en) * 2020-04-03 2020-07-31 上海明略人工智能(集团)有限公司 Warehouse storage position distribution method and system based on big data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HUANG Hua: "Optimizing the warehouse layout of a railway logistics freight yard based on the CRAFT method", Railway Purchasing and Logistics, no. 01 *


Similar Documents

Publication Publication Date Title
CN109772714B (en) Goods sorting method and device, storage medium and electronic equipment
CN108694637B (en) Order processing method, device, server and storage medium
CN110348771B (en) Method and device for order grouping of orders
CN110348650B (en) Order converging method and device
CN111260240B (en) Task allocation method and device
CN110880057B (en) Grouping method and device
CN113393193B (en) Warehouse-out method and device
CN111507651A (en) Order data processing method and device applied to man-machine mixed warehouse
CN110889656A (en) Warehouse rule configuration method and device
CN111832980A (en) Method and device for allocating storage positions of multi-layer warehouse
CN113391899B (en) Method and device for allocating cache ways
CN110826752B (en) Method and device for distributing collection list
CN113759890A (en) Control method and device for transport device
CN111580952B (en) Method and device for distributing multitasking set to cache way
CN113407108A (en) Data storage method and system
CN112446652A (en) Method and device for processing task set
CN111950830A (en) Task allocation method and device
CN115390958A (en) Task processing method and device
CN112801569B (en) Article sorting method and device
CN111792248B (en) Method and device for adjusting storage position of material box
CN115170026A (en) Task processing method and device
CN112907162B (en) Method and device for determining object placement mode
CN113762856B (en) Exit management method and device
CN111824667A (en) Method and device for storing goods
CN113762849A (en) Stereoscopic warehouse inventory management method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant