Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where expressions such as "at least one of A, B, and C" are used, they should generally be interpreted in accordance with the meaning commonly understood by those skilled in the art (e.g., "a system having at least one of A, B, and C" shall include, but not be limited to, a system having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together). Where a formulation such as "at least one of A, B, or C" is used, it should likewise be interpreted in accordance with the ordinary understanding of one skilled in the art (e.g., "a system with at least one of A, B, or C" would include, but not be limited to, systems with A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together).
An embodiment of the disclosure provides a goods acceptance method and system. The method comprises: acquiring voice information of a user, wherein the voice information is used to determine an acceptance type of a target item; determining, in response to the voice information, the acceptance type corresponding to the target item; outputting, by voice, acceptance content corresponding to the acceptance type, wherein the acceptance content comprises one or more acceptance items; and acquiring an acceptance result of the user for the one or more acceptance items in the acceptance content, and generating an acceptance list according to the acceptance result.
FIG. 1 schematically illustrates an exemplary system architecture to which the item acceptance method and system may be applied, according to embodiments of the present disclosure. It should be noted that FIG. 1 is only an example of a system architecture to which embodiments of the present disclosure may be applied, intended to assist those skilled in the art in understanding the technical content of the disclosure; it does not mean that embodiments of the present disclosure may not be used in other devices, systems, environments, or scenarios.
As shown in FIG. 1, a user 101 may wear a headset to unpack and inspect goods such as a computer 102 or a refrigerator 103, and may transmit voice information to other electronic devices through the headset. The computer 102 or the refrigerator 103 may be stored in different warehouses, for example a large warehouse or a spare-part warehouse, where the large warehouse may be a warehouse that manages new products and the spare-part warehouse may be a warehouse that manages products with reported problems.
In accordance with embodiments of the present disclosure, during unpacking and acceptance of the computer 102 or refrigerator 103, one or more of product packaging, merchandise appearance, merchandise functionality, screen breakage, attachment integrity, attachment details, presence or absence of a primary inspection report, primary inspection report details, responsibility ratio, serial number, lot number, invoice number, and shipping notes may be checked.
According to an embodiment of the disclosure, the user 101 can confirm the task type, the task bill, and the commodity to be checked through voice recognition, issue unpacking and inspection instructions, observe and inspect the commodity in an all-round, unobstructed manner, and record the inspection contents one by one by speaking voice instructions into the headset. This approach has outstanding advantages for large goods (home appliances, fitness equipment, and the like), mainly the following. 1. The hands and eyes are freed: after one operation is completed, the user 101 immediately hears the exact location of the next operation, receives its details while moving, and can remain continuously focused on the work, which improves operation efficiency; the man-machine voice interaction also reduces the cost of training personnel in purely manual operation. 2. Voice entry enables mobile operation: the many items of a goods check (selections, text, photographing, uploading, and the like) are decomposed into voice instructions, and spoken input is automatically recorded into the system. 3. Multiple jobs can be combined: when several bills are accepted simultaneously, each collected commodity is automatically matched to its bill task. 4. The voice equipment can have anti-interference functions: voice recognition can be switched on and off, paused, and resumed, and keywords in the recognized content can be identified and matched to the corresponding characters or numbers.
It should be understood that the number of items in FIG. 1 is merely illustrative. There may be any number of items as desired for the implementation.
Fig. 2 schematically illustrates a flow chart of an item acceptance method according to an embodiment of the present disclosure.
As shown in FIG. 2, the method includes operations S210 to S240.
In operation S210, voice information of a user is acquired, wherein the voice information is used to determine an acceptance type of a target item.
In response to the voice information, an acceptance type corresponding to the target item is determined in operation S220.
In operation S230, acceptance contents corresponding to the acceptance type are output through voice, wherein the acceptance contents include one or more acceptance items.
In operation S240, an acceptance result of the user for one or more acceptance items in the acceptance content is acquired, and an acceptance list is generated according to the acceptance result.
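Operations S210 to S240 can be sketched minimally as follows. The acceptance types, item lists, and function names here are illustrative assumptions for the sketch, not the actual system's vocabulary:

```python
# Illustrative mapping from acceptance type to its acceptance items
# (both the types and the items are assumptions for this sketch).
ACCEPTANCE_TYPES = {
    "after-sales": ["packaging", "appearance", "function", "attachment integrity"],
    "warehouse entry": ["packaging", "appearance", "serial number"],
    "transfer": ["appearance", "lot number"],
}

def determine_acceptance_type(voice_text: str) -> str:
    """S220: resolve the acceptance type named in the user's voice input."""
    for acc_type in ACCEPTANCE_TYPES:
        if acc_type in voice_text.lower():
            return acc_type
    raise ValueError("no acceptance type recognized")

def accept_item(voice_text: str, user_results: dict) -> dict:
    """S210-S240: produce an acceptance list for one target item."""
    acc_type = determine_acceptance_type(voice_text)   # S220
    items = ACCEPTANCE_TYPES[acc_type]                 # S230: items to broadcast
    results = {item: user_results.get(item, "not checked")
               for item in items}                      # S240: collect results
    return {"type": acc_type, "results": results}
```

In practice the per-item results would arrive one at a time over the voice channel; the dict stands in for that exchange here.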
According to an embodiment of the disclosure, when a user faces a target item to be checked, the corresponding acceptance type of the target item can be determined. The acceptance types can include after-sales, large-warehouse entry, and transfer, among others. After-sales means that after a customer receives goods, the customer is dissatisfied and initiates a return, and the goods return to the spare-part warehouse. Large-warehouse entry means that the large warehouse sends damaged goods in its inventory to the spare-part warehouse. Transfer refers to the allocation and warehousing of commodities among spare-part warehouses, and is also called internal allocation. Different acceptance types may include the same or different acceptance contents.
For the after-sales type, it is verified that the pick-up list/service list is valid and in the to-be-unpacked state; for the large-warehouse type, it is verified that the pick-up list and the outbound list are valid. For the transfer type, the original order numbers from the other warehouses are generally collected, and each is verified to be valid and in the to-be-unpacked state; collecting several orders at once and unpacking them order by order is supported, which avoids staff having to manually sort the goods by order in advance of unpacking.
FIGS. 3 and 4 schematically illustrate flow charts of an article acceptance method under different acceptance types according to another embodiment of the present disclosure.
For the acceptance flows of the after-sales and large-warehouse-entry types, as shown in FIG. 3, when the pick-up list/service list number is collected, a voice request can be output to scan the pick-up list/service list number, to speak the last 4 digits of the order number, or to scan the large-warehouse outbound order number. The user then scans the order number or speaks its last 4 digits, and the screen displays the interaction record together with the input and output contents.
The system can check whether the order is to be unpacked and judge whether the acquisition is unique and valid. If the acquisition is valid, it outputs the voice: "Order number XXXX (in full); x commodities in total; please check and collect them one by one." If the acquisition is invalid, it outputs: "Invalid; please re-enter," and the screen displays the interaction record with the re-entry request.
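The unique-validity check on a voice-entered 4-digit order suffix can be sketched as follows (the function name and order-number format are assumptions):

```python
def find_order(order_numbers: list, spoken_suffix: str):
    """Match a voice-entered 4-digit suffix against the order numbers in the
    current task list; only a unique match counts as a valid acquisition."""
    matches = [n for n in order_numbers if n.endswith(spoken_suffix)]
    return matches[0] if len(matches) == 1 else None  # None: request re-entry
```

An ambiguous suffix (two orders sharing the same last 4 digits) is treated the same as no match: the acquisition is invalid and re-entry is requested.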
Voice is then output to collect the commodity number, or the last 4 digits of the commodity code can be spoken; the commodity name, three-level classification, brand, fault information, and the like are announced, and the screen can show the commodity picture and fault information. Input: scan the commodity number or speak its last 4 digits. The system judges whether the acquisition succeeded and gives voice feedback. On success, it outputs: "Acquisition successful; please accept commodity <name>." On failure, it outputs the cause (duplicate, not in the task, or invalid), asks whether the spare-part barcode is invalid, and requests re-checking.
According to an embodiment of the present disclosure, the acceptance items are displayed according to the warehouse-entry type and the characteristics of the specific commodity. Voice is output to confirm the commodity condition and its attributes one by one in sequence: packaging, appearance, commodity function, attachment integrity, attachment details, invoice number, serial number, responsibility ratio, receipt remarks, and primary inspection report. The screen displays synchronously, supports going back to re-edit filled items, and supports previewing all acceptance items. Input: acceptance complete.
According to an embodiment of the disclosure, the system can check whether the voice input is qualified, output feedback, display a preview, and correct errors (both actively by the user and by system check). 1. If qualified, the result is automatically saved and the voice is output: "Acceptance of this item is complete; please continue unpacking." 2. If not qualified, the incomplete items are marked in red and the required items are checked, and the voice is output: "Acceptance is incomplete; please confirm <incomplete item> and re-verify."
According to an embodiment of the disclosure, if the acceptance succeeds, a voice is output that acceptance of the commodity is complete, the second commodity on the pick-up list is automatically prompted, and so on, with the count increasing by 1 each time an item is completed: for an order of 3 pieces, 1/3 completed, 2/3 completed, 3/3 completed.
According to an embodiment of the present disclosure, the system verifies whether all physical commodities on the order have been accepted. 1. If all are complete, the voice is output: "This order requires acceptance of xx pieces; xx pieces have been accepted" (result confirmation), or "This order has been fully accepted." The document is pushed and unpacking of the order is completed. 2. If not fully complete, the voice is output: "Please collect <commodity name + three-level classification + spare-part barcode>." In the abnormal case of missing pieces, manual active input is required. Input: confirm completion.
According to an embodiment of the disclosure, if no pieces are missing, the order is pushed with the unpacking-complete state; if pieces are missing, the order remains in the to-be-unpacked state, the accepted products are pushed as accepted, and the task is suspended as not completed. The voice is output: "This order requires XX pieces; XX pieces have been accepted; XX pieces (more than 0) are missing; unpacking of this order is not complete," followed by: "Continue unpacking?" 1. To continue, the user inputs "continue" and is requested to continue scanning the pick-up/service list. 2. To end, the user inputs "end."
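The missing-piece decision above can be sketched as a small state function (field names and state strings are assumptions for illustration):

```python
def order_close_state(required: int, accepted: int) -> dict:
    """After unpacking one order: if every required piece was accepted, the
    order is pushed as unpacking-complete; otherwise the accepted pieces are
    pushed individually, the order stays to-be-unpacked, and the task is
    suspended as not completed."""
    missing = required - accepted
    if missing == 0:
        return {"state": "unpacking complete", "pushed": accepted, "missing": 0}
    return {"state": "to be unpacked, task suspended",
            "pushed": accepted, "missing": missing}
```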
For the acceptance flow of the transfer type, as shown in FIG. 4, the collection of multiple original order numbers is supported. A voice request is output to collect the to-be-unpacked transfer original order numbers one by one, or the last 4 digits of each order number can be spoken (unpacking several documents together is supported). The user inputs: 1. scan the order number; or 2. speak the last 4 digits of the order number. The system judges whether the order number is unique, and the screen displays the interaction record and the input and output contents.
According to embodiments of the present disclosure, the system can check whether the order is to be unpacked and judge uniqueness. If valid, the voice is output: "Order number XXXX (in full); xx pieces in total require acceptance; please collect the spare-part barcodes in sequence," after which the last 4 digits can be spoken. If invalid, the output is: "Invalid; please re-enter." The screen also displays the interaction record and the task completion count. In the case of multiple orders, the voice is output: "x order numbers in total; xx commodities to accept; please collect the spare-part barcodes in sequence...."
According to an embodiment of the disclosure, the user can scan, or speak the last 4 digits of, the spare-part barcode, and the commodity information is displayed on the screen at the same time. It should be noted that the collection may be out of order; only commodities in the queue (a multi-order combined task) are verified, and each collected product is counted against its own transfer order number.
According to an embodiment of the disclosure, whether the acquisition succeeded can be judged and fed back. 1. On success, the output is: "Commodity information acquired successfully; please confirm the acceptance result." 1.1 If the commodity matches, the voice is output: "Acquisition successful; please confirm the acceptance result." 1.2 If the commodity is not in the task, the voice is output: "This commodity is not in the task; please verify and re-acquire." The screen displays the spare-part barcode and the successful collection. 2. On failure, the abnormality cause (duplicate or invalid) is announced and re-entry is requested. 2.1 For a repeated scan, the voice is output: "This part has already been scanned; please continue scanning other spare parts." 2.2 For an invalid acquisition, the output is: "The spare-part barcode is invalid; please check the commodity again." Failed acquisitions are not counted, and the acquisition result is displayed on the screen.
According to an embodiment of the disclosure, each item can be judged as pass or reject: after a successful acquisition, the system asks whether the item passes or is rejected, and the user can answer by voice. Collection is counted as follows. 1. When an item passes acceptance, the system automatically prompts to continue collecting, and so on, and automatically outputs the verification result when all acceptance is complete. 2. When several orders are unpacked together, the collected spare parts are counted against their respective outbound original order numbers and the completion status is displayed, e.g. on screen: task 1: 2/3, task 2: 3/5, task 3: 6/6.
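The per-order tally behind a screen line like "task 1: 2/3, task 2: 3/5" can be sketched as follows (the data shapes are assumptions):

```python
def progress_display(tasks: dict, scans: list) -> str:
    """Count each collected spare part against its own original order number
    and render the per-task completion line shown on screen.

    tasks: order number -> required piece count
    scans: list of (order number, spare-part barcode) acquisitions
    """
    counts = {order: 0 for order in tasks}
    for order, _barcode in scans:
        if order in counts:          # ignore scans outside the combined task
            counts[order] += 1
    return ", ".join(f"{order}: {counts[order]}/{tasks[order]}" for order in tasks)
```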
According to embodiments of the present disclosure, it may be determined whether all spare parts on the order have been collected. 1. If incomplete, the output is: "Please collect the next spare part <name + barcode>...." 2. If complete, the voice is output (quantity check): "This order requires acceptance of xx pieces; xx pieces collected; xx passed; xx rejected...." If a part has already been scanned, the system can also prompt to collect the next spare part; the user inputs "complete," and the system may output (quantity check): "This order requires acceptance of xx pieces; xx collected; xx passed; xx rejected; xx not collected...."
A voice is output: "This order requires acceptance of XX pieces; XX collected; XX passed; XX rejected; XX not collected; unpacking complete." Results are then output according to the situation. 1. If the whole order is confirmed and the commodities and quantities match exactly, the order is automatically pushed as unpacking-complete and put on the shelf as a whole. 2. If items were rejected or not collected, the order remains in the to-be-unpacked state and the task is suspended as not completed; it must be handled from the computer side. 3. If the whole order is rejected, the user operates from the computer side, and the specific scenario remains to be checked; likewise, if the task is not completed and the whole order needs to be rejected, that operation must also be performed from the computer side.
According to an embodiment of the disclosure, the acceptance content corresponding to the acceptance type is output by voice, and the acceptance list is generated according to the acceptance results for the one or more acceptance items in the acceptance content, where the acceptance items can be, for example, product appearance, damage, whether the functions are normal, and the like. Checking and accepting the goods through man-machine voice interaction achieves the technical effect of improving operation efficiency and reduces the cost of training personnel in purely manual operation.
In accordance with an embodiment of the present disclosure, the manner of obtaining the acceptance result of the user for one or more of the acceptance items in the acceptance content comprises at least one of the following:
An acceptance result for one or more acceptance items is acquired by collecting the acceptance voice content of the user and recognizing the acceptance voice content.
According to embodiments of the present disclosure, the user's acceptance voice content may include a repeat command to repeat the step currently not completed, as well as pause and wake commands used during the unpacking and acceptance job. The terminal equipment can have go-back and memory functions: after one commodity is unpacked, the options for unpacking the next commodity can be remembered. The speed of the voice broadcast can be set to multiple rates, such as fast, moderate, and slow, so that an operator can choose a suitable speed according to their familiarity with the voice equipment and the job task.
An acceptance result for one or more acceptance items is acquired by sensing a somatosensory input of the user through the terminal device used by the user.
According to an embodiment of the disclosure, the somatosensory input can be an action such as shaking the device. By configuring shake actions that suit personal habits, most of the human-machine "dialogue" can be completed; for example, a somatosensory shake can be set to confirm an operation.
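A shake-to-confirm detector can be sketched as a threshold on accelerometer samples. The threshold and peak count are assumptions and would be tuned to the operator's personal habits:

```python
def is_confirm_shake(samples, threshold=15.0, min_peaks=2):
    """Treat a burst of high-magnitude acceleration samples (m/s^2) as a
    shake-to-confirm gesture; threshold and peak count are illustrative."""
    peaks = sum(1 for a in samples if abs(a) >= threshold)
    return peaks >= min_peaks
```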
An acceptance result for one or more acceptance items is acquired by obtaining the text input content of the user.
According to an embodiment of the disclosure, for text input, information entry, editing, and display can be completed by combining natural language processing (NLP) with voice recognition. If a voice recognition error is found, the operator can speak "cancel" to re-enter the previous task; the flow does not advance to the next task until completion of the current one is confirmed.
According to an embodiment of the disclosure, the warehouse instructions can be classified and marked according to their contents and the supported input types; for example, a visual page of the terminal equipment can mark by color which instructions support voice, which do not, which support numbers, and which support words, so that an ambiguous instruction can be clarified through the screen.
According to an embodiment of the disclosure, a task instruction can be converted into voice through a TTS (Text-To-Speech) engine and broadcast to the operator, and natural language recognition technology is adopted to convert the operator's voice confirmation into an actual operation. This breaks through the traditional flow control limited to fixed keywords and enables free input in the unpacking link: flow-driving control can still be performed with specific keywords, and the user's free natural language can also be entered at specific positions. Through this embodiment, voice technology is used as the primary modality, with somatosensory shaking, text input, and the like as assistance, realizing true multi-modal interaction that is more efficient than traditional single-modal interaction and allows the user to flexibly switch interaction modes in a given environment.
The method shown in FIG. 2 is further described below with reference to FIGS. 5 to 7 in conjunction with exemplary embodiments.
Fig. 5 schematically illustrates a flow chart of an item acceptance method according to another embodiment of the present disclosure.
As shown in FIG. 5, the method further includes operations S250-S270.
In operation S250, order number information of the target item is acquired.
In operation S260, it is determined, according to the order number information, whether the task list includes a task for accepting the target goods.
In operation S270, in the case where the task list includes a task for checking the target item, voice information is acquired.
According to embodiments of the present disclosure, one or more tasks may be included in the task list, each of which may be an acceptance job with respect to one or more items.
Through this embodiment of the disclosure, whether an acceptance task exists is judged before the goods are accepted, which reduces repeated acceptance work and improves working efficiency.
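The pre-acceptance check of operations S250 to S270 can be sketched as a lookup before any voice input is taken (the task-list shape and field names are assumptions):

```python
def should_acquire_voice(task_list: dict, order_number: str) -> bool:
    """S260: voice acquisition (S270) proceeds only when the task list
    contains an acceptance task for the order number and that task has not
    already been accepted, avoiding repeated acceptance work."""
    task = task_list.get(order_number)
    return task is not None and not task.get("accepted", False)
```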
Fig. 6 schematically illustrates a flowchart of outputting acceptance content corresponding to an acceptance type by voice according to an embodiment of the present disclosure.
According to an embodiment of the present disclosure, the acceptance content includes at least a first acceptance item and a second acceptance item. As shown in FIG. 6, outputting acceptance content corresponding to the acceptance type by voice includes operations S231 to S232.
In operation S231, the first acceptance item is output by voice.
In operation S232, in the case where the result of acceptance of the user for the first acceptance item is acquired, the second acceptance item is output by voice.
According to an embodiment of the present disclosure, when the operator did not hear or did not execute a command for some reason, the previously broadcast instruction may also be repeated.
According to an embodiment of the disclosure, when acquiring the acceptance result of the user for the first acceptance item, free natural language can be entered, recognized, and filled into the acceptance page, and converting the last voice input into text content is supported; the second acceptance item is then output by voice.
Through this embodiment of the disclosure, broadcasting the acceptance items sequentially by voice reduces the acceptance error rate, so that the acceptance work is carried out in an orderly manner.
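The sequential broadcast of operations S231/S232 can be sketched as a small stepper (class and method names are assumptions):

```python
class AcceptanceFlow:
    """Broadcast acceptance items one at a time: the next item is output
    only after a result for the current item has been acquired."""

    def __init__(self, items):
        self.items = list(items)
        self.index = 0
        self.results = {}

    def current_prompt(self):
        """The item to broadcast now; None once all items are accepted."""
        return self.items[self.index] if self.index < len(self.items) else None

    def record(self, result):
        """Store the user's result for the current item, advance, and
        return the next item to broadcast (S232)."""
        self.results[self.items[self.index]] = result
        self.index += 1
        return self.current_prompt()
```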
Fig. 7 schematically illustrates a flow chart of an item acceptance method according to another embodiment of the present disclosure.
As shown in FIG. 7, the method includes operations S280 to S2100.
In operation S280, historical voice information transmitted by different users during goods acceptance is acquired.
In operation S290, a voice response model is established by training on the acquired historical voice information with a deep learning method.
In operation S2100, a response is made to the corresponding user based on the voice response model.
According to an embodiment of the disclosure, common expressions of operators, that is, historical voice information, can be collected in real time and a corresponding database established. A corresponding word stock can be built for keywords common in the warehousing process. Machine learning methods such as deep learning can be combined to refine the voice response mechanism and improve response flexibility for different operators. During voice output, autonomous suggestion can be performed based on the voice response model, with confirmation by the operator.
According to an embodiment of the disclosure, once skilled with the voice response function in the service operation, an operator can directly speak a keyword instruction from the known choices, interrupting the voice broadcast, and the flow continues to execute according to the entered instruction. Refining the voice response mechanism by combining machine learning methods such as deep learning can improve response flexibility.
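The keyword interrupt described above can be sketched as follows. The instruction vocabulary here is an assumption; the actual word stock is built from keywords common in the warehousing process:

```python
# Assumed instruction vocabulary for the sketch.
KNOWN_KEYWORDS = {"pass", "reject", "repeat", "end"}

def on_speech_during_broadcast(spoken: str):
    """If the operator speaks a known keyword while a broadcast is playing,
    cut the broadcast off and continue the flow from that instruction;
    otherwise let the broadcast finish."""
    for word in spoken.lower().split():
        if word in KNOWN_KEYWORDS:
            return ("interrupt", word)
    return ("continue", None)
```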
Fig. 8 schematically illustrates a block diagram of an item acceptance system according to an embodiment of the present disclosure.
As shown in FIG. 8, the item acceptance system 400 includes a first acquisition module 410, a determination module 420, an output module 430, and a second acquisition module 440.
The first obtaining module 410 is configured to obtain voice information of a user, where the voice information is used to determine an acceptance type of the target item.
The determination module 420 is configured to determine an acceptance type corresponding to the target item in response to the voice information.
The output module 430 is configured to output, by voice, acceptance content corresponding to the acceptance type, where the acceptance content includes one or more acceptance items.
The second obtaining module 440 is configured to obtain an acceptance result of the user for one or more acceptance items in the acceptance content, and generate an acceptance list according to the acceptance result.
According to an embodiment of the disclosure, the acceptance content corresponding to the acceptance type is output by voice, and the acceptance list is generated according to the acceptance results for the one or more acceptance items in the acceptance content, where the acceptance items can be, for example, product appearance, damage, whether the functions are normal, and the like. Checking and accepting the goods through man-machine voice interaction achieves the technical effect of improving operation efficiency and reduces the cost of training personnel in purely manual operation.
Fig. 9 schematically illustrates a block diagram of a second acquisition module according to an embodiment of the disclosure.
As shown in FIG. 9, according to an embodiment of the present disclosure, the second acquisition module 440 includes a first acquisition unit 441, a second acquisition unit 442, and a third acquisition unit 443.
The first acquisition unit 441 is configured to acquire an acceptance result for one or more acceptance items by acquiring acceptance voice content of a user and identifying the acceptance voice content.
The second obtaining unit 442 is configured to sense a somatosensory input of a user through a terminal device used by the user, and obtain an acceptance result for one or more acceptance items based on the somatosensory input.
The third acquiring unit 443 is configured to acquire an acceptance result for one or more acceptance items by acquiring text input content of a user.
Through this embodiment of the disclosure, voice technology is used as the primary modality, with somatosensory shaking, text input, and the like as assistance, realizing true multi-modal interaction that is more efficient than traditional single-modal interaction and allows the user to flexibly switch interaction modes in a given environment.
Fig. 10 schematically illustrates a block diagram of an item acceptance system according to another embodiment of the present disclosure.
As shown in FIG. 10, the item acceptance system 400 further includes a third acquisition module 450, a judging module 460, and a fourth acquisition module 470, according to embodiments of the present disclosure.
The third obtaining module 450 is configured to obtain order number information of the target item.
The judging module 460 is configured to judge, according to the order number information, whether the task list includes a task for accepting the target goods.
The fourth obtaining module 470 is configured to obtain the voice information when the task list includes a task for checking the target item.
Through this embodiment of the disclosure, whether an acceptance task exists is judged before the goods are accepted, which reduces repeated acceptance work and improves working efficiency.
Fig. 11 schematically illustrates a block diagram of an output module according to an embodiment of the disclosure.
As shown in FIG. 11, according to an embodiment of the present disclosure, the acceptance content includes at least a first acceptance item and a second acceptance item, and the output module 430 includes a first output unit 431 and a second output unit 432.
The first output unit 431 is configured to output the first acceptance item by voice.
The second output unit 432 is configured to output, by voice, the second acceptance item in a case where an acceptance result of the user for the first acceptance item is acquired.
Through this embodiment of the disclosure, broadcasting the acceptance items sequentially by voice reduces the acceptance error rate, so that the acceptance work is carried out in an orderly manner.
In accordance with an embodiment of the present disclosure, as shown in FIG. 10, the item acceptance system 400 further includes a fifth acquisition module 480, a training module 490, and a response module 4100.
The fifth obtaining module 480 is configured to obtain historical voice information sent during the process of checking and accepting goods by different users.
The training module 490 is configured to train on the acquired historical voice information with a deep learning method and establish a voice response model.
The response module 4100 is configured to respond to the corresponding user based on the voice response model.
According to an embodiment of the disclosure, once skilled with the voice response function in the service operation, an operator can directly speak a keyword instruction from the known choices, interrupting the voice broadcast, and the flow continues to execute according to the entered instruction. Refining the voice response mechanism by combining machine learning methods such as deep learning can improve response flexibility.
Any number of the modules, sub-modules, units, or sub-units according to embodiments of the present disclosure, or at least part of the functionality of any number of them, may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be split into multiple modules for implementation. Any one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-on-package, or an Application Specific Integrated Circuit (ASIC), or in any other reasonable manner of hardware or firmware that integrates or encapsulates the circuit, or in any one of, or a suitable combination of, software, hardware, and firmware. Alternatively, one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be at least partially implemented as computer program modules which, when executed, may perform the corresponding functions.
For example, any of the first acquisition module 410, the determination module 420, the output module 430, the second acquisition module 440, the third acquisition module 450, the judgment module 460, the fourth acquisition module 470, the fifth acquisition module 480, the training module 490, the response module 4100, the first output unit 431, the second output unit 432, the first acquisition unit 441, the second acquisition unit 442, and the third acquisition unit 443 may be combined in one module/unit/sub-unit, or any of the modules/units/sub-units may be split into a plurality of modules/units/sub-units. Alternatively, at least some of the functionality of one or more of these modules/units/sub-units may be combined with at least some of the functionality of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to embodiments of the present disclosure, at least one of the first acquisition module 410, the determination module 420, the output module 430, the second acquisition module 440, the third acquisition module 450, the judgment module 460, the fourth acquisition module 470, the fifth acquisition module 480, the training module 490, the response module 4100, the first output unit 431, the second output unit 432, the first acquisition unit 441, the second acquisition unit 442, and the third acquisition unit 443 may be implemented at least in part as hardware circuitry, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-on-package, or an Application Specific Integrated Circuit (ASIC), or in any other reasonable manner of integrating or packaging the circuitry, or in any one of, or a suitable combination of, software, hardware, and firmware.
Alternatively, at least one of the first acquisition module 410, the determination module 420, the output module 430, and the second acquisition module 440, the third acquisition module 450, the judgment module 460, the fourth acquisition module 470, the fifth acquisition module 480, the training module 490, and the response module 4100, the first output unit 431, the second output unit 432, the first acquisition unit 441, the second acquisition unit 442, and the third acquisition unit 443 may be at least partially implemented as a computer program module that, when executed, may perform the corresponding functions.
It should be noted that the item acceptance system part in the embodiments of the present disclosure corresponds to the item acceptance method part in the embodiments of the present disclosure; for a specific description of the item acceptance system part, reference may be made to the description of the item acceptance method part, which is not repeated here.
FIG. 12 schematically illustrates a block diagram of a computer system suitable for implementing the above-described method according to an embodiment of the present disclosure. The computer system illustrated in FIG. 12 is merely an example, and should not be construed as limiting the functionality and scope of use of the embodiments of the present disclosure.
As shown in FIG. 12, a computer system 500 according to an embodiment of the present disclosure includes a processor 501, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. The processor 501 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)). The processor 501 may also include on-board memory for caching purposes. The processor 501 may comprise a single processing unit or a plurality of processing units for performing different actions of the method flows according to embodiments of the disclosure.
In the RAM 503, various programs and data required for the operation of the system 500 are stored. The processor 501, ROM 502, and RAM 503 are connected to each other by a bus 504. The processor 501 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 502 and/or the RAM 503. Note that the program may be stored in one or more memories other than the ROM 502 and the RAM 503. The processor 501 may also perform various operations of the method flow according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the system 500 may further include an input/output (I/O) interface 505, which is also connected to the bus 504. The system 500 may also include one or more of the following components connected to the I/O interface 505: an input section 506 including a keyboard, a mouse, and the like; an output section 507 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), a speaker, and the like; a storage section 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card, a modem, or the like. The communication section 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as needed, so that a computer program read therefrom is installed into the storage section 508 as needed.
According to embodiments of the present disclosure, the method flow according to embodiments of the present disclosure may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable storage medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 509, and/or installed from the removable medium 511. When the computer program is executed by the processor 501, the above-described functions defined in the system of the embodiments of the present disclosure are performed. The systems, devices, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium. Examples may include, but are not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, the computer-readable storage medium may include ROM 502 and/or RAM 503 and/or one or more memories other than ROM 502 and RAM 503 described above.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments of the present disclosure and/or in the claims may be combined in various ways, even if such combinations are not explicitly recited in the present disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or in the claims may be combined in various ways without departing from the spirit and teachings of the present disclosure. All such combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure are described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described above separately, this does not mean that the measures in the embodiments cannot be used advantageously in combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the disclosure, and such alternatives and modifications are intended to fall within the scope of the disclosure.