CN111160817B - Goods acceptance method and system, computer system and computer readable storage medium - Google Patents

Goods acceptance method and system, computer system and computer readable storage medium

Info

Publication number
CN111160817B
CN111160817B CN201811323587.4A
Authority
CN
China
Prior art keywords
acceptance
voice
user
content
acquiring
Prior art date
Legal status
Active
Application number
CN201811323587.4A
Other languages
Chinese (zh)
Other versions
CN111160817A (en)
Inventor
张丽萍
任宝光
白鑫
Current Assignee
Beijing Jingbangda Trade Co Ltd
Beijing Jingdong Zhenshi Information Technology Co Ltd
Original Assignee
Beijing Jingdong Zhenshi Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Zhenshi Information Technology Co Ltd
Priority to CN201811323587.4A
Publication of CN111160817A
Application granted
Publication of CN111160817B
Active legal status
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/08 Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Economics (AREA)
  • Tourism & Hospitality (AREA)
  • General Physics & Mathematics (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Development Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present disclosure provides a goods acceptance method, comprising: acquiring voice information of a user, wherein the voice information is used for determining the acceptance type of a target goods; determining an acceptance type corresponding to the target goods in response to the voice information; outputting acceptance content corresponding to the acceptance type through voice, wherein the acceptance content comprises one or more acceptance items; and obtaining the acceptance result of the user for one or more acceptance items in the acceptance content, and generating an acceptance list according to the acceptance result. The present disclosure also provides a goods acceptance system, a computer system, and a computer-readable storage medium.

Description

Goods acceptance method and system, computer system and computer readable storage medium
Technical Field
The present disclosure relates to the field of warehouse logistics, and more particularly, to a method and system for checking and accepting goods, a computer system, and a computer-readable storage medium.
Background
In the field of warehouse logistics, a large number of goods must be sorted, stocked, and accepted almost every day. Taking goods acceptance as an example, after an operator arrives at a designated area to pick up or place goods, relatively complicated operations are required, such as unpacking and inspecting the goods and recording the quality condition of the product on site. In the related art, an operator generally needs to coordinate hands and eyes when accepting goods: the operator inspects the goods visually and enters information manually, while the eyes must stay focused on a screen to confirm that the entry is correct. In an actual working scenario, the operator therefore has to glance at the physical item, glance at the screen, handle the goods under inspection with both hands, and type information into a computer, and cannot concentrate on the acceptance of the product itself.
In the process of implementing the disclosed concept, the inventors found that the related art has at least the following problem: the operation flow of business operations such as unpacking and acceptance is complex and inconvenient for operators, resulting in low operation efficiency.
Disclosure of Invention
In view of the above, the present disclosure provides a method and system for acceptance of goods, a computer system, and a computer-readable storage medium.
One aspect of the present disclosure provides an article acceptance method, including acquiring voice information of a user, wherein the voice information is used for determining an acceptance type of a target article; determining an acceptance type corresponding to the target item in response to the voice information; outputting acceptance content corresponding to the acceptance type through voice, wherein the acceptance content comprises one or more acceptance items; and obtaining the acceptance result of the user for one or more acceptance items in the acceptance content, and generating an acceptance list according to the acceptance result.
According to an embodiment of the present disclosure, the manner of obtaining the acceptance result of the user for one or more of the acceptance items in the acceptance content includes at least one of: obtaining the acceptance result for the one or more acceptance items by acquiring acceptance voice content of the user and recognizing the acceptance voice content; sensing a somatosensory input of the user through a terminal device used by the user, and acquiring the acceptance result for the one or more acceptance items based on the somatosensory input; and acquiring the acceptance result for the one or more acceptance items by acquiring text input content of the user.
According to an embodiment of the present disclosure, the method further includes acquiring order number information of the target item; judging, according to the order number information, whether a task list contains a task for checking and accepting the target item; and acquiring the voice information when the task list includes such a task.
According to an embodiment of the present disclosure, the acceptance content includes at least a first acceptance item and a second acceptance item, and outputting the acceptance content corresponding to the acceptance type by voice includes outputting the first acceptance item by voice; and outputting the second acceptance item through voice when the acceptance result of the user for the first acceptance item is acquired.
According to the embodiment of the disclosure, the method further comprises the steps of obtaining historical voice information sent in the process of checking and accepting goods by different users; training the obtained historical voice information through a deep learning method, and establishing a voice response model; and responding to the corresponding user based on the voice response model.
Another aspect of the present disclosure provides an article acceptance system, including a first obtaining module, configured to obtain voice information of a user, where the voice information is used to determine an acceptance type of a target article; the determining module is used for responding to the voice information and determining the acceptance type corresponding to the target goods; the output module is used for outputting acceptance content corresponding to the acceptance type through voice, wherein the acceptance content comprises one or more acceptance items; and a second acquisition module, configured to acquire an acceptance result of the user for one or more acceptance items in the acceptance content, and generate an acceptance list according to the acceptance result.
According to an embodiment of the present disclosure, the second obtaining module includes a first obtaining unit configured to obtain an acceptance result for the one or more acceptance items by obtaining an acceptance voice content of the user and identifying the acceptance voice content; a second obtaining unit, configured to sense a somatosensory input of the user through a terminal device used by the user, and obtain an acceptance result for the one or more acceptance items based on the somatosensory input; and a third acquisition unit for acquiring an acceptance result for the one or more acceptance items by acquiring text input contents of the user.
According to an embodiment of the present disclosure, the system further includes a third obtaining module, configured to obtain the order number information of the target item; a judging module, configured to judge, according to the order number information, whether the task list contains a task for checking and accepting the target item; and a fourth obtaining module, configured to obtain the voice information when the task list includes such a task.
According to an embodiment of the present disclosure, the acceptance content includes at least a first acceptance item and a second acceptance item, and the output module includes a first output unit configured to output the first acceptance item by voice; and a second output unit configured to output the second acceptance item by voice when an acceptance result of the user for the first acceptance item is acquired.
According to an embodiment of the present disclosure, the above system further includes a fifth obtaining module, configured to obtain historical voice information sent during the process of checking and accepting goods by different users; a training module, configured to train the acquired historical voice information through a deep learning method and establish a voice response model; and a response module, configured to respond to the corresponding user based on the voice response model.
Another aspect of the present disclosure provides a computer system comprising: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the item acceptance method as described above.
Another aspect of the present disclosure provides a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to implement an item acceptance method as described above.
Another aspect of the present disclosure provides a computer program comprising computer executable instructions which when executed are for implementing a method as described above.
According to the embodiment of the disclosure, the acceptance content corresponding to the acceptance type is output through voice, and the acceptance list is generated according to the acceptance results for one or more acceptance items in the acceptance content, wherein the acceptance items can be, for example, product appearance, loss, whether the functions are normal, and the like. Accepting goods through human-machine voice interaction achieves the technical effect of improving operation efficiency and reduces the cost of purely operational training of personnel.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments thereof with reference to the accompanying drawings in which:
FIG. 1 schematically illustrates an exemplary system architecture to which the method and system for item acceptance may be applied, in accordance with embodiments of the present disclosure;
FIG. 2 schematically illustrates a flow chart of an item acceptance method according to an embodiment of the present disclosure;
FIGS. 3 and 4 schematically illustrate flow charts of an article acceptance method under different acceptance types according to another embodiment of the present disclosure;
FIG. 5 schematically illustrates a flow chart of an item acceptance method according to another embodiment of the present disclosure;
FIG. 6 schematically illustrates a flow chart of outputting acceptance content corresponding to an acceptance type by voice in accordance with an embodiment of the present disclosure;
FIG. 7 schematically illustrates a flow chart of an item acceptance method according to another embodiment of the present disclosure;
FIG. 8 schematically illustrates a block diagram of an item acceptance system in accordance with an embodiment of the present disclosure;
FIG. 9 schematically illustrates a block diagram of a second acquisition module according to an embodiment of the disclosure;
FIG. 10 schematically illustrates a block diagram of an item acceptance system according to another embodiment of the present disclosure;
FIG. 11 schematically illustrates a block diagram of an output module according to an embodiment of the disclosure; and
FIG. 12 schematically illustrates a block diagram of a computer system suitable for implementing the above-described method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where an expression like "at least one of A, B, and C, etc." is used, it should generally be interpreted in accordance with the meaning commonly understood by those skilled in the art (e.g., "a system having at least one of A, B, and C" shall include, but not be limited to, a system having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). Where an expression like "at least one of A, B, or C, etc." is used, it should generally be interpreted in accordance with the ordinary understanding of one skilled in the art (e.g., "a system having at least one of A, B, or C" would include, but not be limited to, a system having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
The embodiment of the disclosure provides a goods acceptance method and system, wherein the method comprises the steps of acquiring voice information of a user, wherein the voice information is used for determining the acceptance type of a target goods; determining an acceptance type corresponding to the target item in response to the voice information; outputting acceptance content corresponding to the acceptance type through voice, wherein the acceptance content comprises one or more acceptance items; and obtaining the acceptance result of the user for one or more acceptance items in the acceptance content, and generating an acceptance list according to the acceptance result.
FIG. 1 schematically illustrates an exemplary system architecture to which the method and system for item acceptance may be applied, according to embodiments of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which embodiments of the present disclosure may be applied to assist those skilled in the art in understanding the technical content of the present disclosure, but does not mean that embodiments of the present disclosure may not be used in other devices, systems, environments, or scenarios.
As shown in fig. 1, a user 101 may wear a headset while unpacking and inspecting goods such as a computer 102 or a refrigerator 103, and may transmit voice information to other electronic devices through the headset. The computer 102 or the refrigerator 103 may be placed in different warehouses, for example in a large warehouse or a spare part warehouse, where the large warehouse may be a warehouse that manages new products and the spare part warehouse may be a warehouse that manages problem products.
In accordance with embodiments of the present disclosure, during unpacking and acceptance of the computer 102 or refrigerator 103, one or more of the product packaging, commodity appearance, commodity function, screen breakage, attachment integrity, attachment details, presence or absence of a primary inspection report, primary inspection report details, responsibility ratio, serial number, lot number, invoice number, and receipt remarks may be accepted.
According to the embodiment of the disclosure, the user 101 can confirm the task type, the task order, and the commodity to be accepted through voice recognition technology, issue unpacking and acceptance instructions, observe and inspect the commodity without obstruction from any angle, and complete the recording of the acceptance contents item by item by speaking voice instructions into the headset. This is particularly advantageous for large goods (home appliances, fitness equipment, and the like), mainly for the following reasons. 1. Both hands and eyes are freed: after finishing one operation, the user 101 immediately hears the exact location of the next operation and receives its detailed information while moving, so the user can stay continuously and efficiently focused on the job, which improves operation efficiency, and human-machine voice interaction reduces the cost of purely operational training of personnel. 2. Voice enables mobile operation: the various acceptance items for a commodity (selections, text, photographing, uploading, and the like) are decomposed into voice instructions, and the spoken input is recorded into the system automatically. 3. Multiple orders can be handled in combination: when several orders are accepted at the same time, each collected commodity is automatically matched to its order task. 4. The voice device can have specific anti-interference functions: voice recognition can be switched on and off, paused, and resumed, and keywords in the recognized content can be judged and matched to the corresponding characters or numbers, and so on.
It should be understood that the number of items in fig. 1 is merely illustrative. There may be any number of items as desired for the implementation.
Fig. 2 schematically illustrates a flow chart of an item acceptance method according to an embodiment of the present disclosure.
As shown in fig. 2, the method includes operations S210 to S240.
In operation S210, voice information of a user is acquired, wherein the voice information is used to determine an acceptance type of a target item.
In response to the voice information, an acceptance type corresponding to the target item is determined in operation S220.
In operation S230, acceptance contents corresponding to the acceptance type are output through voice, wherein the acceptance contents include one or more acceptance items.
In operation S240, an acceptance result of the user for one or more acceptance items in the acceptance content is acquired, and an acceptance list is generated according to the acceptance result.
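By way of illustration only, the following Python sketch strings operations S210 to S240 together. The SpeechIO-style helper with listen()/say() methods, the ACCEPTANCE_ITEMS table, and every other name here are assumptions made for the sketch and do not appear in the disclosure.

```python
# A minimal sketch of operations S210-S240, assuming a hypothetical speech_io
# object with listen()/say() methods and a hard-coded table of acceptance items.

ACCEPTANCE_ITEMS = {
    "after-sales": ["packaging", "appearance", "function", "attachment integrity"],
    "large-warehouse entry": ["packaging", "appearance", "serial number"],
    "transfer": ["spare part bar code", "appearance"],
}

def accept_item(speech_io):
    # S210: acquire the user's voice information
    voice_info = speech_io.listen()

    # S220: determine the acceptance type corresponding to the target item
    # (defaulting to "after-sales" is a simplification for the sketch)
    acceptance_type = next(
        (t for t in ACCEPTANCE_ITEMS if t in voice_info), "after-sales"
    )

    # S230: output the acceptance content (one or more acceptance items) by voice
    results = {}
    for item in ACCEPTANCE_ITEMS[acceptance_type]:
        speech_io.say(f"Please confirm: {item}")
        # S240: obtain the acceptance result for each acceptance item
        results[item] = speech_io.listen()

    # S240: generate an acceptance list from the results
    return {"type": acceptance_type, "results": results}
```

In practice the acceptance types and their items would come from the warehouse system rather than a hard-coded table; the sketch only shows how the four operations chain together.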
According to the embodiment of the disclosure, when a user faces a target item to be accepted, the acceptance type corresponding to the target item can be determined. The acceptance types can include, among others, after-sales, large-warehouse entry, and transfer. After-sales means that, after receiving the goods, a dissatisfied customer initiates a return and the goods come back to the spare part warehouse. Large-warehouse entry means that the large warehouse sends damaged goods in its stock to the spare part warehouse. Transfer refers to the allocation and warehousing of commodities between spare part warehouses, and is also called internal allocation. Different acceptance types may involve the same or different acceptance contents.
For the after-sales type, the system verifies that the pick-up order/service order is valid and in the to-be-unpacked state; for the large-warehouse-entry type, it verifies that the pick-up order and the outbound order are valid. For the transfer type, the original order numbers from other warehouses generally need to be collected and verified to be valid and in the to-be-unpacked state; simultaneous collection for multiple warehouses with order-by-order unpacking is supported, which avoids the business manually unpacking goods against the orders in advance.
Fig. 3 and 4 schematically illustrate a flow chart of an article acceptance method under different acceptance types according to another embodiment of the present disclosure.
For the acceptance flows of the after-sales and large-warehouse-entry types, as shown in fig. 3, when the pick-up order/service order number is collected, a voice prompt may be output asking the user to scan the pick-up order/service order number, to speak the last 4 digits of the order number, or to scan the large-warehouse outbound order number. The user scans the order number or speaks its last 4 digits, and the screen may display the interaction record together with the input and output contents.
The system can check whether the order is to be unpacked and judge whether the acquisition is unique and valid. For a valid acquisition, it outputs speech such as: order number XXXX (read in full), x commodities in total, please accept and collect them one by one. For an invalid acquisition, it outputs speech asking the user to re-enter, and the screen may display the interaction record together with the re-entry request.
Voice is then output asking the user to collect the commodity number or speak the last 4 digits of the commodity code; the commodity name, three-level classification, brand, fault information, and the like may also be announced, and the screen may display the commodity picture and fault information. Input: scan the commodity number or speak its last 4 digits. The system judges whether the acquisition is successful and gives voice feedback. On success it outputs: acquisition successful, please accept <commodity name>. On failure it outputs the cause (the acquisition failed, is repeated, is not in the task, or the spare part bar code is invalid) and asks the user to check and collect again.
According to an embodiment of the present disclosure, the acceptance items are displayed according to the warehouse-entry type and the characteristics of the specific commodity. Voice is output asking the user to confirm the commodity condition and the commodity attributes one by one in sequence: packaging, appearance, commodity function, attachment integrity, attachment details, invoice number, serial number, responsibility ratio, receipt remarks, and primary inspection report. The screen displays synchronously, supports going back to re-edit filled items, and supports previewing all acceptance items. Input: acceptance finished.
According to the embodiment of the disclosure, the system can check whether the entered content is qualified, output feedback, display a preview, and correct errors (both actively by the user and by system check). 1. If qualified, the content is saved automatically and voice is output: acceptance finished, please continue unpacking. 2. If not qualified, the prompt is marked in red and the required items are checked; voice is output: acceptance incomplete, please confirm <incomplete item> and verify again.
According to the embodiment of the disclosure, if the acceptance succeeds, voice is output indicating that acceptance of the commodity is completed, the second commodity on the pick-up order is prompted automatically, and so on, with the count incremented by 1 each time one item is completed; for example, for 3 pieces the progress reads 1/3 completed, 2/3 completed, 3/3 completed.
According to an embodiment of the present disclosure, it is verified whether all physical commodities on the order have been accepted. 1. If all are completed, speech is output: this order requires acceptance of xx pieces, xx pieces accepted ... (result confirmation), or: this order has been accepted; the document is pushed and unpacking is completed. 2. If not fully completed, speech is output: please collect <commodity name + three-level classification + spare part bar code>. In the abnormal case of missing pieces, the user needs to actively confirm completion manually. Input: confirm completion.
According to the embodiment of the disclosure, if no pieces are missing, the order is pushed to the unpacking-completed state; if pieces are missing, the order remains in the unpacking state, the products already accepted are pushed as accepted, and the task is left uncompleted and suspended. Voice is output: this order requires XX pieces of commodity, XX pieces have been accepted, XX pieces (more than 0) are missing, and unpacking of the order is not complete. Voice then asks whether to continue unpacking. 1. To continue, the user inputs continue and is asked to keep scanning the pick-up/service order .... 2. To end, the user inputs end.
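As a rough, non-authoritative illustration of the completion check described above, the sketch below compares the accepted count with the count required by the order and either pushes the unpacking-completed state or suspends the task; the function name and the order fields are assumptions.

```python
def check_order_completion(order, speech_io):
    """Sketch of the end-of-order check: push completion or suspend on missing pieces."""
    required = order["required_pieces"]          # pieces the order requires
    accepted = len(order["accepted_pieces"])     # pieces already accepted
    missing = required - accepted

    if missing == 0:
        speech_io.say(f"This order requires {required} pieces, {accepted} accepted. "
                      "Acceptance of this order is complete.")
        order["state"] = "unpacking completed"   # push the document, unpacking done
    else:
        speech_io.say(f"This order requires {required} pieces, {accepted} accepted, "
                      f"{missing} missing. Unpacking of this order is not complete.")
        order["state"] = "unpacking"             # order stays in the unpacking state
        order["task_status"] = "suspended"       # accepted products are still pushed as accepted
    return order
```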
For the acceptance flow of the transfer type, as shown in fig. 4, collection of multiple original order numbers is supported. Voice prompts are output to collect, one by one, the to-be-unpacked original transfer order numbers, or the user may speak the last 4 digits of the number (unpacking multiple documents together is supported). The user inputs: 1. scanning an order number; or 2. speaking the last 4 digits of the order number. The system judges whether the order number is unique, and the screen displays the interaction record together with the input and output contents.
According to embodiments of the present disclosure, the system may check whether the order is to be unpacked and judge whether it is unique. If valid, speech is output: order number XXXX (read in full), xx parts in total to be accepted, please collect the spare part bar codes in sequence, or speak the last 4 digits. If invalid, speech is output asking the user to re-enter. The screen also displays the interaction record and the task completion count. In the case of multiple orders, speech is output: x order numbers in total, xx commodities to be accepted, please collect the spare part bar codes in sequence ……
According to the embodiment of the disclosure, the user can scan the spare part bar code or speak its last 4 digits, and the commodity information is displayed on the screen at the same time. It should be noted that collection may be out of order: only commodities in the task queue (a multi-order task set) are verified, and each collected product is counted against its own transfer order number.
According to the embodiment of the disclosure, whether the acquisition is successful can be judged and feedback given. 1. On success, output: commodity information acquired successfully, please confirm the acceptance result. 1.1 If matched, output speech: acquisition successful, please confirm the acceptance result. 1.2 If not in the task, output speech: the commodity is not in the task, please verify and collect again. The screen displays the spare part bar code and the successful collection. 2. On failure, output: an abnormality occurred, present the cause (repetition or invalidity), and resume input. 2.1 For a repeated scan, output speech: this part has already been scanned, please continue to scan other spare parts. 2.2 For an invalid acquisition, output: the spare part bar code is invalid, please check the commodity again. Failed acquisitions are not counted, and the acquisition result is displayed on the screen.
According to the embodiment of the disclosure, the flow can judge pass or reject: after a successful acquisition, the system asks whether the part passes or is rejected, and the user can answer pass or reject by voice. Collection is counted: each time a collected part passes acceptance, the system automatically prompts to continue collecting, and so on, and when all acceptance is completed the verification result is output automatically. When multiple orders are unpacked together, the collected spare parts are counted against their respective outbound original order numbers and the completion status is displayed on screen, e.g.: task 1: 2/3, task 2: 3/5, task 3: 6/6.
According to embodiments of the present disclosure, it may be verified whether all spare parts on the order have been collected. 1. If incomplete, output: please collect the next <spare part name + spare part bar code> ……. 2. If complete, output voice (quantity check): this order requires acceptance of xx pieces, xx pieces collected, xx pieces passed, xx pieces rejected ……. If some spare parts have already been scanned, the system can also prompt the next spare part by name, and the user inputs complete. The system may then output speech (quantity check): this order requires acceptance of xx pieces, xx pieces collected, xx pieces passed, xx pieces rejected, xx pieces not collected ……
Voice is output: this order requires acceptance of XX pieces, XX pieces collected, XX pieces passed, XX pieces rejected, XX pieces not collected, unpacking completed. Results are then output for the different situations. 1. If the whole order is confirmed and the commodities and quantities match completely, unpacking completion is pushed automatically and the whole order is put on the shelf. 2. If pieces are rejected or not collected, the order remains in the to-be-unpacked state, the task is not completed and is suspended; handling from the computer side is required. 3. If the whole order is rejected, the user operates from the computer side, and the specific scenario is to be checked; if the task is not completed and the whole order needs to be rejected, that operation is also performed from the computer side.
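The per-order counting mentioned above (e.g. task 1: 2/3, task 2: 3/5, task 3: 6/6) could be tracked along the following lines. This is only a sketch under the assumption that each task records its expected spare part bar codes; all names are illustrative.

```python
from collections import defaultdict

def progress_by_task(tasks, collected_barcodes):
    """Count collected spare parts against their own original order numbers."""
    # tasks: {order_number: [expected spare part bar codes]}
    collected = defaultdict(set)
    for barcode in collected_barcodes:
        for order_number, expected in tasks.items():
            if barcode in expected:
                collected[order_number].add(barcode)
                break  # each collected product is counted under exactly one order

    # e.g. {"task 1": "2/3", "task 2": "3/5", "task 3": "6/6"}
    return {
        order_number: f"{len(collected[order_number])}/{len(expected)}"
        for order_number, expected in tasks.items()
    }
```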
According to the embodiment of the disclosure, the acceptance content corresponding to the acceptance type is output through voice, and the acceptance list is generated according to the acceptance results for one or more acceptance items in the acceptance content, wherein the acceptance items can be, for example, product appearance, loss, whether the functions are normal, and the like. Accepting goods through human-machine voice interaction achieves the technical effect of improving operation efficiency and reduces the cost of purely operational training of personnel.
In accordance with an embodiment of the present disclosure, the means for obtaining the acceptance result of the user for one or more of the acceptance items in the acceptance content comprises at least one of:
and acquiring acceptance results for one or more acceptance items by acquiring acceptance voice content of the user and identifying the acceptance voice content.
According to embodiments of the present disclosure, the user's acceptance voice content may include a repeat command to repeat the step that has not yet been completed, as well as pause and wake commands used during the unpacking and acceptance job. The terminal device can also go back and remember: after a commodity has been unpacked, the options selected for it can be remembered when unpacking the next commodity. The speed of the voice broadcast can be set to multiple levels, such as fast, moderate, and slow, so that an operator can choose a suitable speed according to his or her familiarity with the voice device and the job.
And sensing the somatosensory input of the user through the terminal equipment used by the user, and acquiring the acceptance result aiming at one or more acceptance items based on the somatosensory input.
According to the embodiment of the disclosure, the somatosensory input can be an action such as shaking the device; by configuring actions that match personal habits, most of the human-machine 'dialogue' can be completed, for example, a shake can be set to confirm an operation.
An acceptance result for one or more acceptance items is obtained by obtaining text input content of a user.
According to the embodiment of the disclosure, for text input, information entry, editing, and display can be completed by combining Natural Language Processing (NLP) with voice recognition. If a voice recognition error is found, the operator can say cancel to re-enter the previous task; the next task is not entered until completion of the current task is confirmed.
According to the embodiment of the disclosure, the warehouse instructions can be classified and marked according to their content and the supported input types, for example which instructions support voice and which do not, which accept numbers and which accept words, on a visual page of the terminal device; the instructions are marked with colors so that, when an instruction is ambiguous, it can be understood from the screen.
According to the embodiment of the disclosure, a task instruction can be converted into voice through a TTS (Text To Speech) engine and broadcast to the operator, and the operator's spoken confirmation is converted into an actual operation using natural language recognition technology. This breaks through the traditional flow control technique limited to fixed keywords and allows free input in the unpacking step: the flow can be driven not only by specific keywords, but also by the user's free natural language at specific positions. Through the embodiment of the disclosure, voice is used as the primary technology, assisted by somatosensory shaking, text input, and the like, so that true multi-modal interaction is realized, which is more efficient than traditional single-modal interaction, and the user can flexibly switch interaction modes in a given environment, as illustrated by the sketch below.
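Purely as a sketch of the multi-modal interaction described in the preceding paragraphs, including the control commands mentioned earlier, the snippet below dispatches one user input, whether a recognized voice keyword, free natural language, a somatosensory shake, or typed text, to the acceptance result of the current item. The event structure and the speech_io helper are assumptions, not part of the disclosure.

```python
def handle_input(event, item, results, speech_io):
    """Dispatch one user input to the acceptance result of the current item."""
    if event["mode"] == "voice":
        text = event["text"]                     # recognized speech
        if text in ("repeat", "pause", "wake"):  # control keywords drive the flow
            speech_io.handle_control(text)       # assumed helper for flow control
        elif text == "cancel":                   # re-enter the previous item on a recognition error
            results.pop(item, None)
        else:                                    # free natural language is stored as the result
            results[item] = text
    elif event["mode"] == "shake":               # somatosensory shake confirms the operation
        results[item] = "confirmed"
    elif event["mode"] == "text":                # typed input as an auxiliary mode
        results[item] = event["text"]
    return results
```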
The method shown in fig. 2 is further described below with reference to fig. 5-7 in conjunction with the exemplary embodiment.
Fig. 5 schematically illustrates a flow chart of an item acceptance method according to another embodiment of the present disclosure.
As shown in FIG. 5, the method further includes operations S250-S270.
In operation S250, order number information of the target item is acquired.
In operation S260, it is determined, according to the order number information, whether the task list includes a task for checking and accepting the target goods.
In operation S270, in the case where the task list includes a task for checking the target item, voice information is acquired.
According to embodiments of the present disclosure, one or more tasks may be included in the task list, each of which may be an acceptance job with respect to one or more items.
Through the embodiment of the disclosure, before checking and accepting goods, whether the checking and accepting task exists is judged, so that repeated checking and accepting work is reduced, and the working efficiency is improved.
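A minimal sketch of operations S250 to S270 follows, assuming the task list is simply a mapping from order numbers to acceptance tasks; the names are illustrative only.

```python
def start_acceptance(order_number, task_list, speech_io):
    """Sketch of operations S250-S270: only proceed when the order is in the task list."""
    # S250: acquire the order number information of the target item
    # S260: judge whether the task list contains an acceptance task for this order
    task = task_list.get(order_number)
    if task is None:
        speech_io.say("This order is not in the task list, please verify and re-enter.")
        return None
    # S270: only when such a task exists is the user's voice information acquired
    voice_info = speech_io.listen()
    return task, voice_info
```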
Fig. 6 schematically illustrates a flowchart of outputting acceptance content corresponding to an acceptance type by voice according to an embodiment of the present disclosure.
According to an embodiment of the present disclosure, the acceptance content includes at least a first acceptance item and a second acceptance item. As shown in fig. 6, outputting acceptance content corresponding to the acceptance type by voice includes operations S231 to S232.
In operation S231, the first acceptance item is output through voice.
In operation S232, in the case where the result of acceptance of the user for the first acceptance item is acquired, the second acceptance item is output by voice.
According to the embodiment of the present disclosure, when the operator does not hear or does not execute the command for some reason, the previous broadcast instruction may also be repeated.
According to the embodiment of the disclosure, when obtaining the acceptance result of the user for the first acceptance item, free natural language can be entered and recognized, written into the acceptance page, and the most recent voice input converted into text content, after which the second acceptance item is output through voice.
Through the embodiment of the disclosure, the acceptance error rate can be reduced by sequentially broadcasting the acceptance items through voice, so that the acceptance work is orderly carried out.
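The sequential prompting of operations S231 and S232, including the repeat behaviour mentioned above, might look roughly like the following; the speech_io helper and the literal "repeat" keyword are assumptions.

```python
def prompt_items_in_sequence(items, speech_io):
    """Output acceptance items one at a time; move on only after a result is obtained."""
    results = {}
    for item in items:                          # e.g. [first acceptance item, second acceptance item, ...]
        speech_io.say(f"Please confirm: {item}")    # S231: output the current acceptance item by voice
        answer = speech_io.listen()
        while answer == "repeat":               # repeat the previous broadcast if the operator missed it
            speech_io.say(f"Please confirm: {item}")
            answer = speech_io.listen()
        results[item] = answer                  # free natural language is recorded as text
    return results                              # S232: the next item is reached only after this result
```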
Fig. 7 schematically illustrates a flow chart of an item acceptance method according to another embodiment of the present disclosure.
As shown in fig. 7, the method includes operations S280 to S2100.
In operation S280, historical voice information transmitted during the process of checking and accepting goods by different users is acquired.
In operation S290, the acquired historical voice information is trained by the deep learning method, and a voice response model is established.
In operation S2100, a response is made to the corresponding user based on the voice response model.
According to the embodiment of the disclosure, common expressions of operators, namely historical voice information, can be collected in real time and a corresponding database established, and a corresponding word stock can be built for the keywords commonly used in the warehousing process. Machine learning methods such as deep learning can be combined to refine the voice response mechanism and improve response flexibility for different operators. According to the embodiment of the disclosure, during voice output, autonomous suggestions can be made based on the voice response model and confirmed by the operator.
According to the embodiment of the disclosure, with the voice response function in business operation, a skilled user can directly speak a keyword instruction chosen from the known options, interrupting the voice broadcast, and the flow then continues according to the input instruction; combining machine learning methods such as deep learning to refine the voice response mechanism can further improve response flexibility.
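The disclosure states only that historical voice information is trained by a deep learning method to build a voice response model, without specifying the model. As one possible illustration, the sketch below trains a tiny bag-of-words intent classifier in PyTorch over transcribed historical utterances; the data layout, the labels, and all names are assumptions.

```python
import torch
import torch.nn as nn

def build_response_model(utterances, labels, vocab, num_intents, epochs=10):
    """Train a small intent classifier over historical (transcribed) voice utterances."""
    def encode(utterance):
        # bag-of-words encoding over an assumed word-to-index vocabulary
        vec = torch.zeros(len(vocab))
        for word in utterance.split():
            if word in vocab:
                vec[vocab[word]] = 1.0
        return vec

    x = torch.stack([encode(u) for u in utterances])
    y = torch.tensor(labels)  # integer intent labels per utterance

    model = nn.Sequential(nn.Linear(len(vocab), 64), nn.ReLU(), nn.Linear(64, num_intents))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    return model, encode
```

The resulting model could then back the response module: a recognized utterance is encoded, the most probable intent selected, and the matching voice response played back, with operator confirmation as described above.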
Fig. 8 schematically illustrates a block diagram of an item acceptance system according to an embodiment of the present disclosure.
As shown in fig. 8, the item acceptance system 400 includes a first acquisition module 410, a determination module 420, an output module 430, and a second acquisition module 440.
The first obtaining module 410 is configured to obtain voice information of a user, where the voice information is used to determine an acceptance type of the target item.
The determination module 420 is configured to determine an acceptance type corresponding to the target item in response to the voice information.
The output module 430 is configured to output, by voice, acceptance content corresponding to the acceptance type, where the acceptance content includes one or more acceptance items.
The second obtaining module 440 is configured to obtain an acceptance result of the user for one or more acceptance items in the acceptance content, and generate an acceptance list according to the acceptance result.
According to the embodiment of the disclosure, the acceptance content corresponding to the acceptance type is output through voice, and the acceptance list is generated according to the acceptance results for one or more acceptance items in the acceptance content, wherein the acceptance items can be, for example, product appearance, loss, whether the functions are normal, and the like. Accepting goods through human-machine voice interaction achieves the technical effect of improving operation efficiency and reduces the cost of purely operational training of personnel.
Fig. 9 schematically illustrates a block diagram of a second acquisition module according to an embodiment of the disclosure.
As shown in fig. 9, according to an embodiment of the present disclosure, the second acquisition module 440 includes a first acquisition unit 441, a second acquisition unit 442, and a third acquisition unit 443.
The first acquisition unit 441 is configured to acquire an acceptance result for one or more acceptance items by acquiring acceptance voice content of a user and identifying the acceptance voice content.
The second obtaining unit 442 is configured to sense a somatosensory input of a user through a terminal device used by the user, and obtain an acceptance result for one or more acceptance items based on the somatosensory input.
The third acquiring unit 443 is configured to acquire an acceptance result for one or more acceptance items by acquiring text input content of a user.
Through the embodiment of the disclosure, voice is used as the primary technology, assisted by somatosensory shaking, text input, and the like, so that true multi-modal interaction is realized, which is more efficient than traditional single-modal interaction, and the user can flexibly switch interaction modes in a given environment.
Fig. 10 schematically illustrates a block diagram of an item acceptance system according to another embodiment of the present disclosure.
As shown in fig. 10, the item acceptance system 400 further includes a third acquisition module 450, a judging module 460, and a fourth acquisition module 470, according to embodiments of the present disclosure.
The third obtaining module 450 is configured to obtain the order number information of the target item.
The judging module 460 is configured to judge, according to the order number information, whether the task list includes a task for checking and accepting the target goods.
The fourth obtaining module 470 is configured to obtain the voice information when the task list includes a task for checking the target item.
Through the embodiment of the disclosure, before checking and accepting goods, whether the checking and accepting task exists is judged, so that repeated checking and accepting work is reduced, and the working efficiency is improved.
Fig. 11 schematically illustrates a block diagram of an output module according to an embodiment of the disclosure.
As shown in fig. 11, according to an embodiment of the present disclosure, the acceptance content includes at least a first acceptance item and a second acceptance item, and the output module 430 includes a first output unit 431 and a second output unit 432.
The first output unit 431 is used for outputting the first acceptance item through voice.
The second output unit 432 is configured to output, by voice, the second acceptance item in a case where an acceptance result of the user for the first acceptance item is acquired.
Through the embodiment of the disclosure, the acceptance error rate can be reduced by sequentially broadcasting the acceptance items through voice, so that the acceptance work is orderly carried out.
In accordance with an embodiment of the present disclosure, as shown in FIG. 10, the item acceptance system 400 further includes a fifth acquisition module 480, a training module 490, and a response module 4100.
The fifth obtaining module 480 is configured to obtain historical voice information sent during the process of checking and accepting goods by different users.
The training module 490 is configured to train the obtained historical voice information through a deep learning method and establish a voice response model.
The response module 4100 is configured to respond to the corresponding user based on the voice response model.
According to the embodiment of the disclosure, with the voice response function in business operation, a skilled user can directly speak a keyword instruction chosen from the known options, interrupting the voice broadcast, and the flow then continues according to the input instruction; combining machine learning methods such as deep learning to refine the voice response mechanism can further improve response flexibility.
Any number of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure, or at least part of the functionality of any number of them, may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be split into multiple modules. Any one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system in a package, or an Application Specific Integrated Circuit (ASIC), or in any other reasonable manner of hardware or firmware that integrates or packages the circuit, or in any one of, or a suitable combination of, the three implementation manners of software, hardware, and firmware. Alternatively, one or more of the modules, sub-modules, units, and sub-units according to embodiments of the present disclosure may be at least partially implemented as computer program modules which, when executed, may perform the corresponding functions.
For example, any of the first acquisition module 410, the determination module 420, the output module 430, and the second acquisition module 440, the third acquisition module 450, the determination module 460, the fourth acquisition module 470, the fifth acquisition module 480, the training module 490, and the response module 4100, the first output unit 431, the second output unit 432, the first acquisition unit 441, the second acquisition unit 442, and the third acquisition unit 443 may be combined in one module/unit/sub-unit, or any of the modules/units/sub-units may be split into a plurality of modules/units/sub-units. Alternatively, at least some of the functionality of one or more of these modules/units/sub-units may be combined with at least some of the functionality of other modules/units/sub-units and implemented in one module/unit/sub-unit. According to embodiments of the present disclosure, at least one of the first acquisition module 410, the determination module 420, the output module 430, and the second acquisition module 440, the third acquisition module 450, the determination module 460, the fourth acquisition module 470, the fifth acquisition module 480, the training module 490, and the response module 4100, the first output unit 431, the second output unit 432, the first acquisition unit 441, the second acquisition unit 442, the third acquisition unit 443 may be implemented at least in part as hardware circuitry, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or any other reasonable manner of integrating or packaging the circuitry, or any one of or a suitable combination of software, hardware, and firmware. Alternatively, at least one of the first acquisition module 410, the determination module 420, the output module 430, and the second acquisition module 440, the third acquisition module 450, the judgment module 460, the fourth acquisition module 470, the fifth acquisition module 480, the training module 490, and the response module 4100, the first output unit 431, the second output unit 432, the first acquisition unit 441, the second acquisition unit 442, and the third acquisition unit 443 may be at least partially implemented as a computer program module that, when executed, may perform the corresponding functions.
It should be noted that, in the embodiments of the present disclosure, the goods acceptance system portion corresponds to the goods acceptance method portion; for details of the goods acceptance system portion, refer to the description of the goods acceptance method portion, which is not repeated here.
Fig. 12 schematically illustrates a block diagram of a computer system suitable for implementing the above-described method according to an embodiment of the present disclosure. The computer system illustrated in fig. 12 is merely an example, and should not be construed as limiting the functionality and scope of use of the embodiments of the present disclosure.
As shown in fig. 12, a computer system 500 according to an embodiment of the present disclosure includes a processor 501, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. The processor 501 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. The processor 501 may also include on-board memory for caching purposes. The processor 501 may comprise a single processing unit or a plurality of processing units for performing different actions of the method flows according to embodiments of the disclosure.
In the RAM 503, various programs and data required for the operation of the system 500 are stored. The processor 501, ROM 502, and RAM 503 are connected to each other by a bus 504. The processor 501 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 502 and/or the RAM 503. Note that the program may be stored in one or more memories other than the ROM 502 and the RAM 503. The processor 501 may also perform various operations of the method flow according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the system 500 may further include an input/output (I/O) interface 505, the input/output (I/O) interface 505 also being connected to the bus 504. The system 500 may also include one or more of the following components connected to the I/O interface 505: an input section 506 including a keyboard, a mouse, and the like; an output portion 507 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker, and the like; a storage portion 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card, a modem, or the like. The communication section 509 performs communication processing via a network such as the internet. The drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 510 as needed so that a computer program read therefrom is mounted into the storage section 508 as needed.
According to embodiments of the present disclosure, the method flow according to embodiments of the present disclosure may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 509, and/or installed from the removable media 511. The above-described functions defined in the system of the embodiments of the present disclosure are performed when the computer program is executed by the processor 501. The systems, devices, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium. Examples may include, but are not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, the computer-readable storage medium may include ROM 502 and/or RAM 503 and/or one or more memories other than ROM 502 and RAM 503 described above.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments of the present disclosure and/or in the claims may be combined and/or integrated in various ways, even if such combinations or integrations are not explicitly recited in the present disclosure. In particular, the features recited in the various embodiments and/or claims of the present disclosure may be combined and/or integrated in various ways without departing from the spirit and teachings of the present disclosure. All such combinations and/or integrations fall within the scope of the present disclosure.
The embodiments of the present disclosure are described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described above separately, this does not mean that the measures in the embodiments cannot be used advantageously in combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the disclosure, and such alternatives and modifications are intended to fall within the scope of the disclosure.

Claims (12)

1. An item acceptance method, comprising:
acquiring voice information of a user, wherein the voice information is used for determining an acceptance type of a target item;
determining, in response to the voice information, the acceptance type corresponding to the target item;
outputting, by voice, acceptance content corresponding to the acceptance type, wherein the acceptance content comprises one or more acceptance items and is used for instructing the user to perform an acceptance operation; and
obtaining, through a plurality of interactive operations, acceptance results of the user for the one or more acceptance items in the acceptance content, and generating an acceptance list according to the acceptance results;
wherein the manner of obtaining the acceptance results of the user for the one or more acceptance items in the acceptance content comprises:
acquiring the acceptance results for the one or more acceptance items by acquiring acceptance voice content of the user and recognizing the acceptance voice content;
wherein, during each of the interactive operations, the acceptance voice content comprises natural and spontaneous language of the user.
2. The method of claim 1, wherein the manner of obtaining the acceptance results of the user for the one or more acceptance items in the acceptance content comprises at least one of:
sensing a somatosensory input of the user through a terminal device used by the user, and acquiring the acceptance results for the one or more acceptance items based on the somatosensory input; and
acquiring the acceptance results for the one or more acceptance items by acquiring text input content of the user.
3. The method of claim 1, wherein the method further comprises:
acquiring order number information of the target item;
judging, according to the order number information, whether a task list contains a task of accepting the target item; and
acquiring the voice information in a case where the task list contains the task of accepting the target item.
4. The method of claim 1, wherein the acceptance content comprises at least a first acceptance item and a second acceptance item, and outputting the acceptance content corresponding to the acceptance type by voice comprises:
outputting the first acceptance item by voice; and
outputting the second acceptance item by voice in a case where the acceptance result of the user for the first acceptance item has been acquired.
5. The method of claim 1, wherein the method further comprises:
acquiring historical voice information uttered by different users during acceptance of items;
training on the acquired historical voice information through a deep learning method to establish a voice response model; and
responding to the corresponding user based on the voice response model.
6. An item acceptance system, comprising:
a first acquisition module configured to acquire voice information of a user, wherein the voice information is used for determining an acceptance type of a target item;
a determining module configured to determine, in response to the voice information, the acceptance type corresponding to the target item;
an output module configured to output, by voice, acceptance content corresponding to the acceptance type, wherein the acceptance content comprises one or more acceptance items and is used for instructing the user to perform an acceptance operation; and
a second acquisition module configured to obtain, through a plurality of interactive operations, acceptance results of the user for the one or more acceptance items in the acceptance content, and to generate an acceptance list according to the acceptance results;
wherein the second acquisition module comprises:
a first acquisition unit configured to acquire the acceptance results for the one or more acceptance items by acquiring acceptance voice content of the user and recognizing the acceptance voice content;
wherein, during each of the interactive operations, the acceptance voice content comprises natural and spontaneous language of the user.
7. The system of claim 6, wherein the second acquisition module comprises:
a second acquisition unit configured to sense a somatosensory input of the user through a terminal device used by the user, and to acquire the acceptance results for the one or more acceptance items based on the somatosensory input; and
a third acquisition unit configured to acquire the acceptance results for the one or more acceptance items by acquiring text input content of the user.
8. The system of claim 6, wherein the system further comprises:
a third acquisition module configured to acquire order number information of the target item;
a judging module configured to judge, according to the order number information, whether a task list contains a task of accepting the target item; and
a fourth acquisition module configured to acquire the voice information in a case where the task list contains the task of accepting the target item.
9. The system of claim 6, wherein the acceptance content comprises at least a first acceptance item and a second acceptance item, and the output module comprises:
a first output unit configured to output the first acceptance item by voice; and
a second output unit configured to output the second acceptance item by voice in a case where the acceptance result of the user for the first acceptance item has been acquired.
10. The system of claim 6, wherein the system further comprises:
a fifth acquisition module configured to acquire historical voice information uttered by different users during acceptance of items;
a training module configured to train on the acquired historical voice information through a deep learning method to establish a voice response model; and
a response module configured to respond to the corresponding user based on the voice response model.
11. A computer system, comprising:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the item acceptance method of any one of claims 1 to 5.
12. A computer-readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to implement the item acceptance method of any one of claims 1 to 5.
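
By way of illustration only, and not as part of the claims or the description, the following Python sketch shows one way the claimed interaction flow (acquiring the user's voice information, determining the acceptance type, reading out the acceptance items one by one, and generating an acceptance list from the spoken results) might be wired together. Every name in it (AcceptanceItem, AcceptanceList, ACCEPTANCE_TEMPLATES, run_acceptance, and the recognize/speak/classify_type callables) is hypothetical, and the speech recognition, speech synthesis, and acceptance-type classification steps are assumed to be supplied by external components rather than any particular library.

# Illustrative sketch only: one possible wiring of the voice-driven acceptance flow.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class AcceptanceItem:
    name: str          # e.g. "outer packaging intact"
    result: str = ""   # filled in from the user's reply in one interactive round


@dataclass
class AcceptanceList:
    acceptance_type: str
    items: List[AcceptanceItem] = field(default_factory=list)


# Hypothetical mapping from acceptance type to its acceptance items.
ACCEPTANCE_TEMPLATES: Dict[str, List[str]] = {
    "cold-chain goods": ["temperature within range", "packaging sealed", "quantity matches the order"],
    "fragile goods": ["outer packaging intact", "no visible damage", "quantity matches the order"],
}


def run_acceptance(recognize: Callable[[], str],
                   speak: Callable[[str], None],
                   classify_type: Callable[[str], str]) -> AcceptanceList:
    """One acceptance session: the user's voice goes in, an acceptance list comes out."""
    # Acquire the user's voice information and determine the acceptance type from it.
    utterance = recognize()
    acceptance_type = classify_type(utterance)
    acceptance_list = AcceptanceList(acceptance_type)

    # Output each acceptance item by voice and collect the spoken result, one
    # interactive round per item; the next item is read out only after the
    # result for the previous one has been acquired.
    for item_name in ACCEPTANCE_TEMPLATES.get(acceptance_type, []):
        speak("Please check: " + item_name)
        reply = recognize()  # the user's natural, unconstrained spoken reply
        acceptance_list.items.append(AcceptanceItem(item_name, reply))

    # Generate the acceptance list from the collected results.
    return acceptance_list

A caller would supply recognize, speak, and classify_type functions backed by whichever speech recognition, speech synthesis, and classification services are actually used; the sketch only fixes the order of the interactive rounds.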
CN201811323587.4A 2018-11-07 2018-11-07 Goods acceptance method and system, computer system and computer readable storage medium Active CN111160817B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811323587.4A CN111160817B (en) 2018-11-07 2018-11-07 Goods acceptance method and system, computer system and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111160817A CN111160817A (en) 2020-05-15
CN111160817B true CN111160817B (en) 2024-03-05

Family

ID=70555409

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811323587.4A Active CN111160817B (en) 2018-11-07 2018-11-07 Goods acceptance method and system, computer system and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111160817B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111768147A (en) * 2020-05-29 2020-10-13 大亚湾核电运营管理有限责任公司 Nuclear power station material acceptance method and device, computer equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060074791A1 (en) * 2004-09-28 2006-04-06 Jelaco John A System, method and associated software for managing the transportation of goods
US20140211017A1 (en) * 2013-01-31 2014-07-31 Wal-Mart Stores, Inc. Linking an electronic receipt to a consumer in a retail store

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103208070A (en) * 2013-04-02 2013-07-17 上海斐讯数据通信技术有限公司 Automatic ticket processing system and processing method based on mobile phone voice
TW201528201A (en) * 2014-01-15 2015-07-16 Uni President Cold Chain Corp Product logistics acceptance system
CN104866993A (en) * 2015-06-01 2015-08-26 国药物流有限责任公司 Acceptance system
CN105068661A (en) * 2015-09-07 2015-11-18 百度在线网络技术(北京)有限公司 Man-machine interaction method and system based on artificial intelligence
CN105741071A (en) * 2016-02-01 2016-07-06 成都市泰牛科技股份有限公司 Management system and method of commercial vehicle acceptance warehousing
CN108346073A (en) * 2017-01-23 2018-07-31 北京京东尚科信息技术有限公司 A kind of voice purchase method and device
CN107291822A (en) * 2017-05-24 2017-10-24 北京邮电大学 The problem of based on deep learning disaggregated model training method, sorting technique and device
CN108302703A (en) * 2018-01-18 2018-07-20 广东美的制冷设备有限公司 Air quality management method, system and computer readable storage medium
CN108428451A (en) * 2018-03-12 2018-08-21 联想(北京)有限公司 Sound control method, electronic equipment and speech control system
CN108612308A (en) * 2018-05-11 2018-10-02 成都澄和科技有限公司 A kind of building iron intelligence examination goods system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Applying RFID and GPS tracker for signal processing in a cargo security system; Ruijian Zhang; 2013 IEEE International Conference on Signal Processing, Communication and Computing (ICSPCC 2013); pp. 1-5 *
Application of Internet of Things technology in transport vehicle scheduling and monitoring; 雷波 (Lei Bo); 《流通经济》; pp. 139-141 *

Also Published As

Publication number Publication date
CN111160817A (en) 2020-05-15

Similar Documents

Publication Publication Date Title
US20230237999A1 (en) Multiple inspector voice inspection
US10621538B2 (en) Dynamic check digit utilization via electronic tag
CN108537533B (en) Self-service shopping settlement method and system
US20180373503A1 (en) Application development using multiple primary user interfaces
KR102236318B1 (en) Systems and methods for managing application programming interface information
CN110689393B (en) Man-machine interaction method, device, system and storage medium
CN109992601A (en) Method for pushing, device and the computer equipment of backlog information
TWI775414B (en) System and methods for managing client request
US11216821B1 (en) Systems and methods for breaking up select requests to streamline processes and improve scalability
CN111681085A (en) Commodity pushing method and device, server and readable storage medium
TW202207036A (en) Computer-implemented systems and computer-implemented methods for generating and modifying data for module
CN111160817B (en) Goods acceptance method and system, computer system and computer readable storage medium
JP6734452B1 (en) Information processing apparatus, information processing method, and computer program
US20150095199A1 (en) Quick route queued products and services
CN110853230A (en) Self-service vending machine exception handling method, system and equipment
KR102479526B1 (en) Systems and methods for dynamic in-memory caching of mappings into partitions
CN105389333A (en) Retrieval system construction method and server framework
CN115170157A (en) Store auditing method, device, equipment and storage medium
US20190197467A1 (en) Automated zone location characterization
CN113424245B (en) Live Adaptive Training in Production Systems
CN113178032A (en) Video processing method, system and storage medium
TW202305592A (en) Computer-implemented system and computer-implemented method for eliminating perpetual application programming interface calls, and computer-implemented system for eliminating unresolved application programming interface calls
US10896403B2 (en) Systems and methods for managing dated products
US20210295349A1 (en) System for vendor correction of errors
CN113408912B (en) Audit system for television station and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210303

Address after: 101, 1st floor, building 2, yard 20, Suzhou street, Haidian District, Beijing 100080

Applicant after: Beijing Jingbangda Trading Co.,Ltd.

Address before: 100086 8th Floor, 76 Zhichun Road, Haidian District, Beijing

Applicant before: BEIJING JINGDONG SHANGKE INFORMATION TECHNOLOGY Co.,Ltd.

Applicant before: BEIJING JINGDONG CENTURY TRADING Co.,Ltd.

Effective date of registration: 20210303

Address after: 6 / F, 76 Zhichun Road, Haidian District, Beijing 100086

Applicant after: Beijing Jingdong Zhenshi Information Technology Co.,Ltd.

Address before: 101, 1st floor, building 2, yard 20, Suzhou street, Haidian District, Beijing 100080

Applicant before: Beijing Jingbangda Trading Co.,Ltd.

GR01 Patent grant