Detailed Description
So that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some of the terms and terminology used in describing the embodiments of the application are explained as follows:
(1) Image recognition technology belongs to the field of artificial intelligence and refers to techniques for recognizing objects of various types from images.
(2) AR (Augmented Reality) technology calculates the position and angle of the camera image in real time, overlays corresponding images, video, and 3D models, and enables interaction between the virtual world and the real world through a screen.
Example 1
According to an embodiment of the application, an embodiment of an object warehousing processing method is provided. It should be noted that the object warehousing processing method provided by the application mainly adopts an image recognition technology, where the object may be, but is not limited to, a commodity. Specifically, the method uses image recognition to identify the names of warehoused and non-warehoused commodities and determines whether each commodity has been warehoused according to its name; if a commodity has not yet been warehoused, a worker can complete its warehousing by operating the display interface of a mobile terminal. In this process, warehousing of the commodity is completed without manually filling in warehouse entry bills and the like, so the risk of manual recording errors is reduced. In addition, the process does not require scanning the bar code or two-dimensional code of each commodity to distinguish commodities already in storage from commodities to be put into storage, which reduces the consumption of manpower and material resources and improves the working efficiency of staff.
In addition, the object warehousing processing method provided by the application can be widely applied to information management, such as commodity warehousing management of supermarkets and of various stores, for example shoe stores, clothing stores, and electronics stores. The method is not limited to warehouse management of supermarket commodities and can be applied to warehouse management of various other commodities. Specifically, before a commodity purchased by a supermarket is put in storage, a supermarket worker photographs the commodity with a mobile terminal (for example, a smart phone); the mobile terminal performs image recognition on the photographed picture, recognizes the name of the commodity in the picture, and then, through AR technology, displays the warehousing status of the commodity in the picture as a label on the display screen of the mobile terminal. The worker can then complete the warehousing operation by clicking a commodity labeled "to be warehoused", inputting its warehouse-in quantity, and confirming.
From the above, the present application automatically completes the warehouse entry processing of the commodity by adopting the image recognition technology. Fig. 1 shows a flowchart of the object warehousing processing method provided by the present application, and as can be seen from fig. 1, the object warehousing processing method specifically includes the following steps:
step S102, an image is acquired.
It should be noted that the image acquisition device may capture an image in at least one of the following ways: capturing a photograph; or capturing video. The image acquisition device is a device with a photographing or image capturing function and may be, but is not limited to, a video camera, a video recorder, a camera, or any mobile terminal with an image capturing function, for example, a smart phone, a smart tablet, or a wearable device (for example, smart glasses).
In an alternative embodiment, a worker photographs merchandise purchased by a supermarket with a handheld image acquisition device and sends the obtained image to a server; the server processes the acquired image, for example by pre-processing it, where the pre-processing includes, but is not limited to, filtering, enhancing, and sharpening the acquired image so that the image, and the merchandise in it, is easier to recognize. The image acquisition device may obtain the image of purchased commodities shown in the schematic diagram of fig. 2, in which A represents Master Kong iced black tea, B represents Wangzai snow rice cake, C represents Yili chocolate milk, and D represents Wrigley Yida chewing gum.
In another alternative embodiment, after the image acquisition device acquires the image of the commodity purchased by the supermarket, the background processor of the image acquisition device pre-processes the image, so that the acquired image can still be pre-processed even when the image acquisition device has no network connection or the network is interrupted.
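The sharpening mentioned in the pre-processing step can be sketched as a small convolution. The snippet below is only an illustration under stated assumptions: a grayscale image is represented as a list of rows of pixel intensities, and a common 3x3 sharpening kernel is assumed; a real implementation would use an image-processing library.

```python
def sharpen(image):
    """Apply a simple 3x3 sharpening kernel to a grayscale image
    (list of lists of intensities in 0-255). Border pixels are left
    unchanged; results are clamped to the valid range."""
    h, w = len(image), len(image[0])
    kernel = [[0, -1, 0], [-1, 5, -1], [0, -1, 0]]  # assumed kernel
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * image[y + ky - 1][x + kx - 1]
            out[y][x] = max(0, min(255, acc))  # clamp to 0..255
    return out
```

A pixel brighter than its neighbours is amplified, which is what makes edges and text on commodity packaging easier to recognize.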
Step S104, an object in the image is identified.
In an alternative embodiment, after acquiring the image, the image acquisition device sends it to a cloud server. After receiving the image, the server compares it with the commodity images in its 360-degree picture library, determines the names of the commodities in the image according to the comparison results, and sends the recognized commodity names back to the image acquisition device, thereby completing the recognition of objects in the image.
In another alternative embodiment, the image acquisition device is provided with a 360-degree picture library, and after the image acquisition device acquires the image, the background processor of the image acquisition device compares the acquired image with the image in the 360-degree picture library, so as to determine the names of commodities in the acquired image, thereby completing the identification process of objects in the image.
It should be noted that the name of the object in the acquired image can be obtained in step S104, and the warehouse-in state of the object can then be determined from its name.
Step S106, the warehouse-in state of the identified object is obtained.
It should be noted that the warehouse-in state of the object may be the warehouse-in state of a commodity, which includes at least: warehoused and to-be-warehoused.
In an alternative embodiment, after obtaining the name of the commodity, the image acquisition device queries a database in the server based on the name and determines the warehousing status of the commodity from the database. The server then sends the obtained warehouse-in states of the commodities to the image acquisition device, which displays the warehouse-in state of each commodity on its display interface in the form of labels, as shown in fig. 3, where the Master Kong iced black tea and the Yili chocolate milk are in the to-be-warehoused state, and the Wangzai snow rice cakes and the Wrigley Yida chewing gum are in the warehoused state.
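The name-to-state lookup described above can be sketched as follows. The in-memory dictionary is a hypothetical stand-in for the server-side database; the commodity names and state labels are taken from the example of fig. 3.

```python
# Hypothetical stand-in for the server-side database; a deployed
# system would query a real database by commodity name.
WAREHOUSE_DB = {
    "Master Kong iced black tea": "to be warehoused",
    "Yili chocolate milk": "to be warehoused",
    "Wangzai snow rice cake": "warehoused",
    "Wrigley Yida chewing gum": "warehoused",
}

def warehousing_state(name):
    """Return the warehouse-in state label for a recognized commodity,
    or 'unknown' when the name is not found in the database."""
    return WAREHOUSE_DB.get(name, "unknown")
```

The returned label is what the device renders on screen, via AR, next to each commodity.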
It should be noted that in step S106 the warehouse-in state of a commodity can be obtained automatically from its name, without having to distinguish the state by manually moving the commodity, which saves staff labor and working time, improves working efficiency, and reduces the risk of data loss.
Step S108, outputting a warehousing processing result when the warehousing state is to-be-warehoused, where the warehousing processing result includes: information indicating that the object has been warehoused.
It should be noted that the information involved in the warehousing process may include, but is not limited to, the name of the object, the delivery quantity of the object, and the warehouse-in quantity entered for the object.
In an alternative embodiment, after the warehousing status of the merchandise in the image is determined, the status of each item is displayed on the display screen of the image acquisition device, as shown in fig. 3. When the worker clicks the label corresponding to a commodity to be put in storage, an entry interface for that commodity opens; for example, after the worker clicks the label corresponding to the to-be-warehoused Yili chocolate milk, the entry interface shown in fig. 4 opens. As can be seen from fig. 4, this interface displays the name of the commodity (e.g. "Yili chocolate milk"), its specification (e.g. "250 ml"), model (e.g. "L"), and unit (e.g. "bottle"), as well as the delivery quantity and the warehouse-in quantity. After the worker inputs the warehouse-in quantity and clicks the "complete entry" button, warehousing of the commodity is completed.
Based on the scheme defined in steps S102 to S108, the method collects an image, identifies an object in the image, obtains the warehouse-in state of the identified object, and finally, when the warehouse-in state is to-be-warehoused, outputs a warehousing processing result, where the result includes information indicating that the object has been warehoused.
It is easy to notice that the image acquisition device can identify the names of the commodities in the image through image recognition and then determine the warehousing state of each commodity from its name; staff are not required to distinguish the warehousing state of commodities by moving them, which achieves the technical effect of automatically determining the warehousing state of commodities and reduces staff workload. In addition, after the warehouse-in state of a commodity is obtained, it is displayed on the screen of the image acquisition device, and a worker can complete warehousing simply by operating the label of a commodity whose state is to-be-warehoused, which reduces the risk of data loss inherent in traditional paper registration and avoids unnecessary economic loss.
From the above, the object warehousing processing method provided by the application can achieve the purpose of automatically acquiring the warehousing state of the object, thereby realizing the technical effect of improving the warehousing efficiency of the object and further solving the technical problem of low object warehousing efficiency in the prior art.
Before the warehouse-in state of the identified object is acquired, it is necessary to check whether the quality of the object meets the requirement; the warehouse-in state of the object is acquired only if its quality meets the requirement. This specifically includes the following steps:
step S10, judging whether the quality of the object meets the preset quality requirement according to the acquired image;
step S12, when the determination result is yes, a step of acquiring the warehouse entry state of the identified object is performed.
In an alternative embodiment, after the mobile device (i.e. an image acquisition device with an image processing function) obtains the collected image, it sends the image to the server. After recognizing the image and determining the name of the commodity, the server scores the quality of the commodity according to information such as the packaging, the shape, and the characters on the outer package, and determines whether the quality meets the predetermined quality requirement according to the scoring result. For example, the server determines whether the commodity is old according to features such as the color and shape of its outer package, whether the outer package is damaged, and, from the characters on the outer package, whether the commodity is approaching its expiration date; it then scores each aspect and computes a weighted sum of the scores to obtain the quality score of the commodity. The server then compares the quality score with the score corresponding to the predetermined quality requirement: if the quality score is greater than or equal to that score, the warehousing state of the commodity is determined further; if it is smaller, the scoring result is sent to the image acquisition device together with an early-warning message prompting the staff that the commodity has a quality problem.
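The weighted-sum scoring and threshold comparison above can be sketched briefly. The criterion names ("age", "damage", "expiry") and the particular weights are illustrative assumptions; the embodiment only specifies that per-aspect scores are weighted and summed and compared against the score of the predetermined quality requirement.

```python
def quality_score(scores, weights):
    """Weighted sum of per-criterion quality scores (e.g. packaging
    wear, package damage, proximity to the expiration date)."""
    return sum(scores[k] * weights[k] for k in weights)

def meets_quality(scores, weights, threshold):
    """True when the weighted quality score reaches the score that
    corresponds to the predetermined quality requirement."""
    return quality_score(scores, weights) >= threshold
```

When `meets_quality` is false, the device would surface the scoring result and an early-warning message instead of continuing to the warehousing state.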
In addition, it should be noted that, when it is determined that the quality of the commodity meets the predetermined quality requirement, the warehouse-in state of the commodity is further obtained, where, when the warehouse-in state is to be warehoused, the output warehouse-in processing result specifically includes the following steps:
step S1080, displaying the name of the object, the delivery quantity of the object and an input box for inputting the warehouse-in quantity of the object;
step S1082, receiving the warehouse-in quantity input through the input box according to the delivery quantity;
step S1084, when a warehousing instruction confirming the input warehouse-in quantity is received, outputting a processing result, where the processing result includes: information indicating that the object has been warehoused.
In an alternative embodiment, the worker clicks the label corresponding to the Yili chocolate milk in fig. 3 and enters the entry interface shown in fig. 4, where the worker checks whether the name, specification, model, delivery quantity, etc. of the commodity are correct and enters the warehouse-in quantity in the corresponding input box. After the worker clicks the "complete entry" button on the entry interface, the image acquisition device receives the warehousing instruction and stores the commodity name, specification, model, warehouse-in quantity, and so on into the supermarket's purchase-sale-stock database, thereby completing the warehousing operation. The purchase-sale-stock database stores the purchase quantity, sales quantity, and stock quantity of each commodity.
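Recording a confirmed warehouse-in can be sketched as below. The dict-based `db` is a hypothetical stand-in for the purchase-sale-stock database, and the field names are assumptions for illustration.

```python
def warehouse_in(db, name, spec, model, quantity):
    """Record a completed warehouse-in: append an entry with the
    commodity's name, specification, model, and quantity, and update
    the running stock count for that commodity."""
    db.setdefault("entries", []).append(
        {"name": name, "spec": spec, "model": model, "quantity": quantity}
    )
    stock = db.setdefault("stock", {})
    stock[name] = stock.get(name, 0) + quantity
    return db
```

Repeated warehouse-ins of the same commodity simply accumulate in the stock count, mirroring how a purchase-sale-stock database tracks quantities over time.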
It should be noted that, besides entering the warehouse-in quantity through the input box, the staff can also enter it by voice. Specifically, the mobile device receives a first voice instruction and determines from it the object to be warehoused; it then receives a second voice instruction and determines the warehouse-in quantity for that object; and when a third voice instruction is received, it outputs a processing result that includes information indicating that the object has been warehoused. The first voice instruction carries the name of the object, from which the image acquisition device identifies the object to be warehoused; the second voice instruction carries the warehouse-in quantity, which the image acquisition device displays on the entry interface upon receipt; and the third voice instruction carries the warehousing instruction confirming the entered quantity, upon receipt of which the image acquisition device confirms warehousing of the commodity.
In an alternative embodiment, after the image is acquired, the image acquisition device may identify the object in it, where identifying the object in the image specifically includes the following steps:
Step S1060, comparing the image with the images of objects in a 360-degree picture library to obtain a comparison result, where images of at least two faces of each object are stored in the 360-degree picture library;
step S1062, identifying the object in the image according to the obtained comparison result.
It should be noted that, to obtain a more accurate image recognition result, images of four faces of each object in the 360-degree picture library may be used for comparison, where the four face images of a commodity are its front, side, top, and bottom images. As shown in figs. 5-9: fig. 5 is an image of a certain milk acquired by the image acquisition device, fig. 6 is the front image of X milk stored in the 360-degree picture library, fig. 7 its side image, fig. 8 its top image, and fig. 9 its bottom image.
In addition, it should be noted that, to improve the image recognition rate, the shooting angle of the image acquisition device may be adjusted so that it captures the best possible commodity image, avoiding blind spots that would affect recognition. The resolution of the camera can also be increased to improve the quality of the acquired image.
In addition, comparing the image with the images of objects in the 360-degree picture library to obtain a comparison result specifically includes the following steps:
step S10602, comparing the image with the front, side, top, and bottom images of the objects in the 360-degree picture library to obtain a front result, a side result, a top result, and a bottom result;
step S10604, aggregating the obtained front, side, top, and bottom results to obtain the comparison result.
In an alternative embodiment, taking figs. 5-9 as an example, fig. 5 is compared with figs. 6, 7, 8, and 9 respectively to obtain four matching degrees between fig. 5 and the other four images, and the image acquisition device then computes a weighted sum of the four matching degrees to obtain the final comparison result. Similarly, the front, side, top, and bottom images of every commodity in the 360-degree picture library are compared in turn to obtain their comparison results. The image acquisition device then takes the name of the commodity with the largest matching degree, or with a matching degree greater than a preset matching degree, as the name of the commodity in the acquired image; for example, if the X milk in the 360-degree picture library has the highest matching degree with fig. 5, the commodity in fig. 5 is X milk.
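The per-face aggregation and best-candidate selection above can be sketched as follows. The equal face weights are an assumption (the embodiment only says the four results are weighted and summed), and the function names are illustrative.

```python
def overall_match(face_matches, face_weights=None):
    """Combine the front/side/top/bottom matching degrees into one
    comparison result by weighted summation. Equal weights are an
    assumed default."""
    if face_weights is None:
        face_weights = {f: 0.25 for f in face_matches}
    return sum(face_matches[f] * face_weights[f] for f in face_matches)

def best_commodity(candidates, min_match=0.0):
    """Pick the library commodity whose aggregated matching degree is
    highest, provided it reaches the preset matching degree; return
    None otherwise."""
    name, score = max(candidates.items(), key=lambda kv: kv[1])
    return name if score >= min_match else None
```

Returning `None` when no candidate reaches the preset matching degree is what later triggers the library-update path for unknown commodities.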
In the process of comparing the image with the commodity images in the 360-degree picture library, it is not necessary to match against every commodity in the library. Specifically, after the image is acquired, the category of the commodity can be roughly determined from features such as the shape and color of the commodity in the image, and then only library images of commodities in the same category need to be compared. For example, if the commodity in the acquired image is bottle-shaped and white, it can be roughly determined to be a food item, and the image acquisition device then matches it only against the food images in the 360-degree picture library.
In addition, it should be noted that, since commodity types continually increase and change, the 360-degree picture library needs to be updated periodically to ensure that all commodities in acquired images can be recognized. The library can be updated manually at regular intervals, or automatically.
In an alternative embodiment, if the matching degrees of the acquired image against all commodity images in the 360-degree picture library are smaller than a preset threshold, the commodity may not yet exist in the library. In that case, the image acquisition device saves the image of the commodity, and the server obtains the commodity's name and its front, side, top, and bottom images from the internet. Once this information is acquired, it is stored in the 360-degree picture library, completing the automatic update of the library.
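The automatic-update condition can be sketched briefly. The threshold value and the dict-based library are illustrative assumptions; the embodiment only specifies "smaller than a preset threshold" and that the fetched name and face images are stored in the library.

```python
PRESET_THRESHOLD = 0.6  # illustrative value; the text only says "preset threshold"

def maybe_update_library(library, name, face_images, best_match):
    """If the best comparison fell below the preset threshold, treat
    the commodity as missing from the 360-degree picture library and
    store its face images (fetched, in the embodiment, from the
    internet via the server) under its name."""
    if best_match < PRESET_THRESHOLD and name not in library:
        library[name] = face_images
        return True
    return False
```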
In an alternative embodiment, the object in the image can be identified by a machine learning method, which specifically includes the following steps:
in step S106a, an object recognition model is determined, where the object recognition model is obtained through machine learning training on multiple sets of data, each set including: an image to be recognized and a label identifying the object in that image;
step S106b, identifying the object in the image according to the determined object identification model.
It should be noted that the object recognition model may be, but is not limited to, a commodity recognition model.
Specifically, after images to be recognized are acquired, they can be analyzed with a commodity recognition model obtained in advance through machine learning training to determine the commodities they contain; for example, image features such as the shape, color, and text of the commodity in an image can be extracted, and the commodity recognized from those features. To recognize commodities from images, a neural network model can be established: multiple sets of training images are collected in advance, the commodity in each image is labeled manually, and the model is then trained on these labeled images to obtain the commodity recognition model. Once the model is obtained, an image acquired by the image acquisition device can be fed to it as input, and its output is the commodity in the image.
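The train-on-labeled-features idea can be illustrated with a toy stand-in. A nearest-centroid classifier over hand-crafted feature vectors is an assumption for illustration only; the embodiment itself trains a neural network.

```python
def train_centroids(samples):
    """Toy stand-in for training the commodity recognition model:
    average the feature vectors of the manually labeled training
    images, per commodity label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is nearest to the feature
    vector (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))
```

At inference time the acquired image would first be reduced to such a feature vector (shape, color, text features), then classified.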
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present invention.
From the above description of the embodiments, it will be clear to those skilled in the art that the object warehousing processing method according to the above embodiment may be implemented by means of software plus a necessary general hardware platform, or may be implemented by hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
Example 2
According to an embodiment of the present invention, there is further provided an embodiment of an object warehousing processing method, where fig. 10 shows a flowchart of the object warehousing processing method, and as shown in fig. 10, the method includes the following steps:
in step S602, the name of the object and the delivery quantity of the object are displayed, wherein the object is obtained by identifying the acquired image.
It should be noted that the object may be, but is not limited to, a commodity, in which case the name of the object and the delivery quantity of the object are the name and delivery quantity of the commodity. The name and delivery quantity obtained by recognizing the acquired image may be displayed on the mobile terminal. Fig. 4 shows a schematic view of an alternative entry interface, on which the commodity name Yili chocolate milk is displayed with a delivery quantity of 285.
In addition, it should be noted that the mobile terminal may be, but is not limited to, a device with a photographing or image capturing function, such as a video camera, a video recorder, a camera, or any mobile terminal with an image capturing function, for example, a smart phone, a smart tablet, or a wearable device (for example, smart glasses).
Step S604, a warehousing instruction for confirming warehousing of the object is received.
Specifically, the worker may input the warehousing instruction to the mobile terminal by clicking the "complete entry" button in fig. 4. The worker can also input the warehousing instruction as a voice command, in which case the mobile terminal parses the voice command, performs speech recognition on it, and thereby obtains the warehousing instruction.
Step S606, outputting a processing result according to the warehousing instruction, where the processing result includes: information indicating that the object has been warehoused.
In an alternative embodiment, the worker clicks the label corresponding to the Yili chocolate milk in fig. 3 and enters the entry interface shown in fig. 4, where the worker checks whether the name, specification, model, delivery quantity, etc. of the commodity are correct and enters the warehouse-in quantity in the corresponding input box. After the worker clicks the "complete entry" button on the entry interface, the image acquisition device receives the warehousing instruction and stores the commodity name, specification, model, warehouse-in quantity, and so on into the supermarket's purchase-sale-stock database, thereby completing the warehousing operation. The purchase-sale-stock database stores the purchase quantity, sales quantity, and stock quantity of each commodity.
Based on the scheme defined in steps S602 to S606, the method displays the name of the object and its delivery quantity, receives a warehousing instruction confirming warehousing of the object, and then outputs a processing result according to the instruction, where the object is obtained by recognizing the acquired image and the processing result includes information indicating that the object has been warehoused.
It is easy to notice that the mobile terminal can identify the names of the commodities in the image through image recognition and then determine the warehousing state of each commodity from its name; staff are not required to distinguish the warehousing state of commodities by moving them, which achieves the technical effect of automatically determining the warehousing state of commodities and reduces staff workload. In addition, after the warehouse-in state of a commodity is obtained, it is displayed on the screen of the mobile terminal, and a worker can complete warehousing simply by operating the label of a commodity whose state is to-be-warehoused, which reduces the risk of data loss inherent in traditional paper registration and avoids unnecessary economic loss.
From the above, the object warehousing processing method provided by the application can achieve the purpose of automatically acquiring the warehousing state of the object, thereby realizing the technical effect of improving the warehousing efficiency of the object and further solving the technical problem of low object warehousing efficiency in the prior art.
In an alternative embodiment, receiving the warehousing instruction confirming warehousing of the object includes the following steps:
step S6040a, displaying a button for confirming whether to warehouse the object;
in step S6042a, receiving, via the button, the warehousing instruction confirming warehousing of the object.
Specifically, after the mobile terminal displays the names and delivery quantities of the commodities, it checks through the stock database whether each commodity has been warehoused. If the stock database shows that a commodity has been warehoused, only parameter information of the commodity, such as its name, delivery quantity, and warehouse-in quantity, is displayed on the display interface of the mobile terminal; if the stock database shows that the commodity has not been warehoused, an entry button (such as the "complete entry" button in fig. 4) is displayed on the display interface, and the worker can send a warehousing instruction to the mobile terminal by clicking it.
It should be noted that, besides obtaining the input command through the input button, the input command can also be obtained through a mode of sending voice information, and the specific steps are as follows:
step S6040b, sending out voice information for prompting whether to put in storage the object;
step S6042b, collecting a voice command sent according to the voice information, wherein the voice command carries a warehousing command for confirming warehousing of the object.
In an alternative embodiment, the mobile terminal sends a voice message to the staff, for example, "please confirm whether to put in warehouse", and on hearing the message a worker sends a voice command back to the mobile terminal. The mobile terminal has voice receiving and voice recognition functions; after receiving the voice command, it recognizes the command to obtain a keyword indicating whether to put the goods in storage. Because the voice command carries a warehousing instruction for confirming warehousing of the commodity, the mobile terminal performs the warehousing operation automatically after receiving it. The warehousing instruction may be, but is not limited to, a voice keyword, for example, "enter", "confirm", and the like.
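The keyword-matching step above can be sketched as follows. This is a minimal illustration only; the keyword list and function name are assumptions rather than part of the described method:

```python
# Hypothetical sketch of the voice step: scan recognized speech for a
# confirmation keyword and treat a match as a warehousing instruction.
# The keyword list and function name are illustrative assumptions.
CONFIRM_KEYWORDS = ("enter", "confirm", "put in storage")

def carries_warehousing_instruction(recognized_text: str) -> bool:
    """Return True if the recognized speech carries a warehousing instruction."""
    text = recognized_text.lower()
    return any(keyword in text for keyword in CONFIRM_KEYWORDS)

print(carries_warehousing_instruction("Confirm, put it in storage"))  # True
print(carries_warehousing_instruction("not yet"))                     # False
```

In practice the recognized text would come from the terminal's speech recognizer; only the keyword test is shown here.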
It should be noted that, the mobile terminal can determine whether the commodity is put in storage according to the voice of the user, and manual operation of the staff is not needed, so that the operation flow of the staff is simplified, and the working efficiency of the staff is further improved.
In addition, outputting a processing result according to the warehousing instruction specifically includes the following steps:
step S60, displaying an input box for inputting the warehousing quantity of the object;
step S62, receiving the warehouse-in quantity input through an input box;
step S64, outputting a processing result according to the warehousing instruction, wherein the processing result comprises: information on warehousing the objects of the input warehousing quantity.
In an alternative embodiment, after the staff clicks the label corresponding to the commodity to be warehoused, the input interface shown in fig. 4 is entered, where the staff can input the warehousing quantity in the quantity input box and can also input remark information about the commodity in the remark box. After the worker clicks the "complete input" button, the entry result is displayed on the display interface of the mobile terminal, for example, as a pop-up dialog box prompting that the commodity has been successfully warehoused.
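A minimal sketch of the quantity-entry check described above; the function name and result messages are illustrative assumptions:

```python
# Hypothetical sketch: validate the warehousing quantity typed into the input
# box against the delivery quantity before reporting the entry result.
def process_entry(delivery_qty: int, entered_text: str) -> str:
    try:
        qty = int(entered_text)
    except ValueError:
        return "invalid quantity"
    if not 1 <= qty <= delivery_qty:
        return "quantity must be between 1 and the delivery quantity"
    return f"{qty} item(s) successfully warehoused"

print(process_entry(10, "10"))  # 10 item(s) successfully warehoused
print(process_entry(10, "12"))  # quantity must be between 1 and the delivery quantity
```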
It should be noted that, after obtaining the image collected by the mobile terminal, further processing is required to be performed on the image, which specifically includes the following steps:
step S6020, receiving a selection instruction, wherein the selection instruction is used for selecting an image for identifying an object from the acquired images;
step S6022, displaying a recognition result, wherein the recognition result is an object recognized by recognizing the image selected according to the selection instruction;
step S6024, determining the object to be displayed according to the displayed recognition result.
It should be noted that, the mobile terminal may collect images of a plurality of commodities, and the staff may select an image with the best quality from the plurality of images to identify the commodity in the image.
In an alternative embodiment, a worker can generate a selection instruction by clicking a selection button in the mobile terminal, and the mobile terminal compares the selected image with the image of the commodity in the 360-degree picture library according to the selection instruction to obtain a comparison result; and then identifying the commodity in the image according to the obtained comparison result. The 360-degree picture library stores images of at least two faces of the commodity.
In another alternative embodiment, the worker may generate the selection instruction by clicking a selection button in the mobile terminal; the mobile terminal determines a commodity recognition model based on the selection instruction and identifies the commodity in the selected image using that model. The commodity recognition model is obtained through machine learning training using multiple sets of data, and each set of data includes an image to be identified and a label identifying the commodity in that image.
In another alternative embodiment, a worker issues a voice command to the mobile terminal; the mobile terminal processes the received voice command, extracts keywords from it, and generates a selection instruction according to the keywords. For example, when the worker says "select M image", the mobile terminal extracts the keyword "M" or "M image" and generates the selection instruction. After receiving the selection instruction, the mobile terminal compares the selected image with the images of the commodities in the 360-degree picture library to obtain a comparison result, and then identifies the commodity in the image according to that result. The 360-degree picture library stores images of at least two faces of each commodity.
In yet another alternative embodiment, a worker issues a voice command to the mobile terminal; the mobile terminal processes the received voice command, extracts keywords from it, and generates a selection instruction according to the keywords, for example extracting "M" or "M image" from "select M image". After receiving the selection instruction, the mobile terminal determines a commodity recognition model and identifies the commodity in the selected image using that model. The commodity recognition model is obtained through machine learning training using multiple sets of data, and each set of data includes an image to be identified and a label identifying the commodity in that image.
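The selection-and-recognition flow in the embodiments above could be sketched as follows. The image store and the stub model are illustrative assumptions; a real system would run a trained recognition model instead of the stub:

```python
# Hypothetical sketch: map an extracted voice keyword such as "M" or
# "M image" to one of the captured images, then run recognition on it.
def select_image(keyword: str, images: dict):
    """Turn a keyword like 'M image' into a selection of a captured image."""
    key = keyword.replace("image", "").strip()
    return images.get(key)

def recognize(image: dict) -> str:
    """Stand-in for the trained commodity recognition model."""
    return image["label"]

captured = {"M": {"label": "chocolate milk"}, "T": {"label": "iced black tea"}}
print(recognize(select_image("M image", captured)))  # chocolate milk
```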
Example 3
According to an embodiment of the present invention, there is further provided a commodity warehousing processing method; fig. 11 shows a flowchart of the method, and as shown in fig. 11, the method includes the following steps:
step S1102, an image of goods delivered to a store is acquired.
It should be noted that, the image capturing device may capture an image of goods delivered to a store by photographing or video recording, where the image capturing device may be a device installed at a fixed location, for example, a camera, or may be a mobile terminal with an image capturing function, for example, a smart phone, a tablet, a wearable device (for example, smart glasses), or the like.
In an alternative embodiment, a worker photographs the merchandise delivered to the supermarket with a handheld image acquisition device and sends the resulting image to a server, which processes it; alternatively, the image acquisition device processes the image itself after acquiring it. For example, the image is preprocessed by operations including, but not limited to, filtering, enhancement, and sharpening, so as to improve the recognizability of the commodity in the image.
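The preprocessing mentioned above (filtering and enhancement) can be illustrated on a tiny grayscale image represented as nested lists. A real implementation would use an image-processing library; the specific operations below are assumptions chosen for illustration:

```python
# Hypothetical preprocessing sketch: a 3x3 median filter (noise removal) and
# a linear contrast stretch (enhancement) on a small grayscale image.
def median_filter3(img):
    """3x3 median filter; edge pixels are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]  # median of the 9 neighbors
    return out

def stretch_contrast(img):
    """Linearly stretch pixel values to the full 0-255 range."""
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    if hi == lo:
        return [row[:] for row in img]
    return [[(v - lo) * 255 // (hi - lo) for v in row] for row in img]

noisy = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
print(median_filter3(noisy)[1][1])  # 10 (the outlier 200 is removed)
```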
Step S1104, identifying the commodity in the image.
In an alternative embodiment, after acquiring the image, the image acquisition device sends it to a cloud server. On receiving the image, the server compares it with the commodity images in its 360-degree picture library, determines the names of the commodities in the image according to the comparison result, and sends the recognized names back to the image acquisition device, thereby completing recognition of the objects in the image.
In another alternative embodiment, the image acquisition device is provided with a 360-degree picture library, and after the image acquisition device acquires the image, the background processor of the image acquisition device compares the acquired image with the image in the 360-degree picture library, so as to determine the names of commodities in the acquired image, thereby completing the identification process of objects in the image.
Step S1106, a warehouse-in status of the identified commodity is obtained.
It should be noted that the warehouse-in state of the commodity includes two states: warehoused and to-be-warehoused.
In an alternative embodiment, after obtaining the name of the commodity, the image acquisition device queries a database in the server based on the name and determines the warehousing status of the commodity through the database. The server then sends the obtained warehousing status to the image acquisition device, which displays the status of each commodity on its display interface in label form; for example, in fig. 3, Master Kong iced black tea and Yili chocolate milk are displayed as to-be-warehoused, while Wangzai rice crackers and Wrigley Extra chewing gum are displayed as warehoused.
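The status query in step S1106 amounts to a lookup keyed by the recognized name. A minimal sketch, in which the table contents and names are illustrative assumptions:

```python
# Hypothetical sketch of the warehousing-status query: commodities found in
# the in-stock table are "warehoused"; anything else is "to-be-warehoused".
IN_STOCK = {"iced black tea": "warehoused", "rice crackers": "warehoused"}

def warehousing_status(name: str) -> str:
    return IN_STOCK.get(name, "to-be-warehoused")

print(warehousing_status("iced black tea"))  # warehoused
print(warehousing_status("chocolate milk"))  # to-be-warehoused
```

A production system would query the server's database rather than an in-memory table; only the lookup logic is shown.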
It should be noted that, the warehouse-in state of the commodity can be automatically obtained through the name of the commodity in step S1106, and the warehouse-in state of the commodity is not required to be distinguished by manually moving the commodity position, so that the labor and the working time of the staff are saved, the working efficiency of the staff is improved, and the risk of data loss is reduced.
Step S1108, outputting a warehousing processing result of the commodity when the warehousing state is to-be-warehoused, wherein the warehousing processing result comprises: information on warehousing the commodity.
It should be noted that the warehousing processing information may include, but is not limited to, the name of the object, the delivery quantity of the object, and the warehousing quantity entered for the object.
In an alternative embodiment, after the warehousing status of the merchandise in the image is determined, the status of each commodity is displayed on the display screen of the image capture device, as shown in fig. 3. When the staff clicks the label corresponding to a commodity to be warehoused, the commodity entry interface is opened; for example, after the staff clicks the label corresponding to the to-be-warehoused Yili chocolate milk, the entry interface shown in fig. 4 is opened. As can be seen from fig. 4, the interface displays the name of the commodity (such as "Yili chocolate milk"), its specification (such as "250 ml"), model (such as "L"), and unit (such as "bottle"), together with the delivery quantity and warehousing quantity. After the staff inputs the warehousing quantity and clicks the "complete input" button, warehousing of the commodity is completed.
Based on steps S1102 to S1108, the method collects an image of the goods delivered to the store, identifies the goods in the image, then obtains the warehousing state of the identified goods, and finally outputs a warehousing processing result of the goods when the warehousing state is to-be-warehoused, where the warehousing processing result includes: information on warehousing the commodity.
It is easy to notice that the image acquisition device can identify the names of the commodities in the image through image recognition technology and then determine the warehousing state of the commodities according to their names. Staff are not required to distinguish the warehousing state of the commodities by moving them, so the technical effect of automatically determining the warehousing state is achieved and the workload of the staff is reduced. In addition, after the warehousing state of a commodity is obtained, it is displayed on the display screen of the image acquisition device; a worker can complete warehousing of a commodity simply by operating the label corresponding to a commodity whose state is to-be-warehoused, which reduces the risk of data loss caused by traditional paper registration and avoids unnecessary economic loss.
From the above, the commodity warehouse-in processing method provided by the application can achieve the purpose of automatically acquiring the warehouse-in state of the commodity, thereby realizing the technical effect of improving the warehouse-in efficiency of the commodity and further solving the technical problem of low commodity warehouse-in efficiency in the prior art.
In an alternative embodiment, before outputting the warehousing processing result of the commodity, the mobile device (e.g., mobile phone) determines whether the obtained warehousing processing result is correct, which specifically includes the following steps:
step S11060, acquiring an order of a commodity to be put in storage;
step S11062, determining commodity information of the commodity from the image;
step S11064, comparing the commodity information of the commodity determined from the image with the commodity information of the commodity in the order;
step S11066, determining and outputting a warehouse-in processing result of the commodity under the condition that the comparison results are matched.
Specifically, the mobile device may obtain the order for the commodity to be warehoused through the Internet, or from an application program installed on the mobile device through which the worker purchased the commodity. When the order is obtained, the mobile device identifies the commodity in the image to obtain its commodity information: for example, the image is compared with the images of the commodities in the 360-degree picture library and the commodity is identified according to the comparison result, or the mobile device identifies the commodity in the image through an object recognition model. After determining the commodity information of the commodity in the image, the mobile device further determines whether it matches the commodity information in the order, for example, whether the name, model, and quantity of the identified commodity match those in the order. If the two match, the warehousing processing result of the commodity is determined and output, and the commodity is displayed on the display screen of the mobile device, as shown in fig. 3.
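The comparison in step S11064 can be sketched as a field-by-field match; the field names below are illustrative assumptions:

```python
# Hypothetical sketch: commodity information recognized from the image is
# matched against the corresponding order line on name, model and quantity.
def matches_order(recognized: dict, order_line: dict) -> bool:
    """True only when name, model and quantity all agree."""
    return all(recognized.get(field) == order_line.get(field)
               for field in ("name", "model", "quantity"))

recognized = {"name": "chocolate milk", "model": "L", "quantity": 6}
order_line = {"name": "chocolate milk", "model": "L", "quantity": 6}
print(matches_order(recognized, order_line))  # True
```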
In an alternative embodiment, after determining the warehousing status of the commodity, the mobile device outputs the warehousing processing result of the commodity, which specifically includes the following steps:
step S110660, acquiring a warehouse-in partition and partition positions of a commodity warehouse-in;
and step S110662, outputting a commodity warehousing processing result according to the acquired warehousing partition and the partition position.
Specifically, after the commodity enters the warehouse, a worker can photograph it with the mobile terminal, and the warehouse partition and the position within the partition are determined through commodity recognition. The mobile device then automatically determines whether the commodity belongs to that partition and, if so, whether it is at the correct position within it. For example, if the mobile device determines through image recognition that milk is in the beverage area of the warehouse but located in a higher-temperature part of that area, it sends out prompt information indicating that the milk is in the wrong position.
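The placement check described above can be sketched as follows; the expected-layout table and the result messages are illustrative assumptions:

```python
# Hypothetical sketch: verify that a recognized commodity sits in its expected
# warehouse partition and at the expected position within that partition.
EXPECTED_PLACEMENT = {"milk": ("beverage", "refrigerated")}

def check_placement(name: str, partition: str, position: str) -> str:
    expected = EXPECTED_PLACEMENT.get(name)
    if expected is None:
        return "no placement rule for this commodity"
    exp_partition, exp_position = expected
    if partition != exp_partition:
        return f"wrong partition: expected {exp_partition}"
    if position != exp_position:
        return f"wrong position: expected {exp_position}"
    return "placement ok"

# Milk is in the beverage partition but in a higher-temperature spot:
print(check_placement("milk", "beverage", "high-temperature shelf"))
```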
Example 4
According to an embodiment of the present invention, there is also provided an object warehousing processing device for implementing the above embodiment 1, as shown in fig. 12, which includes: an acquisition module 701, an identification module 703, an obtaining module 705, and an output module 707.
Wherein, the acquisition module 701 is configured to acquire an image; the identification module 703 is configured to identify an object in the image; the obtaining module 705 is configured to obtain the warehousing state of the identified object; and the output module 707 is configured to output a warehousing processing result when the warehousing state is to-be-warehoused, where the warehousing processing result includes: information on warehousing the object.
Here, it should be noted that the acquisition module 701, the identification module 703, the obtaining module 705, and the output module 707 correspond to steps S102 to S108 in embodiment 1; the four modules are the same as the corresponding steps in implemented examples and application scenarios, but are not limited to what is disclosed in embodiment 1.
In an alternative embodiment, the object warehousing processing device further includes: and the judging module and the executing module. The judging module is used for judging whether the quality of the object meets the preset quality requirement according to the acquired image; and the execution module is used for executing the step of acquiring the warehouse-in state of the identified object under the condition that the judgment result is yes.
Here, it should be noted that the above-mentioned judging module and executing module correspond to step S10 to step S12 in embodiment 1, and the two modules are the same as the example and application scenario implemented by the corresponding steps, but are not limited to the disclosure of the above-mentioned embodiment one.
In an alternative embodiment, the output module includes: a display module, an input module, and a first output module. The display module is used for displaying the names of the objects, the delivery quantity of the objects, and the input boxes for inputting the warehousing quantity of the objects; the input module is used for receiving the warehousing quantity input through the input box according to the delivery quantity; and the first output module is used for outputting a processing result when a warehousing instruction confirming the input warehousing quantity is received, where the processing result includes: information on warehousing the object.
Here, it should be noted that the display module, the input module, and the first output module correspond to steps S1080 to S1084 in embodiment 1, and the three modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure in the first embodiment.
In an alternative embodiment, the image is acquired by at least one of the following: collecting an image in a photographing mode; and acquiring images by a video recording mode.
In an alternative embodiment, the identification module comprises: the comparison module and the first identification module. The comparison module is used for comparing the image with the image of the object in the 360-degree picture library to obtain a comparison result, wherein the image of at least two faces of the object is stored in the 360-degree picture library; and the first identification module is used for identifying the object in the image according to the obtained comparison result.
It should be noted that the comparison module and the first recognition module correspond to steps S1060 to S1062 in embodiment 1, and the two modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the first embodiment.
In an alternative embodiment, the comparison module includes: a first processing module and a second processing module. The first processing module is used for comparing the image with the front, side, top, and bottom images of the object in the 360-degree picture library to obtain a front result, a side result, a top result, and a bottom result respectively; the second processing module is used for aggregating the obtained front, side, top, and bottom results into a comparison result.
It should be noted that, the first processing module and the second processing module correspond to steps S10602 to S10604 in embodiment 1, and the two modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the first embodiment.
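A minimal sketch of the per-face comparison and aggregation performed by the two processing modules above; the pixel-matching similarity is a deliberately simple stand-in for a real image-matching metric:

```python
# Hypothetical sketch: score the captured image against the face images of
# each library entry, then aggregate the per-face scores into one result.
def face_score(image, reference):
    """Toy similarity: fraction of equal pixels (stand-in for a real metric)."""
    flat_a = [v for row in image for v in row]
    flat_b = [v for row in reference for v in row]
    return sum(a == b for a, b in zip(flat_a, flat_b)) / len(flat_a)

def compare_with_library(image, library):
    """Return the library entry whose averaged per-face score is highest."""
    best_name, best_score = None, -1.0
    for name, faces in library.items():  # faces: e.g. {"front": ..., "side": ...}
        total = sum(face_score(image, ref) for ref in faces.values()) / len(faces)
        if total > best_score:
            best_name, best_score = name, total
    return best_name, best_score

library = {"tea": {"front": [[1, 2]], "side": [[1, 2]]},
           "milk": {"front": [[9, 9]], "side": [[9, 8]]}}
print(compare_with_library([[1, 2]], library))  # ('tea', 1.0)
```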
In an alternative embodiment, the identification module comprises: a determining module and a second identification module. The determining module is configured to determine an object recognition model, where the object recognition model is obtained through machine learning training using multiple sets of data, and each set of data includes an image to be identified and a label identifying the object in that image; the second identification module is used for identifying the object in the image according to the determined object recognition model.
It should be noted that the determining module and the second identifying module correspond to step S106a to step S106b in embodiment 1, and the two modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of the first embodiment.
Example 5
According to an embodiment of the present invention, there is also provided an object warehousing processing device for implementing the above embodiment 2, as shown in fig. 13, which includes: a display module 1301, a receiving module 1303, and an output module 1305.
The display module 1301 is configured to display the name of an object and the delivery quantity of the object, where the object is obtained by identifying an acquired image; the receiving module 1303 is configured to receive a warehousing instruction for confirming warehousing of the object; the output module 1305 is configured to output a processing result according to the warehousing instruction, where the processing result includes: information on warehousing the object.
Here, the display module 1301, the receiving module 1303, and the output module 1305 correspond to steps S602 to S606 in embodiment 2, and the three modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to those disclosed in embodiment 2.
In an alternative embodiment, the receiving module includes: the first display module and the first receiving module. The first display module is used for displaying buttons for identifying whether the object is put in storage or not; the first receiving module is used for receiving a warehousing instruction which is input through the button and used for indicating to confirm to put the object in a warehouse.
Here, it should be noted that the first display module and the first receiving module correspond to step S6040a to step S6042a in embodiment 2, and the two modules are the same as the corresponding steps in implementation examples and application scenarios, but are not limited to those disclosed in embodiment 2.
In an alternative embodiment, the receiving module includes: the processing module and the collecting module. The processing module is used for sending out voice information for prompting whether to put an object in storage or not; the collection module is used for collecting voice instructions sent according to the voice information, wherein the voice instructions carry a warehousing instruction for confirming warehousing of the objects.
Here, the processing module and the collecting module correspond to steps S6040b to S6042b in embodiment 2, and the two modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to those disclosed in embodiment 2.
In an alternative embodiment, the output module includes: a second display module, a second receiving module, and a first output module. The second display module is used for displaying an input box for inputting the warehousing quantity of the object; the second receiving module is used for receiving the warehousing quantity input through the input box; the first output module is used for outputting a processing result according to the warehousing instruction, where the processing result includes: information on warehousing the objects of the input warehousing quantity.
Here, the second display module, the second receiving module, and the first output module correspond to steps S60 to S64 in embodiment 2, and the three modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to those disclosed in embodiment 2.
In an alternative embodiment, the object warehousing processing device further includes: the device comprises a third receiving module, a third display module and a determining module. The third receiving module is used for receiving a selection instruction, wherein the selection instruction is used for selecting an image for identifying an object from the acquired images; the third display module is used for displaying a recognition result, wherein the recognition result is an object recognized by recognizing the image selected according to the selection instruction; and the determining module is used for determining the object to be displayed according to the displayed identification result.
Here, it should be noted that the third receiving module, the third display module, and the determining module correspond to steps S6020 to S6024 in embodiment 2, and the three modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to those disclosed in embodiment 2.
Example 6
According to an embodiment of the present invention, there is also provided a commodity warehousing processing apparatus for implementing the foregoing embodiment 3, as shown in fig. 14, which includes: an acquisition module 1401, an identification module 1403, an obtaining module 1405, and an output module 1407.
Wherein, the acquisition module 1401 is used for acquiring an image of goods delivered to a store; the identification module 1403 is used for identifying the commodity in the image; the obtaining module 1405 is configured to obtain the warehousing status of the identified commodity; and the output module 1407 is configured to output a warehousing processing result of the commodity when the warehousing state is to-be-warehoused, where the warehousing processing result includes: information on warehousing the commodity.
Here, it should be noted that the acquisition module 1401, the identification module 1403, the obtaining module 1405, and the output module 1407 correspond to steps S1102 to S1108 in embodiment 3; the four modules are the same as the corresponding steps in implemented examples and application scenarios, but are not limited to what is disclosed in embodiment 3.
In an alternative embodiment, the commodity warehousing processing device further includes: the device comprises a first acquisition module, a first determination module, a comparison module and a second determination module. The first acquisition module is used for acquiring orders of commodities to be put in storage; the first determining module is used for determining commodity information of the commodity from the image; the comparison module is used for comparing the commodity information of the commodity determined from the image with the commodity information of the commodity in the order; and the second determining module is used for determining and outputting a warehouse-in processing result of the commodity under the condition that the comparison results are matched.
Here, it should be noted that the first obtaining module, the first determining module, the comparing module, and the second determining module correspond to steps S11060 to S11066 in embodiment 3, and the four modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to those disclosed in embodiment 3.
In an alternative embodiment, the second determining module includes: the second acquisition module and the first output module. The second acquisition module is used for acquiring a warehouse-in partition and a partition position of the commodity warehouse-in; the first output module is used for outputting the warehouse-in processing result of the commodity according to the obtained warehouse-in partition and the partition position.
Here, it should be noted that the second obtaining module and the first output module correspond to steps S110660 to S110662 in embodiment 3, and the two modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to those disclosed in embodiment 3.
Example 7
According to another aspect of the embodiment of the present invention, there is also provided an object warehousing processing system, including: a processor; and a memory, coupled to the processor, for providing the processor with instructions for processing the following steps: collecting an image; identifying an object in the image; acquiring the warehousing state of the identified object; and outputting a warehousing processing result when the warehousing state is to-be-warehoused, where the warehousing processing result includes: information on warehousing the object.
Example 8
Embodiments of the present invention may provide a computer terminal, which may be any one of a group of computer terminals. Alternatively, in the present embodiment, the above-described computer terminal may be replaced with a terminal device such as a mobile terminal.
Alternatively, in this embodiment, the above-mentioned computer terminal may be located in at least one network device among a plurality of network devices of the computer network.
Fig. 15 shows a hardware configuration block diagram of a computer terminal. As shown in fig. 15, the computer terminal A may include one or more processors 802 (shown as 802a, 802b, …, 802n in the figure; the processor 802 may include, but is not limited to, a processing means such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 804 for storing data, and a transmission means 806 for communication functions. In addition, the terminal may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power supply, and/or a camera. It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 15 is merely illustrative and is not intended to limit the configuration of the electronic device described above. For example, the computer terminal A may also include more or fewer components than shown in fig. 15, or have a different configuration from that shown in fig. 15.
It should be noted that the one or more processors 802 and/or other data processing circuits described above may be referred to herein generally as "data processing circuits". The data processing circuit may be embodied, in whole or in part, in software, hardware, firmware, or any combination thereof. Furthermore, the data processing circuit may be a single stand-alone processing module, or may be incorporated, in whole or in part, into any of the other elements in the computer terminal a. As referred to in the embodiments of the present application, the data processing circuit acts as a kind of processor control (e.g., selection of a variable resistance termination path connected to an interface).
The processor 802 may call the information and the application program stored in the memory through the transmission device to perform the following steps: collecting an image; identifying an object in the image; acquiring a warehousing state of the identified object; and outputting a warehousing processing result when the warehousing state is to-be-warehoused, wherein the warehousing processing result includes: warehousing the object.
The memory 804 may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the object warehousing processing method in the embodiments of the present application; the processor 802 executes various functional applications and data processing by running the software programs and modules stored in the memory 804, that is, implements the object warehousing processing method described above. The memory 804 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 804 may further include memory located remotely from the processor 802, and the remote memory may be connected to the computer terminal a through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 806 is used to receive or transmit data via a network. A specific example of the network described above may include a wireless network provided by a communication provider of the computer terminal a. In one example, the transmission device 806 includes a network adapter (Network Interface Controller, NIC) that can be connected to other network devices via a base station so as to communicate with the internet. In one example, the transmission device 806 may be a Radio Frequency (RF) module for communicating with the internet wirelessly.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with the user interface of computer terminal a.
It should be noted here that, in some optional embodiments, the computer terminal a shown in fig. 15 may include hardware elements (including circuits), software elements (including computer code stored on a computer readable medium), or a combination of both. Fig. 15 is only one specific example and is intended to show the types of components that may be present in the computer terminal a described above.
In this embodiment, the computer terminal a may execute the program code of the following steps in the object warehousing processing method: collecting an image; identifying an object in the image; acquiring a warehousing state of the identified object; and outputting a warehousing processing result when the warehousing state is to-be-warehoused, wherein the warehousing processing result includes: warehousing the object.
The processor may call the information and the application program stored in the memory through the transmission device to perform the following steps: collecting an image; identifying an object in the image; acquiring a warehousing state of the identified object; and outputting a warehousing processing result when the warehousing state is to-be-warehoused, wherein the warehousing processing result includes: warehousing the object.
Optionally, the above processor may further execute program code for: determining, according to the acquired image, whether the quality of the object meets a preset quality requirement; and if so, executing the step of acquiring the warehousing state of the identified object.
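The optional quality gate above can be sketched as follows. The score, the threshold value, and all function names are illustrative assumptions; the text does not specify how the quality requirement is computed from the image.

```python
# Hypothetical quality gate: a quality score derived from the acquired
# image is checked against a preset requirement before the
# warehousing-state step runs.
PRESET_QUALITY_THRESHOLD = 0.8  # hypothetical preset quality requirement

def meets_quality_requirement(quality_score, threshold=PRESET_QUALITY_THRESHOLD):
    # quality_score would come from analysing the collected image
    return quality_score >= threshold

def maybe_acquire_state(quality_score, acquire_state):
    # Only proceed to acquire the warehousing state if the check passes.
    if meets_quality_requirement(quality_score):
        return acquire_state()
    return None
```

Injecting `acquire_state` keeps the sketch testable: an object failing the check simply never reaches the state-acquisition step.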
Optionally, the above processor may further execute program code for: displaying the name of the object, the delivery quantity of the object, and an input box for inputting the warehousing quantity of the object; receiving, according to the delivery quantity, the warehousing quantity input through the input box; and outputting a processing result upon receiving a warehousing instruction confirming the input warehousing quantity, wherein the processing result includes: warehousing the object.
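The quantity-confirmation interaction above can be sketched as one function. All names here are hypothetical, and the rule that the warehousing quantity may not exceed the delivery quantity is an assumption; a real UI would display the name and delivery quantity and drive this call when the confirming instruction arrives.

```python
# Illustrative sketch of the confirmation flow: a warehousing quantity is
# entered against the displayed delivery quantity, and the object is
# warehoused once a confirming warehousing instruction is received.
def confirm_warehousing(name, delivery_qty, input_qty, confirmed):
    if not confirmed:
        # no warehousing instruction received yet -> no processing result
        return None
    if input_qty > delivery_qty:
        # assumption: cannot warehouse more items than were delivered
        raise ValueError("warehousing quantity exceeds delivery quantity")
    return {"object": name, "warehoused_quantity": input_qty}
```

For example, confirming 8 items against a delivery of 10 yields a result for those 8, while an unconfirmed entry yields nothing.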
Optionally, the above processor may further execute program code for: acquiring an image in at least one of the following manners: collecting an image in a photographing mode; and collecting an image in a video recording mode.
Optionally, the above processor may further execute program code for: comparing the image with the image of the object in the 360-degree picture library to obtain a comparison result, wherein the images of at least two sides of the object are stored in the 360-degree picture library; and identifying the object in the image according to the obtained comparison result.
Optionally, the above processor may further execute program code for: comparing the image with a front image, a side image, a top image, and a bottom image of the object in the 360-degree picture library, respectively, to obtain a front result, a side result, a top result, and a bottom result; and counting the obtained front result, side result, top result, and bottom result to obtain the comparison result.
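The four-view comparison above can be sketched as follows. "Counting" the four results is read here as averaging per-view similarity scores; the actual aggregation, the similarity function, and all names are assumptions not specified by the text.

```python
# Hypothetical sketch: compare the acquired image against the front, side,
# top, and bottom images stored for an object in the 360-degree picture
# library, then aggregate the four per-view results.
VIEWS = ("front", "side", "top", "bottom")

def compare_with_library(image_features, library_entry, similarity):
    # library_entry maps each view name to that view's stored features
    per_view = {v: similarity(image_features, library_entry[v]) for v in VIEWS}
    combined = sum(per_view.values()) / len(VIEWS)  # assumed aggregation
    return per_view, combined

# Toy usage with an exact-match similarity:
entry = {"front": "A", "side": "B", "top": "C", "bottom": "D"}
match = lambda a, b: 1.0 if a == b else 0.0
per_view, score = compare_with_library("A", entry, match)
```

With the toy data, only the front view matches, so the combined score is the average of one match over four views.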
Optionally, the above processor may further execute program code for: determining an object recognition model, wherein the object recognition model is obtained through machine learning training using multiple sets of data, and each of the multiple sets of data includes: an image to be identified and a label for identifying the object in the image to be identified; and identifying the object in the image according to the determined object recognition model.
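The shape of the training data above can be illustrated with a stand-in model. Each sample pairs an image to be identified (here reduced to a feature vector) with a label identifying the object in it; a real model would be trained with a machine learning framework, and this nearest-neighbour sketch only shows the data flow and the predict step.

```python
# Illustrative stand-in for the object recognition model: store the
# labelled samples and predict the label of the closest stored sample.
def train_recognition_model(samples):
    # samples: list of (feature_vector, label) pairs
    memory = list(samples)

    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def predict(features):
        # return the label of the nearest training sample
        return min(memory, key=lambda s: squared_distance(s[0], features))[1]

    return predict

model = train_recognition_model([((0.0, 0.0), "box"), ((5.0, 5.0), "can")])
```

Given features near (0, 0) the stand-in predicts "box", and near (5, 5) it predicts "can".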
It will be appreciated by those skilled in the art that the configuration shown in fig. 15 is only illustrative, and the computer terminal may also be a terminal device such as a smart phone (such as an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer, a mobile internet device (Mobile Internet Devices, MID), a PAD, etc. Fig. 15 does not limit the structure of the electronic device above. For example, the computer terminal a may further include more or fewer components (such as a network interface, a display device, etc.) than those shown in fig. 15, or have a configuration different from that shown in fig. 15.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program instructing hardware associated with a terminal device; the program may be stored in a computer readable storage medium, and the storage medium may include: a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
Example 6
An embodiment of the present invention also provides a storage medium. Optionally, in this embodiment, the storage medium may be used to store program code for executing the object warehousing processing method provided in Example 1.
Optionally, in this embodiment, the storage medium may be located in any computer terminal in a computer terminal group in a computer network, or in any mobile terminal in a mobile terminal group.
Optionally, in the present embodiment, the storage medium is configured to store program code for performing the following steps: collecting an image; identifying an object in the image; acquiring a warehousing state of the identified object; and outputting a warehousing processing result when the warehousing state is to-be-warehoused, wherein the warehousing processing result includes: warehousing the object.
Optionally, in the present embodiment, the storage medium is configured to store program code for performing the following steps: determining, according to the acquired image, whether the quality of the object meets a preset quality requirement; and if so, executing the step of acquiring the warehousing state of the identified object.
Optionally, in the present embodiment, the storage medium is configured to store program code for performing the following steps: displaying the name of the object, the delivery quantity of the object, and an input box for inputting the warehousing quantity of the object; receiving, according to the delivery quantity, the warehousing quantity input through the input box; and outputting a processing result upon receiving a warehousing instruction confirming the input warehousing quantity, wherein the processing result includes: warehousing the object.
Optionally, in the present embodiment, the storage medium is configured to store program code for performing the following steps: acquiring an image in at least one of the following manners: collecting an image in a photographing mode; and collecting an image in a video recording mode.
Optionally, in the present embodiment, the storage medium is configured to store program code for performing the following steps: comparing the image with the image of the object in the 360-degree picture library to obtain a comparison result, wherein images of at least two sides of the object are stored in the 360-degree picture library; and identifying the object in the image according to the obtained comparison result.
Optionally, in the present embodiment, the storage medium is configured to store program code for performing the following steps: comparing the image with a front image, a side image, a top image, and a bottom image of the object in the 360-degree picture library, respectively, to obtain a front result, a side result, a top result, and a bottom result; and counting the obtained front result, side result, top result, and bottom result to obtain the comparison result.
Optionally, in the present embodiment, the storage medium is configured to store program code for performing the following steps: determining an object recognition model, wherein the object recognition model is obtained through machine learning training using multiple sets of data, and each of the multiple sets of data includes: an image to be identified and a label for identifying the object in the image to be identified; and identifying the object in the image according to the determined object recognition model.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the foregoing embodiments of the present application, each embodiment is described with its own emphasis; for any part not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The above-described apparatus embodiments are merely exemplary; for example, the division of the units is merely a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be through some interfaces, units, or modules, and may be in electrical or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present invention, and it should be noted that various modifications and adaptations may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and adaptations shall also fall within the scope of the present invention.