Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals denote the same or similar parts in the drawings, and thus, a repetitive description thereof will be omitted.
The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
The drawings are merely schematic illustrations of the present invention, in which the same reference numerals denote the same or similar parts, and thus, a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and steps, nor do they necessarily have to be performed in the order described. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
The following detailed description of exemplary embodiments of the invention refers to the accompanying drawings.
Fig. 2 is a system block diagram illustrating a method and apparatus for article identification according to an example embodiment.
The server 205 may be a server that provides various services, for example, a backend management server that supports a commodity identification system operated by a user through the terminal devices 201, 202, 203. The backend management server may analyze and otherwise process received data, such as a commodity identification request, and feed a processing result (for example, a center position and a posture) back to the terminal device.
The server 205 may, for example, obtain two-dimensional feature information and three-dimensional point cloud information of a commodity; the server 205 may identify the commodity using a two-dimensional matching algorithm, for example, when it is determined that the two-dimensional feature information satisfies the first condition; the server 205 may identify the commodity using a three-dimensional matching algorithm, for example, when it is determined that the two-dimensional feature information and the three-dimensional point cloud information satisfy the second condition. The server 205 may identify the commodity using a frame matching algorithm, for example, when it is determined that the two-dimensional feature information and the three-dimensional point cloud information satisfy the third condition.
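The three conditions above amount to a small decision procedure. The following sketch (the function name, inputs, and the default threshold of 100 are illustrative assumptions, not part of the disclosure) shows one way such a dispatch might be expressed:

```python
def select_algorithm(num_feature_points, densely_placed,
                     feature_point_threshold=100):
    # First condition: enough 2-D feature points for reliable 2-D matching.
    if num_feature_points >= feature_point_threshold:
        return "2d_matching"
    # Second condition: too few feature points, and the goods are scattered.
    if not densely_placed:
        return "3d_matching"
    # Third condition: too few feature points, and the goods are densely placed.
    return "frame_matching"
```

Here `num_feature_points` would come from the two-dimensional feature extraction and `densely_placed` from analyzing the three-dimensional point cloud, as described below.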
The server 205 may be a single physical server or may be composed of a plurality of servers. For example, one part of the server 205 may serve as the commodity identification task submitting system of the present disclosure, configured to obtain a task to be executed that carries a commodity identification command; another part of the server 205 may serve as the commodity identification system of the present disclosure, configured to obtain two-dimensional feature information and three-dimensional point cloud information of a commodity; identify the commodity using a two-dimensional matching algorithm when the two-dimensional feature information is determined to satisfy the first condition; identify the commodity using a three-dimensional matching algorithm when the two-dimensional feature information and the three-dimensional point cloud information satisfy the second condition; and identify the commodity using a frame matching algorithm when the two-dimensional feature information and the three-dimensional point cloud information satisfy the third condition.
It should be noted that the method for commodity identification provided by the embodiments of the present disclosure may be executed by the server 205, and accordingly, a device for commodity identification may be disposed in the server 205. The request end through which a user submits a commodity identification task and obtains the identification result is generally located in the terminal devices 201, 202, 203.
With the commodity identification method and device of the present disclosure, the identification algorithm can be matched adaptively, improving the accuracy of commodity identification.
FIG. 3 is a flow chart illustrating a method of article identification according to an exemplary embodiment. The article identification method 30 includes at least steps S302 to S308.
As shown in fig. 3, in S302, two-dimensional feature information and three-dimensional point cloud information of the commodity are acquired. The two-dimensional feature information may be extracted from an image captured by a two-dimensional camera and is image feature information that represents features of the commodity. The three-dimensional point cloud information may be acquired by a three-dimensional camera, which may be, for example, but is not limited to, a binocular camera.
In one embodiment, acquiring the two-dimensional feature information of the commodity comprises: acquiring two-dimensional template information of the commodity; and extracting features of the commodity according to the two-dimensional template information to generate the two-dimensional feature information. The two-dimensional template information is template information of a single commodity, for example, but not limited to, image information of each face of the commodity, and specifically may be feature information of the points, lines, and edges of each face. During extraction, the commodity image is processed according to the two-dimensional template information to obtain the two-dimensional feature information of the commodity. The two-dimensional feature information may include, for example, but is not limited to, color features, texture features, shape features, and spatial location features.
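The disclosure does not fix a particular feature detector; real systems commonly use keypoint detectors such as ORB or SIFT. As a self-contained stand-in, the toy extractor below collects positions of strong intensity gradients from a grayscale image given as a 2-D list (the function, its threshold, and the gradient test are all illustrative assumptions):

```python
def extract_feature_points(gray, threshold=40):
    """Collect (row, col) positions whose local gradient magnitude
    exceeds `threshold`; a toy stand-in for a real keypoint detector."""
    points = []
    for r in range(1, len(gray) - 1):
        for c in range(1, len(gray[0]) - 1):
            gx = gray[r][c + 1] - gray[r][c - 1]   # horizontal gradient
            gy = gray[r + 1][c] - gray[r - 1][c]   # vertical gradient
            if gx * gx + gy * gy > threshold * threshold:
                points.append((r, c))
    return points
```

The length of the returned list is the "number of feature points" that the first condition compares against the feature point threshold.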
In S304, when it is determined that the two-dimensional feature information satisfies the first condition, the commodity is identified using a two-dimensional matching algorithm. The first condition is used for judging whether the two-dimensional matching algorithm is suitable for commodity identification of the current scene. For example, the first condition may be used to determine whether the feature point number of the two-dimensional feature information reaches a threshold condition, and when the threshold condition is reached, it is considered that a good recognition effect can be obtained when the two-dimensional feature information is used for product recognition. The two-dimensional matching algorithm is an algorithm for matching the commodities according to the two-dimensional characteristic information and the two-dimensional template information to identify the commodities. When the two-dimensional matching algorithm is suitable for commodity identification of the current scene, the identification accuracy rate of the two-dimensional matching algorithm is higher than that of other matching algorithms.
In one embodiment, when the two-dimensional feature information is determined to satisfy the feature point threshold condition, it is confirmed that the two-dimensional feature information satisfies the first condition, and the commodity is identified according to the two-dimensional feature information and the two-dimensional matching algorithm. The feature point threshold condition is satisfied when the number of feature points in the two-dimensional feature information is greater than the feature point threshold. The specific value of the feature point threshold is obtained from experience or testing; it is the minimum number of feature points for which the recognition result is reliable when the two-dimensional matching algorithm is used to identify the commodity.
In one embodiment, the method further comprises: when identifying the commodity using the two-dimensional matching algorithm fails, judging whether the two-dimensional feature information and the three-dimensional point cloud information satisfy a second condition. Identification fails when the number of matched commodities is 0, that is, when matching fails or no commodity is present. In this case, the condition judgment is performed again on the two-dimensional feature information and the three-dimensional point cloud information, and the commodity is identified again using the three-dimensional matching algorithm or the frame matching algorithm according to the result, so as to improve identification accuracy; this condition judgment process is shown in S306 and S308. The frame matching algorithm matches only on the size information of the commodity: it depends on the external dimensions of the matched object rather than on the feature information of a feature template. It identifies an object by matching lines in the two-dimensional image whose dimensions equal those of the template. Because it does not depend on the texture features of the template's outer surface, the frame matching algorithm requires less prior knowledge than feature-dependent algorithms such as the two-dimensional and three-dimensional matching algorithms, and therefore performs better when recognizing objects with poor texture or reflective materials.
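This failure-then-retry behavior, where an empty match list counts as a failed identification and triggers the next algorithm, can be sketched generically (the helper and its callable matchers are hypothetical names, not part of the disclosure):

```python
def identify_with_fallback(primary, fallbacks):
    """Run `primary` first; an empty match list counts as failure, and
    each fallback matcher is then tried in turn. All matchers are
    callables returning a (possibly empty) list of matched commodities."""
    for matcher in [primary, *fallbacks]:
        matches = matcher()
        if matches:
            return matches
    return []  # every matcher failed: no commodity identified
```

For example, `identify_with_fallback(match_2d, [match_frame, match_3d])` would mirror the dense-placement branch described below.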
In S306, when it is determined that the two-dimensional feature information and the three-dimensional point cloud information satisfy the second condition, the product is identified using a three-dimensional matching algorithm. Wherein the second condition is used for judging that the current scene is not suitable for the two-dimensional matching algorithm but is suitable for the three-dimensional matching algorithm. For example, when the two-dimensional feature information does not satisfy the first condition and the three-dimensional point cloud information is judged to satisfy the condition of using the three-dimensional matching algorithm, it is determined that the current two-dimensional feature information and the three-dimensional point cloud information satisfy the second condition. The three-dimensional matching algorithm is to use three-dimensional template information of the commodity to detect a target object in the three-dimensional point cloud information according to the similarity between the three-dimensional point cloud information and the template.
In one embodiment, three-dimensional template information of the commodity is acquired; when the two-dimensional feature information does not satisfy the first condition, whether the commodity satisfies the placing condition is judged according to the three-dimensional point cloud information; when the commodity is judged not to satisfy the placing condition, it is confirmed that the two-dimensional feature information and the three-dimensional point cloud information satisfy the second condition; and the commodity is identified according to the three-dimensional template information and the three-dimensional matching algorithm. The placing condition concerns whether the commodities are placed densely: when analysis of the three-dimensional point cloud information confirms that the commodities are not densely placed, that is, they are placed in a scattered manner, the current scene is considered not to satisfy the placing condition. When identifying the commodity, three-dimensional matching is performed in the three-dimensional point cloud information according to the three-dimensional template information.
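The disclosure leaves the dense-versus-scattered analysis unspecified. One plausible sketch, assuming a top-down view, is to rasterize the point cloud's (x, y) coordinates onto a grid and compare the fraction of occupied cells with a threshold (the cell size and threshold values are illustrative assumptions):

```python
def is_densely_placed(points, cell=0.05, occupancy_threshold=0.6):
    """Judge dense vs. scattered placement from a point cloud of
    (x, y, z) tuples: rasterise the horizontal coordinates onto a grid
    and compare the occupied-cell fraction of the bounding box with a
    threshold."""
    if not points:
        return False
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, y0 = min(xs), min(ys)
    nx = int((max(xs) - x0) / cell) + 1   # grid columns over the bounding box
    ny = int((max(ys) - y0) / cell) + 1   # grid rows over the bounding box
    occupied = {(int((x - x0) / cell), int((y - y0) / cell))
                for x, y, *_ in points}
    return len(occupied) / (nx * ny) >= occupancy_threshold
```

A mostly filled bounding box suggests dense placement (frame matching); large empty regions suggest scattered goods (three-dimensional matching).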
In one embodiment, the method further comprises: when the commodity identification fails, sending a commodity absence signal. Identification fails when the number of matched commodities is 0, and the commodity absence signal indicates that the current turnover box is empty. Further, the conveyor belt can be triggered to run according to the commodity absence signal so as to identify the commodities in the next turnover box.
In S308, when it is determined that the two-dimensional feature information and the three-dimensional point cloud information satisfy the third condition, the commodity is identified using the frame matching algorithm. The third condition is used for judging that the current scene is not suitable for the two-dimensional matching algorithm and is better suited to the frame matching algorithm than to the three-dimensional matching algorithm. For example, when the two-dimensional feature information does not satisfy the first condition and the three-dimensional point cloud information is determined to satisfy the placing condition, it is determined that the two-dimensional feature information and the three-dimensional point cloud information satisfy the third condition. The frame matching algorithm depends only on the size information of the commodity, namely the three-dimensional template information. For commodities with no texture or weak texture, it achieves better identification accuracy than the two-dimensional and three-dimensional matching algorithms and depends on less prior information. The frame matching algorithm works particularly well for picking operations on square commodities in a box.
In one embodiment, when the two-dimensional feature information does not satisfy the first condition and the commodity is judged, according to the three-dimensional point cloud information, to satisfy the placing condition (that is, the commodities are densely placed), it is confirmed that the two-dimensional feature information and the three-dimensional point cloud information satisfy the third condition, and the commodity is identified according to the three-dimensional template information and the frame matching algorithm. A schematic diagram of the placement of square commodities in this case is shown in fig. 4. When identifying the commodity, the frame of the commodity is located in the two-dimensional feature information according to the three-dimensional template information.
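Since frame matching compares only outer dimensions, its core can be sketched as a size comparison (rectangle detection in the two-dimensional image is assumed to happen upstream; the function name and the 5% tolerance are illustrative assumptions):

```python
def frame_match(detected_rects, template_size, tol=0.05):
    """Compare each detected rectangle, given as a (length, width)
    pair, against the template's dimensions within a relative
    tolerance, ignoring orientation."""
    t_long, t_short = sorted(template_size, reverse=True)
    matches = []
    for rect in detected_rects:
        long_side, short_side = sorted(rect, reverse=True)
        if (abs(long_side - t_long) <= tol * t_long
                and abs(short_side - t_short) <= tol * t_short):
            matches.append(rect)
    return matches
```

No texture or surface feature enters the comparison, which is why this branch tolerates textureless or reflective goods.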
In one embodiment, further comprising: and when the commodity identification by using the frame matching algorithm fails, identifying the commodity by using the three-dimensional matching algorithm. When the turnover box is empty or the frame matching fails, the commodity identification fails by using the frame matching algorithm.
In one embodiment, the method further comprises: after the commodity is successfully identified, calculating the center position and the posture of the commodity; and sending the center position and the posture to a picking robot to pick the commodity. The center position and posture of each commodity can be calculated from the recognition result, so that the robot can pick the commodity accordingly.
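The disclosure does not specify how the pose is computed. As a simplified planar sketch (a real system would typically obtain a full 6-DoF pose from the matching step itself), the centroid of the matched points can serve as the center position and the principal axis of their horizontal spread as a yaw angle:

```python
import math

def center_and_yaw(points):
    """Estimate a pick pose from a matched commodity's (x, y, z) points:
    centroid as centre position, principal-axis angle of the (x, y)
    spread as yaw."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    # 2x2 covariance of the horizontal coordinates
    sxx = sum((p[0] - cx) ** 2 for p in points) / n
    syy = sum((p[1] - cy) ** 2 for p in points) / n
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points) / n
    yaw = 0.5 * math.atan2(2 * sxy, sxx - syy)  # principal-axis angle
    return (cx, cy, cz), yaw
```

The returned center and angle are what would be fed back to the terminal device or sent to the picking robot.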
According to the commodity identification method, the two-dimensional characteristic information and the three-dimensional point cloud information are analyzed and judged, and a commodity identification algorithm suitable for the current scene is selected according to the judgment result. The commodity identification method can adaptively match the identification algorithm and improve the accuracy of commodity identification.
FIG. 5 is a flow chart illustrating a method of article identification according to an exemplary embodiment. The article identification method 50 includes at least steps S502 to S508.
As shown in fig. 5, in S502, three-dimensional template information of the product is acquired. The three-dimensional template information may be, for example, but not limited to, size information of the product.
In S504, when the two-dimensional feature information does not satisfy the first condition, it is determined whether the commodity satisfies the placement condition according to the three-dimensional point cloud information. The first condition is used for judging whether the current scene is suitable for the two-dimensional matching algorithm. The placing condition is whether the commodities are placed densely or not.
In S506, when it is determined that the commodity does not satisfy the placing condition, it is confirmed that the two-dimensional feature information and the three-dimensional point cloud information satisfy the second condition. Whether the commodity satisfies the placing condition can be determined by analyzing the three-dimensional point cloud information to confirm whether the commodities are placed densely.
In S508, the product is identified according to the three-dimensional template information and the three-dimensional matching algorithm. And matching the three-dimensional point cloud information by using a three-dimensional matching algorithm according to the three-dimensional template information so as to identify the commodity.
According to the commodity identification method above, the commodity is identified using the frame matching algorithm when the third condition is satisfied, which can improve identification efficiency and accuracy when regularly placed square commodities are identified.
According to the commodity identification method, the commodities are identified by adopting different identification algorithms according to different scenes, and the identification accuracy of the commodities can be improved.
It should be clearly understood that this disclosure describes how to make and use particular examples, but the principles of this disclosure are not limited to any details of these examples. Rather, these principles can be applied to many other embodiments based on the teachings of the present disclosure.
Fig. 6 is a flowchart illustrating a method of article identification according to another exemplary embodiment. As shown in fig. 6, first, two-dimensional template information (for example, a two-dimensional image of six surfaces of a commodity) and three-dimensional template information (for example, commodity size information: length, width, and height) of the commodity are collected, and the two-dimensional template information and the three-dimensional template information may be collected collectively, for example, when the commodity is put in storage.
When the commodity identification method is implemented, two-dimensional feature information is first extracted from the two-dimensional image according to the two-dimensional template information of the commodity. If the number of feature points in the two-dimensional feature information exceeds the set feature point threshold, the target object to be grasped is identified by matching the two-dimensional template information on the two-dimensional image of the scene, as shown at ① in the flow chart.
If the number of feature points does not reach the threshold, the distribution of the scene's three-dimensional point cloud information is analyzed to judge whether the commodities are randomly (scattered) or densely placed. If they are randomly placed, the commodity to be picked is identified directly on the scene's three-dimensional point cloud information using the three-dimensional template information of the commodity, as shown at ③ in the flow chart.
If the number of feature points does not reach the threshold and the commodities are judged to be densely placed according to the distribution of the scene's three-dimensional point cloud information, the commodity to be picked is identified using the frame matching algorithm, as shown at ② in the flow chart. If frame matching cannot identify the commodity, identification is attempted again on the scene's three-dimensional point cloud, as shown at ③ in the flow chart.
If none of the three algorithms identifies a commodity, the container is judged to be empty and there is no commodity to be sorted. If a commodity is identified by any of the three algorithms, its center position and posture are calculated and sent to the robot for sorting.
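The complete flow of fig. 6 can be sketched end to end. In the snippet below, the matcher callables, parameter names, and default threshold are illustrative assumptions; each matcher returns a list of matched commodities, with an empty list meaning failure:

```python
def identify_commodity(num_feature_points, densely_placed,
                       match_2d, match_frame, match_3d, threshold=100):
    # (1) Enough 2-D feature points: try 2-D template matching first.
    if num_feature_points >= threshold:
        matches = match_2d()
        if matches:
            return matches
    # (2) Too few points and dense placement: try frame matching.
    if densely_placed:
        matches = match_frame()
        if matches:
            return matches
    # (3) Scattered placement, or frame matching failed: 3-D matching.
    return match_3d()  # an empty result here means the tote is empty
```

An empty return value corresponds to the empty-container judgment, which would trigger the commodity absence signal described earlier.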
According to the commodity identification method, the two-dimensional characteristic information and the three-dimensional point cloud information are analyzed and judged, and a commodity identification algorithm suitable for the current scene is selected according to the judgment result. The commodity identification method can adaptively match the identification algorithm and improve the accuracy of commodity identification.
According to the commodity identification method, when the two-dimensional characteristic information does not meet the threshold condition and the commodities are densely arranged, the commodities are identified by using the frame matching algorithm, and the identification efficiency and the accuracy can be improved when the square commodities which are placed orderly are identified.
Those skilled in the art will appreciate that all or part of the steps implementing the above embodiments may be implemented as a computer program executed by a CPU. When the computer program is executed by the CPU, it performs the functions defined by the above-described methods provided by the present disclosure. The program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disk, or the like.
Furthermore, it should be noted that the above-mentioned figures are only schematic illustrations of the processes involved in the methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
Fig. 7 is a block diagram illustrating an article identification device according to an exemplary embodiment. Referring to fig. 7, the article recognition device 70 includes at least: an item information module 702, a first identification module 704, a second identification module 706, and a third identification module 708.
In the commodity identification device, the commodity information module 702 is used for acquiring two-dimensional feature information and three-dimensional point cloud information of a commodity. The two-dimensional feature information may be extracted from an image captured by a two-dimensional camera and is image feature information that represents features of the commodity. The three-dimensional point cloud information may be acquired by a three-dimensional camera, which may be, for example, but is not limited to, a binocular camera.
In one embodiment, the merchandise information module 702 is configured to obtain two-dimensional template information of merchandise; and extracting the commodity according to the two-dimensional template information to generate two-dimensional characteristic information.
The first identification module 704 is configured to identify the commodity by using a two-dimensional matching algorithm when it is determined that the two-dimensional feature information satisfies a first condition. The first condition is used for judging whether the two-dimensional matching algorithm is suitable for commodity identification of the current scene. For example, the first condition may be used to determine whether the feature point number of the two-dimensional feature information reaches a threshold condition, and when the threshold condition is reached, it is considered that a good recognition effect can be obtained when the two-dimensional feature information is used for product recognition. The two-dimensional matching algorithm is an algorithm for matching the commodities according to the two-dimensional characteristic information and the two-dimensional template information to identify the commodities. When the two-dimensional matching algorithm is suitable for commodity identification of the current scene, the identification accuracy rate of the two-dimensional matching algorithm is higher than that of other matching algorithms.
In one embodiment, the first identification module 704 is configured to confirm that the two-dimensional feature information satisfies the first condition when the two-dimensional feature information is determined to satisfy the feature point threshold condition; and identifying the commodity according to the two-dimensional characteristic information and a two-dimensional matching algorithm.
In one embodiment, the first identification module 704 is configured to determine whether the two-dimensional feature information and the three-dimensional point cloud information satisfy the second condition when the two-dimensional matching algorithm fails to identify the commodity.
The second identification module 706 is configured to identify the commodity by using a three-dimensional matching algorithm when it is determined that the two-dimensional feature information and the three-dimensional point cloud information satisfy the second condition. Wherein the second condition is used for judging that the current scene is not suitable for the two-dimensional matching algorithm but is suitable for the three-dimensional matching algorithm. For example, when the two-dimensional feature information does not satisfy the first condition and the three-dimensional point cloud information is judged to satisfy the condition of using the three-dimensional matching algorithm, it is determined that the current two-dimensional feature information and the three-dimensional point cloud information satisfy the second condition. The three-dimensional matching algorithm is to use three-dimensional template information of the commodity to detect a target object in the three-dimensional point cloud information according to the similarity between the three-dimensional point cloud information and the template.
In one embodiment, the second identification module 706 is configured to obtain three-dimensional template information of the product; when the two-dimensional characteristic information does not meet the first condition, judging whether the commodity meets the placing condition or not according to the three-dimensional point cloud information; when the commodity is judged not to meet the placing condition, the two-dimensional characteristic information and the three-dimensional point cloud information are confirmed to meet a second condition; and identifying the commodity according to the three-dimensional template information and a three-dimensional matching algorithm.
In one embodiment, the second identification module 706 is further configured to send an article absence signal when the article identification fails.
The third identifying module 708 is configured to identify the commodity by using a frame matching algorithm when it is determined that the two-dimensional feature information and the three-dimensional point cloud information satisfy a third condition.
The third condition is used for judging that the current scene is not suitable for the two-dimensional matching algorithm, and is more suitable for the frame matching algorithm compared with the three-dimensional matching algorithm. For example, when the two-dimensional feature information does not satisfy the first condition and the three-dimensional point cloud information is determined to satisfy the placement condition, it is determined that the two-dimensional feature information and the three-dimensional point cloud information satisfy the third condition. The frame matching algorithm only depends on the size information of the commodity, namely three-dimensional template information. For goods without textures or weak textures, compared with a two-dimensional matching algorithm and a three-dimensional matching algorithm, the algorithm has better identification accuracy and depends on less prior information. The frame matching algorithm has a better recognition effect on picking operation in the square commodity box.
In one embodiment, the third identifying module 708 is configured to confirm that the two-dimensional feature information and the three-dimensional point cloud information satisfy the third condition when the two-dimensional feature information does not satisfy the first condition and the commodity is determined to satisfy the placing condition according to the three-dimensional point cloud information; and identifying the commodity according to the three-dimensional template information and a frame matching algorithm.
In one embodiment, the third identification module 708 is further configured to identify the product using a three-dimensional matching algorithm when the product fails to be identified using the border matching algorithm.
In one embodiment, the device is further configured to: after the commodity is successfully identified, calculate the center position and the posture of the commodity; and send the center position and the posture to a picking robot to pick the commodity.
According to the commodity identification device of the present disclosure, the two-dimensional feature information and the three-dimensional point cloud information are analyzed and judged, and a commodity identification algorithm suitable for the current scene is selected according to the judgment result. The device can thus adaptively match the identification algorithm and improve the accuracy of commodity identification.
FIG. 8 is a block diagram illustrating an electronic device in accordance with an example embodiment.
An electronic device 200 according to this embodiment of the present disclosure is described below with reference to fig. 8. The electronic device 200 shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in FIG. 8, the electronic device 200 is embodied in the form of a general-purpose computing device. The components of the electronic device 200 may include, but are not limited to: at least one processing unit 210, at least one storage unit 220, a bus 230 connecting different system components (including the storage unit 220 and the processing unit 210), a display unit 240, and the like.
The storage unit stores program code executable by the processing unit 210 to cause the processing unit 210 to perform the steps according to various exemplary embodiments of the present disclosure described in the above commodity identification method sections of the present specification. For example, the processing unit 210 may perform the steps shown in FIGS. 3, 5, and 6.
The storage unit 220 may include readable media in the form of volatile memory, such as a random access memory (RAM) 2201 and/or a cache memory 2202, and may further include a read-only memory (ROM) 2203.
The storage unit 220 may also include a program/utility 2204 having a set (at least one) of program modules 2205, such program modules 2205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The bus 230 may be one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures.
The electronic device 200 may also communicate with one or more external devices 300 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 200, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic device 200 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 250. Also, the electronic device 200 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via the network adapter 260. The network adapter 260 may communicate with other modules of the electronic device 200 via the bus 230. It should be appreciated that, although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 200, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes several instructions to enable a computing device (which may be a personal computer, a server, a network device, etc.) to execute the above method according to the embodiments of the present disclosure.
Fig. 9 schematically illustrates a computer-readable storage medium in an exemplary embodiment of the disclosure.
Referring to fig. 9, a program product 400 for implementing the above method according to an embodiment of the present disclosure is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic signals, optical signals, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
The computer readable medium carries one or more programs which, when executed by a device, cause the device to perform functions including: acquiring two-dimensional feature information and three-dimensional point cloud information of a commodity; selecting, according to the two-dimensional feature information and the three-dimensional point cloud information, a commodity identification algorithm suitable for the current scene; and identifying the commodity using the selected algorithm.
Those skilled in the art will appreciate that the modules described above may be distributed in the apparatus according to the description of the embodiments, or may be located, with corresponding changes, in one or more apparatuses different from those of the embodiments. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Exemplary embodiments of the present disclosure are specifically illustrated and described above. It is to be understood that the present disclosure is not limited to the precise arrangements or instrumentalities described herein; on the contrary, the disclosure is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.