CN111737509A - Information processing method, device, system, computer readable storage medium - Google Patents


Info

Publication number
CN111737509A
CN111737509A (application CN201910311639.4A)
Authority
CN
China
Prior art keywords
image
location
areas
information
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910311639.4A
Other languages
Chinese (zh)
Inventor
向彪
王彪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd and Beijing Jingdong Shangke Information Technology Co Ltd
Priority to CN201910311639.4A
Publication of CN111737509A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 — Information retrieval of still image data
    • G06F16/58 — Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866 — Retrieval using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G06F16/587 — Retrieval using geographical or spatial information, e.g. location

Abstract

The present disclosure provides an information processing method, including: acquiring a plurality of images of an accommodating space, where each image has an image identifier, the accommodating space includes a plurality of areas each capable of accommodating a specific object, and the image identifier indicates the area in which the specific object is located in that image; acquiring a location file that includes the location information of the plurality of areas; and generating, based on the plurality of image identifiers and the location file, a plurality of location subfiles respectively corresponding to the plurality of images, where each location subfile includes the location information of the area in which the specific object is located in the corresponding image.

Description

Information processing method, device, system, computer readable storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and more particularly, to an information processing method, an information processing apparatus, an information processing system, and a computer-readable storage medium.
Background
In a physical retail store, clerks usually need to check and patrol the commodities on the shelves periodically, to ensure that enough commodities are available for sale, that they are placed orderly, and so on, thereby improving the shopping experience of customers and the sales volume. With the development of artificial intelligence and image recognition technology, the goods on a shelf can be photographed by a camera and recognized by object detection, so solutions such as shelf stock checking and shelf inspection are receiving more and more attention. Usually, a plurality of images of the shelf are collected by the camera, the images are manually annotated afterwards, and a corresponding annotation file is generated for each image; the annotation file includes the position information of the target object to be identified in the image. Manual annotation requires a significant amount of labor and time. Therefore, how to improve the efficiency of image annotation so as to save labor and time cost is a problem that currently needs to be solved.
In implementing the disclosed concept, the inventors found that the prior art has at least the following problem: manually annotating images requires a great deal of labor and time.
Disclosure of Invention
In view of the above, the present disclosure provides an optimized information processing method and an information processing system.
One aspect of the present disclosure provides an information processing method, including: acquiring a plurality of images of an accommodating space, where each image has an image identifier, the accommodating space includes a plurality of areas each capable of accommodating a specific object, and the image identifier indicates the area in which the specific object is located in that image; acquiring a location file that includes the location information of the plurality of areas; and generating, based on the plurality of image identifiers and the location file, a plurality of location subfiles respectively corresponding to the plurality of images, where each location subfile includes the location information of the area in which the specific object is located in the corresponding image.
According to an embodiment of the present disclosure, generating the plurality of location subfiles respectively corresponding to the plurality of images based on the plurality of image identifiers and the location file includes: determining, based on each image identifier, the area in which the specific object is located in the image to which that identifier belongs; determining the location information of that area from the location file; and generating a location subfile corresponding to the image based on that location information.
According to an embodiment of the present disclosure, the accommodating space includes a plurality of sub-accommodation spaces, each of which includes a plurality of regions. Acquiring the location file includes: acquiring the position information of the plurality of regions in each sub-accommodation space, and generating the location file based on that position information.
According to an embodiment of the present disclosure, acquiring the position information of the plurality of regions of each sub-accommodation space includes: acquiring the position information of a specific region in the sub-accommodation space, and determining the position information of the other regions in that sub-accommodation space based on the position information of the specific region.
According to an embodiment of the present disclosure, the plurality of images of the accommodating space are captured from different viewpoints by a plurality of cameras, and the image identifier of each image includes the corresponding camera information.
According to an embodiment of the present disclosure, the location file comprises a plurality of location files, each of which includes camera information. Acquiring the location file includes: acquiring the corresponding location file from the plurality of location files based on the camera information in the image identifier.
Another aspect of the present disclosure provides an information processing apparatus including a first acquisition module, a second acquisition module, and a generation module. The first acquisition module acquires a plurality of images of an accommodating space, where each image has an image identifier, the accommodating space includes a plurality of areas each capable of accommodating a specific object, and the image identifier indicates the area in which the specific object is located in that image. The second acquisition module acquires a location file that includes the location information of the plurality of areas. The generation module generates, based on the plurality of image identifiers and the location file, a plurality of location subfiles respectively corresponding to the plurality of images, where each location subfile includes the location information of the area in which the specific object is located in the corresponding image.
According to an embodiment of the present disclosure, generating the plurality of location subfiles respectively corresponding to the plurality of images based on the plurality of image identifiers and the location file includes: determining, based on each image identifier, the area in which the specific object is located in the image to which that identifier belongs; determining the location information of that area from the location file; and generating a location subfile corresponding to the image based on that location information.
According to an embodiment of the present disclosure, the accommodating space includes a plurality of sub-accommodation spaces, each of which includes a plurality of regions. Acquiring the location file includes: acquiring the position information of the plurality of regions in each sub-accommodation space, and generating the location file based on that position information.
According to an embodiment of the present disclosure, acquiring the position information of the plurality of regions of each sub-accommodation space includes: acquiring the position information of a specific region in the sub-accommodation space, and determining the position information of the other regions in that sub-accommodation space based on the position information of the specific region.
According to an embodiment of the present disclosure, the plurality of images of the accommodating space are captured from different viewpoints by a plurality of cameras, and the image identifier of each image includes the corresponding camera information.
According to an embodiment of the present disclosure, the location file comprises a plurality of location files, each of which includes camera information. Acquiring the location file includes: acquiring the corresponding location file from the plurality of location files based on the camera information in the image identifier.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the method as described above when executed.
Another aspect of the disclosure provides a computer program comprising computer executable instructions for implementing the method as described above when executed.
Embodiments of the present disclosure can at least partially solve the prior-art problem that manual image annotation requires a large amount of labor and time, thereby achieving the technical effect of improving the efficiency of image annotation and saving labor and time cost.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the present disclosure with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates a system architecture of an information processing method and processing system according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates an application scenario of an information processing method and processing system according to an embodiment of the present disclosure;
FIG. 3 schematically shows a flow chart of an information processing method according to an embodiment of the present disclosure;
FIG. 4 schematically shows a schematic diagram of a plurality of images according to an embodiment of the disclosure;
FIG. 5 schematically illustrates a schematic diagram of a location file according to an embodiment of the disclosure;
FIG. 6 schematically illustrates a schematic diagram of generating a location subfile according to an embodiment of the present disclosure;
FIG. 7 schematically illustrates a schematic view of a receiving space according to an embodiment of the disclosure;
fig. 8 schematically shows a block diagram of an information processing apparatus according to an embodiment of the present disclosure; and
FIG. 9 schematically shows a block diagram of a computer system suitable for information processing according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B, and C, etc." is used, such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). Where a convention analogous to "at least one of A, B, or C, etc." is used, such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
An embodiment of the present disclosure provides an information processing method, including: acquiring a plurality of images of an accommodating space, where each image has an image identifier, the accommodating space includes a plurality of areas each capable of accommodating a specific object, and the image identifier indicates the area in which the specific object is located in that image; acquiring a location file that includes the location information of the plurality of areas; and generating, based on the plurality of image identifiers and the location file, a plurality of location subfiles respectively corresponding to the plurality of images, where each location subfile includes the location information of the area in which the specific object is located in the corresponding image.
Fig. 1 schematically shows a system architecture of an information processing method and an information processing system according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, the system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104 and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as shopping-like applications, web browser applications, search-like applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only).
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (for example only) providing support for websites browsed by users using the terminal devices 101, 102, 103. The background management server may analyze and perform other processing on the received data such as the user request, and feed back a processing result (e.g., a webpage, information, or data obtained or generated according to the user request) to the terminal device.
It should be noted that the information processing method provided by the embodiment of the present disclosure may be generally executed by the server 105. Accordingly, the information processing apparatus provided by the embodiment of the present disclosure may be generally provided in the server 105. The information processing method provided by the embodiment of the present disclosure may also be executed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the information processing apparatus provided in the embodiment of the present disclosure may also be provided in a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
For example, the plurality of images of the accommodating space and the location file may be stored in the terminal devices 101, 102, and 103 and transmitted to the server 105, and the server 105 then generates the plurality of location subfiles corresponding to the images based on the plurality of image identifiers and the location file; alternatively, the terminal devices 101, 102, and 103 may generate the location subfiles directly based on the image identifiers and the location file. The images and the location file may also be stored directly in the server 105, in which case the server 105 generates the plurality of location subfiles directly based on the image identifiers and the location file.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 schematically shows an application scenario of the information processing method and the information processing system according to the embodiment of the present disclosure. It should be noted that fig. 2 is only an example of a scenario in which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, but does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 2, the application scenario 200 may include, for example, a plurality of images 211, 212, 213.
According to an embodiment of the present disclosure, the plurality of images 211, 212, 213 may be, for example, images of a shelf for placing items, captured by a camera. The shelf comprises a plurality of areas for placing items. For example, fig. 2 shows a shelf with three such areas: a first area A, a second area B, and a third area C.
According to an embodiment of the present disclosure, a specific object is included in each image, for example. For example, the specific object in the image 211 is located in the first area a, the specific object in the image 212 is located in the second area B, and the specific object in the image 213 is located in the third area C.
Each image has an image identifier, which may be the file name of the image. For example, image 211 is identified as Fig_a, image 212 as Fig_b, and image 213 as Fig_c.
In embodiments of the present disclosure, the image identifier can characterize the region in which the specific object in the image is located. For example, Fig_a indicates that the specific object in image 211 is located in the first region A, Fig_b indicates that the specific object in image 212 is located in the second region B, and Fig_c indicates that the specific object in image 213 is located in the third region C.
For example, the plurality of images 211, 212, 213 are captured in the same manner; that is, the position of the shelf is the same in the different images, so the position information of each shelf region is consistent across the images. For example, the position information of the first area A in each of the images 211, 212, 213 is [(a1,a2),(a3,a4)], where (a1,a2) is, for example, the coordinate of the upper-left corner of the first area A and (a3,a4) the coordinate of its lower-right corner. Similarly, the position information of the second area B is, for example, [(b1,b2),(b3,b4)], and that of the third area C is, for example, [(c1,c2),(c3,c4)].
In the embodiment of the present disclosure, the location information of the plurality of areas is combined into the location file 220; that is, the location file 220 includes the location information of each area. From each image and the location file 220, a location subfile corresponding to that image can be generated automatically; the subfile includes the location information of the area in which the specific object is located in the image. For example, the location subfile 231 of image 211 is generated automatically from the location file 220 and includes the location information [(a1,a2),(a3,a4)] of the first area A, in which the specific object of image 211 is located. Similarly, embodiments of the present disclosure can automatically generate the location subfile 232 corresponding to image 212 and the location subfile 233 corresponding to image 213.
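The lookup just described — image identifier → occupied area → coordinates from the location file — can be sketched in a few lines. This is an illustrative sketch only: the numeric coordinates, the dictionary file format, and the mapping from `Fig_a`-style identifiers to areas are assumptions made for the example, not a format fixed by the disclosure.

```python
# Hypothetical location file 220: each area maps to [upper-left, lower-right]
# corner coordinates. All values are illustrative placeholders.
location_file = {
    "A": [(10, 10), (50, 90)],    # first area A
    "B": [(60, 10), (100, 90)],   # second area B
    "C": [(110, 10), (150, 90)],  # third area C
}

# The image identifier (here, the file name) encodes the occupied area.
image_region = {"Fig_a": "A", "Fig_b": "B", "Fig_c": "C"}

def make_subfile(image_id):
    """Build the location subfile for one image from its identifier alone."""
    region = image_region[image_id]
    return {region: location_file[region]}

# Location subfile 231 for image 211, generated without manual annotation.
subfile_231 = make_subfile("Fig_a")
```

Because the shelf position is identical in every image, the subfile needs no per-image geometry: the identifier selects the area, and the shared location file supplies the coordinates.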
Through embodiments of the present disclosure, the position subfile of each image can be generated automatically, which improves the efficiency of image annotation and saves labor and time. In addition, the annotated images can be used as a training set for machine learning, for example for a deep-learning object detection model; the trained model can then automatically detect and recognize specific objects on a shelf.
Fig. 3 schematically shows a flow chart of an information processing method according to an embodiment of the present disclosure.
Fig. 4 schematically shows a schematic diagram of a plurality of images according to an embodiment of the present disclosure.
As shown in fig. 3, the method includes operations S310 to S330.
In operation S310, a plurality of images about an accommodating space are acquired, wherein each of the plurality of images has an image identifier, the accommodating space includes a plurality of regions, the regions can be used for accommodating a specific object, and the image identifier characterizes a region in which the specific object is located in the image.
According to an embodiment of the present disclosure, the accommodating space is used for placing items and may be, for example, a shelf. The accommodating space comprises a plurality of areas, each of which can hold an item.
As shown in fig. 4, a plurality of images 410, 420, 430 of the accommodating space are acquired. Each image has an image identifier, which may be the file name of the corresponding image. For example, image 410 is identified as Fig_1, image 420 as Fig_2, and image 430 as Fig_3.
In the embodiment of the present disclosure, the accommodating space includes, for example, 6 regions (as shown in fig. 4: 3 upper regions and 3 lower regions). In each image the specific object (e.g., the large cylinder in fig. 4) is placed in different regions, and the regions occupied across the images follow a certain rule. For example, in image 410 the specific object occupies all regions. Starting from image 410 and counting the occupied positions from the left, every even-numbered occupied position is cleared each time: the first removal (applied consistently to the upper and lower rows) yields image 420, and the cleared regions can then hold other, interfering items (e.g., the small items in fig. 4). Applying the same removal to image 420 (the regions holding interfering items in image 420 are not counted when determining position parity) yields image 430. In this way, a plurality of images with the specific object in different areas can be acquired.
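The iterative even-position removal rule can be expressed compactly. A minimal sketch, assuming the occupied positions are numbered from 1 on the left and that cleared regions (now holding interfering items) drop out of the count in later rounds:

```python
def remove_even_positions(occupied):
    """Keep the 1st, 3rd, ... occupied positions; the positions at even-numbered
    spots are cleared (freed for interfering items) and are not counted again."""
    return occupied[0::2]  # extended slice keeps every odd-numbered element

# Three columns per row, as in fig. 4: image 410 -> image 420 -> image 430.
round1 = remove_even_positions([1, 2, 3])  # columns still occupied in image 420
round2 = remove_even_positions(round1)     # columns still occupied in image 430
```

With three columns this reproduces the figure: image 420 keeps the left and right columns, and image 430 keeps only the left column.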
In embodiments of the present disclosure, the image identifier can characterize the regions in which the specific object is located in the respective image. For example, the identifier Fig_1 of image 410 indicates that the specific object occupies all regions, the identifier Fig_2 of image 420 indicates that it occupies the left and right regions, and the identifier Fig_3 of image 430 indicates that it occupies the left region.
Alternatively, the plurality of images are arranged in their storage folder in a certain order by image identifier, for example in the order Fig_1, Fig_2, Fig_3, and the rule governing the regions occupied by the specific object corresponds to that order; the regions occupied in each image can then also be derived from the arrangement of the images in the folder. That is, when the images are arranged in the identifier order Fig_1, Fig_2, Fig_3, it can be known from the arrangement that the specific object occupies all regions in image 410, the left and right regions in image 420, and the left region in image 430.
It is to be understood that the above illustrated rules of the regions in which the specific object is located in the different images are only exemplary examples provided for facilitating understanding of the present disclosure, and a person skilled in the art may specifically set the rules of the regions in which the specific object is located according to the actual application situation, as long as it is ensured that the regions in which the specific object is located in the corresponding images can be determined based on the image identifiers.
In operation S320, a location file is acquired, wherein the location file includes location information of a plurality of areas.
According to an embodiment of the present disclosure, the location file includes, for example, location information of all areas of the accommodating space in the image. The multiple images 410, 420, and 430 are collected in a consistent manner, for example, so as to ensure that the position information of the same region in the accommodating space is consistent in different images.
FIG. 5 schematically shows a schematic diagram of a location file according to an embodiment of the disclosure.
As shown in fig. 5, the position file 510 includes, for example, the position information of all regions in an image. From left to right, the position information of the upper regions of the accommodating space is [(a1,a2),(a3,a4)], [(b1,b2),(b3,b4)], [(c1,c2),(c3,c4)]; from left to right, the position information of the lower regions is [(d1,d2),(d3,d4)], [(e1,e2),(e3,e4)], [(f1,f2),(f3,f4)].
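One possible concrete shape for position file 510 is sketched below. This is an assumption: the disclosure does not fix a serialization format, and the numeric coordinates merely stand in for (a1..f4); each region is stored as [upper-left, lower-right].

```python
# Hypothetical serialization of position file 510: two rows of three regions,
# ordered left to right. Coordinates are illustrative placeholders.
position_file_510 = {
    "upper": [
        [(0, 0),   (40, 50)],    # [(a1,a2),(a3,a4)]
        [(50, 0),  (90, 50)],    # [(b1,b2),(b3,b4)]
        [(100, 0), (140, 50)],   # [(c1,c2),(c3,c4)]
    ],
    "lower": [
        [(0, 60),   (40, 110)],  # [(d1,d2),(d3,d4)]
        [(50, 60),  (90, 110)],  # [(e1,e2),(e3,e4)]
        [(100, 60), (140, 110)], # [(f1,f2),(f3,f4)]
    ],
}
```

Keeping the rows and the left-to-right order explicit lets a subfile generator select regions by (row, column) index once the image identifier has been decoded.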
In operation S330, a plurality of location subfiles respectively corresponding to the plurality of images are generated based on the plurality of image identifications and the location file, wherein each location subfile includes location information of an area in which a specific object in the image corresponding thereto is located.
According to the embodiment of the disclosure, since the image identifier can represent the area where the specific object is located in the image, the location subfile corresponding to each image can be automatically generated based on the image identifier and the location file.
FIG. 6 schematically shows a schematic diagram of generating a location subfile according to an embodiment of the present disclosure.
First, a region where a specific object is located in an image to which an image tag belongs is determined based on the image tag.
As shown in fig. 4-6, for example, the image identification characterizes the region in the image where a particular object is located. Taking the image 410 as an example, according to the image identifier Fig _1, it is determined that the specific object in the image 410 to which the image identifier Fig _1 belongs is in all the areas (for example, the upper 3 areas and the lower 3 areas of the accommodating space where the specific object is located).
Next, position information of an area where the specific object is located is determined from the position file. For example, the location file 510 includes location information of all areas, and after the area where the specific object is located in the image 410 is determined, the location information of the area where the specific object is located is determined from the location file 510.
Finally, a position subfile corresponding to the image is generated based on the position information of the region where the specific object is located. For example, a location subfile 610 corresponding to the image 410 is derived from the location file 510. The location subfile 610 includes, for example, the location information of the area in which the specific object is located in the image 410. For example, if the specific object in the image 410 is in all areas of the accommodating space, the location subfile 610 corresponding to the image 410 is automatically generated based on the location file 510, where the location subfile 610 includes, for example, the following location information: [(a1,a2),(a3,a4)], [(b1,b2),(b3,b4)], [(c1,c2),(c3,c4)], [(d1,d2),(d3,d4)], [(e1,e2),(e3,e4)], [(f1,f2),(f3,f4)].
Similarly, the image identifier Fig _2 of the image 420 represents that the specific object in the image 420 is in the left and right areas, and at this time, a location subfile 620 corresponding to the image 420 is automatically generated according to the location file 510, where the location subfile 620 includes, for example, the location information of the area in which the specific object in the image 420 is located. Similarly, the location subfile 630 of the image 430 is automatically generated according to the location file 510, and the detailed process is not described herein.
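The subfile-generation steps above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the identifier names (Fig_1, Fig_2), region keys, coordinates, and the mapping from an identifier to its regions are all hypothetical stand-ins for the rules the disclosure leaves to the practitioner.

```python
# Location file: position of every region of the accommodating space,
# stored as [(x1, y1), (x2, y2)] top-left / bottom-right corners.
# All coordinates below are illustrative.
location_file = {
    "a": [(0, 0), (10, 10)],
    "b": [(10, 0), (20, 10)],
    "c": [(20, 0), (30, 10)],
    "d": [(0, 10), (10, 20)],
    "e": [(10, 10), (20, 20)],
    "f": [(20, 10), (30, 20)],
}

# Assumed rule: each image identifier lists the regions in which the
# specific object appears (e.g. "Fig_1" covers all six regions,
# "Fig_2" only the left and right columns).
image_regions = {
    "Fig_1": ["a", "b", "c", "d", "e", "f"],
    "Fig_2": ["a", "c", "d", "f"],
}

def generate_subfile(image_id, image_regions, location_file):
    """Return the location subfile for one image: the position
    information of every region the image identifier covers."""
    return {r: location_file[r] for r in image_regions[image_id]}

# Subfile 610 for image 410 (identifier Fig_1) covers all regions.
subfile_610 = generate_subfile("Fig_1", image_regions, location_file)
```

Because the subfile is derived mechanically from the identifier and the shared location file, no region needs to be annotated by hand per image, which is the labor saving the disclosure describes.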
The position subfile corresponding to each image is automatically generated based on the image identifiers of the multiple images and the position file that includes the position information of the multiple regions, and each position subfile includes the position information of the region in which the specific object in the corresponding image is located. Automatically generating the position subfiles of the images in this way improves image-labeling efficiency and saves labor and time cost.
The following describes a specific manner of acquiring the location file.
Fig. 7 schematically shows a schematic view of an accommodation space according to an embodiment of the present disclosure.
As shown in fig. 7, the accommodating space includes a plurality of sub-accommodating spaces, each of which includes a plurality of regions. For example, the accommodating space includes sub-accommodating spaces 710, 720, and the sub-accommodating space 710 includes a plurality of regions 711, 712, 713, for example. The sub-receiving space 720 includes, for example, a plurality of regions 721, 722, 723.
Wherein, obtaining the location file comprises: position information of the plurality of areas in each sub-accommodation space is acquired, and a position file is generated based on the position information of the plurality of areas in each sub-accommodation space.
According to the embodiment of the present disclosure, in the process of acquiring images with a camera, the positions of different sub-accommodation spaces are deformed in the image due to the camera's viewing angle (the differences are not merely translations). Therefore, in order to ensure the accuracy of the region position information in the obtained position file, the position information of the plurality of sub-accommodation spaces is acquired separately, and the combination of this position information constitutes the position file.
The following takes acquiring the position information of a plurality of areas in one sub-accommodation space as an example.
First, position information of a specific area in the sub-accommodation space is acquired.
According to the embodiment of the present disclosure, the specific region in the sub-accommodation space is, for example, one of the plurality of regions in the sub-accommodation space. Taking the sub-accommodation space 710 as an example, the specific region may be the region 711, whose position information is, for example, [(a1,a2),(a3,a4)], where a1 is, for example, the x-axis coordinate of the upper-left corner of the region 711, a2 the y-axis coordinate of the upper-left corner, a3 the x-axis coordinate of the lower-right corner, and a4 the y-axis coordinate of the lower-right corner.
Second, the location information of the other areas in the corresponding sub-accommodation space is determined based on the location information of the specific area.
For example, the position information of the other areas in the sub-accommodation space is automatically generated based on the position information of the specific area. For example, based on the position information [(a1,a2),(a3,a4)] of the region 711, the position information [(a1+m,a2),(a3+m,a4)] of the region 712 is automatically generated, where the regions 711, 712, and 713 are, for example, of the same size, and m is, for example, the length of each region in the x direction. Similarly, the position information [(a1+2*m,a2),(a3+2*m,a4)] of the region 713 is automatically generated.
Similarly, for example, the position information of the specific region (for example, the specific region is the region 721) of the sub accommodation space 720 is acquired, and the position information of the regions 722 and 723 is automatically generated according to the position information of the region 721.
Of course, as long as the accuracy of the position information of each region in the position file meets the accuracy requirement, the position file may also be obtained without acquiring the position information of the sub-accommodation spaces separately. For example, the position information of a specific region in the accommodating space may be acquired, and the position information of the other regions in the accommodating space may be automatically generated from it. For example, if the specific region in the accommodating space is the region 711 with position information [(a1,a2),(a3,a4)], then the position information [(a1+m,a2),(a3+m,a4)] of the region 712, [(a1+2*m,a2),(a3+2*m,a4)] of the region 713, [(a1,a2+n),(a3,a4+n)] of the region 721, [(a1+m,a2+n),(a3+m,a4+n)] of the region 722, and [(a1+2*m,a2+n),(a3+2*m,a4+n)] of the region 723 are automatically generated, where n is, for example, the width of each region in the y direction.
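The anchor-plus-offset derivation above can be written out directly. This is a sketch under the assumptions the text states (all regions are the same size, arranged in a regular grid with x-step m and y-step n); the grid dimensions and coordinates below are illustrative, not taken from the patent.

```python
def region_positions(anchor, m, n, cols, rows):
    """Derive the position of every region in a grid of equally sized
    regions from the position of one specific (anchor) region.

    anchor -- [(x1, y1), (x2, y2)] corners of the anchor region
    m      -- length of each region in the x direction
    n      -- width of each region in the y direction
    cols   -- regions per row (e.g. 3 per sub-accommodation space)
    rows   -- number of rows (e.g. 2 stacked sub-accommodation spaces)
    """
    (x1, y1), (x2, y2) = anchor
    positions = []
    for row in range(rows):          # step down by n per row
        for col in range(cols):      # step right by m per column
            dx, dy = col * m, row * n
            positions.append([(x1 + dx, y1 + dy), (x2 + dx, y2 + dy)])
    return positions

# Region 711 as the anchor: 3 regions per sub-space, 2 sub-spaces in y.
grid = region_positions([(0, 0), (4, 3)], m=4, n=3, cols=3, rows=2)
```

Deriving all six regions from one manually measured anchor is what lets the location file be built from a single measurement per camera view instead of one per region.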
According to the embodiment of the disclosure, a plurality of images of the accommodating space are obtained from different viewpoints by a plurality of cameras, and the image identifier of each image comprises corresponding camera information.
That is, acquiring a plurality of images from different viewpoints with a plurality of cameras yields a rich image sample. The image identifier of each image may also include the information of the camera that acquired the image.
The location file includes, for example, a plurality of location files, each including camera information; that is, each location file includes the position information of a plurality of regions in images acquired by the same camera from the same angle of view. Because each location file includes the corresponding camera information, the matching location file can be obtained from the plurality of location files based on the camera information in the image identifier. In other words, when the location subfiles of a plurality of images are automatically generated based on a location file, the camera corresponding to that location file coincides with the camera that acquired those images: the images are acquired by a given camera, and the location file contains the position information of the plurality of areas in the images acquired by that camera.
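The per-camera lookup can be sketched as follows. The identifier format ("cam2_Fig_1") and the way camera information is embedded in it are hypothetical; the disclosure only requires that the image identifier carry the camera information in some recoverable form.

```python
# One location file per camera: the same regions have different
# coordinates under different viewpoints. Coordinates are illustrative.
location_files = {
    "cam1": {"a": [(0, 0), (10, 10)], "b": [(10, 0), (20, 10)]},
    "cam2": {"a": [(2, 1), (12, 11)], "b": [(12, 1), (22, 11)]},
}

def select_location_file(image_id, location_files):
    """Pick the location file whose camera information matches the
    camera information embedded in the image identifier."""
    camera = image_id.split("_", 1)[0]  # e.g. "cam2" from "cam2_Fig_1"
    return location_files[camera]

chosen = select_location_file("cam2_Fig_1", location_files)
```

Keeping one location file per camera is what absorbs the viewpoint-dependent deformation discussed earlier: each subfile is generated against coordinates measured under the same viewing angle as the image it labels.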
Fig. 8 schematically shows a block diagram of an information processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 8, the information processing apparatus 800 includes a first acquisition module 810, a second acquisition module 820, and a generation module 830.
The first obtaining module 810 may be configured to obtain a plurality of images of an accommodating space, where each of the plurality of images has an image identifier, the accommodating space includes a plurality of areas, the areas can be used for accommodating a specific object, and the image identifier represents an area in the image where the specific object is located. According to an embodiment of the present disclosure, the first obtaining module 810 may perform, for example, the operation S310 described above with reference to fig. 3, which is not described herein again.
The second obtaining module 820 may be configured to obtain a location file, where the location file includes location information of a plurality of areas. According to an embodiment of the present disclosure, the second obtaining module 820 may perform, for example, the operation S320 described above with reference to fig. 3, which is not described herein again.
The generating module 830 may be configured to generate a plurality of location subfiles corresponding to the plurality of images respectively based on the plurality of image identifications and the location files, where each location subfile includes location information of a region where a specific object in the corresponding image is located. According to the embodiment of the present disclosure, the generating module 830 may perform the operation S330 described above with reference to fig. 3, for example, and is not described herein again.
According to an embodiment of the present disclosure, the generating a plurality of location subfiles respectively corresponding to a plurality of images based on a plurality of image identifiers and location files includes: the method comprises the steps of determining the area where a specific object is located in an image to which an image identifier belongs based on the image identifier, determining the position information of the area where the specific object is located from a position file, and generating a position subfile corresponding to the image based on the position information of the area where the specific object is located.
According to an embodiment of the present disclosure, the accommodating space includes: a plurality of sub-receiving spaces, each sub-receiving space including a plurality of regions. Obtaining a location file, comprising: position information of the plurality of areas in each sub-accommodation space is acquired, and a position file is generated based on the position information of the plurality of areas in each sub-accommodation space.
According to an embodiment of the present disclosure, the acquiring the position information of the plurality of regions of each sub-accommodation space includes: the position information of a specific area in the sub-accommodation space is acquired, and the position information of other areas in the corresponding sub-accommodation space is determined based on the position information of the specific area.
According to the embodiment of the disclosure, the plurality of images of the accommodating space are obtained from different viewpoints by the plurality of cameras, and the image identifier of each image comprises corresponding camera information.
According to an embodiment of the present disclosure, the location file includes: a plurality of location files, each location file including camera information. Obtaining a location file, comprising: and acquiring a corresponding position file from the plurality of position files based on the camera information in the image identification.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any plurality of the first obtaining module 810, the second obtaining module 820 and the generating module 830 may be combined and implemented in one module, or any one of them may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the first obtaining module 810, the second obtaining module 820 and the generating module 830 may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or may be implemented by any one of three implementations of software, hardware and firmware, or any suitable combination of any of the three. Alternatively, at least one of the first obtaining module 810, the second obtaining module 820 and the generating module 830 may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
FIG. 9 schematically shows a block diagram of a computer system suitable for information processing according to an embodiment of the present disclosure. The computer system illustrated in FIG. 9 is only one example and should not impose any limitations on the scope of use or functionality of embodiments of the disclosure.
As shown in fig. 9, a computer system 900 according to an embodiment of the present disclosure includes a processor 901 which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)902 or a program loaded from a storage section 908 into a Random Access Memory (RAM) 903. Processor 901 may comprise, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 901 may also include on-board memory for caching purposes. The processor 901 may comprise a single processing unit or a plurality of processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
In the RAM 903, various programs and data necessary for the operation of the system 900 are stored. The processor 901, the ROM 902, and the RAM 903 are connected to each other through a bus 904. The processor 901 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 902 and/or the RAM 903. Note that the programs may also be stored in one or more memories other than the ROM 902 and the RAM 903. The processor 901 may also perform various operations of the method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the system 900 may also include an input/output (I/O) interface 905, which is also connected to the bus 904. The system 900 may also include one or more of the following components connected to the I/O interface 905: an input portion 906 including a keyboard, a mouse, and the like; an output portion 907 including components such as a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD) and a speaker; a storage portion 908 including a hard disk and the like; and a communication portion 909 including a network interface card such as a LAN card or a modem. The communication portion 909 performs communication processing via a network such as the Internet. A drive 910 is also connected to the I/O interface 905 as necessary. A removable medium 911, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 910 as necessary, so that a computer program read therefrom is installed into the storage portion 908 as necessary.
According to embodiments of the present disclosure, method flows according to embodiments of the present disclosure may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable storage medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 909, and/or installed from the removable medium 911. The computer program, when executed by the processor 901, performs the above-described functions defined in the system of the embodiment of the present disclosure. The systems, devices, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example and without limitation: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
For example, according to embodiments of the present disclosure, a computer-readable storage medium may include the ROM 902 and/or the RAM 903 described above and/or one or more memories other than the ROM 902 and the RAM 903.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments and/or claims of the present disclosure may be combined and/or sub-combined in various ways, even if such combinations are not expressly recited in the present disclosure. In particular, such combinations and/or sub-combinations may be made without departing from the spirit or teaching of the present disclosure, and all of them fall within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (14)

1. An information processing method comprising:
acquiring a plurality of images related to an accommodating space, wherein each of the images has an image identifier, the accommodating space comprises a plurality of areas, the areas can be used for accommodating a specific object, and the image identifiers represent areas where the specific object in the images is located;
acquiring a location file, wherein the location file comprises location information of the plurality of areas; and
generating a plurality of position subfiles respectively corresponding to the plurality of images based on the plurality of image identifications and the location file, wherein each position subfile comprises position information of an area where a specific object in the corresponding image is located.
2. The method of claim 1, wherein the generating a plurality of location subfiles corresponding to the plurality of images, respectively, based on the plurality of image identifications and the location file comprises:
determining the area of a specific object in the image to which the image identifier belongs based on the image identifier;
determining the position information of the area where the specific object is located from the position file;
and generating a position subfile corresponding to the image based on the position information of the area where the specific object is located.
3. The method of claim 1, wherein:
the accommodating space includes: a plurality of sub-receiving spaces, each of which includes a plurality of regions;
the acquiring the location file comprises:
acquiring position information of a plurality of areas in each sub-accommodation space;
and generating the position file based on the position information of the plurality of areas in each sub accommodation space.
4. The method of claim 3, wherein said obtaining location information for a plurality of regions of each sub-accommodation space comprises:
acquiring position information of a specific area in the sub-accommodation space;
and determining the position information of other areas in the corresponding sub-accommodation spaces based on the position information of the specific area.
5. The method of claim 1, wherein the plurality of images of the accommodation space are obtained from different perspectives by a plurality of cameras, the image identification of each image comprising corresponding camera information.
6. The method of claim 5, wherein:
the location file includes: a plurality of location files, each location file including the camera information;
the acquiring the location file comprises: acquiring corresponding location files from the plurality of location files based on the camera information in the image identification.
7. An information processing apparatus comprising:
the device comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module acquires a plurality of images related to an accommodating space, each image of the plurality of images has an image identifier, the accommodating space comprises a plurality of areas, the areas can be used for accommodating a specific object, and the image identifiers represent the areas where the specific object is located in the images;
a second obtaining module, configured to obtain a location file, where the location file includes location information of the plurality of areas; and
a generating module, configured to generate a plurality of position subfiles respectively corresponding to the images based on the image identifications and the location file, wherein each position subfile comprises position information of an area where a specific object in the corresponding image is located.
8. The apparatus of claim 7, wherein the generating a plurality of location subfiles corresponding to the plurality of images, respectively, based on the plurality of image identifications and the location file comprises:
determining the area of a specific object in the image to which the image identifier belongs based on the image identifier;
determining the position information of the area where the specific object is located from the position file;
and generating a position subfile corresponding to the image based on the position information of the area where the specific object is located.
9. The apparatus of claim 7, wherein:
the accommodating space includes: a plurality of sub-receiving spaces, each of which includes a plurality of regions;
the acquiring the location file comprises:
acquiring position information of a plurality of areas in each sub-accommodation space;
and generating the position file based on the position information of the plurality of areas in each sub accommodation space.
10. The apparatus of claim 9, wherein said obtaining location information for a plurality of regions of each sub-accommodation space comprises:
acquiring position information of a specific area in the sub-accommodation space;
and determining the position information of other areas in the corresponding sub-accommodation spaces based on the position information of the specific area.
11. The apparatus of claim 7, wherein the plurality of images of the receiving space are obtained from different perspectives by a plurality of cameras, the image identification of each image comprising corresponding camera information.
12. The apparatus of claim 11, wherein:
the location file includes: a plurality of location files, each location file including the camera information;
the acquiring the location file comprises: acquiring corresponding location files from the plurality of location files based on the camera information in the image identification.
13. An information processing system comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-6.
14. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1 to 6.
CN201910311639.4A 2019-04-17 2019-04-17 Information processing method, device, system, computer readable storage medium Pending CN111737509A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910311639.4A CN111737509A (en) 2019-04-17 2019-04-17 Information processing method, device, system, computer readable storage medium


Publications (1)

Publication Number Publication Date
CN111737509A true CN111737509A (en) 2020-10-02

Family

ID=72645850

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910311639.4A Pending CN111737509A (en) 2019-04-17 2019-04-17 Information processing method, device, system, computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111737509A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010035525A1 (en) * 2008-09-25 2010-04-01 ブラザー工業株式会社 Goods management system
CN108922028A (en) * 2018-08-03 2018-11-30 济南每日优鲜便利购网络科技有限公司 Automatic vending equipment, the control method and system of automatic vending equipment
CN109145901A (en) * 2018-08-14 2019-01-04 腾讯科技(深圳)有限公司 Item identification method, device, computer readable storage medium and computer equipment


Similar Documents

Publication Publication Date Title
CN109308681B (en) Image processing method and device
CN111523977B (en) Method, device, computing equipment and medium for creating wave order set
US11321822B2 (en) Determining image defects using image comparisons
US10614621B2 (en) Method and apparatus for presenting information
US20200193372A1 (en) Information processing method and apparatus
CN109255767B (en) Image processing method and device
CN110619807B (en) Method and device for generating global thermodynamic diagram
US20210200971A1 (en) Image processing method and apparatus
US20210264198A1 (en) Positioning method and apparatus
CN112329762A (en) Image processing method, model training method, device, computer device and medium
CN111768258A (en) Method, device, electronic equipment and medium for identifying abnormal order
JP6249579B1 (en) Warehouse management method and warehouse management system
CN111857674A (en) Business product generation method and device, electronic equipment and readable storage medium
US20230123879A1 (en) Method and apparatus for positioning express parcel
CN112965916B (en) Page testing method, page testing device, electronic equipment and readable storage medium
CN107329981B (en) Page detection method and device
CN110489326B (en) IDS-based HTTPAPI debugging method device, medium and equipment
CN112348427A (en) Order processing method, device, computer system and readable storage medium
CN111160410A (en) Object detection method and device
US20150292890A1 (en) Method and apparatus for displaying road map
CN111401182B (en) Image detection method and device for feeding rail
CN111737509A (en) Information processing method, device, system, computer readable storage medium
CN111369624B (en) Positioning method and device
CN111488890B (en) Training method and device for object detection model
CN110532186B (en) Method, device, electronic equipment and storage medium for testing by using verification code

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination