WO2021018019A1 - Image acquisition method and device, electronic equipment, and computer storage medium - Google Patents

Image acquisition method and device, electronic equipment, and computer storage medium

Info

Publication number
WO2021018019A1
WO2021018019A1 PCT/CN2020/104014 CN2020104014W
Authority
WO
WIPO (PCT)
Prior art keywords
image
information
shelf
collection
target object
Prior art date
Application number
PCT/CN2020/104014
Other languages
English (en)
French (fr)
Inventor
毛璐娜
宫晨
周立
周士天
Original Assignee
阿里巴巴集团控股有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司 (Alibaba Group Holding Limited)
Publication of WO2021018019A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/13: Edge detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/10: Terrestrial scenes

Definitions

  • the embodiments of the present invention relate to the field of computer technology, in particular to an image acquisition method, device, electronic equipment, and computer storage medium.
  • embodiments of the present invention provide an image acquisition solution to solve some or all of the above-mentioned problems.
  • a shelf image collection method, which includes: acquiring shelf images collected according to the indication of first guide information, wherein the shelf is used to carry commodities and the first guide information indicates the image collection path for the shelf; obtaining an edge detection result of shelf edge detection performed on the shelf image; and, if the edge detection result indicates that the shelf image includes a shelf edge, acquiring second guide information indicating a new image collection path or third guide information indicating the end of collection.
  • a commodity information processing method, which includes: collecting shelf image data according to acquired first guidance information, wherein the first guidance information is used to indicate the image acquisition path; identifying the image data to obtain the commodity information on the shelf and information on whether a shelf edge is included; and, if it is determined that the image data contains shelf edge information, judging according to the commodity information whether all commodity information is included in all the collected image data, and acquiring, according to the judgment result, second guidance information indicating a new image collection path or third guidance information indicating the end of collection.
  • a shelf image collection method, which includes: displaying first collection prompt information for the shelf commodities, wherein the first collection prompt information is used to indicate the collection position of the shelf commodities when images are collected along the image collection path; obtaining the image collected according to the first collection prompt information and recognizing the obtained image; and, if the recognition result indicates that the image includes a shelf edge, displaying second collection prompt information that indicates a new image collection path and instructs the user to continue image collection.
  • a client terminal, which includes: a display interface for displaying first collection prompt information, where the first collection prompt information is used to instruct image collection of the target object along an image collection path; the display interface is also used to display second collection prompt information, which, when the collected image includes an edge of the target object, instructs image collection of the target object along a new image collection path.
  • a commodity information processing method, which includes: collecting shelf image data; processing the image data to identify the commodity information on the shelf; and determining the commodity statistical information of the shelf according to the identified commodity information.
  • a method for processing commodity information, which includes: in response to a shooting operation initiated by a user, invoking an image acquisition device of a client to shoot image data of a shelf; processing the image data to identify the commodity information on the shelf; and determining the commodity statistical information of the shelf according to the identified commodity information.
  • a method for processing merchandise replenishment, which includes: in response to a replenishment operation initiated by a user, calling an image acquisition device to capture image data of a shelf; performing identification processing on the image data to identify the commodity information on the shelf; and determining the commodities to be replenished according to the commodity information on the shelf.
  • an image acquisition method, which includes: obtaining a detection result of real-time target object edge detection performed on a captured image, wherein the captured image contains partial image information of the target object; if the detection result indicates that an edge of the target object is detected in the image, acquiring the posture data of the image capture device that collected the image; and generating corresponding guidance information based on the posture data, the guidance information guiding the user to perform continuous image collection of the target object, so that the collected multiple images can be used to form complete image information of the target object.
  • an image acquisition method, which includes: acquiring the posture data of the image acquisition device during image acquisition of the target object; and generating corresponding guidance information according to the posture data, the guidance information guiding the user to perform continuous image collection of the target object.
  • an image acquisition device, which includes: a detection module for obtaining a detection result of real-time target object edge detection performed on the collected image, wherein the collected image contains partial image information of the target object; a first acquisition module for acquiring the posture data of the image acquisition device that collected the image if the detection result indicates that an edge of the target object is detected in the image; and a generation module for generating corresponding guidance information according to the posture data and guiding the user, through the guidance information, to perform continuous image collection of the target object, so that the collected multiple images can be used to form complete image information of the target object.
  • an electronic device including: a processor, a memory, a communication interface, and a communication bus.
  • the processor, the memory, and the communication interface communicate with each other through the communication bus; the memory is used to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the method described in any one of the first to third aspects and the fifth to ninth aspects.
  • a computer storage medium on which a computer program is stored; when the program is executed by a processor, it implements the method described in any one of the first to third aspects and the fifth to ninth aspects.
  • Fig. 1 is a flowchart of the steps of an image acquisition method according to the first embodiment of the present invention.
  • Fig. 2 is a flowchart of the steps of an image acquisition method according to the second embodiment of the present invention.
  • Fig. 3 is a flowchart of the steps of an image acquisition method according to the third embodiment of the present invention.
  • Fig. 4 is a flowchart of the steps of an image acquisition method according to the fourth embodiment of the present invention.
  • Fig. 5a is a flowchart of the steps of use scenario 1 of the present invention.
  • Fig. 5b is a flowchart of the steps of use scenario 2 of the present invention.
  • Fig. 5c is a schematic diagram of the segmentation path in use scenario 2 of the present invention.
  • Fig. 5d is a schematic diagram of the shooting interface in use scenario 2 of the present invention.
  • Fig. 5e is a flowchart of the steps of use scenario 3 of the present invention.
  • Fig. 5f is a flowchart of the steps of use scenario 4 of the present invention.
  • Fig. 5g is a schematic diagram of a display interface of a client terminal in use scenario 5 of the present invention.
  • Fig. 5h is a flowchart of the steps of use scenario 6 of the present invention.
  • Fig. 5i is a flowchart of the steps of use scenario 7 of the present invention.
  • Fig. 5j is a flowchart of the steps of use scenario 8 of the present invention.
  • Fig. 5k is an information interaction diagram of a user, an image acquisition device, and a server in use scenario 9 of the present invention.
  • Fig. 6 is a flowchart of the steps of an image acquisition method according to the fifth embodiment of the present invention.
  • Fig. 7 is a flowchart of the steps of an image acquisition method according to the sixth embodiment of the present invention.
  • Fig. 8 is a structural block diagram of an image acquisition device according to the seventh embodiment of the present invention.
  • Fig. 9 is a structural block diagram of an image acquisition device according to the eighth embodiment of the present invention.
  • Fig. 10 is a schematic structural diagram of an electronic device according to the ninth embodiment of the present invention.
  • Step S102 Obtain a detection result of real-time target object edge detection on the collected image.
  • the collected image contains part of the image information of the target object
  • the target object edge detection is used to detect whether the collected image contains the edge of the target object.
  • the detection of the edge of the target object can be performed on the client, or it can be performed on the server and the detection result then sent to the client. It can be implemented using any appropriate model, algorithm, or other method; for example, a trained neural network model capable of edge detection of target objects, such as a Convolutional Neural Network (CNN), can be used to perform edge detection on the collected images.
  • a feature extraction algorithm is used to perform feature extraction on the collected image, and based on the extracted features, it is determined whether the image contains the edge of the target object, and then the detection result is generated.
  • from the detection result, it can be judged whether the user has captured the edge of the target object, which provides a reference for subsequently generating appropriate guidance information, avoids erroneous operations during image collection, and ensures that complete image information can be collected.
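The patent does not fix a particular edge detector, only that the collected frame is tested for the presence of a target-object edge. As a purely illustrative sketch of that input/output contract (the gradient heuristic, threshold, and synthetic frame below are assumptions, not the patented method, which uses a trained CNN):

```python
import numpy as np

def contains_strong_edge(image: np.ndarray, threshold: float = 50.0) -> bool:
    # Horizontal gradient: absolute difference between neighbouring columns.
    gx = np.abs(np.diff(image.astype(float), axis=1))
    # A shelf-like vertical edge shows up as one column whose gradient stays
    # high over most rows; compare the strongest column mean to a threshold.
    column_strength = gx.mean(axis=0)
    return bool(column_strength.max() > threshold)

# Synthetic frame: left half dark ("shelf"), right half bright ("background").
frame = np.zeros((40, 40))
frame[:, 20:] = 255.0
```

A real deployment would replace this heuristic with the trained lightweight CNN discussed in the text; only the boolean "edge present" output matters to the guidance logic.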
  • if the detection result indicates that an edge of the target object is detected, step S104 is executed; otherwise, guidance information instructing the user to continue moving and shooting in the current moving direction can be generated directly.
  • Step S104 If the detection result indicates that the edge of the target object is detected in the image, acquire posture data of the image capture device that captured the image.
  • the posture data of the image capture device is used to characterize its current state, and the posture data of the image capture device includes, but is not limited to, acceleration information and/or angular velocity information in the spatial coordinate system.
  • for example, the acceleration information and/or angular velocity information can be used to determine that the image acquisition device is currently tilted upward by 45 degrees, and so on.
  • from the posture data, it can be determined whether the user intends to successively shoot different positions of the target object; the generated guidance information then matches this intention, guiding the user to collect images accurately and ensuring that complete image information of the target object can be collected.
  • Step S106 Generate corresponding guidance information according to the posture data, and guide the user to perform continuous image collection of the target object through the guidance information, so as to use the collected multiple images to form complete image information of the target object.
  • the image acquisition device may be provided with a corresponding relationship between the posture data and the guidance information or guidance keywords.
  • based on this correspondence, the guidance information can be generated directly from the posture data, or a guidance keyword can first be determined from the posture data and the corresponding guidance information then generated from the keyword. For example, if the posture data corresponds to the guiding keyword "move up", guidance information such as "please move up one level to shoot" can be generated. Through the guidance information, the user can be effectively guided to continue image collection, so that complete image information of the target object can be formed from the collected multiple images.
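Such a correspondence table could be as simple as a keyword-to-message lookup; the posture names and message strings below are invented for the example, since the patent only requires that some mapping exist:

```python
# Hypothetical posture-keyword -> guidance-message mapping; the keys and
# messages are illustrative placeholders, not values from the patent.
GUIDANCE = {
    "tilt_up":   "Please move up one level to shoot.",
    "tilt_down": "Please move down one level to shoot.",
    "retract":   "Collection finished.",
}

def guidance_for(posture: str) -> str:
    # Unknown postures fall back to a generic "keep going" prompt.
    return GUIDANCE.get(posture, "Please keep moving in the current direction.")
```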
  • steps S102 to S106 can be repeated multiple times; once it is determined, from the posture data obtained in step S104, that the user has no intention to continue collection, guidance information instructing the user to end image collection can be generated. After collection ends, the multiple collected images can be used to form complete image information of the target object.
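The repetition of steps S102 to S106 can be sketched as a loop; the pre-computed edge flags and posture labels below stand in for the real detector and sensors and are assumptions of this sketch:

```python
def collection_loop(frames, edges, postures):
    """Walk pre-recorded frames and return the guidance messages shown.
    edges[i] says whether frame i contains a target-object edge;
    postures[i] is the posture label read when it does."""
    shown, collected = [], []
    for frame, has_edge, posture in zip(frames, edges, postures):
        collected.append(frame)                  # S102: acquire frame, run detection
        if not has_edge:
            shown.append("keep moving")          # no edge: keep current direction
        elif posture == "steady":                # S104: edge seen, no intent to continue
            shown.append("end collection")       # S106: guide user to finish
            break
        else:                                    # edge seen, intent to continue
            shown.append("guide: " + posture)    # S106: row-change guidance
    return shown, collected
```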
  • the image acquisition method in this embodiment is particularly suitable for use scenarios where goods on the shelf are not standardly placed.
  • for example, shelves in a small retail store may be tightly placed, with messy and irregularly arranged products.
  • the image collection method of this embodiment can overcome these problems, realize the collection of complete shelf images, and obtain clear goods information for subsequent identification of the complete image to determine the goods on the shelf.
  • drones may be used in other fields, but drone-based shooting techniques are not suitable for the use scenes of this application and are difficult to transfer to them; similarly, techniques from the electronic price tag scenario are also difficult to apply in this use scenario.
  • in this way, target object edge detection is performed on the captured image in real time; when an edge is detected, the posture data of the image capture device is obtained and corresponding guidance information is generated according to it, and the guidance information guides users to collect images in a standardized manner, so that image collection of all parts of the target object is finally completed, omissions are avoided, and a complete image of the target object is obtained.
  • the image acquisition method of this embodiment can be executed by any appropriate electronic device with data processing capabilities, including but not limited to: servers, mobile terminals (such as tablet computers, mobile phones, etc.), and PCs.
  • the image acquisition method of this embodiment includes the aforementioned steps S102 to S106.
  • step S102 the method further includes:
  • Step S100 Obtain a lightweight neural network model dynamically issued to the image acquisition device for edge detection of the target object.
  • this step is optional. If this step is executed, it can be executed at any appropriate timing before step S102.
  • in order to ensure timely detection of whether the captured image contains an edge of the target object and to ensure the accuracy of the generated guidance information, the image capture device locally holds a dynamically issued lightweight neural network model; using this model, images can be detected in real time locally on the image acquisition device without transmission to the server, which greatly improves the speed and efficiency of detection.
  • the lightweight neural network model is also called a miniature neural network model, which refers to a neural network model that requires a small number of parameters and a small computational cost. Because of its low computing overhead, it can be deployed on image acquisition equipment with limited computing resources.
  • the lightweight neural network model may be a pre-trained lightweight convolutional neural network model.
  • the convolutional neural network model has an input layer, a hidden layer and an output layer.
  • sample images of target objects, such as shelves, large machinery, or large containers, can be labeled, and these labeled images are used to train the convolutional neural network model until the trained model can correctly identify whether an image contains an edge of the target object, such as a shelf edge. After that, the trained convolutional neural network model can be dynamically sent to the image acquisition device.
  • step S102 can be implemented as: using the lightweight neural network model to perform real-time edge detection of the target object on the collected image to obtain the detection result.
  • the detection result indicates whether the current image contains the edge of the target object.
  • the edge detection of the target object can be performed quickly, efficiently, and accurately locally on the image acquisition device, thereby ensuring the timeliness of generating the guidance information.
  • in this way, target object edge detection is performed on the captured image in real time; when an edge is detected, the posture data of the image capture device is obtained and corresponding guidance information is generated according to it, and the guidance information guides users to collect images in a standardized manner, so that image collection of all parts of the target object is finally completed, omissions are avoided, and a complete image of the target object is obtained.
  • by dynamically sending the lightweight neural network model to the image acquisition device, real-time edge detection of the target object can be performed locally on the acquired images, which improves detection timeliness while ensuring detection capability and in turn ensures the timeliness of subsequent guidance information generation; compared with sending the collected images to a back-end server for detection and then returning the detection results, detection performed locally on the image acquisition device is not limited by network transmission speed, offers better reliability, and achieves higher speed and efficiency.
  • the image acquisition method of this embodiment can be executed by any appropriate electronic device with data processing capabilities, including but not limited to: servers, mobile terminals (such as tablet computers, mobile phones, etc.), and PCs.
  • FIG. 3 there is shown a flow chart of the steps of an image acquisition method according to the third embodiment of the present invention.
  • the image acquisition method of this embodiment includes the aforementioned steps S102 to S106.
  • the method of this embodiment may or may not include step S100.
  • step S102 may be implemented in the implementation manner in the second embodiment.
  • the step S104 that is, the acquiring the posture data of the image acquisition device that acquires the image may be implemented as: acquiring the acceleration information and/or angular velocity information of the image acquisition device in the spatial coordinate system.
  • the spatial coordinate system includes X-axis, Y-axis, and Z-axis, and the acceleration information on these three axes can be obtained through an accelerometer configured in the image acquisition device.
  • the angular velocity information on these three axes can be acquired through a gyroscope configured in the image acquisition device.
  • acceleration information and/or angular velocity information in the spatial coordinate system may be acquired in different ways, which is not limited in this embodiment.
  • from the acceleration information and/or angular velocity information of the image acquisition device in the spatial coordinate system, it can be determined whether the image acquisition device is tilted upward or downward, and thus whether the user intends to change rows and continue shooting.
  • if, for a period of time after capturing the edge of the target object, the image capture device maintains a certain inclination or the user quickly tends to put the device away, it means that the user has captured the complete image information of the target object and has no intention of changing rows to continue shooting; in this case, guidance information instructing the user to end image collection can be generated.
  • if instead the posture data indicates an intention to continue shooting, step S106 can be performed to generate guidance information that guides the user to continue image collection.
  • step S106 includes the following sub-steps:
  • Sub-step S1061 Determine the current posture of the image acquisition device according to the acceleration information and/or angular velocity information.
  • from the acceleration information of the image acquisition device on the X, Y, and Z axes, it can be determined whether the user moves and/or rotates the image acquisition device; from its angular velocity information on the X, Y, and Z axes, the inclination angle by which it deviates from the horizontal or vertical state can be determined. From the acceleration information and/or angular velocity information together, the current posture of the image acquisition device can be determined.
  • for example, the X axis points horizontally to the right, the Y axis points vertically to the front, and the Z axis points directly above the screen of the image capture device.
  • when it is determined from the angular velocity information that the image capture device has an angular velocity on the X axis, it can be determined to be in a tilted state, and whether the current posture is tilted upward or downward can be determined from the value of the angular velocity. In the same way, when it is determined from the acceleration information that there is acceleration on the Z axis, it can be determined that the image acquisition device has a tendency to move upward.
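Following the axis conventions above, a coarse posture classifier could look like this; the thresholds, label names, and the sign convention for tilt direction are illustrative assumptions, not values from the patent:

```python
def classify_posture(gyro_x: float, accel_z: float,
                     gyro_thresh: float = 0.2, accel_thresh: float = 0.5) -> str:
    """Angular velocity on the X axis => tilt (sign assumed to give the
    direction); acceleration on the Z axis => vertical movement."""
    if gyro_x > gyro_thresh:
        return "tilt_up"
    if gyro_x < -gyro_thresh:
        return "tilt_down"
    if accel_z > accel_thresh:
        return "move_up"
    if accel_z < -accel_thresh:
        return "move_down"
    return "steady"
```

A real implementation would read these values from the device's accelerometer and gyroscope and likely smooth them over a short window before classifying.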
  • Sub-step S1062 generating, according to the current posture, guidance information that instructs the user to move in a direction matching the current posture for continuous image collection.
  • the correspondence between current postures and guidance information or guidance keywords can be set in the image acquisition device; after the current posture is determined, the matching guidance information or guidance keyword is found from this correspondence, and guidance information instructing the user to move in the direction matching the current posture to continue image collection is generated.
  • for example, if the current posture indicates an upward tilt, guidance information instructing the user to move up and continue image collection can be generated.
  • if the current posture indicates a downward tilt, guidance information instructing the user to move down and continue image collection can be generated.
  • accurate guidance information can be generated to instruct the user to continue shooting through the guidance information, thereby ensuring that the complete image information of the target object can be collected.
  • in this way, target object edge detection is performed on the captured image in real time; when an edge is detected, the posture data of the image capture device is obtained and corresponding guidance information is generated according to it, and the guidance information guides users to collect images in a standardized manner, so that image collection of all parts of the target object is finally completed, omissions are avoided, and a complete image of the target object is obtained.
  • the image acquisition method of this embodiment can be executed by any appropriate electronic device with data processing capabilities, including but not limited to: servers, mobile terminals (such as tablet computers, mobile phones, etc.), and PCs.
  • FIG. 4 there is shown a flow chart of the steps of an image acquisition method according to the fourth embodiment of the present invention.
  • the image acquisition method of this embodiment includes the aforementioned steps S102 to S106.
  • the method may include or not include step S100.
  • step S102 may be implemented in the implementation manner in the second embodiment.
  • Step S104 may adopt the implementation manner of the third embodiment or other implementation manners.
  • step S106 may adopt the implementation manner of the third embodiment.
  • multiple images collected can be used to form complete image information of the target object.
  • forming the complete image information of the target object using the multiple collected images includes: splicing the multiple collected images to obtain a complete image containing the complete image information of the target object.
  • since the image acquisition process of the target object is performed in parts, splicing the collected multiple images yields a complete image containing the complete image information of the target object.
  • the complete image allows users to observe the target object more intuitively.
  • the stitching of the collected multiple images to obtain a complete image containing complete image information of the target object includes the following steps:
  • Step S108 Determine multiple groups of images having an image coincidence relationship from the multiple collected images.
  • each group of images includes two images.
  • an overlapping relationship between two images indicates that they are spatially adjacent; the relative positional relationship between images can therefore be inferred from the overlapping relationship, and image splicing can be performed according to that relative position. This splicing method ensures the accuracy of splicing while also realizing rapid splicing.
  • one way to determine multiple sets of images with an image overlap relationship is: performing feature extraction on each of the multiple collected images to obtain the feature points of each image; then, for any two images, matching their feature points and determining the multiple sets of overlapping images based on the matching result.
  • for feature extraction, the HOG (Histogram of Oriented Gradients) algorithm, the LBP (Local Binary Pattern) algorithm, the Haar-like feature extraction algorithm, or any other appropriate algorithm can be used.
  • the matching result can be determined by checking whether the similarity of the two images meets a certain threshold (specifically, by calculating the distance between feature points to measure similarity): if the distance between the feature points of the two images is less than the preset value, the matching result indicates that the two images have an overlapping relationship; conversely, if the distance is greater than or equal to the preset value, the matching result indicates that they do not overlap.
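The distance test can be sketched as follows; the descriptor arrays (one row per feature point) and the threshold value are placeholders, and a real system would fill them with HOG or LBP descriptors as mentioned above:

```python
import numpy as np

def images_overlap(desc_a: np.ndarray, desc_b: np.ndarray,
                   max_dist: float = 0.5) -> bool:
    # Pairwise Euclidean distances between every descriptor pair.
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    # Overlap is declared if any feature-point pair is closer than the
    # preset value; otherwise the two images are treated as disjoint.
    return bool(dists.min() < max_dist)
```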
  • Step S110 Splicing a plurality of collected images according to the image coincidence relationship, and obtaining a complete image including complete image information of the target object according to the splicing result.
  • the adjacent images of each image are determined according to the overlapping relationship between images, the relative positional relationship is determined according to the positions of the overlapping parts in the two adjacent images, and the multiple images are then stitched to obtain a complete image.
  • the complete image contains complete image information of the target object.
  • for example, if image A and image B have an overlapping part according to the overlapping relationship, and the overlapping part is located on the left side of image A and the right side of image B, then image A can be spliced onto the right side of image B.
  • similarly, if image C and image A have an overlapping part located on the lower side of image C and the upper side of image A, image C can be spliced onto the upper side of image A.
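A minimal horizontal splice, assuming the overlap width between two adjacent images is already known (a real system would recover it from the matched feature positions, and would blend rather than simply drop the duplicate columns):

```python
import numpy as np

def stitch_horizontal(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    """Splice `right` onto the right side of `left`, assuming the last
    `overlap` columns of `left` coincide with the first `overlap`
    columns of `right`."""
    return np.hstack([left, right[:, overlap:]])
```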
  • for example, when the target object is a shelf, the complete image of the shelf can be used to determine whether a certain type of product is placed in a convenient location.
  • alternatively, the splicing of images can be completed by the server: the image capture device uploads the multiple collected images to the back-end server, the server performs the corresponding identification and splicing operations, and the completed complete image is then sent back to the image acquisition device, reducing the data processing burden on the image acquisition device.
  • in this way, target object edge detection is performed on the captured image in real time; when an edge is detected, the posture data of the image capture device is obtained and corresponding guidance information is generated according to it, and the guidance information guides users to collect images in a standardized manner, so that image collection of all parts of the target object is finally completed, omissions are avoided, and a complete image of the target object is obtained.
  • moreover, by dynamically sending the lightweight neural network model to the image acquisition device, real-time edge detection of the target object can be performed locally on the acquired images, which improves detection timeliness while ensuring detection capability and thus ensures the timeliness and accuracy of subsequent guidance information generation; compared with the previous method of sending the collected images to a back-end server for detection and then returning the detection results, detection performed locally on the image acquisition device is not limited by network transmission speed and offers better reliability.
  • the user can observe the target object more directly, and the complete image can be analyzed and processed as needed to obtain the required analysis result.
  • the image acquisition method of this embodiment can be executed by any appropriate electronic device with data processing capabilities, including but not limited to: servers, mobile terminals (such as tablet computers, mobile phones, etc.), and PCs.
  • Fig. 5a shows a flowchart of the steps of the image acquisition method in use scenario 1; the word "Step" in the figure denotes a step.
  • the image acquisition method is described by taking the image acquisition device as a mobile phone and the target object as a shelf as an example. Specifically, the image acquisition method includes the following steps:
  • Step A1 The user starts to photograph the shelf by means of image shooting.
• the user photographs the shelf one image at a time. Since only part of the shelf can be captured in each shot, multiple shots are required, and each image has a certain degree of overlap with the previously captured image, the next image to be captured, and the images corresponding to the shelf positions above and below it.
• the degree of overlap can be set to be greater than or equal to 20% to ensure effective recognition and splicing of subsequent images.
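The 20% overlap requirement can be checked, for example, by comparing the estimated shooting frames of consecutive captures. The rectangle representation below is an illustrative assumption, not part of the disclosure:

```python
def overlap_ratio(frame_a, frame_b):
    # Frames are (x, y, w, h) rectangles in shelf-plane coordinates.
    ax, ay, aw, ah = frame_a
    bx, by, bw, bh = frame_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))  # intersection width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))  # intersection height
    return (ix * iy) / (aw * ah)

def overlap_ok(prev_frame, new_frame, min_ratio=0.20):
    # A new shot is acceptable when it keeps at least 20% overlap with the previous one.
    return overlap_ratio(prev_frame, new_frame) >= min_ratio

# Two 100x80 frames shifted 70 px horizontally share 30% of their area.
print(overlap_ok((0, 0, 100, 80), (70, 0, 100, 80)))  # True
```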
• Step B1 In the process of shooting the shelf, perform shelf edge detection on the captured image in real time. If the shelf edge is detected in the image, proceed to step C1; if the shelf edge is not detected, a prompt is generated directing the user to continue shooting, and step B1 is repeated.
• Step C1 Calculate the acceleration and angular velocity of the mobile phone in the spatial coordinate system (i.e., the X, Y, and Z axes) through the accelerometer and gyroscope of the mobile phone, and judge from the calculation result whether the mobile phone is tilted upward or downward, so as to analyze whether the user intends to continue shooting other parts of the shelf.
• if there is no intention to continue shooting, step D1 is executed; if there is an intention to continue shooting, step E1 is executed.
• Step D1 If the mobile phone keeps a certain angle and its direction hardly changes, it means that the user has finished shooting the whole section of the shelf and has no intention to continue. Therefore, a guide message indicating the end of shooting is generated, and the guide information can be displayed on the phone screen to guide the user. After step D1 is completed, step F1 is executed.
  • Step E1 If the mobile phone suddenly changes in the direction of shooting upwards or shooting downwards, it means that the user has the intention of changing lines to shoot the upper or lower part of the shelf. Therefore, a guide message instructing the user to move up or down and continue shooting is generated. And the guide information can be displayed on the phone screen to guide the user.
• the posture data of the image acquisition device can be obtained again, and based on the posture data it can be determined whether the user has operated in accordance with the instructions of the guidance information. If the user has not followed the instructions, alert information can be generated to prompt the user; if the user has operated according to the instructions, no action is required.
• after a newly acquired image is detected, return to step B1 and continue execution.
  • Step F1 End the shooting, splicing multiple shelf images taken by the user, thereby generating a complete image including a whole section of the shelf.
• the shelf edge detection is performed on the captured image, and the detection result indicates whether the user has captured the edge of the shelf. If the edge has been captured, the acceleration sensor and gyroscope of the mobile phone are used to analyze whether the user intends to photograph the rest of the shelf, so that the user can be guided, according to the analyzed intention, to shoot a complete section of the shelf, ensuring shooting quality and that complete image information of the shelf is obtained.
• FIG. 5b shows a flowchart of the steps of the shelf image acquisition method in the second use scenario.
  • the image acquisition device can be a mobile phone, pad, camera, etc.
  • a complete image of the shelf can be obtained, and then the product information can be analyzed, so as to replenish the product or adjust the position of the product according to the product information prompt.
  • the shelf image acquisition process includes:
  • Step A2 Acquire shelf images collected according to the instructions of the first guide information.
  • shelf is used to carry commodities.
  • Shelves can be used to display goods in shopping malls, supermarkets and other places, or they can be used to place goods in warehouses.
  • the shelf image contains part of the shelf information.
  • the first guide information is used to indicate the image collection path of the shelf.
  • the image collection path is a path generated by segmenting the shelf according to the shelf structure information, and the shelf structure information is determined according to at least one of an overall plan view, a three-dimensional view of the shelf, and a preset virtual model of the shelf.
• the shelf structure information can be obtained from an overall image of the shelf taken in advance by the user. Since the overall image is used only to obtain the shelf structure information, one or several overall images of the shelf from different perspectives can be taken, so that the server or the client can analyze the shelf structure information from the overall image and generate the image collection path from that information.
• the shelf structure information can also be obtained by pre-establishing virtual models of shelves of different specifications.
• the user can preselect the virtual model of the shelf whose images need to be collected, and the image collection path is generated based on the virtual model.
  • the specific implementation of the analysis of the shelf structure can be implemented by those skilled in the art using any appropriate method or algorithm according to actual needs, including but not limited to connected domain analysis, neural network model analysis, and the like.
  • the first guide information may be generated locally, or may be obtained by the client from the server after being generated by the server.
• a schematic diagram of an image collection path is shown in FIG. 5c.
  • the dashed line indicated by 001 in the figure is a segmentation path for segmenting the shelf.
  • the segmentation path generated according to shelf structure information can be implemented by the server or locally in the image acquisition device.
  • different segmentation paths can be generated for the same shelf according to its structure.
• the specific segmentation strategy can be preset in the server or client, such as linear segmentation, S-shaped segmentation, U-shaped segmentation, rectangular segmentation, or spiral segmentation; segmentation paths can also be generated from the shelf structure information through, for example, a trained neural network model.
  • the indicated line at 002 in the figure is the image acquisition path corresponding to the first guide information in the segmentation path.
  • the image acquisition path can be part or all of the segmentation path.
  • the position 003 in the figure indicates the shooting area of one image acquisition of the image acquisition device, which covers at least part of the image acquisition path, and the shooting areas corresponding to two adjacent acquisitions have partial overlap.
  • the user can photograph the corresponding part of the shelf along the corresponding path according to the guide instruction, and obtain the corresponding shelf image.
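As an illustration of one of these segmentation strategies, the following minimal sketch generates an S-shaped (boustrophedon) collection order over a hypothetical grid of shooting positions; the grid size and the row/column coordinate convention are assumptions, not part of the disclosure:

```python
def s_shaped_path(rows, cols):
    # Visit shooting positions left-to-right on even rows and
    # right-to-left on odd rows, producing an S-shaped traversal.
    path = []
    for r in range(rows):
        cols_order = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cols_order)
    return path

print(s_shaped_path(2, 3))
# [(0, 0), (0, 1), (0, 2), (1, 2), (1, 1), (1, 0)]
```

Each `(row, col)` pair corresponds to one shooting area such as the one indicated at 003 in FIG. 5c; adjacent positions should be spaced so that consecutive shots keep the required overlap.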
  • Step B2 Obtain an edge detection result of performing shelf edge detection on the shelf image.
  • the shelf edge detection can be performed on the shelf image.
  • the detection can be performed locally on the image acquisition device to directly obtain the edge detection result; or the shelf image can be sent to the server, and the server can perform the shelf edge detection, and send the edge detection result to the image acquisition device.
  • the lightweight neural network model trained for shelf edge detection can be used for detection to reduce the amount of calculation and ensure that the computing power of the image acquisition device can meet the detection requirements.
  • shelf edge detection is performed on the server side, a deep neural network model trained for shelf edge detection can be used for detection to improve detection accuracy.
• if the edge detection result indicates that the shelf image includes the shelf edge, step C2 is executed; otherwise, fourth guide information is generated indicating that the device should be moved a certain distance along the image acquisition path to continue shooting.
  • Step C2 If the edge detection result indicates that the shelf image includes the shelf edge, acquire second guide information indicating a new image acquisition path or acquire third guide information indicating the end of acquisition.
  • the new image acquisition path may be a part of the segmentation path, which can be determined according to the actual detection result and the previous path segmentation result.
  • the new image acquisition path is the path indicated by the dashed line at the bottom in FIG. 5c.
  • the user can move the image capture device to a position corresponding to the framing position and the new image capture path (such as the dotted shooting area in FIG. 5c), and continue shooting.
• step C2 includes: if the edge detection result indicates that the shelf image includes shelf edges, performing product information identification on the collection result image generated from all the collected shelf images, and obtaining the product information result; according to the product information result, obtaining second guide information indicating a new image collection path or obtaining third guide information indicating the end of collection.
  • the collection result images generated by all the collected shelf images can be generated locally on the image collection device, or each time a shelf image is collected, the shelf image is sent to the server, and the server generates the collection result image And send it to the image acquisition device.
  • the process of locally generating the collection result image on the image collection device may be: obtaining the collection result image generated after stitching all the collected shelf images.
  • the image acquisition device superimposes the overlapping parts of the two images according to the overlapping parts in the shelf images to form a spliced image of the acquisition result. For example, if the right side of image 1 and the left side of image 2 have overlapping parts, the overlapping parts of image 1 and image 2 are superimposed to form a collection result image.
  • a preview box (as shown at 005 in Figure 5d) can be configured in the display interface, and the spliced collection result image is displayed in the preview box.
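The superimposition of overlapping parts described above can be sketched under the simplifying assumption that the two images are already row-aligned and the overlap width is known exactly (in practice the overlap would be estimated, e.g. by feature matching):

```python
def stitch_pair(left, right, overlap_cols):
    # Images are row-major lists of pixel rows; the rightmost `overlap_cols`
    # columns of `left` coincide with the leftmost columns of `right`.
    assert len(left) == len(right), "rows must match for horizontal stitching"
    return [l_row + r_row[overlap_cols:] for l_row, r_row in zip(left, right)]

img1 = [[1, 2, 3], [4, 5, 6]]      # its right column overlaps img2
img2 = [[3, 7, 8], [6, 9, 10]]     # its left column repeats img1's right column
print(stitch_pair(img1, img2, 1))  # [[1, 2, 3, 7, 8], [4, 5, 6, 9, 10]]
```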
• obtaining second guide information indicating a new image collection path or obtaining third guide information indicating the end of collection includes: if the product information result indicates that the collection result image does not contain all the products on the shelf, acquiring second guide information indicating switching of the shooting line in the image acquisition path; or, if the product information result indicates that the collection result image contains all the products on the shelf, acquiring third guide information indicating the end of shooting.
• This method can generate accurate guidance information to guide users through multiple collections of shelf images, ensuring that each collected shelf image contains clear, identifiable product information, thereby solving the problem that, because existing shelves are long, obtaining the overall image of a shelf in a single collection would make the product information too small to recognize.
  • step C2 includes: if the edge detection result indicates that the shelf image includes the shelf edge, acquiring the posture data of the image acquisition device; acquiring the information indicating a new image acquisition path according to the posture data The second guide information or the third guide information indicating the end of collection is acquired.
• the posture data includes acceleration information and/or angular velocity information of the image acquisition device in a spatial coordinate system. According to the posture data, it can be determined whether the user intends to continue image collection; when there is an intention to continue, the intended shooting direction can be determined and the corresponding second guide information generated, and when there is no intention to continue, the third guide information can be generated.
  • the reserved area is shown at 006 in Figure 5d.
  • the reserved area in the newly acquired shelf image and the set area in the display interface are determined according to the image acquisition path.
  • the next shelf image is acquired by moving a certain distance to the right along the image acquisition path, and the reserved area is the rightmost part of the newly acquired shelf image.
  • the area of this partial area may be 1/6 to 1/5 of the total area.
  • the setting area is the leftmost part of the display interface.
  • the subsequent shelf image is acquired by moving down to a new image acquisition path for acquisition, and the reserved area is the lowest part of the newly acquired shelf image.
  • the area of this partial area may be 1/6 to 1/5 of the total area.
  • the setting area is the uppermost part of the display interface.
• the user can align the reserved area shown in the display interface with the corresponding area of the actual shelf when performing the next image capture operation, so that the newly collected product image and the previous product image have enough overlap to determine the positional relationship between the two images, without so much overlap that a large amount of useless data is produced.
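The reserved area described above (roughly 1/6 to 1/5 of the newly acquired image, on the side facing the next capture) can be computed, for example, as follows; the direction names and the default fraction are illustrative choices:

```python
def reserved_area(width, height, direction="right", frac=1 / 6):
    # Return the (x, y, w, h) reserved strip of a newly acquired image:
    # its rightmost part when moving right along the path,
    # its bottom part when moving down to a new collection path.
    if direction == "right":
        w = round(width * frac)
        return (width - w, 0, w, height)
    if direction == "down":
        h = round(height * frac)
        return (0, height - h, width, h)
    raise ValueError(direction)

print(reserved_area(1200, 900, "right"))  # (1000, 0, 200, 900)
```

The strip returned here would be shown in the corresponding set area of the display interface (leftmost when moving right, uppermost when moving down) for the user to align against the shelf.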
  • the identification step may be performed at any appropriate time after the acquisition result image is acquired, that is, the product information identification and/or product location identification are performed on the acquisition result image, and the product information result and/or product location result are obtained .
  • the identification step may be executed after the acquisition result image is generated from the acquired shelf image, or the identification step may be executed after the acquisition result image containing the complete information of the shelf is acquired.
  • the identification step is performed after the acquisition result image containing the complete information of the shelf is acquired.
  • the product information identification can be performed on the server or locally.
• when executed on the server side, the server obtains the collection result image sent by the image collection device, or obtains the shelf images sent by the image collection device and stitches them into the collection result image, and then uses a trained neural network model capable of recognizing product information to perform identification and obtain the product information result.
  • the image collection device can directly stitch the collection result image based on the shelf image or obtain the collection result image from the server, and use a trained neural network model capable of product information recognition to perform recognition and obtain product information results.
  • product location identification can be performed on the server or locally.
  • the product location result is analyzed, and an analysis result corresponding to the analysis operation is generated.
  • the analysis result includes at least one of the following: product sales information, product display information, product quantity information, and product replenishment status information.
• the product information results and product location results can be analyzed to determine the products remaining on the shelf and the products at each position on the shelf; the vacant positions can then be analyzed to determine the product sales information.
  • the product information can be analyzed to determine the products at each position on the shelf, thereby determining the product display information.
  • the analysis result includes product quantity information
  • the product information result and the product location result can be analyzed to determine the product at each placement location, and the product quantity information can be determined according to the placement location quantity.
  • the analysis result includes product replenishment status information
  • the product information result and the product location result can be analyzed to determine the product to be replenished and the corresponding replenishment location.
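One way to derive such replenishment status information from the product information result and product location result is a slot-by-slot comparison against the planned display. The sketch below is illustrative only; the slot names and the planogram structure are assumptions:

```python
def replenishment_tasks(slot_products, planogram):
    # Compare the recognized product per slot with the planned display
    # (planogram); slots whose planned product is missing or wrong
    # become replenishment tasks (slot -> product to restock).
    return {slot: want
            for slot, want in planogram.items()
            if slot_products.get(slot) != want}

planogram = {"A1": "cola", "A2": "juice", "A3": "water"}
detected = {"A1": "cola", "A2": None}            # A2 empty, A3 not seen
print(replenishment_tasks(detected, planogram))  # {'A2': 'juice', 'A3': 'water'}
```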
• when the user's collection termination operation is received, the following may also be performed: determining whether the collection result image generated from all the collected shelf images contains all the commodities on the shelf.
• since the user may stop image collection due to unexpected circumstances during the image collection process, when the user's instruction to terminate the image collection operation (such as an exit operation or an end-collection operation) is obtained, it is determined whether the shelf has been completely collected, that is, whether the collection result image generated from all the collected shelf images contains all the goods on the shelf.
  • the collection result image contains all commodities on the shelf, it means that the collection has been completed, and the collection result image can be saved and the collection can be terminated.
  • the collection result image does not include all the products on the shelf, it means that the collection has not been completed.
  • the collected collection result image and related collection information (such as image collection path, etc.) can be saved, and the user will be prompted that there is an unuploaded (or unupdated) part To inform users that they can continue image acquisition at an appropriate time.
  • Fig. 5e shows a step flow chart of the commodity information processing method in the third use scenario.
  • the method includes the following steps:
  • Step A3 Collect image data of the shelf according to the acquired first guidance information.
  • the first guide information is used to indicate an image collection path of the shelf.
  • the first guide information can be generated in the manner described in the second use scenario, or generated in other manners, which is not limited in this use scenario.
  • Step B3 Recognize the image data, and obtain the product information on the shelf and whether the shelf edge information is included.
  • Recognition can include product information recognition and shelf edge recognition.
  • Commodity information recognition can be performed by using a neural network model capable of product information recognition in the second use scenario, or by other means.
  • Shelf edge recognition can use a neural network model capable of shelf edge recognition, or use other methods for recognition.
• if it is determined that the image data contains shelf edge information, step C3 is executed; otherwise, no action is taken, or guidance information indicating continued collection along the image collection path is generated.
  • Step C3 If it is determined that the image data contains information on the edge of the shelf, determine whether all the product information is included in all the collected image data according to the product information, and obtain a second guide indicating a new image collection path according to the determination result Information or obtain third guide information indicating the end of the collection.
• the number of product categories can be determined from the product information. If the number of product categories meets the requirement, the judgment result is that all products are included, and third guide information indicating the end of collection is obtained according to the judgment result; if the number of product categories does not meet the requirement, the judgment result is that not all products are included, and second guide information indicating a new image collection path is obtained according to the judgment result.
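A minimal sketch of this category-count judgment; the category names and the expected count are hypothetical:

```python
def collection_complete(recognized_categories, expected_category_count):
    # The stitched result is judged complete when the number of distinct
    # recognized product categories reaches the expected count; otherwise
    # guidance toward a new image collection path is needed.
    return len(set(recognized_categories)) >= expected_category_count

print(collection_complete(["cola", "juice", "cola"], 3))   # False -> keep collecting
print(collection_complete(["cola", "juice", "water"], 3))  # True  -> end collection
```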
  • the subsequent method may also perform other steps based on the product information, such as generating replenishment prompt information.
  • the commodity information processing method of this use scenario can realize the processing of the commodity information on the shelf, so as to meet the requirements of replenishment reminders, reminders of changing commodity positions, etc.
  • Fig. 5f shows a flow chart of the shelf image acquisition method in the fourth use scenario.
  • the image capture device is a mobile phone, and the method includes the following steps:
  • Step A4 Display the first collection prompt information of the goods on the shelf.
  • the first collection prompt information is used to indicate the collection position when performing image collection of shelf commodities along the image collection path.
  • the first collection prompt information can be determined according to the image collection path indicated by the first guide information, for example, for the latest collection location, move a certain distance on the image collection path to determine a new collection location, and generate the first collection prompt based on the new collection location information.
  • the first guide information may be the first guide information described in the second use scenario.
  • Step B4 Acquire an image for image collection according to the first collection prompt information, and recognize the acquired image.
  • different recognition can be performed on the acquired image. For example, carry out shelf edge recognition, product information recognition and so on.
  • the specific identification method can be as described in the foregoing usage scenario, so it is not repeated here.
• if the recognition result indicates that the image includes the shelf edge, step C4 is executed; otherwise, the first collection prompt information is updated according to the image collection path, and the process returns to step A4 to continue execution.
  • Step C4 If the recognition result indicates that the image includes shelf edges, display the second collection prompt information for indicating a new image collection path and instructing to continue image collection.
  • a new image collection path is determined, and a second collection prompt message indicating it is generated to prompt the user to switch the image collection path to continue collection.
  • the process of determining a new image acquisition path can be the same as the aforementioned use scenario, so it will not be repeated.
• through the shelf image collection method of this usage scenario, a complete and accurate shelf image and the product information on the shelf can be obtained, so that the product information can be analyzed to prompt replenishment, prompt changing of product locations, and so on.
  • FIG. 5g shows a schematic structural diagram of the display interface of the client in the fifth use scenario.
  • the client includes a display interface.
• the display interface is used to display first collection prompt information, and the first collection prompt information is used to instruct image collection of a target object along an image collection path; the display interface is also used to display second collection prompt information (as shown at 007 in FIG. 5g), which is information indicating that image collection of the target object is to be performed along a new image collection path when the edge of the target object is included in the acquired image.
  • the first collection prompt information can be generated by using the method described in Scenario 4 and displayed through the display interface.
  • the second collection prompt information can be determined according to the new image collection path. For example, for the most recent collection location, move a certain distance on the new image collection path to determine the new collection location, and generate the second collection prompt information according to the new collection location, and display it through the display interface.
  • the client can display the first collection prompt information and the second collection prompt information, and then prompt the user to perform image collection, so as to improve the quality of the collected image, so that it can collect a high-quality complete image of the target object.
  • the target object includes at least one of the following: shelves, parking lots, and seats in venues.
  • this method can collect complete images of the parking lot, and then analyze the vehicle information.
• for venue seats, a complete image can be collected through this method, the usage of the seats can then be analyzed, and the attendance rate can be calculated.
• for a shelf, a complete image of the shelf can be collected through this method, and the product information can then be analyzed so that subsequent processing can be performed.
  • FIG. 5h shows a schematic flow diagram of the steps of the commodity information processing method in the sixth use scenario.
  • the method includes:
  • Step A5 Collect image data of the shelf.
  • the image data of the shelf can be collected by an image collection device, which can be a mobile phone or the like.
  • the image data collection of the shelf can be implemented in any one of the aforementioned usage scenarios one to five.
  • Step B5 Process the image data, and identify the product information on the shelf.
  • the image data can be processed differently.
  • a neural network model trained to recognize the product information in the image is used to process the image and obtain the product information on the shelf.
• the product information can include product name information, category information, etc.
  • Step C5 Determine the commodity statistical information of the shelf according to the identified commodity information.
  • the commodity statistical information may include commodity quantity information, commodity category quantity, commodity quantity of each category, etc.
  • FIG. 5i shows a schematic flow chart of the steps of the commodity information processing method in the seventh use scenario.
  • the method includes:
  • Step A6 In response to the shooting operation initiated by the user, call the image acquisition device of the client to shoot the image data of the shelf.
  • the user can call the image capture device of the client through the server, or directly initiate a shooting operation on the client and call the image capture device.
  • the method of obtaining image data can be any one of the methods described in use scenarios 1 to 5, which is not limited in this use scenario.
  • Step B6 Process the image data to identify the product information on the shelf.
  • the method of obtaining product information can be the same as or different from the aforementioned use scenario six.
  • Step C6 Determine the commodity statistical information of the shelf according to the identified commodity information.
  • the commodity statistical information may include commodity quantity information, commodity category quantity, commodity quantity of each category, etc.
  • FIG. 5j shows a schematic flow chart of the steps of the method for processing merchandise replenishment in the eighth use scenario.
  • the method includes:
  • Step A7 In response to the replenishment operation initiated by the user, call the image acquisition device to capture the image data of the shelf.
  • the user can initiate the replenishment operation through the client, and the client directly calls the image acquisition device to capture the image data of the shelf, or the client sends the replenishment operation to the server, and the server calls the image acquisition device to capture the image data of the shelf.
• the way of capturing the image data can be as described in any one of use scenarios one to five.
  • Step B7 Perform identification processing on the image data, and identify the product information on the shelf.
  • the method of obtaining product information can be the same as or different from the aforementioned use scenario six.
  • Step C7 Determine the commodity to be replenished according to the commodity information on the shelf.
  • the remaining commodities are determined according to the commodity information, and commodities other than the remaining commodities in the preset commodity information are determined as commodities to be replenished.
  • the method further includes:
  • Step D7 Generate and display replenishment prompt information for prompting replenishment of the commodity to be replenished according to the commodity to be replenished.
• the replenishment prompt information can be generated in an appropriate manner, for example, generated directly from the name of the commodity to be replenished.
  • a complete and high-quality shelf image can be obtained, and then the product information of the shelf can be obtained, so as to determine the product to be replenished according to the product information, so that the product to be replenished can be automatically obtained by shooting the shelf image.
• by generating replenishment prompt information, the user can be quickly prompted to replenish, which improves convenience.
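Steps C7 and D7 can be sketched as a set difference between the preset commodity information and the commodities still recognized on the shelf; the product names and the prompt wording below are hypothetical:

```python
def to_replenish(preset_products, remaining_products):
    # Commodities listed in the preset commodity information but no longer
    # recognized on the shelf are the ones to replenish (order preserved).
    remaining = set(remaining_products)
    return [p for p in preset_products if p not in remaining]

def replenish_prompt(products):
    # Build the replenishment prompt information directly from product names.
    return "Replenish: " + ", ".join(products) if products else "Shelf fully stocked"

print(replenish_prompt(to_replenish(["cola", "juice", "water"], ["cola"])))
# Replenish: juice, water
```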
• FIG. 5k shows a schematic diagram of the information interaction between a user, an image acquisition device, and a server in use scenario 9.
  • the replenishment process includes:
  • the image acquisition device receives the trained product recognition model issued by the server.
• the image acquisition device acquires shelf images by using the image acquisition methods of Embodiments 1 to 4, and obtains the collection result image.
  • Use the product recognition model to process product information on the collected images, and display recommended products based on the processing results.
  • the product to be replenished is determined according to the selected product, and a replenishment request is submitted to the server to generate a replenishment order on the server.
• in addition, the image acquisition device can send the processing results to the server so that the server can continue to train the initial product recognition model; the trained model is compressed regularly, or when other conditions are met, and the compressed result is sent to the image capture device.
  • This replenishment process can ensure the collection quality of shelf images, thereby ensuring the quality of product information processing, thereby realizing reliable automatic replenishment.
  • FIG. 6 a flowchart of the steps of an image acquisition method according to the fifth embodiment of the present invention is shown.
  • Step S602 In the process of image acquisition of the target object, the posture data of the image acquisition device is acquired.
  • the posture data is used to indicate the posture of the image capture device being held, for example, horizontal holding, vertical holding, having an upward inclination angle, or having a downward inclination angle, etc.
  • the user's image acquisition intention can be determined according to the held posture. For example, in the image acquisition process, if the user intends to perform continuous acquisition along the current image acquisition path, the image acquisition device is usually held vertically; and if the user intends to switch to a new image acquisition path for continuous acquisition, The image capture device is generally held in a manner having an upward inclination angle or a downward inclination angle.
  • the posture data of the image acquisition device includes, but is not limited to, acceleration information and/or angular velocity information of the device in the spatial coordinate system, and may also include distance information relative to the target object, and so on.
  • Those skilled in the art can obtain the posture data of the image acquisition device in an appropriate manner, for example, obtain acceleration information through an acceleration sensor, and obtain angular velocity information through a gyroscope.
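As a hedged illustration of how such sensor readings might be turned into a held-posture estimate, the sketch below derives a pitch angle from a single accelerometer sample (which measures gravity while the device is roughly still) and classifies it against a tilt threshold. The axis convention and the 20° threshold are illustrative assumptions, not values taken from this disclosure:

```python
import math

def classify_posture(ax, ay, az, tilt_threshold_deg=20.0):
    """Classify the held posture from one accelerometer sample (device
    coordinates, units of g). When the device is held still, the sensor
    measures gravity, so pitch can be derived from its components.
    Assumed convention: y points up along a vertically held device."""
    # Pitch: rotation of the device about its horizontal axis, in degrees.
    pitch = math.degrees(math.atan2(-az, math.sqrt(ax * ax + ay * ay)))
    if pitch > tilt_threshold_deg:
        return "upward_tilt"
    if pitch < -tilt_threshold_deg:
        return "downward_tilt"
    return "vertical_hold"
```

A gyroscope's angular velocity could be fused in for robustness; this sketch uses the accelerometer alone for simplicity.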
  • Step S604 Generate corresponding guidance information according to the posture data, and guide the user to perform continuous image collection on the target object through the guidance information.
  • step S604 may be implemented as: determining the current posture of the image acquisition device according to the acceleration information and/or angular velocity information; and generating, according to the current posture, guidance information that instructs the user to move in a direction matching the current posture to continue image collection.
  • when the guidance information that instructs the user to move in a direction matching the current posture is generated according to the current posture, if the current posture meets a preset path transition condition, fifth guidance information is generated that guides the user to switch the current image acquisition path to a new image acquisition path matching the current posture and to continue image acquisition along the new path.
  • for example, if the current posture is that the image acquisition device has a downward or upward inclination angle, it is determined that the current posture meets the preset path transition condition, and fifth guidance information is generated accordingly, guiding the user to switch the current image acquisition path to a new image acquisition path matching the current posture and to continue image acquisition along the new path.
  • for example, when the device is tilted downward, the current image acquisition path is switched to the image acquisition path below it, and fifth guidance information with the content "Please move down and continue shooting" is generated.
  • depending on the current posture, the new image acquisition path that is generated can differ.
  • Those skilled in the art can use any appropriate method to generate the image acquisition path as needed, for example, generate a new image acquisition path according to a preset image acquisition path generation strategy, or use the image acquisition path generation method in the foregoing embodiment.
  • if the current posture does not meet the preset path transition condition, sixth guidance information that guides the user to continue image collection along the current image collection path is generated. For example, if the current posture is that the image capture device is held vertically, it is determined that the path transition condition is not met, and guidance information instructing the user to move along the current image capture path and continue image capture is generated.
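The posture-to-guidance mapping described above can be sketched as a simple dispatch. The "Please move down and continue shooting" message follows the example in the text; the upward-tilt message is an assumed symmetric counterpart:

```python
def generate_guidance(posture):
    """Map a detected holding posture to guidance information, following the
    scheme in this embodiment: a tilted posture meets the path transition
    condition (fifth guidance), a vertical hold does not (sixth guidance)."""
    if posture == "downward_tilt":
        # Path transition: switch to the acquisition path below the current one.
        return ("fifth", "Please move down and continue shooting")
    if posture == "upward_tilt":
        # Assumed counterpart: switch to the acquisition path above.
        return ("fifth", "Please move up and continue shooting")
    # No path transition: keep collecting along the current path.
    return ("sixth", "Please continue shooting along the current path")
```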
  • through this embodiment, the posture data of the image acquisition device is acquired during image collection, and guidance information that instructs the user to move in a direction matching the current posture to continue image acquisition is then generated according to the posture data.
  • Referring to FIG. 7, a flowchart of the steps of an image acquisition method according to the sixth embodiment of the present invention is shown.
  • the image acquisition method includes the aforementioned steps S602 to S604.
  • the method further includes:
  • Step S604a Obtain an image of the target object collected by the image collecting device in real time.
  • the image of the target object may be an image collected by the user using the image collecting device according to the guidance information.
  • the image may be an image containing a part of the shelf.
  • Step S604b Perform edge detection on the collected image, and obtain a detection result.
  • the edge detection of the image can be performed in any appropriate manner, for example, the edge detection of the image is performed using a trained neural network model for edge detection, and the detection result is obtained.
  • the method of performing edge detection on the image in any of the foregoing embodiments may be used.
  • the detection result may indicate that the captured image contains the edge of the target object or does not contain the edge of the target object.
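The disclosure leaves the detector itself open (a trained lightweight neural network is one option given above). As a purely illustrative stand-in, the toy check below decides whether the rightmost column of a small grayscale image looks like dark background, which is one crude signal that the edge of the target object has entered the frame. Both thresholds are hypothetical:

```python
def detect_right_edge(gray, dark_threshold=40, edge_ratio=0.6):
    """Toy stand-in for the trained edge-detection model: report True when
    the last column of a grayscale image (list of rows, values 0-255) is
    mostly dark, i.e. the frame likely extends past the target object."""
    last_col = [row[-1] for row in gray]
    dark = sum(1 for v in last_col if v < dark_threshold)
    return dark / len(last_col) >= edge_ratio
```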
  • in this embodiment, step S604 includes: generating corresponding guidance information according to the posture data and the detection result, and guiding the user through the guidance information to perform continued image collection of the target object.
  • the user may introduce some jitter during shooting, causing the posture data of the image acquisition device to indicate a posture change even though no path transition is intended. Performing edge detection on the collected images and combining the posture data with the detection results to generate guidance information therefore makes the generated guidance information more accurate.
  • if the current posture meets the preset path transition condition and the detection result indicates that the edge of the target object is detected, fifth guidance information is generated that guides the user to switch the current image acquisition path to a new image acquisition path matching the current posture and to continue image acquisition along the new path; otherwise, sixth guidance information is generated that guides the user to continue image acquisition along the current image acquisition path.
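A sketch of combining the two signals. Only the two agreeing cases are specified in the text; treating a disagreement as "keep the current guidance" is an assumption added here to make the jitter-filtering intent concrete:

```python
def combined_guidance(meets_transition, edge_detected):
    """Combine the posture check with the edge-detection result: both signals
    must agree before the path is switched, which filters out spurious
    posture changes caused by hand jitter."""
    if meets_transition and edge_detected:
        return "fifth"      # switch to a new image acquisition path
    if not meets_transition and not edge_detected:
        return "sixth"      # continue along the current path
    return "unchanged"      # signals disagree (e.g. jitter): assumed no-op
```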
  • the image acquisition method may further include:
  • Step S606 According to the posture data and the detection result, generate seventh guidance information that guides the user to stop image collection.
  • for example, when the posture data and the detection result together indicate that collection of the target object is complete, the seventh guidance information that guides the user to stop continued image collection is generated.
  • through this embodiment, the posture data of the image acquisition device is acquired, and guidance information that instructs the user to move in a direction matching the current posture to continue image acquisition is generated according to the posture data and the detection result.
  • Referring to FIG. 8, there is shown a structural block diagram of an image acquisition device according to the seventh embodiment of the present invention.
  • the image acquisition device of this embodiment includes: a detection module 802, configured to obtain a detection result of real-time target object edge detection on a captured image, where the captured image contains part of the image information of the target object; a first acquisition module 804, configured to acquire, if the detection result indicates that the edge of the target object is detected in the image, the posture data of the image acquisition device that collected the image; and a generating module 806, configured to generate corresponding guide information according to the posture data and to guide the user through the guide information to perform continued image collection of the target object, so that the collected multiple images are used to form complete image information of the target object.
  • through this embodiment, target object edge detection is performed on the captured image in real time; when the edge is detected, the posture data of the image capture device is obtained and corresponding guidance information is generated according to the posture data, so that the guidance information guides users to collect images in a standardized manner, thereby completing image collection of the multiple parts of the entire target object, avoiding omissions, and obtaining a complete image of the target object.
  • Referring to FIG. 9, there is shown a structural block diagram of an image acquisition device according to the eighth embodiment of the present invention.
  • the image acquisition device of this embodiment includes: a detection module 902, configured to obtain a detection result of real-time target object edge detection on a captured image, where the captured image contains part of the image information of the target object; a first acquisition module 904, configured to acquire, if the detection result indicates that the edge of the target object is detected in the image, the posture data of the image acquisition device that collected the image; and a generating module 906, configured to generate corresponding guide information according to the posture data and to guide the user through the guide information to perform continued image collection of the target object, so that the collected multiple images are used to form complete image information of the target object.
  • optionally, the apparatus further includes: a second acquisition module 908, configured to acquire, before the detection result of real-time target object edge detection on the collected image is obtained, a lightweight neural network model dynamically issued for edge detection of the target object; correspondingly, the detection module 902 is configured to use the lightweight neural network model to perform real-time target object edge detection on the collected images and obtain the detection results.
  • optionally, the first acquiring module 904 is configured to acquire acceleration information and/or angular velocity information of the image acquisition device in the spatial coordinate system; the generating module 906 includes: a first determining module 9061, configured to determine the current posture of the image acquisition device according to the acceleration information and/or angular velocity information; and an information generation module 9062, configured to generate, according to the current posture, guidance information that instructs the user to move in a direction matching the current posture for continued image collection.
  • optionally, the device further includes: a splicing module 910, configured to splice the multiple collected images to obtain a complete image containing the complete image information of the target object.
  • the stitching module 910 includes: a second determining module 9101, configured to determine a plurality of groups of images having an image overlapping relationship from a plurality of collected images, wherein each group of images includes two images;
  • a complete image obtaining module 9102, configured to splice the multiple collected images according to the image overlap relationship, and to obtain, according to the splicing result, a complete image containing the complete image information of the target object.
  • optionally, the second determining module 9101 includes: a feature extraction module, configured to perform feature extraction on each of the collected multiple images to obtain feature points corresponding to each image; and a matching module, configured to match any two images according to their feature points and to determine, based on the matching results, the multiple groups of images having an image overlap relationship.
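As an illustration of grouping images by overlap, the sketch below stands in for real descriptor matching (e.g. SIFT/ORB-style feature points) by representing each image's features as a set of hashable descriptors and comparing them with set intersection; the minimum-match threshold is a hypothetical parameter:

```python
from itertools import combinations

def find_overlapping_pairs(features_per_image, min_matches=3):
    """Given per-image feature descriptor sets (stand-ins for real local
    feature descriptors), return index pairs of images that share enough
    matched features to count as a group with an image overlap relationship,
    where each group contains two images."""
    pairs = []
    for i, j in combinations(range(len(features_per_image)), 2):
        matches = features_per_image[i] & features_per_image[j]
        if len(matches) >= min_matches:
            pairs.append((i, j))
    return pairs
```

The resulting pairs would then drive the splicing order when the stitched collection result image is assembled.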
  • the image acquisition device in this embodiment is used to implement the corresponding image acquisition methods in the foregoing multiple method embodiments, and has the beneficial effects of the corresponding method embodiments, which will not be repeated here.
  • the electronic device may include: a processor (processor) 1002, a communication interface (Communications Interface) 1004, a memory (memory) 1006, and a communication bus 1008.
  • the processor 1002, the communication interface 1004, and the memory 1006 communicate with each other through the communication bus 1008.
  • the communication interface 1004 is used to communicate with other electronic devices such as terminal devices or servers.
  • the processor 1002 is configured to execute the program 1010, and specifically can execute the relevant steps in the above-mentioned image acquisition method embodiment.
  • the program 1010 may include program code, and the program code includes computer operation instructions.
  • the processor 1002 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention.
  • the one or more processors included in the electronic device may be processors of the same type, such as one or more CPUs; or processors of different types, such as one or more CPUs and one or more ASICs.
  • the memory 1006 is used to store the program 1010.
  • the memory 1006 may include a high-speed RAM memory, or may also include a non-volatile memory (non-volatile memory), for example, at least one disk memory.
  • the program 1010 can specifically be used to make the processor 1002 perform the following operations: obtain a detection result of real-time target object edge detection on the collected image, where the collected image contains part of the image information of the target object; if the detection result indicates that the edge of the target object is detected in the image, acquire the posture data of the image acquisition device that collected the image; and generate corresponding guidance information according to the posture data, guiding the user through the guidance information to perform continued image capture of the target object, so that the captured multiple images are used to form complete image information of the target object.
  • in an optional implementation, the program 1010 is also used to enable the processor 1002 to acquire, before the detection result of real-time target object edge detection on the captured image is obtained, the lightweight neural network model dynamically issued to the image acquisition device for edge detection of the target object; and, when the detection result is obtained, to use the lightweight neural network model to perform real-time target object edge detection on the collected image and obtain the detection result.
  • in an optional implementation, the program 1010 is also used to enable the processor 1002, when acquiring the posture data of the image acquisition device that collected the image, to acquire acceleration information and/or angular velocity information of the image acquisition device in the spatial coordinate system; and, when generating the corresponding guidance information according to the posture data and guiding the user through the guidance information to continue image collection of the target object, to determine the current posture of the image collection device according to the acceleration information and/or angular velocity information and to generate, according to the current posture, guidance information that instructs the user to move in a direction matching the current posture for continued image collection.
  • in an optional implementation, the program 1010 is further configured to enable the processor 1002, when the multiple collected images are used to form complete image information of the target object, to splice the collected images so as to obtain a complete image containing the complete image information of the target object.
  • in an optional implementation, the program 1010 is also used to enable the processor 1002, when splicing the multiple captured images to obtain a complete image containing the complete image information of the target object, to determine, from the multiple captured images, multiple groups of images having an image overlap relationship, where each group includes two images; to splice the multiple collected images according to the image overlap relationship; and to obtain, according to the splicing result, the complete image containing the complete image information of the target object.
  • in an optional implementation, the program 1010 is further configured to cause the processor 1002, when determining from the collected multiple images the multiple groups of images having an image overlap relationship, to perform feature extraction on each of the collected images to obtain feature points corresponding to each image, to match any two images according to their feature points, and to determine, based on the matching results, the multiple groups of images having an image overlap relationship.
  • alternatively, the program 1010 may specifically be used to cause the processor 1002 to perform the following operations: obtain shelf images collected according to the instructions of first guide information, where the shelf is used to carry commodities and the first guide information is used to indicate the image collection path of the shelf; obtain the edge detection result of shelf edge detection performed on the shelf images; and, if the edge detection result indicates that a shelf image includes a shelf edge, obtain second guide information indicating a new image collection path or obtain third guide information indicating the end of collection.
  • in an optional implementation, the program 1010 is further configured to enable the processor 1002 to acquire the first guide information before acquiring the shelf images collected according to its instructions, where the first guide information is the guide information corresponding to the image collection path, the image collection path is a path generated by segmenting the shelf according to shelf structure information, and the shelf structure information is determined according to at least one of an overall plan view of the shelf, a three-dimensional view of the shelf, and a preset shelf virtual model.
  • in an optional implementation, the program 1010 is further configured to cause the processor 1002, when obtaining the second guide information indicating a new image acquisition path or the third guide information indicating the end of collection, to perform, if the edge detection result indicates that the shelf image includes a shelf edge, product information identification on the collection result image generated from all collected shelf images and obtain a product information result, and to obtain, according to the product information result, second guide information indicating a new image collection path or third guide information indicating the end of collection.
  • in an optional implementation, the program 1010 is further configured to cause the processor 1002, when obtaining according to the product information result the second guide information indicating a new image collection path or the third guide information indicating the end of collection, to obtain second guide information indicating switching of the shooting line in the image collection path if the product information result indicates that the collection result image does not contain all the commodities of the shelf, or to obtain third guide information indicating the end of shooting if the product information result indicates that the collection result image contains all the commodities on the shelf.
  • in an optional implementation, the program 1010 is further configured to cause the processor 1002, when obtaining the second guide information indicating a new image acquisition path or the third guide information indicating the end of collection, to obtain the posture data of the image collection device if the edge detection result indicates that the shelf image includes a shelf edge, and to obtain, according to the posture data, second guide information indicating a new image collection path or third guide information indicating the end of collection.
  • the posture data includes acceleration information and/or angular velocity information of the image acquisition device in a spatial coordinate system.
  • in an optional implementation, the program 1010 is further configured to enable the processor 1002 to obtain, from the newly collected shelf image, a reserved area corresponding to the current image collection path and to display the reserved area in a set area of the display interface, where the reserved area is used to indicate the image-capture alignment position for the next image capture operation.
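The reserved-area idea can be sketched as keeping a fixed-ratio strip of the newly collected frame, to be redisplayed in the viewfinder so the user aligns the next shot against it. The 20% overlap ratio is an assumed value for illustration:

```python
def reserved_strip(image_width, overlap_ratio=0.2):
    """Compute the pixel column range of the right-hand strip of the newly
    collected image that serves as the reserved area: redisplayed at the
    left of the viewfinder, it marks the alignment position for the next
    image capture operation along the current acquisition path."""
    strip_width = int(image_width * overlap_ratio)
    return (image_width - strip_width, image_width)
```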
  • the program 1010 is further configured to enable the processor 1002 to obtain collection result images generated after stitching all collected shelf images.
  • in an optional implementation, the program 1010 is also used to enable the processor 1002 to perform product information recognition and/or product location recognition on the collection result image, obtain product information results and/or product location results, perform an analysis operation on the product information results and/or product location results, and generate an analysis result corresponding to the analysis operation.
  • the analysis result includes at least one of the following: product sales information, product display information, product quantity information, and product replenishment status information.
  • alternatively, the program 1010 may specifically be used to cause the processor 1002 to perform the following operations: collect image data of the shelf according to the acquired first guidance information, where the first guidance information is used to indicate the image acquisition path of the shelf; recognize the image data, and obtain the product information on the shelf and information on whether a shelf edge is contained; and, if it is determined that the image data contains shelf edge information, judge, based on the product information in all collected image data, whether all product information is included, and acquire, according to the judgment result, second guide information indicating a new image acquisition path or third guide information indicating the end of acquisition.
  • alternatively, the program 1010 can specifically be used to make the processor 1002 perform the following operations: display first collection prompt information for the shelf commodities, where the first collection prompt information is used to indicate the collection position when image collection is performed on the shelf commodities along the image collection path; acquire images collected according to the first collection prompt information and recognize the acquired images; and, if the recognition result indicates that the image includes a shelf edge, display second collection prompt information used to indicate a new image collection path and to instruct continued image collection.
  • the program 1010 can specifically be used to make the processor 1002 perform the following operations: collect image data of the shelf; process the image data to identify the product information on the shelf; determine the product information on the shelf according to the identified product information The commodity statistics of the shelf.
  • the program 1010 can specifically be used to make the processor 1002 perform the following operations: in response to a user-initiated shooting operation, call the image capture device of the client to capture the image data of the shelf; process the image data to identify the goods on the shelf Information; according to the product information obtained by the identification, the product statistical information of the shelf is determined.
  • the program 1010 can specifically be used to make the processor 1002 perform the following operations: in response to the replenishment operation initiated by the user, call the image acquisition device to capture the image data of the shelf; perform identification processing on the image data to identify the goods on the shelf Information; according to the product information on the shelf, determine the product to be replenished.
  • the program 1010 is further configured to cause the processor 1002 to generate and display replenishment prompt information for prompting replenishment of the commodity to be replenished according to the commodity to be replenished.
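A hedged sketch of the replenishment decision: per-SKU counts recognized from the stitched shelf image are compared against an assumed per-slot full capacity, and SKUs below a fill ratio are flagged for the replenishment prompt. All names and thresholds here are hypothetical, not part of the disclosure:

```python
def products_to_replenish(recognized_counts, full_capacity, ratio=0.5):
    """Determine commodities to be replenished by comparing counts recognized
    from the shelf image against each slot's configured full capacity;
    anything below the given fill ratio is flagged. SKUs missing from the
    recognition result count as zero on the shelf."""
    return sorted(
        sku for sku, capacity in full_capacity.items()
        if recognized_counts.get(sku, 0) < capacity * ratio
    )
```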
  • the program 1010 can specifically be used to cause the processor 1002 to perform the following operations: in the process of image acquisition of the target object, obtain the posture data of the image collection device; generate corresponding guidance information according to the posture data, and guide through the guidance information The user performs continuous image collection on the target object.
  • in an optional implementation, the posture data includes acceleration information and/or angular velocity information of the image acquisition device in the spatial coordinate system; the program 1010 is also used to cause the processor 1002, when generating corresponding guide information according to the posture data and guiding the user through the guide information to perform continued image collection of the target object, to determine the current posture of the image collection device according to the acceleration information and/or angular velocity information, and to generate, according to the current posture, guidance information that instructs the user to move in a direction matching the current posture to continue image collection.
  • in an optional implementation, the program 1010 is also used to enable the processor 1002, when generating according to the current posture the guidance information that instructs the user to move in a direction matching the current posture, to generate, if the current posture meets the preset path transition condition, guidance information that guides the user to switch the current image acquisition path to a new image acquisition path matching the current posture and to continue image acquisition along the new path; and, if the current posture does not meet the preset path transition condition, to generate sixth guide information that guides the user to continue image collection along the current image collection path.
  • in an optional implementation, the program 1010 is also used to enable the processor 1002 to acquire in real time the image of the target object collected by the image acquisition device and to perform edge detection on the collected image to obtain a detection result; the program 1010 is further used to enable the processor 1002, when generating corresponding guidance information according to the posture data, to generate the guidance information according to both the posture data and the detection result, guiding the user through the guidance information to perform continued image collection of the target object.
  • in an optional implementation, the program 1010 is further configured to cause the processor 1002, when generating corresponding guidance information according to the posture data and the detection result, to generate, if the current posture meets the preset path transition condition and the detection result indicates that the edge of the target object is detected, fifth guide information that guides the user to switch the current image collection path to a new image collection path matching the current posture and to continue image acquisition along the new path; and, if the current posture does not meet the preset path transition condition and the detection result indicates that the edge of the target object is not detected, to generate sixth guide information that guides the user to continue image acquisition along the current image acquisition path.
  • through the embodiments of the present invention, the edge of the target object is detected on the captured image in real time; when the edge is detected, the posture data of the image capture device is obtained and corresponding guidance information is generated according to the posture data, so that the guidance information guides the user to collect images in a standardized manner, thereby completing image collection of the multiple parts of the entire target object, avoiding omissions, and obtaining a complete image of the target object.
  • each component/step described in the embodiments of the present invention can be split into more components/steps, or two or more components/steps or partial operations of components/steps can be combined into new components/steps, to achieve the purpose of the embodiments of the present invention.
  • the above methods according to the embodiments of the present invention can be implemented in hardware or firmware, or implemented as software or computer code that can be stored in a recording medium (such as a CD-ROM, RAM, floppy disk, hard disk, or magneto-optical disk), or implemented as computer code that is downloaded over a network, originally stored in a remote recording medium or a non-transitory machine-readable medium, and then stored in a local recording medium, so that the methods described here can be processed by such software, stored on a recording medium, using a general-purpose computer, a dedicated processor, or programmable or dedicated hardware (such as an ASIC or FPGA).
  • a computer, processor, microprocessor controller, or programmable hardware includes storage components (for example, RAM, ROM, flash memory, and the like) that can store or receive software or computer code; when the software or computer code is accessed and executed by the computer, processor, or hardware, the image acquisition method described here is implemented. Furthermore, when a general-purpose computer accesses code for implementing the image capturing method shown here, the execution of the code converts the general-purpose computer into a dedicated computer for executing that method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments provide an image acquisition method and apparatus, an electronic device, and a computer storage medium. The method includes: acquiring a shelf image collected according to instructions of first guide information, where the shelf is used to carry commodities and the first guide information is used to indicate an image collection path of the shelf; obtaining an edge detection result of shelf edge detection performed on the shelf image; and, if the edge detection result indicates that the shelf image includes a shelf edge, acquiring second guide information indicating a new image collection path or acquiring third guide information indicating the end of collection. Through the embodiments, the collection quality of shelf images can be improved.

Description

Image acquisition method and apparatus, electronic device, and computer storage medium
This application claims priority to Chinese Patent Application No. 201910697213.7, filed on July 30, 2019 and entitled "Image acquisition method and apparatus, electronic device, and computer storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present invention relate to the field of computer technology, and in particular to an image acquisition method and apparatus, an electronic device, and a computer storage medium.
Background
In the prior art, in some image acquisition scenarios, because the target object to be captured has a large length and/or height, a single image capture operation cannot obtain a complete image of the target object, so the user needs to perform multiple capture operations to collect images of different parts of the target object and thereby achieve comprehensive image acquisition of the target object.
For example, when digitizing the shelf display information of an offline store, complete shelf images need to be captured first and then recognized by artificial intelligence technology, so that digital display information is generated from the recognition results. When shooting shelf images, because the aisles between shelves are narrow and a whole shelf section is generally long, it is difficult to capture the entire shelf in a single photo while obtaining clear product information; to solve this problem, the shelf needs to be shot multiple times.
During the multiple shots, some areas of the target object may be missed during collection, for example because of non-standardized user operation, so that a complete image of the target object and its information cannot be obtained.
发明内容
有鉴于此,本发明实施例提供一种图像采集方案,以解决上述部分或全部问题。
根据本发明实施例的第一方面,提供了一种货架图像采集方法,其包括:获取根据第一引导信息的指示采集的货架图像,其中,所述货架用于承载商品,所述第一引导信息用于对所述货架的图像采集路径进行指示;获取对所述货架图像进行货架边缘检测的边缘检测结果;若所述边缘检测结果指示所述货架图像中包括货架边缘,则获取指示新的图像采集路径的第二引导信息或获取指示结束采集的第三引导信息。
根据本发明实施例的第二方面，提供了一种商品信息处理方法，其包括：根据获取的第一引导信息采集货架的图像数据，其中，所述第一引导信息用于对所述货架的图像采集路径进行指示；对所述图像数据进行识别，并获得所述货架上的商品信息和是否包含货架边缘的信息；若确定所述图像数据中包含货架边缘的信息，则根据所述商品信息判断已采集的所有图像数据中是否包含所有商品信息，根据判断结果获得指示新的图像采集路径的第二引导信息或获取指示结束采集的第三引导信息。
根据本发明实施例的第三方面,提供了一种货架图像采集方法,其包括:展示对货架商品的第一采集提示信息,其中,所述第一采集提示信息用于指示沿图像采集路径对货架商品进行图像采集时的采集位置;获取根据所述第一采集提示信息进行图像采集的图像,并对获取的图像进行识别;若识别结果指示所述图像中包括货架边缘,则展示用于指示新的图像采集路径并指示继续进行图像采集的第二采集提示信息。
根据本发明实施例的第四方面,提供了一种客户端,其包括:展示界面,所述展示界面用于展示第一采集提示信息,所述第一采集提示信息用于指示沿图像采集路径对目标对象进行图像采集;所述展示界面还用于展示第二采集提示信息,所述第二采集提示信息为在获取的图像中包含所述目标对象的边缘时,指示沿新的图像采集路径对所述目标对象进行图像采集的信息。
根据本发明实施例的第五方面,提供了一种商品信息处理方法,其包括:采集货架的图像数据;对所述图像数据进行处理,识别得到所述货架上的商品信息;根据所述识别得到的商品信息,确定所述货架的商品统计信息。
根据本发明实施例的第六方面,提供了一种商品信息的处理方法,其包括:响应于用户发起的拍摄操作,调用客户端的图像采集装置拍摄货架的图像数据;对所述图像数据进行处理,识别得到所述货架上的商品信息;根据所述识别得到的商品信息,确定所述货架的商品统计信息。
根据本发明实施例的第七方面,提供了一种商品补货的处理方法,其包括:响应于用户发起的补货操作,调用图像采集装置拍摄货架的图像数据;对所述图像数据进行识别处理,识别得到所述货架上的商品信息;根据所述货架上的商品信息,确定待补货商品。
根据本发明实施例的第八方面，提供了一种图像采集方法，其包括：获得对采集的图像进行实时目标对象边缘检测的检测结果，其中，采集的所述图像中包含目标对象的部分图像信息；若所述检测结果指示在所述图像中检测到所述目标对象的边缘，则获取采集所述图像的图像采集设备的姿态数据；根据所述姿态数据生成对应的引导信息，通过所述引导信息引导用户对所述目标对象进行接续图像采集，以使用采集的多个图像形成所述目标对象的完整图像信息。
根据本发明实施例的第九方面,提供了一种图像采集方法,其包括:在对目标对象进行图像采集的过程中,获取图像采集设备的姿态数据;根据所述姿态数据生成对应的引导信息,通过所述引导信息引导用户对所述目标对象进行接续图像采集。
根据本发明实施例的第十方面,提供了一种图像采集装置,其包括:检测模块,用于获取对采集的图像进行实时目标对象边缘检测的检测结果,其中,采集的所述图像中包含目标对象的部分图像信息;第一获取模块,用于若所述检测结果指示在所述图像中检测到所述目标对象的边缘,则获取采集所述图像的图像采集设备的姿态数据;生成模块,用于根据所述姿态数据生成对应的引导信息,通过所述引导信息引导用户对所述目标对象进行接续图像采集,以使用采集的多个图像形成所述目标对象的完整图像信息。
根据本发明实施例的第十一方面,提供了一种电子设备,包括:处理器、存储器、通信接口和通信总线,所述处理器、所述存储器和所述通信接口通过所述通信总线完成相互间的通信;所述存储器用于存放至少一可执行指令,所述可执行指令使所述处理器执行如第一方面~第三方面和第五~第九方面中任一所述的方法对应的操作。
根据本发明实施例的第十二方面,提供了一种计算机存储介质,其上存储有计算机程序,该程序被处理器执行时实现如第一方面~第三方面和第五~第九方面中任一所述的方法。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明实施例中记载的一些实施例,对于本领域普通技术人员来讲,还可以根据这些附图获得其他的附图。
图1为根据本发明实施例一的一种图像采集方法的步骤流程图;
图2为根据本发明实施例二的一种图像采集方法的步骤流程图;
图3为根据本发明实施例三的一种图像采集方法的步骤流程图;
图4为根据本发明实施例四的一种图像采集方法的步骤流程图;
图5a为本发明的使用场景一的步骤流程图;
图5b为本发明的使用场景二的步骤流程图;
图5c为本发明的使用场景二的分割路径的示意图;
图5d为本发明的使用场景二的拍摄界面的示意图;
图5e为本发明的使用场景三的步骤流程图;
图5f为本发明的使用场景四的步骤流程图;
图5g为本发明的使用场景五的客户端的展示界面的示意图;
图5h为本发明的使用场景六的步骤流程图;
图5i为本发明的使用场景七的步骤流程图;
图5j为本发明的使用场景八的步骤流程图;
图5k为本发明的使用场景九的用户、图像采集设备和服务器的信息交互图;
图6为根据本发明实施例五的一种图像采集方法的步骤流程图;
图7为根据本发明实施例六的一种图像采集方法的步骤流程图;
图8为根据本发明实施例七的一种图像采集装置的结构框图;
图9为根据本发明实施例八的一种图像采集装置的结构框图;
图10为根据本发明实施例九的一种电子设备的结构示意图。
具体实施方式
为了使本领域的人员更好地理解本发明实施例中的技术方案,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅是本发明实施例一部分实施例,而不是全部的实施例。基于本发明实施例中的实施例,本领域普通技术人员所获得的所有其他实施例,都应当属于本发明实施例保护的范围。
下面结合本发明实施例附图进一步说明本发明实施例具体实现。
实施例一
参照图1,示出了根据本发明实施例一的一种图像采集方法的步骤流程图。
本实施例的图像采集方法包括以下步骤:
步骤S102:获得对采集的图像进行实时目标对象边缘检测的检测结果。
其中,采集的所述图像中包含目标对象的部分图像信息,目标对象边缘检测用于检测采集的图像中是否包含目标对象的边缘。
对目标对象边缘的检测可以在客户端进行，也可以在服务端进行检测后将检测结果发送至客户端。其可以采用任意适当的模型或算法或其它方式实现，例如，使用训练完成的能够对目标对象进行边缘检测的神经网络模型如卷积神经网络模型（Convolutional Neural Network，CNN），对采集的图像进行边缘检测。
又例如,使用特征提取算法对采集的图像进行特征提取,并根据提取出的特征确定图像中是否包含目标对象的边缘,进而生成检测结果。
根据检测结果可以判断用户是否采集到目标对象边缘的图像,进而为后续生成适当的引导信息提供参考,以引导用户,避免用户采集图像时出现错误操作,确保可以采集到完整图像信息。
例如,检测结果指示在图像中检测到目标对象的边缘,则执行步骤S104;反之,则可以直接生成用于指示用户继续沿当前移动方向移动并拍摄的引导信息。
步骤S104:若所述检测结果指示在所述图像中检测到所述目标对象的边缘,则获取采集所述图像的图像采集设备的姿态数据。
图像采集设备的姿态数据用于表征其当前所处的状态,图像采集设备的姿态数据包括但不限于空间坐标系中的加速度信息和/或角速度信息。例如,通过所述加速度信息和/或角速度信息确定图像采集设备当前处于45度角上仰状态,等等。
根据该姿态数据可以确定出用户是否有对目标对象的不同位置进行接续拍摄的意图,则生成的引导信息与该意图相匹配,以引导用户进行准确的图像采集,确保能够采集到目标对象的完整图像信息。
步骤S106:根据所述姿态数据生成对应的引导信息,通过所述引导信息引导用户对所述目标对象进行接续图像采集,以使用采集的多个图像形成所述目标对象的完整图像信息。
在一具体实现方式中,图像采集设备中可以设置有姿态数据与引导信息或引导关键词的对应关系,据此,可以根据姿态数据确定并生成相应的引导信息,或者,可以根据姿态数据确定对应的引导关键词,进而,根据引导关键词生成相应的引导信息。例如,若姿态数据对应引导关键词“上移”,则可生成诸如“请上移一格进行拍摄”等引导信息。通过引导信息,可以有效引导用户进行接续图像采集,以便后续能够根据采集的多个图像形成目标对象的完整图像信息。
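上述“姿态数据（或引导关键词）与引导信息的对应关系”可以用一个简单的查表逻辑来示意。以下为示意性草稿（并非本发明的既定实现），其中关键词与提示文案均为假设值：

```python
# 示意：根据引导关键词生成引导信息（关键词与文案均为假设值）
GUIDE_TEXT = {
    "上移": "请上移一格进行拍摄",
    "下移": "请下移一格进行拍摄",
    "继续": "请沿当前方向继续拍摄",
    "结束": "拍摄完成，可以结束采集",
}

def make_guide_info(keyword: str) -> str:
    """根据姿态分析得到的引导关键词，查表返回对应的引导文案；
    未知关键词退回到默认提示。"""
    return GUIDE_TEXT.get(keyword, "请继续拍摄")
```

实际实现中，该映射表既可以预置在图像采集设备本地，也可以由服务端下发并更新。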
需要说明的是，在获取完整图像信息的过程中，步骤S102~步骤S106可以重复执行多次，直至根据步骤S104中获取的图像采集设备的姿态数据确定用户没有接续采集的意图时，可以生成指示用户结束图像采集的引导信息。在结束采集后，可以使用采集的多个图像形成目标对象的完整图像信息。
需要说明的是,对于目标对象是货架的情况,本实施例中的图像采集方法尤其适用于货架上货物摆放不规范的使用场景。例如,小的零售店中的货架,这种货架都存在着货架摆放紧密、商品凌乱不规范的问题,本实施例的图像采集方法能够克服这些问题,实现对货架的完整图像的采集,且能够获得清晰的货品信息,以供后续可以通过识别完整图像,确定货架上的货品。
对于大商场场景或大规模拍摄场景可能会采用无人机,但无人机拍摄的技术手段并不能适用于本申请的使用场景,其技术手段也难以转用到本申请的使用场景中。同样的,电子价签场景中的技术手段也难以在本使用场景中实施。
通过本实施例,实时对采集的图像进行目标对象边缘检测,在采集的图像中包含目标对象的边缘时,获取图像采集设备的姿态数据,进而根据姿态数据生成对应的引导信息,以通过该引导信息引导用户规范地进行图像采集,达到最终完成对整个目标对象包含的多个部分的图像采集,避免遗漏,获得目标对象的完整图像的目的。
本实施例的图像采集方法可以由任意适当的具有数据处理能力的电子设备执行,包括但不限于:服务器、移动终端(如平板电脑、手机等)和PC机等。
实施例二
参照图2,示出了根据本发明实施例二的一种图像采集方法的步骤流程图。
本实施例的图像采集方法包括前述的步骤S102~步骤S106。
其中,在步骤S102之前,所述方法还包括:
步骤S100:获取动态下发到所述图像采集设备上的、用于进行所述目标对象边缘检测的轻量级神经网络模型。
需要说明的是,本步骤为可选步骤。若执行该步骤,则其可以在步骤S102之前的任意适当的时机执行。
在本实施例中,为了保证能够及时检测到采集的图像中是否包含目标对象的边缘,以确保生成的引导信息的准确性,图像采集设备本地具有动态下发的轻量级神经网络模型,利用轻量级神经网络模型在图像采集设备本地就可以对图像进行实时检测,无需传至服务端,由此大大提升了检测的速度和效率。
轻量级神经网络模型也称微型神经网络模型，其是指需要参数数量较少和计算代价较小的神经网络模型。由于其计算开销小，因此可以部署在计算资源较为有限的图像采集设备上。
具体地,该轻量级神经网络模型可以是预先训练完成的轻量级卷积神经网络模型。该卷积神经网络模型具有输入层、隐层和输出层。在对卷积神经网络模型进行训练时,使用预先收集的大量的包含目标对象(如货架、大型机械装备、大型容器等)的图像,对这些图像进行标注,主要标注目标对象的边缘,然后利用这些标注的图像对卷积神经网络模型进行训练。使训练完成的卷积神经网络模型可以正确识别到图像中是否包含货架的边缘。之后可以将训练完成的卷积神经网络模型动态下发到图像采集设备。
采用轻量级神经网络模型的形式,在保证能够准确地对图像进行目标对象边缘检测的前提下,降低了对计算能力的需求和对存储空间的需求,从而使得本方案能够适应更多的图像采集设备,尤其是小型图像采集设备,如手机、平板电脑等移动终端设备。
在本实施例中,在执行步骤S100的情况下,所述步骤S102可以实现为:使用所述轻量级神经网络模型对采集的图像进行实时的目标对象边缘检测,获得检测结果。通过所述检测结果指示当前图像中是否包含目标对象的边缘。
由此,可以实现在图像采集设备本地快速、高效、准确地进行目标对象边缘检测,进而保证生成引导信息的及时性。
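在图像采集设备本地进行边缘检测的接口形态可以用如下草稿示意。此处以简单的灰度梯度阈值代替轻量级神经网络模型的推理，仅用于说明“输入图像、输出是否包含边缘”的流程，函数名与阈值均为假设：

```python
def detect_edge(gray, threshold=100):
    """对二维灰度图（列表的列表）做相邻像素差分，
    若存在超过阈值的水平梯度则认为检测到疑似目标对象边缘。
    实际方案中此处应替换为轻量级神经网络模型的推理调用。"""
    for row in gray:
        for left, right in zip(row, row[1:]):
            if abs(right - left) > threshold:
                return True
    return False
```

该草稿只体现“本地推理、即时返回检测结果”的接口约定，不代表本发明的检测精度或实现方式。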
通过本实施例,实时对采集的图像进行目标对象边缘检测,在采集的图像中包含目标对象的边缘时,获取图像采集设备的姿态数据,进而根据姿态数据生成对应的引导信息,以通过该引导信息引导用户规范地进行图像采集,达到最终完成对整个目标对象包含的多个部分的图像采集,避免遗漏,获得目标对象的完整图像的目的。
此外，通过将轻量级神经网络模型动态下发到图像采集设备上，使其能够在本地对采集的图像进行实时的目标对象边缘检测，在确保检测准确性的前提下，提升了检测的及时性，进而保证了后续引导信息生成的及时性。相较于以往需要将采集的图像发送到后台服务端进行检测再返回检测结果的方式，在图像采集设备本地进行检测，无需受到网络传输速度的限制，可靠性更好，速度和效率也更高。
本实施例的图像采集方法可以由任意适当的具有数据处理能力的电子设备执行,包括但不限于:服务器、移动终端(如平板电脑、手机等)和PC机等。
实施例三
参照图3,示出了根据本发明实施例三的一种图像采集方法的步骤流程图。
本实施例的图像采集方法包括前述的步骤S102~步骤S106。
其中,所述方法可以包括或不包括步骤S100,在包括步骤S100时,步骤S102可以采用实施例二中的实现方式实现。
在本实施例中,所述步骤S104即所述获取采集所述图像的图像采集设备的姿态数据可以实现为:获取所述图像采集设备在空间坐标系中的加速度信息和/或角速度信息。
例如,空间坐标系包括X轴、Y轴和Z轴,在这三个轴上的加速度信息可以通过配置在图像采集设备中的加速度计进行获取。在这三个轴上的角速度信息可以通过配置在图像采集设备中的陀螺仪进行获取。
当然,针对不同结构的图像采集设备,可以采用不同的方式获取到空间坐标系中的加速度信息和/或角速度信息,本实施例对此不作限定。
根据图像采集设备在空间坐标系中的加速度信息和/或角速度信息，可以确定出图像采集设备是否有向上或向下的倾斜，进而可以确定用户是否有进行换行拍摄的意图。
一种情况中,如果图像采集设备在采集到目标对象的边缘后的一段时间内保持某个倾角或者快速趋于收起图像采集设备的姿态,则表示用户采集了目标对象的完整图像信息,无进行换行拍摄意图,可以生成引导用户结束图像采集的引导信息。
另一种情况中，如果图像采集设备在采集到目标对象的边缘后的一段时间内有向上或向下的倾角变化，则表示用户有换行拍摄的意图，因此可以执行步骤S106，以生成引导用户进行接续图像采集的引导信息。
可选地,在所述步骤S104采用前述的实现方式时,步骤S106包括以下子步骤:
子步骤S1061:根据所述加速度信息和/或角速度信息,确定所述图像采集设备的当前姿态。
根据图像采集设备在X轴、Y轴和Z轴上的加速度信息，可以确定用户是否移动和/或旋转了图像采集设备。根据其在X轴、Y轴和Z轴的角速度信息，可以确定其偏离水平或竖直状态的倾角。根据加速度信息和/或角速度信息就可以确定图像采集设备的当前姿态。
例如,陀螺仪中X轴水平指向右方,Y轴垂直指向前方,Z轴指向图像采集设备的屏幕的正上方。
当根据角速度信息确定图像采集设备有在X轴上的角速度时,可以确定其为倾斜状态,根据角速度的值可以确定当前姿态为向上倾斜或向下倾斜。同理,当根据加速度信息确定在Z轴上有加速度时,可以确定图像采集设备有向上移动的趋势。
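根据角速度信息判断当前姿态的逻辑可示意如下（此处假设X轴角速度为正对应向上倾斜、为负对应向下倾斜，死区阈值为假设值，实际轴向与符号约定取决于具体设备）：

```python
def classify_pose(wx, dead_zone=0.1):
    """根据X轴角速度 wx（单位 rad/s，假设值）粗略判断俯仰姿态：
    wx 超出正死区视作向上倾斜，低于负死区视作向下倾斜，
    绝对值落在死区内则视作保持当前姿态（过滤轻微抖动）。"""
    if wx > dead_zone:
        return "向上倾斜"
    if wx < -dead_zone:
        return "向下倾斜"
    return "保持"
```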
子步骤S1062:根据当前姿态生成指示用户向与所述当前姿态所匹配方向移动以进行接续图像采集的引导信息。
图像采集设备中可以设置有当前姿态与引导信息或引导关键词的对应关系,在确定当前姿态后,可以根据当前姿态和设置的对应关系确定匹配的引导信息或引导关键词,进而生成与所述当前姿态所匹配方向移动以进行接续图像采集的引导信息。
例如,当前姿态为向上倾斜,其匹配的引导关键词为“上移”,则可以生成指示用户向上移动并进行接续图像采集的引导信息。
又例如,当前姿态为向下倾斜,其匹配的引导关键词为“下移”,则可以生成指示用户向下移动并进行接续图像采集的引导信息。
当然,也可以直接匹配到诸如“请上移拍摄”或者“请下移拍摄”的引导信息。
由上,可以生成准确的引导信息,以通过引导信息指示用户接续拍摄,从而保证能够采集到目标对象的完整图像信息。
通过本实施例，实时对采集的图像进行目标对象边缘检测，在采集的图像中包含目标对象的边缘时，获取图像采集设备的姿态数据，进而根据姿态数据生成对应的引导信息，以通过该引导信息引导用户规范地进行图像采集，达到最终完成对整个目标对象包含的多个部分的图像采集，避免遗漏，获得目标对象的完整图像的目的。
本实施例的图像采集方法可以由任意适当的具有数据处理能力的电子设备执行,包括但不限于:服务器、移动终端(如平板电脑、手机等)和PC机等。
实施例四
参照图4,示出了根据本发明实施例四的一种图像采集方法的步骤流程图。
本实施例的图像采集方法包括前述的步骤S102~步骤S106。
其中,所述方法可以包括或不包括步骤S100,在包括步骤S100时,步骤S102可以采用实施例二中的实现方式实现。步骤S104可以采用实施例三的实现方式或其他实现方式。在步骤S104采用实施例三的实现方式时,步骤S106可以采用实施例三中的实现方式。
在本实施例中,若根据步骤S104中获取的图像采集设备的姿态数据确定用户没有拍摄目标对象的其他部分的意图,则可以使用采集的多个图像形成所述目标对象的完整图像信息。
在一种可行方式中,使用采集的多个图像形成所述目标对象的完整图像信息包括:对采集的多个图像进行拼接,以获得包含所述目标对象的完整图像信息的完整图像。
由于在目标对象的图像采集过程中采用了分次采集的方式，因此，通过对采集的多个图像进行拼接，就可以获得包含所述目标对象的完整图像信息的完整图像，而且，拼接出的完整图像可以使用户能够更加直观地观察目标对象。
具体地,所述对采集的多个图像进行拼接,以获得包含所述目标对象的完整图像信息的完整图像,包括以下步骤:
步骤S108:从采集的多个图像中,确定具有图像重合关系的多组图像。
在本实施例中,每组图像中包括两张图像。图像间具有重合关系表示这两张图像在空间位置上是相邻的,因此,可以根据这种重合关系推断出图像间的相对位置关系,进而依据相对位置关系进行图像拼接。采用这种拼接方式既可以保证拼接的准确性,又可以实现快速拼接。
一种确定具有图像重合关系的多组图像的方式为:对采集的多个图像中的每个图像进行特征提取,获得每个图像对应的特征点;对任意两张图像,根据两个所述图像的特征点进行匹配,并基于匹配结果确定所述具有图像重合关系的多组图像。
对图像进行特征提取可以采用HOG(方向梯度直方图,Histogram of Oriented Gradient)特征提取算法、LBP(Local Binary Pattern,局部二值模式)特征提取算法、或者Haar-like特征提取算法等任意适当的算法。
在对任意两张图像根据两个所述图像的特征点进行匹配时,可以通过比较两张图像的相似度是否满足一定阈值(具体例如,计算特征点间的距离确定相似度)的方式确定匹配结果。如两张图像的特征点间的距离小于预设值,则匹配结果指示两张图像具有重合关系;反之,若两张图像的特征点间的距离大于或等于设定值,则匹配结果指示两张图像不具有重合关系。
采用这种方式可以准确地判断具有重合关系的图像,从而保证拼接的准确性。
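根据特征点间距离判断两张图像是否具有重合关系的过程，可示意为如下草稿。此处以特征向量间的欧氏距离与固定阈值比较代替真实的特征匹配流程，特征维度与阈值均为假设：

```python
import math

def has_overlap(feats_a, feats_b, dist_threshold=10.0):
    """若两张图像存在一对特征点，其特征向量的欧氏距离小于阈值，
    则认为这两张图像具有重合关系（即属于同一组可拼接图像）。"""
    for fa in feats_a:
        for fb in feats_b:
            if math.dist(fa, fb) < dist_threshold:
                return True
    return False
```

实际实现中通常会要求多对特征点同时匹配并做几何一致性校验，此处仅保留最核心的“距离小于阈值即视为匹配”的判定。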
步骤S110:根据所述图像重合关系对采集的多个图像进行拼接,并根据拼接结果获得包含所述目标对象的完整图像信息的完整图像。
在一具体实现中,根据图像间的重合关系,确定各个图像的相邻图像,并根据两个相邻图像中重合部分的位置确定相对位置关系,进而对多个图像进行拼接,并获得完整图像。该完整图像中就包含了目标对象的完整图像信息。
如,根据重合关系确定图像A与图像B具有重合部分,且重合部分位于图像A的左侧和图像B的右侧,则可以将图像A拼接在图像B的右侧。
又例如,图像A的上侧与图像C的下侧有重合部分,则将图像C拼接到图像A的上方。
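上述“根据重合部分所处方位确定拼接位置”的规则，可用如下草稿示意（方位命名与坐标约定均为假设，且假设相邻两图尺寸相同）：

```python
def paste_offset(base_w, base_h, overlap_side, overlap_px):
    """返回相邻图像相对基准图像左上角的粘贴偏移 (dx, dy)。
    overlap_side 表示重合部分位于基准图像的哪一侧，
    overlap_px 为重合区域的宽或高（像素）。"""
    if overlap_side == "right":    # 新图拼接在基准图右侧
        return base_w - overlap_px, 0
    if overlap_side == "left":     # 新图拼接在基准图左侧
        return -(base_w - overlap_px), 0
    if overlap_side == "bottom":   # 新图拼接在基准图下方
        return 0, base_h - overlap_px
    if overlap_side == "top":      # 新图拼接在基准图上方
        return 0, -(base_h - overlap_px)
    raise ValueError(overlap_side)
```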
后续用户可以根据需要对完整图像进行分析和/或处理,进而获得需要的信息。例如,目标对象为货架,通过分析货架的完整图像可以确定某类产品是否被摆放在便于拿取的位置等。
需要说明的是，对图像的拼接也可以由服务端完成，例如，图像采集设备将采集到的多个图像上传至后台服务端，由服务端进行相应的识别和拼接操作，然后再将拼接完成的完整图像发送回图像采集设备，以减轻图像采集设备的数据处理负担。
通过本实施例,实时对采集的图像进行目标对象边缘检测,在采集的图像中包含目标对象的边缘时,获取图像采集设备的姿态数据,进而根据姿态数据生成对应的引导信息,以通过该引导信息引导用户规范地进行图像采集,达到最终完成对整个目标对象包含的多个部分的图像采集,避免遗漏,获得目标对象的完整图像的目的。
此外，通过将轻量级神经网络模型动态下发到图像采集设备上，使其能够在本地对采集的图像进行实时的目标对象边缘检测，在确保检测准确性的前提下，提升了检测的及时性，进而保证了后续引导信息生成的及时性和准确性。相较于以往需要将采集的图像发送到后端服务器由服务器进行检测再返回检测结果的方式，在图像采集设备本地进行检测，无需受到网络传输速度的限制，可靠性更好。
此外,通过将采集的多个图像拼接形成包含完整图像信息的完整图像,可以使用户能够更加直接地观察目标对象,而且可以根据需要对完整图像进行分析和处理以获得需要的分析结果。
本实施例的图像采集方法可以由任意适当的具有数据处理能力的电子设备执行,包括但不限于:服务器、移动终端(如平板电脑、手机等)和PC机等。
使用场景一
参照图5a,其示出了本使用场景一中的图像采集方法的步骤流程图,图中Step即为步骤的含义。
在本使用场景中,以图像采集设备为手机、目标对象为货架为例,对图像采集方法进行说明。具体地,该图像采集方法包括以下步骤:
步骤A1:用户采用图像拍摄的方式开始拍摄货架。
此种场景下，用户采用每次拍摄一张图像的方式对货架进行拍摄。因一次仅能拍摄货架的一部分，因此，需要多次拍摄。并且，对于某一次拍摄的图像来说，其与前次拍摄的图像之间、后次拍摄的图像之间、其对应的货架上方位置的图像之间、其对应的货架下方位置的图像之间，均具有一定的图像重合度。可选地，所述重合度可以设定为大于或等于20%，以保证后续图像的有效识别和拼接。
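相邻两次拍摄区域的重合度可按矩形交集面积的占比估算。以下草稿假设拍摄区域以 (x, y, w, h) 矩形表示，20% 为上文建议的重合度下限：

```python
def overlap_ratio(r1, r2):
    """计算两个矩形 (x, y, w, h) 的交集面积占第一个矩形面积的比例。"""
    x1, y1, w1, h1 = r1
    x2, y2, w2, h2 = r2
    ix = max(0, min(x1 + w1, x2 + w2) - max(x1, x2))  # 交集宽度
    iy = max(0, min(y1 + h1, y2 + h2) - max(y1, y2))  # 交集高度
    return (ix * iy) / (w1 * h1)

def overlap_enough(r1, r2, minimum=0.2):
    """重合度达到下限（默认20%）才满足有效识别和拼接的要求。"""
    return overlap_ratio(r1, r2) >= minimum
```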
步骤B1:在拍摄货架过程中,实时对拍摄的图像进行货架边缘检测,如果检测到图像中有货架边缘,则进入步骤C1;如果未检测到图像中有货架边缘,则直接生成提示用户继续拍摄的引导信息,并重复执行步骤B1。
步骤C1:通过手机的加速度计和陀螺仪,分别计算手机在空间坐标系(即X轴、Y轴、Z轴)上的加速度和角速度,并根据计算结果判断手机是否有向上或者向下的角度,从而分析用户是否有继续拍摄货架其他部分的意图。
如果没有继续拍摄的意图,则执行步骤D1,如果有继续拍摄意图,则执行步骤E1。
步骤D1:如果手机一直保持某个角度且几乎没有方向的变化,说明用户拍摄完了整节货架,其没有继续拍摄意图,因此生成指示拍摄结束的引导信息,并可以将该引导信息显示在手机屏幕上,以引导用户。在步骤D1执行完成后,执行步骤F1。
步骤E1:如果手机突然有向上拍或者向下拍的方向变化,则说明用户有换行拍货架上面或者货架下面部分的意图,因此,生成指示用户进行向上或向下移动并继续拍摄的引导信息,并可以将该引导信息显示在手机屏幕上引导用户。
可选地,在生成引导信息后,可以再次获取图像采集设备的姿态数据,并根据姿态数据确定用户是否依照引导信息的指示进行了操作,若用户未依照引导信息的指示操作,则可以生成警示信息,以提示用户;若用户依照引导信息的指示进行了操作,则可以不动作。
在检测到新采集的图像后,返回步骤B1继续执行。
步骤F1:结束拍摄,对用户拍摄的多个货架图像进行拼接,从而生成包含一整节货架的完整图像。
通过本过程，对拍摄的图像进行货架边缘检测，根据检测结果分析用户是否拍摄到了货架边缘，若拍摄到货架边缘则通过手机上的加速度传感器和陀螺仪分析用户是否有拍摄货架其余部分的意图，从而根据分析出的意图更好地引导用户拍摄完整的一节货架，确保了拍摄质量，保证能够获得货架的完整图像信息。
使用场景二
参照图5b,其示出了本使用场景二中的货架图像采集方法的步骤流程图。
在本使用场景中,图像采集设备可以为手机、pad、相机等。通过本方法可以获得货架的完整图像,进而分析商品信息,以根据商品信息提示进行补货或调整商品摆放位置等。
具体地,以图像采集设备为手机,目标对象为货架为例,该货架图像采集过程包括:
步骤A2:获取根据第一引导信息的指示采集的货架图像。
其中,所述货架用于承载商品。货架可以是在商场、超市等场所中用于陈列商品的货架,也可以是在仓库中用于放置商品的货架。货架图像中包含货架的部分信息。
所述第一引导信息用于对所述货架的图像采集路径进行指示。所述图像采集路径为根据所述货架结构信息对所述货架进行分割生成的路径,所述货架结构信息根据所述货架的整体平面图、立体图和预设的货架虚拟模型中的至少一个确定。
其中,货架结构信息可以是用户提前拍摄的货架的整体图像,由于提前拍摄的整体图像仅用于获得货架结构信息,因此可以拍摄一张或几张不同视角的货架的整体图像,以便于服务端或者客户端从整体图像中分析出货架的货架结构信息,并通过货架结构信息生成图像采集路径。
当然,也可以采用其他方式获取货架结构信息,如预先建立不同规格货架的虚拟模型,用户可以预先选择需要采集图像的货架的虚拟模型,并根据该虚拟模型生成图像采集路径。
对货架结构的分析的具体实现可以由本领域技术人员根据实际需求采用任意适当方式或算法实现,包括但不限于连通域分析、使用神经网络模型分析等。第一引导信息可以是在本地生成的,也可以是由服务端生成后客户端从服务端获取的。
例如，图像采集路径示意图如图5c所示，图中001指示的虚线线条为分割所述货架的分割路径，根据货架结构信息生成分割路径可以由服务端实现，也可以在图像采集设备本地实现。在生成图像采集路径时，针对同一货架可以根据其结构生成不同的分割路径，具体的分割策略可以预置在服务端或客户端中，如采用直线分割、S型分割、U型分割、矩形分割、螺旋分割等，也可以通过诸如训练完成的神经网络模型根据货架结构信息输出生成分割路径。
图中002处指示线条为分割路径中与第一引导信息对应的图像采集路径。通常图像采集路径可以是分割路径的部分或全部。图中003处指示图像采集设备的一次图像采集的拍摄区域,该区域覆盖至少部分图像采集路径,相邻两次采集对应的拍摄区域具有部分重合。
基于第一引导信息,用户可以根据其引导指示沿相应的路径对货架的相应部分进行拍摄,并获得相应的货架图像。
步骤B2:获取对所述货架图像进行货架边缘检测的边缘检测结果。
在获得当前拍摄的货架图像后,可对该货架图像进行货架边缘检测。该检测可以在图像采集设备本地执行,直接获得边缘检测结果;或者将货架图像发送到服务端,由服务端进行货架边缘检测,并将边缘检测结果发送至图像采集设备。
若在图像采集设备本地进行货架边缘检测,则可以采用训练的用于进行货架边缘检测的轻量型神经网络模型进行检测,以降低计算量,确保图像采集设备的计算能力能够满足检测需求。
若在服务端进行货架边缘检测,则可以采用训练的用于进行货架边缘检测的深度较深的神经网络模型进行检测,以提升检测精度。
若获得的边缘检测结果指示所述货架图像中包括货架边缘,则执行步骤C2;反之,则生成指示沿着所述图像采集路径移动一定距离继续拍摄的第四引导信息。
步骤C2:若所述边缘检测结果指示所述货架图像中包括货架边缘,则获取指示新的图像采集路径的第二引导信息或获取指示结束采集的第三引导信息。
其中,新的图像采集路径可以是分割路径的一部分,其可以根据实际检测结果和之前的路径分割结果确定,例如,新的图像采集路径为图5c中处于下部的虚线指示的路径。在后续步骤中用户可以将图像采集设备移动到取景位置与新的图像采集路径对应的位置(如图5c中虚线拍摄区域),进行继续拍摄。
一种可行的步骤C2的实现方式包括:若所述边缘检测结果指示所述货架图像中包括货架边缘,则对已采集的所有货架图像生成的采集结果图像进行商品信息识别,并获取商品信息结果;根据所述商品信息结果,获取指示新的图像采集路径的第二引导信息或获取指示结束采集的第三引导信息。
其中,已采集的所有货架图像生成的采集结果图像可以在图像采集设备本地生成,也可以是在每次采集到一张货架图像后,将货架图像发送至服务端,由服务端生成采集结果图像并发送至图像采集设备。
在图像采集设备本地生成采集结果图像的过程可以是:获得根据对采集的所有货架图像进行拼接后生成的采集结果图像。
例如,图像采集设备根据货架图像中的重合部分,将两张图像的重合部分叠加,形成拼接后的采集结果图像。如,图像1的右侧与图像2的左侧具有重合部分,则将图像1和图像2的重合部分叠加,形成采集结果图像。
为了便于用户查看,在显示界面中可以配置预览框(如图5d中005处所示),在预览框中展示拼接出的采集结果图像。
所述根据所述商品信息结果,获取指示新的图像采集路径的第二引导信息或获取指示结束采集的第三引导信息,包括:若所述商品信息结果指示所述采集结果图像中未包含所述货架的全部商品,则获取指示切换所述图像采集路径中的拍摄行的第二引导信息;或者,若所述商品信息结果指示所述采集结果图像中包含所述货架的全部商品,则获取指示结束拍摄的第三引导信息。
这种方式可以生成准确的引导信息,以引导用户对货架图像进行多次采集,确保每次采集的货架图像中能够包括清晰的商品信息,可供识别,从而解决现有的货架尺寸过长,通过一次采集获取货架整体图像会使得商品信息过小,不能识别的问题。
另一种可行的步骤C2的实现方式包括:若所述边缘检测结果指示所述货架图像中包括货架边缘,则获取图像采集设备的姿态数据;根据所述姿态数据获取指示新的图像采集路径的第二引导信息或获取指示结束采集的第三引导信息。
其中,所述姿态数据包括所述图像采集设备在空间坐标系中的加速度信息和/或角速度信息。根据姿态数据可以确定用户是否有继续进行图像采集的意图,进而在有继续进行图像采集的意图时可以确定意图拍摄的方向,进而生成对应的第二引导信息;在没有继续进行图像采集的意图时可以生成第三引导信息。
这种方式也可以生成准确的引导信息,以确保能够获取到包括可供识别的商品信息的货架图像,从而确保在后续步骤中可以根据识别出的商品信息,进行对应处理。
可选地,在图像采集过程中,在首次采集之后的其他次采集之前,还可以执行:从最新采集的货架图像中,获取与当前图像采集路径对应的保留区域并在显示界面的设定区域展示所述保留区域,以通过所述保留区域指示下一图像采集操作的图像采集对齐位置。
通过在设定区域展示保留区域,可以方便用户在进行下一次图像采集时将保留区域与货架对应位置对齐,从而保证用户相邻两次采集的货架图像重合部分足够进行拼接,又能够防止重合部分过多导致采集货架的完整信息需要的采集次数过多。
保留区域如图5d中006处所示,最新采集的货架图像中的保留区域、以及显示界面中的设定区域均根据图像采集路径确定。
例如,针对最新采集的货架图像,其后一张货架图像是采用沿图像采集路径向右移动一定距离进行采集的方式获取的,则保留区域是最新采集的货架图像的最右侧的部分区域,该部分区域的面积可以是总面积的1/6至1/5。相应的,设定区域是展示界面最左侧的部分区域。
又例如,针对最新采集的货架图像,其后一张货架图像是采用向下移动到新的图像采集路径进行采集的方式获取的,则保留区域是最新采集的货架图像的最下方的部分区域,该部分区域的面积可以是总面积的1/6至1/5。相应的,设定区域是展示界面最上方的部分区域。
这样就可以使用户在进行下一次图像采集操作时将显示界面中的保留区域与实际货架中对应的区域对齐，使采集的货架图像与上一张货架图像有足够的重合部分，可以确定两张货架图像之间的位置关系，又不会造成重合部分过多，产生大量无用数据。
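保留区域的取法可示意如下：按下一步移动方向，截取最新图像靠后一侧约1/6至1/5的条带（以下草稿取1/6，方向命名与坐标约定均为假设）：

```python
def keep_region(w, h, move_dir, frac=1 / 6):
    """返回保留区域矩形 (x, y, w, h)，坐标原点假设在图像左上角。
    move_dir 为下一次采集相对当前位置的移动方向。"""
    strip_w = int(w * frac)
    strip_h = int(h * frac)
    if move_dir == "right":   # 向右接续拍摄时保留最右侧条带
        return (w - strip_w, 0, strip_w, h)
    if move_dir == "down":    # 换行向下拍摄时保留最下方条带
        return (0, h - strip_h, w, strip_h)
    raise ValueError(move_dir)
```

展示时，该矩形对应的图像内容被放到显示界面相对一侧的设定区域，供用户对齐取景。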
可选地,在获取采集结果图像后的任意适当时间还可以执行识别步骤,即:对所述采集结果图像进行商品信息识别和/或商品位置识别,并获得商品信息结果和/或商品位置结果。
例如,可以在每次图像采集操作之后,根据已采集的货架图像生成采集结果图像后均执行该识别步骤,也可以在获取到包含货架完整信息的采集结果图像后,执行该识别步骤。
优选地,为了提升效率,在获取到包含货架完整信息的采集结果图像后,执行该识别步骤。
其中,商品信息识别可以在服务端执行或者在本地执行。
在服务端执行时,服务端获得图像采集设备发送的采集结果图像,或者获得图像采集设备发送的货架图像并拼接形成采集结果图像,进而使用训练的能够进行商品信息识别的神经网络模型进行识别,获得商品信息结果。
在本地执行时,图像采集设备可以直接根据货架图像拼接出采集结果图像或者从服务端获得采集结果图像,并使用训练的能够进行商品信息识别的神经网络模型进行识别,获得商品信息结果。
类似地,商品位置识别可以在服务端执行或者在本地执行。识别时,可以使用训练的能够进行商品位置识别的神经网络模型进行识别,获得商品位置结果。
可选地,在对所述采集结果图像进行商品信息识别和/或商品位置识别,并获得商品信息结果和/或商品位置结果之后,还可以执行:对所述商品信息结果和/或所述商品位置结果进行分析操作,并生成与所述分析操作对应的分析结果。
所述分析结果包括下列至少之一:商品售卖信息、商品陈列信息、商品数量信息、商品补货状态信息。
为了获得不同的分析结果,可以进行不同的分析操作。
例如,分析结果包括商品售卖信息,则可以分析商品信息结果和商品位置结果,确定货架上剩余的商品以及货架上各摆放位置的商品,进而分析出空余位置的商品,从而确定商品售卖信息。
又例如,分析结果包括商品陈列信息,则可以分析商品信息结果确定货架上各摆放位置的商品,从而确定商品陈列信息。
再例如,分析结果包括商品数量信息,则可以分析商品信息结果和商品位置结果,确定各摆放位置的商品,并根据摆放位置数量确定商品数量信息。
再例如,分析结果包括商品补货状态信息,则可以分析商品信息结果和商品位置结果,确定待补货商品及对应的补货位置。
可选地,当接收到用户的终止采集操作时,还可以执行:确定根据已采集的所有货架图像生成的采集结果图像中是否包含货架的全部商品。
由于用户在图像采集过程中可能出现意外情况而终止图像采集,因此,在获取到用户指示终止图像采集的操作(如退出操作或结束采集操作)时,确定是否已经对货架进行了完全采集,即已采集的所有货架图像生成的采集结果图像中是否包含货架的全部商品。
若采集结果图像包含货架的全部商品,表示进行了完全采集,可以将采集结果图像保存并终止采集。
若采集结果图像未包含货架的全部商品,表示未进行完全采集,可以将已采集的采集结果图像和相关采集信息(如图像采集路径等)保存,并提示用户存在未上传(或未更新)部分的提示信息,以告知用户可以在适当时间继续进行图像采集。
这样可以充分利用采集到的货架图像,及时、智能地进行分析,并根据需要使用分析结果进行相应处理,如根据分析结果确定需要补货,则可以生成补货提醒,并指示待补充货品类型等。
使用场景三
参照图5e,其示出了本使用场景三中的商品信息处理方法的步骤流程图。
在本使用场景中,以图像采集设备为手机为例,该方法包括以下步骤:
步骤A3:根据获取的第一引导信息采集货架的图像数据。
其中,所述第一引导信息用于对所述货架的图像采集路径进行指示。第一引导信息可以采用使用场景二中所述的方式生成,或者采用其他方式生成,本使用场景对此不作限定。
步骤B3:对所述图像数据进行识别,并获得所述货架上的商品信息和是否包含货架边缘的信息。
识别可以包括商品信息识别和货架边缘识别。商品信息识别可以采用使用场景二中的能够进行商品信息识别的神经网络模型进行识别,或者采用其他方式识别。货架边缘识别可以采用能够进行货架边缘识别的神经网络模型进行识别,或者采用其他方式进行识别。
若信息指示图像数据中包含货架边缘,则执行步骤C3;反之,则可以不动作或生成指示沿着图像采集路径继续进行采集的引导信息。
步骤C3:若确定所述图像数据中包含货架边缘的信息,则根据所述商品信息判断已采集的所有图像数据中是否包含所有商品信息,根据判断结果获得指示新的图像采集路径的第二引导信息或获取指示结束采集的第三引导信息。
在判断已采集的所有图像数据中是否包含所有商品信息时,可以根据商品信息确定商品品类数量,若商品品类数量满足要求则判断结果为包含所有商品,根据判断结果获取指示结束采集的第三引导信息;若商品品类数量不满足要求,则判断结果为未包含所有商品,根据判断结果获取指示新的图像采集路径的第二引导信息。
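上述按商品品类数量判断采集是否完整的逻辑可示意如下（期望品类数作为外部输入，返回的字符串仅为示意）：

```python
def next_guide(collected_categories, expected_count):
    """已识别的商品品类数达到期望值则结束采集（第三引导信息），
    否则切换到新的图像采集路径继续采集（第二引导信息）。"""
    if len(set(collected_categories)) >= expected_count:
        return "第三引导信息：结束采集"
    return "第二引导信息：切换图像采集路径"
```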
后续所述方法还可以根据商品信息执行其他步骤,如生成补货提示信息等。通过本使用场景的商品信息处理方法可以实现对货架上商品信息的处理,从而满足补货提示、变更商品位置提示等需求。
使用场景四
参照图5f,其示出了本使用场景四中的货架图像采集方法的步骤流程图。
在本使用场景中,图像采集设备为手机,该方法包括以下步骤:
步骤A4:展示对货架商品的第一采集提示信息。
其中,所述第一采集提示信息用于指示沿图像采集路径对货架商品进行图像采集时的采集位置。第一采集提示信息可以根据第一引导信息指示的图像采集路径确定,例如,针对最近一次采集位置,在图像采集路径上移动一定距离确定新的采集位置,根据新的采集位置生成第一采集提示信息。第一引导信息可以如使用场景二中所述的第一引导信息。
步骤B4:获取根据所述第一采集提示信息进行图像采集的图像,并对获取的图像进行识别。
根据需要的不同，可以对获取的图像进行不同的识别。例如，进行货架边缘识别、商品信息识别等。具体的识别方式可以如前述使用场景中所述的方式，故在此不再赘述。
若识别结果中指示图像中包括货架边缘则执行步骤C4;反之,则根据图像采集路径更新第一采集提示信息,并返回步骤A4继续执行。
步骤C4:若识别结果指示所述图像中包括货架边缘,则展示用于指示新的图像采集路径并指示继续进行图像采集的第二采集提示信息。
若包含货架边缘,则确定新的图像采集路径,并生成指示其的第二采集提示信息,以提示用户切换图像采集路径进行继续采集。确定新的图像采集路径的过程可以与前述使用场景相同,故不再赘述。
通过本使用场景的货架图像采集方法,可以获得完整、准确的货架图像,以及货架上的商品信息,从而可以对商品信息进行分析,以进行补货提示、变更商品位置提示等。
使用场景五
参照图5g,其示出了本使用场景五中的客户端的展示界面的结构示意图。
在本使用场景中,客户端包括展示界面。所述展示界面用于展示第一采集提示信息,所述第一采集提示信息用于指示沿图像采集路径对目标对象进行图像采集;所述展示界面还用于展示第二采集提示信息(如图5g中的007处),所述第二采集提示信息为在获取的图像中包含所述目标对象的边缘时,指示沿新的图像采集路径对所述目标对象进行图像采集的信息。
第一采集提示信息可以通过使用场景四中所述的方式生成,并通过展示界面展示。第二采集提示信息可以根据新的图像采集路径确定。例如,针对最近一次采集位置,在新的图像采集路径上移动一定距离确定新的采集位置,根据新的采集位置生成第二采集提示信息,并通过展示界面显示。
通过该客户端可以展示第一采集提示信息和第二采集提示信息,进而提示用户进行图像采集,以提升采集的图像质量,使其能够采集到高质量的目标对象的完整图像。
可选地，所述目标对象包括下列至少之一：货架、停车场、场馆的坐席。对于停车场，通过该方法可以采集停车场的完整图像，进而可以分析其中的车辆信息。对于场馆坐席，通过该方法可以采集其完整图像，进而可以分析坐席使用情况，进而计算上座率等。对于货架，通过该方法可以采集货架的完整图像，进而可以分析其中的商品信息等，从而可以进行后续处理。
使用场景六
参照图5h,其示出了本使用场景六中的商品信息处理方法的步骤流程示意图。
在本使用场景中,该方法包括:
步骤A5:采集货架的图像数据。
货架的图像数据可以通过图像采集设备进行采集,图像采集设备可以是手机等。采集货架的图像数据可以采用前述的使用场景一到五中任一的方式实现。
步骤B5:对所述图像数据进行处理,识别得到所述货架上的商品信息。
根据需要的不同,可以对图像数据进行不同的处理,例如,使用训练的能够识别图像中商品信息的神经网络模型对图像进行处理,并获取货架上的商品信息,商品信息可以包括商品名称信息、品类信息等。
步骤C5:根据所述识别得到的商品信息,确定所述货架的商品统计信息。
商品统计信息可以包括商品数量信息、商品品类数量、各品类商品数量等。
通过本方法，可以获得质量较高的包含了货架全部信息的图像数据，进而可以分析图像数据从而识别商品信息，以获得商品统计信息，便于后续根据商品统计信息进行补货提示等。
使用场景七
参照图5i,其示出了本使用场景七中的商品信息处理方法的步骤流程示意图。
本使用场景中,所述方法包括:
步骤A6:响应于用户发起的拍摄操作,调用客户端的图像采集装置拍摄货架的图像数据。
用户可以通过服务端调用客户端的图像采集装置,也可以直接在客户端发起拍摄操作,并调用图像采集装置。
获取图像数据的方式可以采用使用场景一到五中任一所述的方式,本使用场景对此不作限定。
步骤B6:对所述图像数据进行处理,识别得到所述货架上的商品信息。
获取商品信息的方式可以与前述使用场景六相同,或不同。
步骤C6:根据所述识别得到的商品信息,确定所述货架的商品统计信息。
商品统计信息可以包括商品数量信息、商品品类数量、各品类商品数量等。
通过本方法，可以获得质量较高的包含了货架全部信息的图像数据，进而可以分析图像数据从而识别商品信息，以获得商品统计信息，便于后续根据商品统计信息进行补货提示等。
使用场景八
参照图5j,其示出了本使用场景八中的商品补货的处理方法的步骤流程示意图。
在本使用场景中,所述方法包括:
步骤A7:响应于用户发起的补货操作,调用图像采集装置拍摄货架的图像数据。
用户可以通过客户端发起补货操作,客户端直接调用图像采集装置拍摄货架的图像数据,或者客户端将补货操作发送至服务端,由服务端调用图像采集装置拍摄货架的图像数据。
拍摄图像数据的方式可以如使用场景一至五中任一方式拍摄。
步骤B7:对所述图像数据进行识别处理,识别得到所述货架上的商品信息。
获取商品信息的方式可以与前述使用场景六相同,或不同。
步骤C7:根据所述货架上的商品信息,确定待补货商品。
例如,根据商品信息确定剩余商品,确定预设的商品信息中剩余商品之外的商品作为待补货商品。
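上述“确定预设的商品信息中剩余商品之外的商品作为待补货商品”的判断，可示意为一个简单的集合差运算（商品名仅为举例）：

```python
def to_replenish(preset_items, remaining_items):
    """预设商品清单中未出现在货架剩余商品里的，即为待补货商品；
    排序仅为使输出稳定，便于展示和比对。"""
    return sorted(set(preset_items) - set(remaining_items))
```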
可选地,所述方法还包括:
步骤D7:根据所述待补货商品,生成并显示用于提示对所述待补货商品进行补货的补货提示信息。
本领域技术人员可以使用适当的方式生成补货提示信息,例如直接根据待补货商品的名称生成补货提示信息。
通过本方法,可以获得完整的、高质量的货架图像,进而获取到货架的商品信息,从而根据商品信息确定待补货商品,以便能够通过拍摄货架图像的方式自动地获取到待补货商品,而不用用户手动对货架进行逐一盘点,通过生成补货提示信息,可以快速地提示用户进行补货,提升便捷性。
使用场景九
参照图5k,其示出了一种使用场景九中的用户、图像采集设备和服务器之间信息交互示意图。
在本使用场景中,补货过程包括:
图像采集设备接收服务器下发的训练的商品识别模型。当接收到用户的开始拍摄指令时,图像采集设备通过实施例一到四的图像采集方法采集货架图像,并获得货架的采集结果图像。使用商品识别模型对采集结果图像进行商品信息处理,并根据处理结果展示推荐商品。接收到用户对推荐商品的选择操作后,根据选择的商品确定待补货商品,并向服务器提交补货请求,以在服务器生成补货订单。
此外,图像采集设备获取到商品信息处理的处理结果后,可以将处理结果发送到服务器,使服务器继续训练初始商品识别模型,并定期或根据其他条件对训练的初始商品识别模型进行压缩,将压缩结果发送给图像采集设备。
该补货过程可以确保货架图像的采集质量,进而保证商品信息处理的质量,从而实现可靠的自动补货。
实施例五
参照图6,示出了根据本发明实施例五的一种图像采集方法的步骤流程图。
本实施例的图像采集方法包括以下步骤:
步骤S602:在对目标对象进行图像采集的过程中,获取图像采集设备的姿态数据。
姿态数据用于指示图像采集设备被持握的姿态,例如,水平持握、竖直持握、具有向上的倾角或者具有向下的倾角等。根据被持握的姿态就可以确定出用户的图像采集意图。例如,在图像采集过程中,用户意图沿着当前图像采集路径进行接续采集,则图像采集设备通常是被竖直持握;而用户如果有切换到新的图像采集路径进行接续采集的意图时,图像采集设备通常被以具有向上的倾角或者具有向下的倾角的方式持握。
图像采集设备的姿态数据包括但不限于所述图像采集设备在空间坐标系中的加速度信息和/或角速度信息。其还可以包括与目标对象之间的距离信息等。
本领域技术人员可以通过适当的方式获取图像采集设备的姿态数据,例如,通过加速度传感器获得加速度信息,通过陀螺仪获得角速度信息。
步骤S604:根据所述姿态数据生成对应的引导信息,通过所述引导信息引导用户对所述目标对象进行接续图像采集。
例如,在姿态数据包括加速度信息和/或角速度信息时,步骤S604可以实现为:根据所述加速度信息和/或角速度信息,确定所述图像采集设备的当前姿态;根据当前姿态生成指示用户向与所述当前姿态所匹配方向移动以进行接续图像采集的引导信息。
第一种情况中,在所述根据当前姿态生成指示用户向与所述当前姿态所匹配方向移动以进行接续图像采集的引导信息时,若当前姿态符合预设的路径转换条件,则生成引导用户将当前图像采集路径转换为与当前姿态匹配的新的图像采集路径,并沿新的图像采集路径进行接续图像采集的第五引导信息。
若当前姿态为图像采集设备具有向下倾角或者具有向上倾角，则确定当前姿态符合预设的路径转换条件，生成第五引导信息，该第五引导信息引导用户将当前图像采集路径转换为与当前姿态匹配的新的图像采集路径，并沿新的图像采集路径进行接续图像采集。如当前姿态是具有向下倾角，则将新的图像采集路径转换为当前图像采集路径的下方的图像采集路径，并生成内容为“请向下移动并继续拍摄”的第五引导信息。
根据目标对象结构的不同,生成的新的图像采集路径可以不同。本领域技术人员可以根据需要采用任何适当的方式生成图像采集路径,例如,根据预设图像采集路径生成策略生成新的图像采集路径,或者,采用前述实施例中的图像采集路径生成方式等。
第二种情况中,若当前姿态不符合预设的路径转换条件,则生成引导用户沿当前图像采集路径进行接续图像采集的第六引导信息。
例如,当前姿态为图像采集设备被竖直持握,则当前姿态不符合预设的路径转换条件,进而生成指示用户沿着当前图像采集路径移动,并进行接续图像采集的引导信息。
通过本实施例,在图像采集的过程中,获取图像采集设备的姿态数据,进而根据该姿态数据可以生成指示用户向与所述当前姿态所匹配方向移动以进行接续图像采集的引导信息,从而可以更好地引导用户对目标对象进行图像采集。
实施例六
参照图7,示出了根据本发明实施例六的一种图像采集方法的步骤流程图。
在本实施例中,所述图像采集方法包括前述的步骤S602~S604。
其中,所述方法还包括:
步骤S604a:获取所述图像采集设备实时采集的目标对象的图像。
目标对象的图像可以是用户根据引导信息使用图像采集设备采集的图像。例如,目标对象为货架,则该图像可以是包含货架一部分的图像。
步骤S604b:对采集的所述图像进行边缘检测,获取检测结果。
对图像进行边缘检测可以采用任何适当的方式,例如,采用训练完成的用于进行边缘检测的神经网络模型对图像进行边缘检测,并获得检测结果。或者,可以采用前述的任一实施例中的对图像进行边缘检测的方式。
检测结果可以指示采集的图像中包含目标对象的边缘,或者不包含目标对象的边缘等。
在获得检测结果的情况下，所述步骤S604包括：根据所述姿态数据和所述检测结果生成对应的引导信息，通过所述引导信息引导用户对所述目标对象进行接续图像采集。
用户在拍摄过程中可能会产生一些抖动，从而使图像采集设备的姿态数据指示其姿态产生变化。为了避免这些抖动对生成的引导信息的影响，确保生成的引导信息准确，对采集的图像进行边缘检测，进而结合姿态数据和检测结果生成引导信息，可以使生成的引导信息更加准确。
例如,第一种情况中,若当前姿态符合预设的路径转换条件且所述检测结果指示检测到所述目标对象的边缘,则生成引导用户将当前图像采集路径转换为与当前姿态匹配的新的图像采集路径,并沿新的图像采集路径进行接续图像采集的第五引导信息。
当根据姿态数据确定当前姿态符合预设的路径转换条件,例如,当前姿态是具有向下的倾角时,且检测结果指示检测到所述目标对象的边缘,则表示用户希望向下继续采集目标对象的其他部分的图像,因此,可以生成引导用户将当前图像采集路径转换为与当前姿态匹配的新的图像采集路径,并沿新的图像采集路径进行接续图像采集的第五引导信息。
又例如,第二种情况中,若当前姿态不符合预设的路径转换条件且所述检测结果指示未检测到所述目标对象的边缘,则生成引导用户沿当前图像采集路径进行接续图像采集的第六引导信息。
当根据姿态数据确定当前姿态不符合预设的路径转换条件,例如,当前姿态是竖直持握,且检测结果指示未检测到所述目标对象的边缘,则表示用户希望沿着当前的图像采集路径继续采集目标对象的其他部分的图像,因此,可以生成引导用户沿当前图像采集路径进行接续图像采集的第六引导信息。
可选地,本实施例中,图像采集方法还可以包括:
步骤S606:根据所述姿态数据和所述检测结果,生成引导用户停止图像采集的第七引导信息。
例如,若根据姿态数据确定的当前姿态不符合预设的路径转换条件且所述检测结果指示检测到所述目标对象的边缘,则生成引导用户停止进行接续图像采集的第七引导信息。
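实施例六中“姿态条件与边缘检测结果”的组合判定可用如下草稿概括（第四种组合的处理方式为假设，返回字符串仅为示意）：

```python
def guide_by_pose_and_edge(path_switch_pose, edge_detected):
    """path_switch_pose: 当前姿态是否符合预设的路径转换条件；
    edge_detected: 边缘检测结果是否检测到目标对象的边缘。"""
    if path_switch_pose and edge_detected:
        return "第五引导信息：切换图像采集路径并接续采集"
    if not path_switch_pose and not edge_detected:
        return "第六引导信息：沿当前图像采集路径接续采集"
    if not path_switch_pose and edge_detected:
        return "第七引导信息：停止图像采集"
    # 姿态符合转换条件但尚未拍到边缘：暂不切换路径（假设的处理方式）
    return "保持当前引导"
```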
通过本实施例,在图像采集的过程中,获取图像采集设备的姿态数据,进而根据该姿态数据可以生成指示用户向与所述当前姿态所匹配方向移动以进行接续图像采集的引导信息,从而可以更好地引导用户对目标对象进行图像采集。
此外,还可以对采集的图像进行边缘检测,从而确定是否引导用户进行继续拍摄,使得智能性更好。
实施例七
参照图8,示出了根据本发明实施例七的一种图像采集装置的结构框图。
本实施例的图像采集装置包括:检测模块802,用于获得对采集的图像进行实时目标对象边缘检测的检测结果,其中,采集的所述图像中包含目标对象的部分图像信息;第一获取模块804,用于若所述检测结果指示在所述图像中检测到所述目标对象的边缘,则获取采集所述图像的图像采集设备的姿态数据;生成模块806,用于根据所述姿态数据生成对应的引导信息,通过所述引导信息引导用户对所述目标对象进行接续图像采集,以使用采集的多个图像形成所述目标对象的完整图像信息。
通过本实施例,实时对采集的图像进行目标对象边缘检测,在采集的图像中包含目标对象的边缘时,获取图像采集设备的姿态数据,进而根据姿态数据生成对应的引导信息,以通过该引导信息引导用户规范地进行图像采集,达到最终完成对整个目标对象包含的多个部分的图像采集,避免遗漏,获得目标对象的完整图像的目的。
实施例八
参照图9,示出了根据本发明实施例八的一种图像采集装置的结构框图。
本实施例的图像采集装置包括:检测模块902,用于获得对采集的图像进行实时目标对象边缘检测的检测结果,其中,采集的所述图像中包含目标对象的部分图像信息;第一获取模块904,用于若所述检测结果指示在所述图像中检测到所述目标对象的边缘,则获取采集所述图像的图像采集设备的姿态数据;生成模块906,用于根据所述姿态数据生成对应的引导信息,通过所述引导信息引导用户对所述目标对象进行接续图像采集,以使用采集的多个图像形成所述目标对象的完整图像信息。
可选地,所述装置还包括:第二获取模块908,用于在所述获得对采集的图像进行实时目标对象边缘检测的检测结果之前,获取动态下发到所述图像采集设备上的、用于进行所述目标对象边缘检测的轻量级神经网络模型;所述检测模块902用于使用所述轻量级神经网络模型对采集的图像进行实时的目标对象边缘检测,获得检测结果。
可选地,所述第一获取模块904用于获取所述图像采集设备在空间坐标系中的加速度信息和/或角速度信息;所述生成模块906包括:第一确定模块9061,用于根据所述加速度信息和/或角速度信息,确定所述图像采集设备的当前姿态;信息生成模块9062,用于根据当前姿态生成指示用户向与所述当前姿态所匹配方向移动以进行接续图像采集的引导信息。
可选地,所述装置还包括:拼接模块910,用于对采集的多个图像进行拼接,以获得包含所述目标对象的完整图像信息的完整图像。
可选地,所述拼接模块910包括:第二确定模块9101,用于从采集的多个图像中,确定具有图像重合关系的多组图像,其中,每组图像中包括两张图像;完整图像获得模块9102,用于根据所述图像重合关系对采集的多个图像进行拼接,并根据拼接结果获得包含所述目标对象的完整图像信息的完整图像。
可选地,所述第二确定模块9101包括:特征提取模块,用于对采集的多个图像中的每个图像进行特征提取,获得每个图像对应的特征点;匹配模块,用于对任意两张图像,根据两个所述图像的特征点进行匹配,并基于匹配结果确定所述具有图像重合关系的多组图像。
本实施例的图像采集装置用于实现前述多个方法实施例中相应的图像采集方法,并具有相应方法实施例的有益效果,在此不再赘述。
实施例九
参照图10,示出了根据本发明实施例九的一种电子设备的结构示意图,本发明具体实施例并不对电子设备的具体实现做限定。
如图10所示,该电子设备可以包括:处理器(processor)1002、通信接口(Communications Interface)1004、存储器(memory)1006、以及通信总线1008。
其中:
处理器1002、通信接口1004、以及存储器1006通过通信总线1008完成相互间的通信。
通信接口1004,用于与其它电子设备如终端设备或服务器进行通信。
处理器1002,用于执行程序1010,具体可以执行上述图像采集方法实施例中的相关步骤。
具体地,程序1010可以包括程序代码,该程序代码包括计算机操作指令。
处理器1002可能是中央处理器CPU,或者是特定集成电路ASIC(Application Specific Integrated Circuit),或者是被配置成实施本发明实施例的一个或多个集成电路。电子设备包括的一个或多个处理器,可以是同一类型的处理器,如一个或多个CPU;也可以是不同类型的处理器,如一个或多个CPU以及一个或多个ASIC。
存储器1006，用于存放程序1010。存储器1006可能包含高速RAM存储器，也可能还包括非易失性存储器（non-volatile memory），例如至少一个磁盘存储器。
程序1010具体可以用于使得处理器1002执行以下操作:获得对采集的图像进行实时目标对象边缘检测的检测结果,其中,采集的所述图像中包含目标对象的部分图像信息;若所述检测结果指示在所述图像中检测到所述目标对象的边缘,则获取采集所述图像的图像采集设备的姿态数据;根据所述姿态数据生成对应的引导信息,通过所述引导信息引导用户对所述目标对象进行接续图像采集,以使用采集的多个图像形成所述目标对象的完整图像信息。
在一种可选的实施方式中，程序1010还用于使得处理器1002在获得对采集的图像进行实时目标对象边缘检测的检测结果之前，获取动态下发到所述图像采集设备上的、用于进行所述目标对象边缘检测的轻量级神经网络模型；且在所述获得对采集的图像进行实时目标对象边缘检测的检测结果时，使用所述轻量级神经网络模型对采集的图像进行实时的目标对象边缘检测，获得检测结果。
在一种可选的实施方式中,程序1010还用于使得处理器1002获取采集所述图像的图像采集设备的姿态数据时,获取所述图像采集设备在空间坐标系中的加速度信息和/或角速度信息;且在根据所述姿态数据生成对应的引导信息,通过所述引导信息引导用户对所述目标对象进行接续图像采集时,根据所述加速度信息和/或角速度信息,确定所述图像采集设备的当前姿态;根据当前姿态生成指示用户向与所述当前姿态所匹配方向移动以进行接续图像采集的引导信息。
在一种可选的实施方式中,程序1010还用于使得处理器1002在使用采集的多个图像形成所述目标对象的完整图像信息时,对采集的多个图像进行拼接,以获得包含所述目标对象的完整图像信息的完整图像。
在一种可选的实施方式中,程序1010还用于使得处理器1002在对采集的多个图像进行拼接,以获得包含所述目标对象的完整图像信息的完整图像时,从采集的多个图像中,确定具有图像重合关系的多组图像,其中,每组图像中包括两张图像;根据所述图像重合关系对采集的多个图像进行拼接,并根据拼接结果获得包含所述目标对象的完整图像信息的完整图像。
在一种可选的实施方式中,程序1010还用于使得处理器1002在从采集的多个图像中,确定具有图像重合关系的多组图像时,对采集的多个图像中的每个图像进行特征提取,获得每个图像对应的特征点;对任意两张图像,根据两个所述图像的特征点进行匹配,并基于匹配结果确定所述具有图像重合关系的多组图像。
或者,
程序1010具体可以用于使得处理器1002执行以下操作:获取根据第一引导信息的指示采集的货架图像,其中,所述货架用于承载商品,所述第一引导信息用于对所述货架的图像采集路径进行指示;获取对所述货架图像进行货架边缘检测的边缘检测结果;若所述边缘检测结果指示所述货架图像中包括货架边缘,则获取指示新的图像采集路径的第二引导信息或获取指示结束采集的第三引导信息。
在一种可选的实施方式中,程序1010还用于使得处理器1002在获取根据第一引导信息的指示采集的货架图像之前,获取所述第一引导信息,其中,所述第一引导信息为与所述图像采集路径对应的引导信息,所述图像采集路径为根据所述货架结构信息对所述货架进行分割生成的路径,所述货架结构信息根据所述货架的整体平面图、立体图和预设的货架虚拟模型中的至少一个确定。
在一种可选的实施方式中,当所述边缘检测结果指示所述货架图像中包括货架边缘时,程序1010还用于使得处理器1002在获取指示新的图像采集路径的第二引导信息或获取指示结束采集的第三引导信息时,若所述边缘检测结果指示所述货架图像中包括货架边缘,则对已采集的所有货架图像生成的采集结果图像进行商品信息识别,并获取商品信息结果;根据所述商品信息结果,获取指示新的图像采集路径的第二引导信息或获取指示结束采集的第三引导信息。
在一种可选的实施方式中,程序1010还用于使得处理器1002在根据所述商品信息结果,获取指示新的图像采集路径的第二引导信息或获取指示结束采集的第三引导信息时,若所述商品信息结果指示所述采集结果图像中未包含所述货架的全部商品,则获取指示切换所述图像采集路径中的拍摄行的第二引导信息;或者,若所述商品信息结果指示所述采集结果图像中包含所述货架的全部商品,则获取指示结束拍摄的第三引导信息。
在一种可选的实施方式中,当所述边缘检测结果指示所述货架图像中包括货架边缘时,程序1010还用于使得处理器1002在获取指示新的图像采集路径的第二引导信息或获取指示结束采集的第三引导信息时,若所述边缘检测结果指示所述货架图像中包括货架边缘,则获取图像采集设备的姿态数据;根据所述姿态数据获取指示新的图像采集路径的第二引导信息或获取指示结束采集的第三引导信息。
在一种可选的实施方式中,所述姿态数据包括所述图像采集设备在空间坐标系中的加速度信息和/或角速度信息。
在一种可选的实施方式中，程序1010还用于使得处理器1002从最新采集的所述货架图像中，获取与当前图像采集路径对应的保留区域并在显示界面的设定区域展示所述保留区域，以通过所述保留区域指示下一图像采集操作的图像采集对齐位置。
在一种可选的实施方式中,程序1010还用于使得处理器1002获得根据对采集的所有货架图像进行拼接后生成的采集结果图像。
在一种可选的实施方式中，程序1010还用于使得处理器1002对所述采集结果图像进行商品信息识别和/或商品位置识别，并获得商品信息结果和/或商品位置结果；对所述商品信息结果和/或所述商品位置结果进行分析操作，并生成与所述分析操作对应的分析结果。
在一种可选的实施方式中,所述分析结果包括下列至少之一:商品售卖信息、商品陈列信息、商品数量信息、商品补货状态信息。
或者,
程序1010具体可以用于使得处理器1002执行以下操作:根据获取的第一引导信息采集货架的图像数据,其中,所述第一引导信息用于对所述货架的图像采集路径进行指示;对所述图像数据进行识别,并获得所述货架上的商品信息和是否包含货架边缘的信息;若确定所述图像数据中包含货架边缘的信息,则根据所述商品信息判断已采集的所有图像数据中是否包含所有商品信息,根据判断结果获得指示新的图像采集路径的第二引导信息或获取指示结束采集的第三引导信息。
或者,
程序1010具体可以用于使得处理器1002执行以下操作:展示对货架商品的第一采集提示信息,其中,所述第一采集提示信息用于指示沿图像采集路径对货架商品进行图像采集时的采集位置;获取根据所述第一采集提示信息进行图像采集的图像,并对获取的图像进行识别;若识别结果指示所述图像中包括货架边缘,则展示用于指示新的图像采集路径并指示继续进行图像采集的第二采集提示信息。
或者,
程序1010具体可以用于使得处理器1002执行以下操作:采集货架的图像数据;对所述图像数据进行处理,识别得到所述货架上的商品信息;根据所述识别得到的商品信息,确定所述货架的商品统计信息。
或者,
程序1010具体可以用于使得处理器1002执行以下操作：响应于用户发起的拍摄操作，调用客户端的图像采集装置拍摄货架的图像数据；对所述图像数据进行处理，识别得到所述货架上的商品信息；根据所述识别得到的商品信息，确定所述货架的商品统计信息。
或者,
程序1010具体可以用于使得处理器1002执行以下操作:响应于用户发起的补货操作,调用图像采集装置拍摄货架的图像数据;对所述图像数据进行识别处理,识别得到所述货架上的商品信息;根据所述货架上的商品信息,确定待补货商品。
在一种可选的实施方式中,程序1010还用于使得处理器1002根据所述待补货商品,生成并显示用于提示对所述待补货商品进行补货的补货提示信息。
或者,
程序1010具体可以用于使得处理器1002执行以下操作:在对目标对象进行图像采集的过程中,获取图像采集设备的姿态数据;根据所述姿态数据生成对应的引导信息,通过所述引导信息引导用户对所述目标对象进行接续图像采集。
在一种可选的实施方式中,姿态数据包括所述图像采集设备在空间坐标系中的加速度信息和/或角速度信息;程序1010还用于使得处理器1002在根据所述姿态数据生成对应的引导信息,通过所述引导信息引导用户对所述目标对象进行接续图像采集时,根据所述加速度信息和/或角速度信息,确定所述图像采集设备的当前姿态;根据当前姿态生成指示用户向与所述当前姿态所匹配方向移动以进行接续图像采集的引导信息。
在一种可选的实施方式中,程序1010还用于使得处理器1002在根据当前姿态生成指示用户向与所述当前姿态所匹配方向移动以进行接续图像采集的引导信息时,若当前姿态符合预设的路径转换条件,则生成引导用户将当前图像采集路径转换为与当前姿态匹配的新的图像采集路径,并沿新的图像采集路径进行接续图像采集的第五引导信息;若当前姿态不符合预设的路径转换条件,则生成引导用户沿当前图像采集路径进行接续图像采集的第六引导信息。
在一种可选的实施方式中,程序1010还用于使得处理器1002获取所述图像采集设备实时采集的目标对象的图像;对采集的所述图像进行边缘检测,获取检测结果;且程序1010还用于使得处理器1002在根据所述姿态数据生成对应的引导信息,通过所述引导信息引导用户对所述目标对象进行接续图像采集时,根据所述姿态数据和所述检测结果生成对应的引导信息,通过所述引导信息引导用户对所述目标对象进行接续图像采集。
在一种可选的实施方式中，程序1010还用于使得处理器1002在根据所述姿态数据和所述检测结果生成对应的引导信息，通过所述引导信息引导用户对所述目标对象进行接续图像采集时，若当前姿态符合预设的路径转换条件且所述检测结果指示检测到所述目标对象的边缘，则生成引导用户将当前图像采集路径转换为与当前姿态匹配的新的图像采集路径，并沿新的图像采集路径进行接续图像采集的第五引导信息；若当前姿态不符合预设的路径转换条件且所述检测结果指示未检测到所述目标对象的边缘，则生成引导用户沿当前图像采集路径进行接续图像采集的第六引导信息。
程序1010中各步骤的具体实现可以参见上述图像采集方法实施例中的相应步骤和单元中对应的描述,在此不赘述。所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的设备和模块的具体工作过程,可以参考前述方法实施例中的对应过程描述,在此不再赘述。
通过本实施例的电子设备,实时对采集的图像进行目标对象边缘检测,在采集的图像中包含目标对象的边缘时,获取图像采集设备的姿态数据,进而根据姿态数据生成对应的引导信息,以通过该引导信息引导用户规范地进行图像采集,达到最终完成对整个目标对象包含的多个部分的图像采集,避免遗漏,获得目标对象的完整图像的目的。
需要指出,根据实施的需要,可将本发明实施例中描述的各个部件/步骤拆分为更多部件/步骤,也可将两个或多个部件/步骤或者部件/步骤的部分操作组合成新的部件/步骤,以实现本发明实施例的目的。
上述根据本发明实施例的方法可在硬件、固件中实现,或者被实现为可存储在记录介质(诸如CD ROM、RAM、软盘、硬盘或磁光盘)中的软件或计算机代码,或者被实现通过网络下载的原始存储在远程记录介质或非暂时机器可读介质中并将被存储在本地记录介质中的计算机代码,从而在此描述的方法可被存储在使用通用计算机、专用处理器或者可编程或专用硬件(诸如ASIC或FPGA)的记录介质上的这样的软件处理。可以理解,计算机、处理器、微处理器控制器或可编程硬件包括可存储或接收软件或计算机代码的存储组件(例如,RAM、ROM、闪存等),当所述软件或计算机代码被计算机、处理器或硬件访问且执行时,实现在此描述的图像采集方法。此外,当通用计算机访问用于实现在此示出的图像采集方法的代码时,代码的执行将通用计算机转换为用于执行在此示出的图像采集方法的专用计算机。
本领域普通技术人员可以意识到，结合本文中所公开的实施例描述的各示例的单元及方法步骤，能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行，取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能，但是这种实现不应认为超出本发明实施例的范围。
以上实施方式仅用于说明本发明实施例,而并非对本发明实施例的限制,有关技术领域的普通技术人员,在不脱离本发明实施例的精神和范围的情况下,还可以做出各种变化和变型,因此所有等同的技术方案也属于本发明实施例的范畴,本发明实施例的专利保护范围应由权利要求限定。

Claims (32)

  1. 一种货架图像采集方法,其特征在于,包括:
    获取根据第一引导信息的指示采集的货架图像,其中,所述货架用于承载商品,所述第一引导信息用于对所述货架的图像采集路径进行指示;
    获取对所述货架图像进行货架边缘检测的边缘检测结果;
    若所述边缘检测结果指示所述货架图像中包括货架边缘,则获取指示新的图像采集路径的第二引导信息或获取指示结束采集的第三引导信息。
  2. 根据权利要求1所述的方法,其特征在于,在获取根据第一引导信息的指示采集的货架图像之前,所述方法还包括:
    获取所述第一引导信息,其中,所述第一引导信息为与所述图像采集路径对应的引导信息,所述图像采集路径为根据所述货架结构信息对所述货架进行分割生成的路径,所述货架结构信息根据所述货架的整体平面图、立体图和预设的货架虚拟模型中的至少一个确定。
  3. 根据权利要求1所述的方法,其特征在于,所述若所述边缘检测结果指示所述货架图像中包括货架边缘,则获取指示新的图像采集路径的第二引导信息或获取指示结束采集的第三引导信息,包括:
    若所述边缘检测结果指示所述货架图像中包括货架边缘,则对已采集的所有货架图像生成的采集结果图像进行商品信息识别,并获取商品信息结果;
    根据所述商品信息结果,获取指示新的图像采集路径的第二引导信息或获取指示结束采集的第三引导信息。
  4. 根据权利要求3所述的方法,其特征在于,所述根据所述商品信息结果,获取指示新的图像采集路径的第二引导信息或获取指示结束采集的第三引导信息,包括:
    若所述商品信息结果指示所述采集结果图像中未包含所述货架的全部商品,则获取指示切换所述图像采集路径的第二引导信息;或者,
    若所述商品信息结果指示所述采集结果图像中包含所述货架的全部商品,则获取指示结束拍摄的第三引导信息。
  5. 根据权利要求1所述的方法,其特征在于,所述若所述边缘检测结果指示所述货架图像中包括货架边缘,则获取指示新的图像采集路径的第二引导信息或获取指示结束采集的第三引导信息,包括:
    若所述边缘检测结果指示所述货架图像中包括货架边缘，则获取图像采集设备的姿态数据；
    根据所述姿态数据获取指示新的图像采集路径的第二引导信息或获取指示结束采集的第三引导信息。
  6. 根据权利要求5所述的方法,其特征在于,所述姿态数据包括所述图像采集设备在空间坐标系中的加速度信息和/或角速度信息。
  7. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    从最新采集的所述货架图像中,获取与当前图像采集路径对应的保留区域并在显示界面的设定区域展示所述保留区域,以通过所述保留区域指示下一图像采集操作的图像采集对齐位置。
  8. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    获得根据对采集的所有货架图像进行拼接后生成的采集结果图像。
  9. 根据权利要求8所述的方法,其特征在于,所述方法还包括:
    对所述采集结果图像进行商品信息识别和/或商品位置识别,并获得商品信息结果和/或商品位置结果;
    对所述商品信息结果和/或所述商品位置结果进行分析操作,并生成与所述分析操作对应的分析结果。
  10. 根据权利要求9所述的方法,其特征在于,所述分析结果包括下列至少之一:商品售卖信息、商品陈列信息、商品数量信息、商品补货状态信息。
  11. 一种商品信息处理方法,其特征在于,包括:
    根据获取的第一引导信息采集货架的图像数据,其中,所述第一引导信息用于对所述货架的图像采集路径进行指示;
    对所述图像数据进行识别,并获得所述货架上的商品信息和是否包含货架边缘的信息;
    若确定所述图像数据中包含货架边缘的信息,则根据所述商品信息判断已采集的所有图像数据中是否包含所有商品信息,根据判断结果获得指示新的图像采集路径的第二引导信息或获取指示结束采集的第三引导信息。
  12. 一种货架图像采集方法,其特征在于,包括:
    展示对货架商品的第一采集提示信息，其中，所述第一采集提示信息用于指示沿图像采集路径对货架商品进行图像采集时的采集位置；
    获取根据所述第一采集提示信息进行图像采集的图像,并对获取的图像进行识别;
    若识别结果指示所述图像中包括货架边缘,则展示用于指示新的图像采集路径并指示继续进行图像采集的第二采集提示信息。
  13. 一种客户端,其特征在于,包括:
    展示界面,所述展示界面用于展示第一采集提示信息,所述第一采集提示信息用于指示沿图像采集路径对目标对象进行图像采集;
    所述展示界面还用于展示第二采集提示信息,所述第二采集提示信息为在获取的图像中包含所述目标对象的边缘时,指示沿新的图像采集路径对所述目标对象进行图像采集的信息。
  14. 根据权利要求13所述的客户端,其特征在于,所述目标对象包括下列至少之一:货架、停车场、场馆的坐席。
  15. 一种商品信息处理方法,其特征在于,包括:
    采集货架的图像数据;
    对所述图像数据进行处理,识别得到所述货架上的商品信息;
    根据所述识别得到的商品信息,确定所述货架的商品统计信息。
  16. 一种商品信息的处理方法,其特征在于,包括:
    响应于用户发起的拍摄操作,调用客户端的图像采集装置拍摄货架的图像数据;
    对所述图像数据进行处理,识别得到所述货架上的商品信息;
    根据所述识别得到的商品信息,确定所述货架的商品统计信息。
  17. 一种商品补货的处理方法,其特征在于,包括:
    响应于用户发起的补货操作,调用图像采集装置拍摄货架的图像数据;
    对所述图像数据进行识别处理,识别得到所述货架上的商品信息;
    根据所述货架上的商品信息,确定待补货商品。
  18. 根据权利要求17所述的方法,其特征在于,所述方法还包括:
    根据所述待补货商品，生成并显示用于提示对所述待补货商品进行补货的补货提示信息。
  19. 一种图像采集方法,其特征在于,包括:
    获得对采集的图像进行实时目标对象边缘检测的检测结果,其中,采集的所述图像中包含目标对象的部分图像信息;
    若所述检测结果指示在所述图像中检测到所述目标对象的边缘,则获取采集所述图像的图像采集设备的姿态数据;
    根据所述姿态数据生成对应的引导信息,通过所述引导信息引导用户对所述目标对象进行接续图像采集,以使用采集的多个图像形成所述目标对象的完整图像信息。
  20. 根据权利要求19所述的方法,其特征在于,
    所述获取采集所述图像的图像采集设备的姿态数据,包括:获取所述图像采集设备在空间坐标系中的加速度信息和/或角速度信息;
    所述根据所述姿态数据生成对应的引导信息,通过所述引导信息引导用户对所述目标对象进行接续图像采集,包括:
    根据所述加速度信息和/或角速度信息,确定所述图像采集设备的当前姿态;
    根据当前姿态生成指示用户向与所述当前姿态所匹配方向移动以进行接续图像采集的引导信息。
  21. 根据权利要求19所述的方法,其特征在于,
    在所述获得对采集的图像进行实时目标对象边缘检测的检测结果之前,所述方法还包括:获取动态下发到所述图像采集设备上的、用于进行所述目标对象边缘检测的轻量级神经网络模型;
    所述获得对采集的图像进行实时目标对象边缘检测的检测结果包括:使用所述轻量级神经网络模型对采集的图像进行实时的目标对象边缘检测,获得检测结果。
  22. 根据权利要求19所述的方法,其特征在于,所述使用采集的多个图像形成所述目标对象的完整图像信息,包括:
    对采集的多个图像进行拼接,以获得包含所述目标对象的完整图像信息的完整图像。
  23. 根据权利要求22所述的方法,其特征在于,所述对采集的多个图像进行拼接,以获得包含所述目标对象的完整图像信息的完整图像,包括:
    从采集的多个图像中，确定具有图像重合关系的多组图像，其中，每组图像中包括两张图像；
    根据所述图像重合关系对采集的多个图像进行拼接,并根据拼接结果获得包含所述目标对象的完整图像信息的完整图像。
  24. 根据权利要求23所述的方法,其特征在于,所述从采集的多个图像中,确定具有图像重合关系的多组图像,包括:
    对采集的多个图像中的每个图像进行特征提取,获得每个图像对应的特征点;
    对任意两张图像,根据两个所述图像的特征点进行匹配,并基于匹配结果确定所述具有图像重合关系的多组图像。
  25. An image collection method, comprising:
    acquiring attitude data of an image collection device during image collection of a target object;
    generating corresponding guide information according to the attitude data, and guiding a user through the guide information to perform continued image collection of the target object.
  26. The method according to claim 25, wherein the attitude data comprises acceleration information and/or angular velocity information of the image collection device in a spatial coordinate system;
    the generating corresponding guide information according to the attitude data and guiding a user through the guide information to perform continued image collection of the target object comprises:
    determining a current attitude of the image collection device according to the acceleration information and/or angular velocity information;
    generating, according to the current attitude, guide information instructing the user to move in a direction matching the current attitude to perform continued image collection.
  27. The method according to claim 26, wherein the generating, according to the current attitude, guide information instructing the user to move in a direction matching the current attitude to perform continued image collection comprises:
    if the current attitude meets a preset path conversion condition, generating fifth guide information guiding the user to convert the current image collection path into a new image collection path matching the current attitude and to perform continued image collection along the new image collection path;
    if the current attitude does not meet the preset path conversion condition, generating sixth guide information guiding the user to perform continued image collection along the current image collection path.
  28. The method according to any one of claims 25 to 27, further comprising:
    acquiring an image of the target object collected in real time by the image collection device, and performing edge detection on the collected image to obtain a detection result;
    wherein the generating corresponding guide information according to the attitude data and guiding a user through the guide information to perform continued image collection of the target object comprises:
    generating corresponding guide information according to the attitude data and the detection result, and guiding the user through the guide information to perform continued image collection of the target object.
  29. The method according to claim 28, wherein the generating corresponding guide information according to the attitude data and the detection result and guiding the user through the guide information to perform continued image collection of the target object comprises:
    if the current attitude meets a preset path conversion condition and the detection result indicates that an edge of the target object is detected, generating fifth guide information guiding the user to convert the current image collection path into a new image collection path matching the current attitude and to perform continued image collection along the new image collection path;
    if the current attitude does not meet the preset path conversion condition and the detection result indicates that no edge of the target object is detected, generating sixth guide information guiding the user to perform continued image collection along the current image collection path.
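The branching of claims 27 and 29 can be written out directly. Note that the claims only specify the two aligned cases; the mixed case returning `None` below is an observation about the claim text, not something the patent prescribes:

```python
def select_guide_information(attitude_meets_condition, edge_detected):
    """Choose between the 'fifth' and 'sixth' guide information of claim 29."""
    if attitude_meets_condition and edge_detected:
        # fifth guide information: switch to the new path matching the attitude
        return "fifth"
    if not attitude_meets_condition and not edge_detected:
        # sixth guide information: continue along the current path
        return "sixth"
    return None  # mixed signals: this case is left unspecified by the claims
```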
  30. An image collection apparatus, comprising:
    a detection module, configured to obtain a detection result of real-time target-object edge detection performed on a collected image, wherein the collected image contains partial image information of a target object;
    a first acquisition module, configured to acquire, if the detection result indicates that an edge of the target object is detected in the image, attitude data of the image collection device that collected the image;
    a generation module, configured to generate corresponding guide information according to the attitude data and guide a user through the guide information to perform continued image collection of the target object, so as to form complete image information of the target object from the plurality of collected images.
  31. An electronic device, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another via the communication bus;
    the memory is configured to store at least one executable instruction that causes the processor to perform operations corresponding to the shelf image collection method according to any one of claims 1-10, or the commodity information processing method according to claim 11, or the shelf image collection method according to claim 12, or the commodity information processing method according to claim 15, or the commodity information processing method according to claim 16, or the commodity replenishment processing method according to claim 17 or 18, or the image collection method according to any one of claims 19-24, or the image collection method according to any one of claims 25-29.
  32. A computer storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the shelf image collection method according to any one of claims 1-10, or the commodity information processing method according to claim 11, or the shelf image collection method according to claim 12, or the commodity information processing method according to claim 15, or the commodity information processing method according to claim 16, or the commodity replenishment processing method according to claim 17 or 18, or the image collection method according to any one of claims 19-24, or the image collection method according to any one of claims 25-29.
PCT/CN2020/104014 2019-07-30 2020-07-24 Image collection method and apparatus, electronic device, and computer storage medium WO2021018019A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910697213.7A CN112308869A (zh) 2019-07-30 2019-07-30 Image collection method and apparatus, electronic device, and computer storage medium
CN201910697213.7 2019-07-30

Publications (1)

Publication Number Publication Date
WO2021018019A1 true WO2021018019A1 (zh) 2021-02-04

Family

ID=74228356

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/104014 WO2021018019A1 (zh) 2019-07-30 2020-07-24 图像采集方法、装置、电子设备及计算机存储介质

Country Status (2)

Country Link
CN (1) CN112308869A (zh)
WO (1) WO2021018019A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308869A (zh) * 2019-07-30 2021-02-02 Alibaba Group Holding Limited Image collection method and apparatus, electronic device, and computer storage medium
CN114040096A (zh) * 2021-10-27 2022-02-11 Shanghai Xiaoling Network Technology Co., Ltd. Auxiliary shooting method, apparatus, device, and medium for shelf images

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN113132633B (zh) * 2021-04-07 2024-04-12 Tencent Technology (Shenzhen) Co., Ltd. Image processing method, apparatus, device, and computer-readable storage medium
CN113780248B (zh) * 2021-11-09 2022-03-18 Wuhan Xingxun Intelligent Technology Co., Ltd. Method and apparatus for intelligently generating orders through multi-view commodity recognition, and intelligent vending machine

Citations (9)

Publication number Priority date Publication date Assignee Title
CN105809620A * 2015-01-19 2016-07-27 Ricoh Company, Ltd. Preview image acquisition user interface for linear panoramic image stitching
US20160328618A1 (en) * 2013-06-12 2016-11-10 Symbol Technologies, Llc Method and apparatus for image processing to avoid counting shelf edge promotional labels when couting product labels
CN106558027A * 2015-09-30 2017-04-05 Ricoh Company, Ltd. Algorithm for estimating deviation errors in camera pose
WO2018078408A1 (en) * 2016-10-28 2018-05-03 The Nielsen Company (Us), Llc Reducing scale estimate errors in shelf images
CN108549851A * 2018-03-27 2018-09-18 Hefei Midea Intelligent Technology Co., Ltd. Method and apparatus for recognizing goods in an intelligent container, and intelligent container
CN108846401A * 2018-05-30 2018-11-20 BOE Technology Group Co., Ltd. Commodity detection terminal, method, and system, computer device, and readable medium
CN109564619A * 2016-05-19 2019-04-02 Simbe Robotics, Inc. Method for tracking the placement of products on shelves in a store
CN109741519A * 2018-12-10 2019-05-10 Shenzhen Situo Communication Systems Co., Ltd. Unmanned supermarket shelf monitoring system and control method thereof
CN109977886A * 2019-03-29 2019-07-05 BOE Technology Group Co., Ltd. Shelf vacancy rate calculation method and apparatus, electronic device, and storage medium

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
JP2001109804A (ja) * 1999-10-12 2001-04-20 Information providing system
JP6679847B2 (ja) * 2015-06-03 2020-04-15 NEC Corporation Shelf allocation information generation device, shelf allocation information generation system, shelf allocation information generation method, imaging device, and program
US20180232689A1 (en) * 2017-02-13 2018-08-16 Iceberg Luxembourg S.A.R.L. Computer Vision Based Food System And Method
CN107292248B (zh) * 2017-06-05 2023-04-07 Guangzhou Chengyu International Market Information Research Co., Ltd. Commodity management method and system based on image recognition technology
WO2019033635A1 (zh) * 2017-08-16 2019-02-21 Tuling Tongnuo (Beijing) Technology Co., Ltd. Settlement method, apparatus, and system
JP7019357B2 (ja) * 2017-09-19 2022-02-15 Toshiba Tec Corporation Shelf information estimation device and information processing program
US20200394599A1 (en) * 2017-11-29 2020-12-17 Ntt Docomo, Inc. Shelf-allocation information generating device and shelf-allocation information generating program
CL2017003463A1 (es) * 2017-12-28 2019-10-11 Univ Pontificia Catolica Chile Autonomous robotic system for automatic monitoring of the state of shelves in stores
CN109033985B (zh) * 2018-06-29 2020-10-09 Baidu Online Network Technology (Beijing) Co., Ltd. Commodity recognition processing method, apparatus, device, system, and storage medium
CN109472652A (zh) * 2018-12-28 2019-03-15 Mobvoi Information Technology Co., Ltd. Smart store management method and apparatus, electronic device, and computer storage medium
CN112308869A (zh) * 2019-07-30 2021-02-02 Alibaba Group Holding Limited Image collection method and apparatus, electronic device, and computer storage medium

Also Published As

Publication number Publication date
CN112308869A (zh) 2021-02-02

Similar Documents

Publication Publication Date Title
WO2021018019A1 (zh) Image collection method and apparatus, electronic device, and computer storage medium
AU2020418608B2 (en) Fine-grained visual recognition in mobile augmented reality
CN107251096B (zh) Image capturing apparatus and method
US11238653B2 (en) Information processing device, information processing system, and non-transitory computer-readable storage medium for storing program
US6677969B1 (en) Instruction recognition system having gesture recognition function
US11475800B2 (en) Method of displaying price tag information, apparatus, and shelf system
EP3709266A1 (en) Human-tracking methods, apparatuses, systems, and storage media
CN113038018B (zh) Method and apparatus for assisting a user in shooting vehicle video
WO2021027537A1 (zh) Method, apparatus, device, and storage medium for shooting ID photos
EP4102458A1 (en) Method and apparatus for identifying scene contour, and computer-readable medium and electronic device
US8666145B2 (en) System and method for identifying a region of interest in a digital image
CN110472460A (zh) Face image processing method and apparatus
CN109840982B (zh) Queuing recommendation method and apparatus, and computer-readable storage medium
CN109063679A (zh) Facial expression detection method, apparatus, device, system, and medium
KR101256046B1 (ko) Body tracking method and system for spatial gesture recognition
CN110149476A (zh) Time-lapse photography method, apparatus, system, and terminal device
US20230206093A1 (en) Music recommendation method and apparatus
WO2019090904A1 (zh) Distance determination method, apparatus, device, and storage medium
CN115278014A (zh) Target tracking method and system, computer device, and readable medium
CN113409056B (zh) Payment method and apparatus, local recognition device, face payment system, and device
US11301508B2 (en) System for creating an audio-visual recording of an event
WO2014206274A1 (en) Method, apparatus and terminal device for processing multimedia photo-capture
US11551379B2 (en) Learning template representation libraries
CN115620378A (zh) Multi-view cattle face intelligent collection method, apparatus, system, and related devices
KR102576795B1 (ko) Method for acquiring frontal images based on pose estimation, and apparatus therefor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20848005

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20848005

Country of ref document: EP

Kind code of ref document: A1