CN110827487A - Article image data acquisition method and device, storage medium and electronic equipment - Google Patents

Article image data acquisition method and device, storage medium and electronic equipment

Info

Publication number
CN110827487A
Authority
CN
China
Prior art keywords
information
target
image acquisition
image data
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911067130.6A
Other languages
Chinese (zh)
Inventor
刘艺美
白锴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd
Priority to CN201911067130.6A
Publication of CN110827487A

Classifications

    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07G - REGISTERING THE RECEIPT OF CASH, VALUABLES, OR TOKENS
    • G07G1/00 - Cash registers
    • G07G1/12 - Cash registers electronically operated
    • G07G1/0036 - Checkout procedures
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 - Camera processing pipelines; Components thereof

Abstract

The disclosure relates to a method and a device for acquiring article image data, a storage medium and electronic equipment. The method, applied to a terminal, comprises the following steps: acquiring article information of a target article to be collected; acquiring image acquisition guidance information about the target article, wherein the image acquisition guidance information is used for guiding a user in placing the target article; outputting the image acquisition guidance information; triggering a camera to perform a photographing operation in response to a received photographing instruction; and executing a target operation for storing the image data of the target article captured by the camera in association with the article information. In this way, the entry speed and entry quality of article image data can be improved.

Description

Article image data acquisition method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of data acquisition, and in particular, to a method and an apparatus for acquiring image data of an article, a storage medium, and an electronic device.
Background
At present, self-service cash registers are used more and more widely. Most common self-service cash registers are based on code scanning: for example, the user aligns the barcode of an article to be purchased with the scanning window on the self-service cash register, and the cash register obtains the information and price of the article by parsing the barcode, so as to perform the subsequent settlement operation. However, such self-service cash registers are limited in that barcode information must be attached to the article's package; if the barcode is missing, the self-service checkout operation cannot be completed.
With the development of artificial intelligence technology, self-service cash registers based on artificial intelligence (AI) have appeared. Such a self-service cash register photographs an image of the article with a camera; a cloud server then identifies the article from the image and determines its price, so as to complete the subsequent settlement operation.
Disclosure of Invention
The invention aims to provide a method and a device for acquiring article image data, a storage medium and electronic equipment, so as to improve the recording speed and quality of the article image data.
In order to achieve the above object, in a first aspect, the present disclosure provides a method for acquiring article image data, the method including: acquiring article information of a target article to be acquired; acquiring image acquisition guide information about the target object, wherein the image acquisition guide information is used for guiding a user to carry out placement operation of the target object; outputting the image acquisition guide information; triggering a camera to carry out photographing operation in response to the received photographing instruction; and executing target operation for storing the image data of the target object shot by the camera in association with the object information.
Optionally, the method further comprises: sending the article information to a server; the target operation is: and sending the image data to the server so as to be stored by the server in association with the article information.
Optionally, the image acquisition guidance information includes target placement angle information; the acquiring of the image acquisition guidance information about the target item includes: and acquiring the target placement angle information of the target object according to the video stream information which is shot by the camera and contains the target object.
Optionally, the image acquisition guide information includes target placement position information; the acquiring of the image acquisition guidance information about the target item includes: and determining target placing position information about the target object according to the object information.
Optionally, the target placement position information indicates at least one target placement position, and each target placement position corresponds to a display area of a display screen of the terminal; the outputting of the image acquisition guide information includes: setting a display area corresponding to a target placing position where image acquisition operation is not finished on the display screen to be in a first display state; and setting a display area corresponding to the target placing position where the image acquisition operation is finished on the display screen to be in a second display state.
Optionally, the image acquisition guidance information includes: similarity information between the current image data of the target item and stored image data about the target item; the acquiring of the image acquisition guidance information about the target item includes: and according to the video stream information which is shot by the camera and contains the target object, acquiring the similarity information between the current image data of the target object and the stored image data related to the target object.
Optionally, the image acquisition guide information includes multiple types of image acquisition guide information, and the multiple types of image acquisition guide information have a preset sequence; the acquiring of the image acquisition guidance information about the target item includes: acquiring, according to the sequence, the top-ranked type of image acquisition guide information from the multiple types; and if the image acquisition operation for the currently acquired type of image acquisition guide information is finished, acquiring the next-ranked type of image acquisition guide information according to the sequence, until the image acquisition operation for each of the multiple types of image acquisition guide information is finished.
Optionally, the image acquisition guidance information is selected from one of the following information: target placement angle information, target placement position information, similarity information between current image data of the target item and stored image data related to the target item.
In a second aspect, the present disclosure provides a method of acquiring image data of an article, the method comprising: receiving object information of a target object to be acquired, which is sent by a terminal; determining image acquisition guiding information about the target object, wherein the image acquisition guiding information is used for guiding a user to carry out placing operation of the target object; sending the image acquisition guide information to the terminal; receiving image data of the target object sent by the terminal; storing the image data in association with the item information.
Optionally, the image acquisition guidance information includes target placement angle information; the determining image capture guidance information about the target item includes: receiving video stream information which is sent by the terminal and contains the target object; determining the target placement angle information about the target item according to the video stream information.
Optionally, the image acquisition guidance information includes: similarity information between the current image data of the target object and the image data which is stored in the server and is related to the target object; the determining image capture guidance information about the target item includes: receiving video stream information which is sent by the terminal and contains the target object; and according to the video stream information, determining the similarity information of the current image data of the target object and the image data which is stored in the server and is related to the target object.
In a third aspect, the present disclosure provides an apparatus for acquiring image data of an article, the apparatus comprising: the first acquisition module is used for acquiring the article information of a target article to be acquired; the second acquisition module is used for acquiring image acquisition guiding information about the target object, and the image acquisition guiding information is used for guiding a user to carry out placing operation of the target object; the output module is used for outputting the image acquisition guide information; the triggering module is used for triggering the camera to carry out photographing operation in response to the received photographing instruction; and the execution module is used for executing target operation for storing the image data of the target object shot by the camera in a manner of being associated with the object information.
Optionally, the apparatus further comprises: the sending module is used for sending the article information to a server; the target operation is: and sending the image data to the server so as to be stored by the server in association with the article information.
Optionally, the image acquisition guidance information includes target placement angle information; the second obtaining module is configured to obtain the target placement angle information about the target object according to the video stream information that is shot by the camera and contains the target object.
Optionally, the image acquisition guide information includes target placement position information; the second obtaining module is used for determining target placing position information of the target object according to the object information.
Optionally, the target placement position information indicates at least one target placement position, and each target placement position corresponds to a display area of a display screen of the terminal; the output module is used for setting a display area corresponding to a target placing position where the image acquisition operation is not finished on the display screen to be in a first display state; and setting a display area corresponding to the target placing position where the image acquisition operation is finished on the display screen to be in a second display state.
Optionally, the image acquisition guidance information includes: similarity information between the current image data of the target item and stored image data about the target item; the second obtaining module is used for obtaining the similarity information between the current image data of the target object and the stored image data related to the target object according to the video stream information which is shot by the camera and contains the target object.
Optionally, the image acquisition guide information includes multiple types of image acquisition guide information, and the multiple types of image acquisition guide information have a preset sequence; the second obtaining module is used for obtaining, according to the sequence, the top-ranked type of image acquisition guide information from the multiple types; and if the image acquisition operation for the currently acquired type of image acquisition guide information is finished, obtaining the next-ranked type of image acquisition guide information according to the sequence, until the image acquisition operation for each of the multiple types of image acquisition guide information is finished.
In a fourth aspect, the present disclosure provides an apparatus for acquiring image data of an article, the apparatus comprising: the first receiving module is used for receiving the object information of the target object to be acquired, which is sent by the terminal; the determining module is used for determining image acquisition guiding information about the target object, and the image acquisition guiding information is used for guiding a user to carry out placing operation of the target object; the sending module is used for sending the image acquisition guiding information to the terminal; the second receiving module is used for receiving the image data of the target object sent by the terminal; and the storage module is used for storing the image data and the article information in a correlation manner.
Optionally, the image acquisition guidance information includes target placement angle information; the determining module is used for receiving video stream information which is sent by the terminal and contains the target object; determining the target placement angle information about the target item according to the video stream information.
Optionally, the image acquisition guidance information includes: similarity information between the current image data of the target item and the image data about the target item stored in the server; the determining module is used for receiving video stream information which is sent by the terminal and contains the target object; and according to the video stream information, determining similarity information between the current image data of the target object and the image data which is stored in the server and is related to the target object.
In a fifth aspect, the present disclosure provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method provided by the first aspect of the present disclosure.
In a sixth aspect, the present disclosure provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method provided by the second aspect of the present disclosure.
In a seventh aspect, the present disclosure provides an electronic device, comprising: a camera for taking a picture; a memory having a computer program stored thereon; a processor for executing the computer program in the memory to implement the steps of the method provided by the first aspect of the present disclosure.
In an eighth aspect, the present disclosure provides an electronic device comprising: a memory having a computer program stored thereon; a processor for executing the computer program in the memory to implement the steps of the method provided by the second aspect of the present disclosure.
By the above technical scheme, the terminal can acquire the image acquisition guidance information of the target article to be collected and output the image acquisition guidance information. The image acquisition guidance information can guide the user in placing the target article, so that the user can conveniently place the target article as indicated by the guidance information. This removes the need, present in the related art, for the user to learn in advance how to enter article image data, saving time and labor. In addition, after the user places the target article according to the image acquisition guidance information, the article image data collected by the terminal better matches the subsequent image recognition requirements. On the one hand, this avoids entering invalid article image data or article image data of poor image quality, improving entry speed and quality; on the other hand, it also helps improve the accuracy of subsequent article identification based on the entered image data.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
FIG. 1 is a schematic diagram of an implementation environment shown in accordance with an exemplary embodiment of the present disclosure.
FIG. 2 is a flow chart illustrating a method of acquiring image data of an item according to an exemplary embodiment of the present disclosure.
Fig. 3 shows a schematic diagram of an exemplary implementation of the terminal outputting target placement angle information.
Fig. 4A and 4B are diagrams illustrating an exemplary implementation of a terminal outputting target placement position information.
FIG. 5 illustrates a schematic diagram of one exemplary implementation of a terminal outputting similarity information of current image data of a target item with stored image data about the target item.
FIG. 6 is a flow chart illustrating a method of acquiring image data of an item according to an exemplary embodiment of the present disclosure.
FIG. 7 is a block diagram illustrating an acquisition device for item image data according to an exemplary embodiment of the present disclosure.
FIG. 8 is a block diagram illustrating an acquisition device for item image data according to an exemplary embodiment of the present disclosure.
FIG. 9 is a block diagram illustrating an electronic device in accordance with an example embodiment.
FIG. 10 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
As described in the background, AI self-service cash registers based on image recognition technology have appeared in the related art. To realize AI self-service checkout, image data and item information corresponding to a large number of items need to be entered in advance on the server side, and the image data and the item information corresponding to the same item need to be associated with each other. The item information may include an item name and an item price. Optionally, the item information may also include other information that can describe the item, such as an item brand and an item specification (e.g., capacity). When the AI self-service cash register is used, it can send the captured images of the items to be settled to the server, and the server matches these images against the pre-stored image data of a large number of items, thereby identifying the items to be settled, locating their prices, and completing the subsequent settlement operation. Therefore, the way item image data is entered affects the accuracy of the checkout step: the more standardized the entered data and the richer the item image data, the higher the recognition rate when items are checked out.
In the related art, the following two recording methods are mainly used.
The first method: the merchant is assisted by a technician in entering the item image data. This method is relatively labor-intensive, and whenever a merchant has a new item, a technician is required to enter the item image data for the merchant again, resulting in low timeliness of data entry.
The second method: the data is entered by the merchant himself. Entry instructions (e.g., in text-and-picture form) may be provided offline to the merchant, who reads the instructions and learns on his own how to enter the item image data. On the one hand, this takes time for the merchant to learn how to enter data, so entry is slow and inefficient. On the other hand, such offline teaching easily leads to poor quality of the image data actually entered online, for example because the person entering the data forgets the instructions or misunderstands them. This in turn affects the subsequent item recognition rate.
In view of the above, the present disclosure is directed to a method, an apparatus, a storage medium, and an electronic device for acquiring article image data to improve the recording speed and quality of the article image data.
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
FIG. 1 is a schematic diagram of an implementation environment shown in accordance with an exemplary embodiment of the present disclosure. As shown in FIG. 1, the implementation environment may include: terminal 100, camera 200 and server 300. Therein, the camera 200 may be integrated on the terminal 100, or the camera 200 may be physically separated from the terminal 100 but communicatively coupled thereto. The terminal 100 can acquire contents including video and images shot by the camera 200. The terminal 100 and the server 300 may communicate via a wired or wireless communication connection, for example, via any of bluetooth, WiFi, 2G, 3G, 4G, 5G, NB-IOT, eMTC, and the like. The terminal 100 may transmit item information and image data to be entered to the server 300 to be stored by the server 300. Alternatively, the terminal 100 may locally store the item information and the image data to be entered. After the entry is completed, when the terminal 100 is actually applied, the acquired image data of the article to be settled may be sent to the server 300, so that the server 300 may identify the article to be settled according to the stored data, and feed back the article information of the article to be settled to the terminal 100. Alternatively, the terminal 100 may identify the item to be settled by itself based on the collected image data of the item to be settled to obtain the item information of the item to be settled. The terminal 100 may include a display screen 102, and the display screen 102 may be used to present item information and provide payment access to the user, thereby completing a self-service checkout operation.
Optionally, as shown in fig. 1, the terminal 100 may further include a tray 103 for placing the articles, and the tray 103 may be disposed in the shooting direction of the camera 200, so that the camera 200 can shoot images or video information of the articles on the tray 103.
Fig. 2 is a flowchart illustrating a method for acquiring image data of an article according to an exemplary embodiment of the present disclosure, which may be applied to a terminal, such as the terminal 100 shown in fig. 1, for example. As shown in fig. 2, the method may include:
in S201, item information of a target item to be collected is acquired.
The item information may include, for example, an item name, an item price, and optionally other information that can describe the item, such as an item brand, an item specification (e.g., capacity), and so on. The user may manually input item information of a target item to be collected to the terminal 100. For example, the user completes the input of the item information of the target item to be collected by performing at least one touch operation of inputting, selecting, clicking, and sliding on the display screen 102. The terminal 100 may acquire the article information input by the user according to the above-described operation by the user.
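As a minimal Python sketch of the item information collected in this step, one possible record structure is shown below; the field names, the dict-based form argument, and the read_item_info_from_ui helper are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ItemInfo:
    """Item information entered by the user before image capture.

    The field names are illustrative; the disclosure only requires a name and
    a price, with brand and specification as optional descriptors.
    """
    name: str                               # e.g. "sandwich"
    price: float                            # price used later at checkout
    brand: Optional[str] = None             # optional descriptive field
    specification: Optional[str] = None     # e.g. capacity

def read_item_info_from_ui(form: dict) -> ItemInfo:
    """Build an ItemInfo record from values the user typed or selected on the touch screen."""
    return ItemInfo(
        name=form["name"],
        price=float(form["price"]),
        brand=form.get("brand"),
        specification=form.get("specification"),
    )
```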
In S202, image capture guidance information about the target item is acquired, and the image capture guidance information is used for guiding the user to perform a placing operation of the target item.
In this step, the terminal 100 may acquire image capture guidance information about the target item. The image acquisition guide information can guide a user to carry out placing operation of the target object. For example, the image capturing guidance information may indicate a target placement position of the target object, or may indicate target placement angle information of the target object, or the like.
In S203, the image capture guidance information is output.
Illustratively, the terminal 100 may display the image capture guide information through the display screen 102, or the terminal 100 may broadcast the image capture guide information through a speaker. After seeing or hearing the image acquisition guide information, the user can put the target object as indicated by the image acquisition guide information.
In S204, in response to receiving the photographing instruction, the camera is triggered to perform a photographing operation.
After the user places the target object according to the image capturing guidance information, a photographing instruction may be input to the terminal 100, for example, by clicking a photographing button. After receiving the photographing instruction, the terminal 100 triggers the camera 200 to perform photographing operation, so as to complete the acquisition of image data of the target object.
In S205, a target operation for storing the image data of the target item captured by the camera in association with the item information is performed.
It should be noted that, each time S204 and S205 are executed, the entry operation of one image data of the target item may be completed. If the image capturing guidance information indicates a plurality of pieces of information related to the placing operation, for example, three target placing positions are indicated, the user may sequentially adjust the placing positions of the target objects according to the three target placing positions, and once each adjustment is completed, input a photographing instruction, that is, the terminal 100 performs S204 and S205 once. In this way, theoretically, the user inputs a photographing instruction at least three times, and the terminal 100 executes S204 and S205 three times, that is, the camera 200 respectively captures image data of the target object at each of the three target placement positions, and then three pieces of image data about the target object are stored, corresponding to the three target placement positions, respectively, thereby completing the operation of inputting the three pieces of image data of the target object.
That is, the above S204 and S205 may be executed a plurality of times depending on the photographing instruction input by the user.
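A rough Python sketch of how S204 and S205 may repeat once per indicated placement is given below; wait_for_photo_instruction, camera, and store are hypothetical placeholders rather than names used by the disclosure.

```python
def capture_for_indicated_placements(placements, wait_for_photo_instruction, camera, store):
    """Repeat S204/S205 once per placement indicated by the guidance information.

    wait_for_photo_instruction, camera and store stand in for the terminal's
    photograph-button event, the camera trigger (S204) and the target operation
    that stores image data with the item information (S205).
    """
    for placement in placements:              # e.g. three indicated target positions
        wait_for_photo_instruction(placement) # user re-places the item, then taps "photograph"
        image = camera.shoot()                # S204: trigger the camera
        store(image, placement)               # S205: store the image data in association with the item info
```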
By the above technical scheme, the terminal can acquire the image acquisition guidance information of the target article to be collected and output the image acquisition guidance information. The image acquisition guidance information can guide the user in placing the target article, so that the user can conveniently place the target article as indicated by the guidance information. This removes the need, present in the related art, for the user to learn in advance how to enter article image data, saving time and labor. In addition, after the user places the target article according to the image acquisition guidance information, the article image data collected by the terminal better matches the subsequent image recognition requirements. On the one hand, this avoids entering invalid article image data or article image data of poor image quality, improving entry speed and quality; on the other hand, it also helps improve the accuracy of subsequent article identification based on the entered image data.
In one embodiment, the target operation mentioned in S205 may be: the terminal 100 stores the image data of the target item captured by the camera 200 in association with the item information. That is, the image data and the article information are stored locally in the terminal.
In another embodiment, the target operation mentioned in S205 may be: sending the image data of the target item captured by the camera 200 to the server, so that the server stores it in association with the item information of the target item. In this embodiment, the method may further include: sending the item information to a server. The terminal 100 transmits the acquired item information of the target item to the server 300, so that the server 300 knows which item the subsequently received image data corresponds to; thus, after receiving the image data, the server 300 can store it in association with the received item information, so that the target item can subsequently be identified from the image data.
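A minimal sketch of this server-storage embodiment, assuming an HTTP transport, might look as follows; SERVER_URL, the endpoint paths, and the item_id association key are illustrative assumptions, since the disclosure only requires that the item information be sent first so that later image data can be stored in association with it.

```python
import requests  # hypothetical HTTP transport; the disclosure only requires a wired/wireless link

SERVER_URL = "http://server.example/api"  # placeholder address

def send_item_info(item_info: dict) -> str:
    """Send the item information first, so that later image data can be associated with it."""
    resp = requests.post(f"{SERVER_URL}/items", json=item_info, timeout=5)
    resp.raise_for_status()
    return resp.json()["item_id"]  # hypothetical key the server uses for the association

def send_image_data(item_id: str, jpeg_bytes: bytes) -> None:
    """Target operation of S205 in the server-storage embodiment: upload one captured image."""
    resp = requests.post(
        f"{SERVER_URL}/items/{item_id}/images",
        data=jpeg_bytes,
        headers={"Content-Type": "image/jpeg"},
        timeout=10,
    )
    resp.raise_for_status()
```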
Using the server for data storage and item identification reduces the data processing load of the terminal-side device, saves the storage space of the terminal-side device, and improves the processing speed of the terminal-side device.
In the present disclosure, the image capture guidance information may give guidance about any one or more of the following three kinds of information: target placement angle information, target placement position information, and similarity information between current image data of a target item and stored image data relating to the target item. The following describes a procedure of how the terminal acquires the information, with respect to these three kinds of information, respectively.
The first method comprises the following steps: target placement angle information.
For example, the manner of acquiring the target placement angle information by the terminal 100 may be as follows: the terminal 100 identifies the type of the target item according to the acquired item information, for example, identifies that the target item is a sandwich. The terminal 100 may locally store a corresponding relationship between the article type and the target placement angle, so that the terminal 100 may query the corresponding relationship according to the identified article type, thereby obtaining the corresponding target placement angle information. For example, the target placement angles corresponding to the sandwich type of article are as follows: front, side and back. Thus, when the terminal 100 recognizes that the target item is a sandwich, three pieces of target placement angle information, i.e., front, side, and back, can be obtained.
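A minimal sketch of such a locally stored correspondence, with purely illustrative item types and angle lists, might look like this:

```python
# Hypothetical locally stored correspondence between item type and target placement angles.
# The item types and angle lists are illustrative only.
ANGLES_BY_ITEM_TYPE = {
    "sandwich": ["front", "side", "back"],
    "bottled drink": ["front", "back"],
}

def lookup_target_angles(item_type: str) -> list[str]:
    """Query the correspondence to obtain the target placement angle information."""
    return ANGLES_BY_ITEM_TYPE.get(item_type, ["front"])  # fall back to a single default angle
```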
Further, for example, the manner of the terminal 100 obtaining the target placement angle information may further be: and acquiring target placement angle information of the target object according to the video stream information which is shot by the camera and contains the target object. For example, the terminal 100 may transmit video stream information including the target item captured by the camera 200 to the server 300, so that the server 300 transmits target placement angle information about the target item according to the video stream information; after that, the terminal 100 receives the target placement angle information transmitted from the server 300.
In this example, when the user places the target item in the shooting direction of the camera 200, e.g., onto the tray 103, the camera 200 is in a wake-up state. This may mean that the camera 200 is already awake when the user places the target item in its shooting direction; alternatively, the camera 200 may originally be in a sleep state and switch to the wake-up state when the user places the target item in its shooting direction, where the switch may be triggered manually by the user or performed automatically by the camera 200 when it detects the object. When the camera 200 is in the wake-up state, it can capture video stream information containing the target item. The terminal 100 may transmit the video stream information to the server 300. After receiving the video stream information, the server 300 determines the type of the target item through image recognition technology, for example recognizing that the target item is a sandwich. The server 300 may locally store a correspondence between item types and target placement angles, so that it can query this correspondence according to the identified item type and obtain the corresponding target placement angle information. For example, the server 300 may obtain three pieces of target placement angle information: front, side, and back. Then, the server 300 transmits the obtained target placement angle information to the terminal 100, and the terminal 100 receives and outputs it.
Alternatively, the terminal 100 may determine the type of the target object by using an image recognition technique according to the video stream information containing the target object captured by the camera 200. The terminal 100 may locally store a corresponding relationship between the article type and the target placement angle, so that the terminal 100 itself may query the corresponding relationship according to the identified article type, thereby obtaining corresponding target placement angle information.
Illustratively, the terminal 100 may display the target pose angle information on the display screen 102. For example, the target placement angle information may be displayed in a graphic form, wherein the graphic corresponds to the shape and placement angle of the article. As shown in fig. 3, assuming that the target object is a sandwich, after the terminal 100 acquires three pieces of target placement angle information, namely, the front side, the side, and the back side, the graph 104, the graph 105, and the graph 106 corresponding to the front side, the side, and the back side, respectively, can be displayed on the display screen 102, so as to intuitively inform the user that the front side image, the side image, and the back side image of the sandwich need to be collected.
Through this implementation, the user can be guided with the target placement angle information of the item during item image data entry, enabling accurate and efficient entry of multi-angle image data of the item and improving the item recognition rate.
And the second method comprises the following steps: and (4) target placement position information.
For example, the manner in which the terminal 100 acquires the target placement position information may be as follows: the terminal 100 determines target placement position information about the target item according to the acquired item information. For example, the terminal 100 identifies the type of the target item according to the item information, for example identifying that the target item is a sandwich. The terminal 100 may locally store a correspondence between item types and target placement positions, so that the terminal 100 can query this correspondence according to the identified type and obtain the corresponding target placement position information. For example, the target placement positions corresponding to a sandwich are: the middle of the tray and its four corners. Thus, when the terminal 100 recognizes that the target item is a sandwich, five pieces of target placement position information, namely the middle of the tray and the four corners of the tray, can be obtained.
Further, for example, the manner in which the terminal 100 acquires the target placement position information may also be: sending the video stream information containing the target item, captured by the camera 200, to the server 300, so that the server 300 returns target placement position information about the target item determined from the video stream information; after that, the terminal 100 receives the target placement position information transmitted from the server 300.
In this example, when the user places the target item in the shooting direction of the camera 200, the camera 200 can shoot video stream information containing the target item. The terminal 100 may transmit the video stream information to the server 300. After receiving the video stream information, the server 300 determines the category of the target item through an image recognition technology, for example, recognizes that the target item is a sandwich. The server 300 may locally store a corresponding relationship between the type of the object and the target placement position, so that the server 300 may query the corresponding relationship according to the identified type of the target object, thereby obtaining corresponding target placement position information. For example, the target placement positions corresponding to the sandwich are as follows: the middle and four corners of the tray. Thus, when the server 300 recognizes that the target item is a sandwich, five pieces of target placement position information, namely, the middle of the tray and the four corners of the tray, can be obtained. Then, the server 300 transmits the obtained target placement position information to the terminal 100, and the terminal 100 receives and outputs the target placement position information.
Illustratively, the terminal 100 may display the target placement position information on the display screen 102 in text form.
For another example, in order to more intuitively and clearly display the target placement position information to the user, in an optional embodiment of the present disclosure, the display screen 102 may be divided into a plurality of display areas, each corresponding to a placement position. For example, the display screen 102 is divided into 9 display areas, and at the same time, the tray 103 is also divided into 9 placement positions, and the 9 display areas are respectively in one-to-one correspondence with the 9 placement positions on the tray 103. In this way, in the case where the target placement position information obtained by the terminal 100 indicates at least one target placement position, the terminal outputting the image capture guidance information may include:
setting a display area on the display screen 102 corresponding to a target placement position at which image acquisition operation is not completed to be in a first display state; and setting a display area on the display screen 102 corresponding to the target placement position at which the image capturing operation has been completed, to a second display state.
After the terminal 100 responds to the received photographing instruction and triggers the camera 200 to perform the photographing operation, the terminal 100 may determine a current placement position of the target object according to the content of the image photographed by the camera 200, and may determine that the image acquisition operation is completed at the current placement position after the target operation is performed, at this time, the terminal 100 switches the display area corresponding to the current placement position on the display screen 102 from the first display state to the second display state.
In the present disclosure, the first display state and the second display state may be different display states for indicating two different results. For example, the first display state may be highlighted and the second display state may be non-highlighted. Alternatively, the first display state may be to appear a first color and the second display state may be to appear a second color different from the first color. Still alternatively, the first display state may be to assume a first shape, the second display state may be to assume a second shape different from the first shape, and so on.
Fig. 4A and 4B are diagrams illustrating an exemplary implementation of a terminal outputting target placement position information. As shown in fig. 4A, the target placement position information on the target item of the sandwich obtained by the terminal 100 indicates five target placement positions of the middle of the tray and the four corners of the tray, and the display areas corresponding to the five target placement positions on the display screen 102 are a display area 107, a display area 108, a display area 109, a display area 110, and a display area 111, respectively. Since the image capturing operation is not completed at all of the five target placement positions in the initial stage, the five display areas are in the first display state, such as highlighted display. Then, when the user places the target object in the middle of the tray, the user inputs a photographing instruction to trigger the camera 200 to perform photographing operation. After the terminal 100 performs the above-described target operation, the terminal 100 determines that the image capturing operation has been completed at the position in the middle of the tray, and thereafter sets the display area 107 to the second display state, such as non-highlighted display, as shown in fig. 4B. Therefore, the user can be intuitively informed of the target placement position information and the image acquisition progress of each target placement position, and the input efficiency of the article image data is further improved.
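The display-state switching described above can be sketched as follows; the position names, the DisplayState values, and the helper functions are illustrative assumptions.

```python
from enum import Enum

class DisplayState(Enum):
    PENDING = "first display state"   # e.g. highlighted: capture not yet completed
    DONE = "second display state"     # e.g. not highlighted: capture completed

def init_display_areas(target_positions):
    """Every display area starts in the first display state."""
    return {pos: DisplayState.PENDING for pos in target_positions}

def mark_position_captured(areas, position):
    """After the target operation completes for a placement position,
    switch its display area to the second display state."""
    if position in areas:
        areas[position] = DisplayState.DONE
    return areas

# Example: five target positions for a sandwich (middle of the tray and its four corners)
areas = init_display_areas(["middle", "top_left", "top_right", "bottom_left", "bottom_right"])
areas = mark_position_captured(areas, "middle")  # corresponds to the transition from FIG. 4A to FIG. 4B
```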
Through this embodiment, the user can be guided with the target placement position information of the item during item image data entry, enabling accurate and efficient entry of image data of the item at multiple positions and improving the item recognition rate.
To balance the processing efficiency of the server 300 and the terminal 100, in an alternative embodiment of the present disclosure, the target placement angle information may be acquired by the terminal 100 from the server 300, while the target placement position information may be determined locally by the terminal 100.
And the third is that: similarity information between the current image data of the target item and stored image data relating to the target item.
For example, the way for the terminal 100 to obtain the similarity information may be: according to the video stream information which is shot by the camera 200 and contains the target object, the similarity information between the current image data of the target object and the stored image data related to the target object is obtained. For example, the terminal 100 may transmit video stream information including the target item captured by the camera 200 to the server 300, so that the server 300 determines, according to the video stream information, similarity information between current image data of the target item and image data about the target item already stored in the server 300, and transmits the similarity information; after that, the terminal 100 receives the similarity information transmitted from the server 300.
In this example, when the user places the target item in the shooting direction of the camera 200, the camera 200 can capture video stream information containing the target item. The terminal 100 may transmit the video stream information to the server 300. After receiving the video stream information, the server 300 may extract a frame from it as the current image data of the target item. The server 300 may then calculate the image similarity between this current image data and the previously stored image data of the same target item. For example, the server 300 may retrieve, according to the received item information of the target item, the image data previously stored in association with that item information, and then compute the similarity between the current image data and each of these images one by one. After the similarity is calculated, a similarity value with the current image data is obtained for each stored piece of image data about the target item. The server 300 may then transmit the similarity information to the terminal 100; for example, it may transmit only the similarity information representing the most similar stored image. The terminal 100 may then output the similarity information, for example by displaying it on the display screen 102. As shown in fig. 5, the similarity information may take the form of a percentage, with a larger value indicating greater similarity, and the terminal 100 may output this percentage on the display screen 102. In this way, the user can intuitively learn, through the similarity information, how much the current image data of the target item overlaps with the image data entered previously.
Alternatively, the terminal 100 may itself extract a frame from the video stream information containing the target item captured by the camera 200 as the current image data of the target item. The terminal 100 may then calculate the image similarity between the current image data and the previously stored image data of the same target item to obtain the similarity information. The specific process of obtaining the similarity information may refer to the above description and is not repeated here.
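The disclosure does not fix a particular similarity measure. As a stand-in illustration only, the following sketch compares coarse grey-level histograms by cosine similarity and returns the highest score against the stored images, which could then be displayed as a percentage as in fig. 5.

```python
import numpy as np

def grey_histogram(image: np.ndarray, bins: int = 64) -> np.ndarray:
    """Very coarse image descriptor: a normalised grey-level histogram."""
    hist, _ = np.histogram(image.ravel(), bins=bins, range=(0, 255))
    hist = hist.astype(np.float64)
    return hist / (hist.sum() or 1.0)

def max_similarity(current: np.ndarray, stored_images: list) -> float:
    """Highest similarity (0..1) between the current frame and the stored images of the same item."""
    cur = grey_histogram(current)
    best = 0.0
    for img in stored_images:
        ref = grey_histogram(img)
        # cosine similarity between the two histograms as a stand-in similarity measure
        sim = float(np.dot(cur, ref) / (np.linalg.norm(cur) * np.linalg.norm(ref) + 1e-12))
        best = max(best, sim)
    return best
```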
It can be understood that, when the subsequent article identification operation is performed based on the image identification technology, if the image data of the article which is pre-recorded is richer and the repeatability between the images is lower, the identification rate is higher. Therefore, the similarity information is intuitively output to the user, so that when the current image data is similar to the input image data, the user can choose to abandon inputting the current image data and adjust the placement of the target object. In the adjusting process, the camera 200 may perform real-time shooting, and the terminal 100 may obtain video stream data in real time and obtain the similarity information in real time. Therefore, the user can know the repeatability degree of the current image data and the recorded image data in real time in the process of adjusting the target object. When the repeatability degree of the current image data and the recorded image data is low, the user can stop adjusting and input a photographing instruction to trigger the camera to perform photographing operation, so that the current image data is stored. Of course, when the current image data is similar to the entered image data, if the user still thinks that the current image data is to be entered, the user can also input a photographing instruction to trigger the camera to perform a photographing operation.
By guiding the user in entering item image data in this way, entry of too much useless and repetitive image data can be effectively avoided, and the diversity of the stored image data about the same item is improved, thereby improving the later item recognition rate.
The acquisition process of the three kinds of image acquisition guidance information is described in detail above with reference to the drawings. In a complete entry operation for a target article, guidance may be given only on one of the three image capturing guidance information, for example, the terminal 100 only acquires target placement angle information, or the terminal 100 only acquires target placement position information, or the terminal 100 only acquires similarity information. However, in order to achieve more comprehensive item image data entry, in an alternative embodiment of the present disclosure, multiple types of image capture guidance information may be given.
In this optional embodiment, the image capturing guidance information may include multiple types of image capturing guidance information, and the multiple types have a preset order. Thus, the step of the terminal 100 acquiring the image capture guidance information about the target item may include: acquiring, according to the order, the top-ranked type of image capture guidance information among the multiple types; and if the image capture operation for the currently acquired type of image capture guidance information is completed, acquiring the next-ranked type of image capture guidance information according to the order, until the image capture operation for each of the multiple types of image capture guidance information is completed.
For example, the multiple types of image capture guidance information may include two or three types of information among target placement angle information, target placement position information, and similarity information between current image data of a target item and stored image data about the target item. And, there is a preset precedence order among the multiple types of image acquisition guidance information, for example, the target placement angle information is the first, the target placement position information is the second, and the similarity information is the last. In this way, entry of image data about an item may be divided into a plurality of processes, each process corresponding to a type of image capture guidance information.
The terminal 100 first obtains the top-ranked type of image capture guidance information, such as the target placement angle information. The process of obtaining the target placement angle information has been described above and is not repeated here. The terminal 100 may learn in various ways that the image data for each target placement angle has been fully acquired. For example, a "process one complete" button may be presented on the terminal 100, and when the user clicks this button, the terminal 100 may determine that the image data for each target placement angle has been acquired. Alternatively, the terminal 100 may determine the number N of pieces of target placement angle information, where N is a positive integer; for example, for the three pieces of target placement angle information of the front, back, and side of a target item such as a sandwich, N is 3. The system assumes by default that the user places the item as indicated by the target placement angle information before each capture, so after the terminal 100 has captured N pieces of image data and performed the above target operation for each piece, it may determine that the image data for each target placement angle has been acquired.
If the image acquisition operation for the currently acquired image acquisition guide information of the type is completed, acquiring the image acquisition guide information of the type ranked next, such as target placement position information. The process of how to obtain the target placement position information is described above, and is not described here again. The terminal 100 may have various ways to know that the image data for each target placement position is completely acquired. For example, a process two completion button may be presented on the terminal 100, and when the user clicks the process two completion button, the terminal 100 may determine that the image data for each target placement position has been completely acquired. Alternatively, the terminal 100 may determine the current position of the target object according to the image captured by the camera 200, so that the terminal 100 may determine whether the image data acquisition at each target placement position is completed.
If the image acquisition operation for the currently acquired image acquisition guide information of the type is completed, acquiring the image acquisition guide information of the type ranked next, such as similarity information. The process of how to obtain the similarity information is described above, and is not described herein again. The terminal 100 may know that the image capturing operation for the similarity information is completed in various ways. For example, a flow three completion button may be presented on the terminal 100, and when the user clicks the flow three completion button, the terminal 100 may determine that the image capturing operation for the similarity information is completed. Alternatively, the terminal 100 may obtain the number M (which may be preset) of image data allowed to be entered in the flow, where M is a positive integer, and thus, after the terminal 100 has captured M pieces of image data in the flow and performed the above-described target operation for each piece of image data, the terminal may determine that the image capturing operation for the similarity information has been completed.
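Putting the ordered flow together, a hypothetical controller on the terminal side might look like the following sketch; the stage names, the get_guidance / output_guidance / stage_completed callbacks, and the polling loop are illustrative assumptions.

```python
import time

# Hypothetical ordered entry workflow over several types of guidance information.
# The stage names and the three callbacks are illustrative assumptions.
GUIDANCE_ORDER = ["placement_angle", "placement_position", "similarity"]

def run_guided_entry(get_guidance, output_guidance, stage_completed):
    """Walk the preset order: output one type of guidance, wait until every capture
    required by that type is done, then move on to the next-ranked type."""
    for stage in GUIDANCE_ORDER:
        guidance = get_guidance(stage)       # e.g. query the server or a local correspondence table
        output_guidance(stage, guidance)     # show on the display screen or play through a speaker
        while not stage_completed(stage):    # e.g. a "process complete" button, or N/M images stored
            time.sleep(0.1)                  # poll; a real terminal would be event-driven
```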
By obtaining multiple types of image acquisition guidance information, multi-dimensional guidance of the entry process can be achieved, which improves the comprehensiveness of item image data entry and in turn helps improve the subsequent item recognition rate.
Although the foregoing description is given by way of example in the order in which the target placement angle information is the first, the target placement position information is the second, and the similarity information is the last, the present disclosure is not limited thereto, and other orders may be defined for multiple types of image acquisition guidance information.
Fig. 6 is a flowchart illustrating a method for acquiring image data of an article according to an exemplary embodiment of the present disclosure, which may be applied to a server, for example, the server 300 shown in fig. 1. As shown in fig. 6, the method may include:
in S601, the item information of the target item to be collected, which is sent by the terminal, is received.
In S602, image capture guidance information about the target item is determined, where the image capture guidance information is used to guide a user to perform a placing operation of the target item.
In S603, the image capture guidance information is transmitted to the terminal.
In S604, the image data of the target item transmitted by the terminal is received.
In S605, the image data is stored in association with the item information.
By the above technical scheme, the terminal can acquire the image acquisition guidance information of the target article to be collected and output it. The image acquisition guidance information can guide the user in placing the target article, so that the user can conveniently place the target article as indicated by the guidance information. This removes the need, present in the related art, for the user to learn in advance how to enter article image data, saving time and labor. In addition, after the user places the target article according to the image acquisition guidance information, the article image data collected by the terminal better matches the image recognition requirements on the server side. On the one hand, this avoids entering invalid article image data or article image data of poor image quality, improving entry speed and quality; on the other hand, it also helps improve the accuracy of the server's subsequent article identification based on the entered image data.
Optionally, the image acquisition guidance information may include target placement angle information. In this case, S602 may include: receiving video stream information containing the target item sent by the terminal; and determining the target placement angle information about the target item according to the video stream information.
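One way such an angle hint could be derived from a single video frame is sketched below with OpenCV; the Otsu thresholding, the use of the minimum-area bounding rectangle and the 15° tolerance are assumptions for illustration, since the disclosure does not prescribe a specific angle-estimation algorithm.

```python
import cv2

def placement_angle_hint(frame_bgr, tolerance_deg=15.0):
    """Estimate how the largest object in the frame is rotated and phrase it as guidance."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return "No item detected in the frame"
    largest = max(contours, key=cv2.contourArea)
    (_, _), (_, _), angle = cv2.minAreaRect(largest)  # rotation of the item's bounding box
    if abs(angle) <= tolerance_deg or abs(angle - 90) <= tolerance_deg:
        return "Angle OK, keep the item as placed"
    return f"Rotate the item by about {angle:.0f} degrees to align with the guide"
```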
Optionally, the image acquisition guidance information may include similarity information between the current image data of the target item and the image data about the target item stored in the server. In this case, S602 may include: receiving video stream information containing the target item sent by the terminal; and determining, according to the video stream information, the similarity information between the current image data of the target item and the image data about the target item stored in the server.
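The similarity signal itself can be computed with any image-similarity measure; the disclosure does not fix one. The sketch below, which compares HSV colour histograms with OpenCV, is only one assumed possibility — feature embeddings or perceptual hashes would serve the same role.

```python
import cv2

def _hsv_hist(image_bgr):
    """Normalized 2D hue/saturation histogram used as a lightweight image signature."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def similarity_to_stored(current_frame, stored_images):
    """Return the highest histogram correlation between the current frame and stored images."""
    current = _hsv_hist(current_frame)
    scores = [cv2.compareHist(current, _hsv_hist(img), cv2.HISTCMP_CORREL)
              for img in stored_images]
    return max(scores) if scores else 0.0
```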
With regard to the method in the above-mentioned embodiment, the specific implementation of each step has been described in detail in the embodiment of the method on the terminal side, and will not be elaborated here.
Fig. 7 is a block diagram illustrating an apparatus 700 for acquiring image data of an article according to an exemplary embodiment of the present disclosure, where the apparatus 700 may be applied to a terminal, for example, the terminal 100 shown in fig. 1. As shown in fig. 7, the apparatus 700 may include: a first obtaining module 701, configured to obtain item information of a target item to be collected; a second obtaining module 702, configured to obtain image acquisition guidance information about the target item, where the image acquisition guidance information is used to guide a user to perform a placement operation of the target item; an output module 703, configured to output the image acquisition guidance information; the triggering module 704 is used for triggering the camera to perform photographing operation in response to the received photographing instruction; the executing module 705 is configured to execute a target operation for storing the image data of the target item captured by the camera in association with the item information.
Optionally, the apparatus 700 may further include: the sending module is used for sending the article information to a server; as such, the target operation is: and sending the image data to the server so as to be stored by the server in association with the article information.
Optionally, the image acquisition guidance information includes target placement angle information; the second obtaining module 702 may be configured to obtain the target placement angle information about the target item according to the video stream information that is captured by the camera and contains the target item.
Optionally, the image acquisition guidance information includes target placement position information; the second obtaining module 702 may be configured to determine the target placement position information about the target item according to the item information.
Optionally, the target placement position information indicates at least one target placement position, each target placement position corresponding to a display area of the display screen of the terminal; the output module 703 may be configured to set, on the display screen, the display area corresponding to a target placement position for which the image acquisition operation has not been completed to a first display state, and to set the display area corresponding to a target placement position for which the image acquisition operation has been completed to a second display state.
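A minimal sketch of these two display states is given below: screen regions that still need a capture are drawn in one style and completed regions in another. The grid of regions, the colours and the use of OpenCV drawing calls are assumptions made purely for illustration.

```python
import cv2

PENDING_COLOR = (0, 0, 255)  # first display state, e.g. a red outline
DONE_COLOR = (0, 255, 0)     # second display state, e.g. a green filled frame

def render_placement_guides(screen, regions, completed):
    """regions: list of (x, y, w, h) display areas; completed: indices already captured."""
    for idx, (x, y, w, h) in enumerate(regions):
        if idx in completed:
            cv2.rectangle(screen, (x, y), (x + w, y + h), DONE_COLOR, thickness=-1)
        else:
            cv2.rectangle(screen, (x, y), (x + w, y + h), PENDING_COLOR, thickness=3)
    return screen
```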
Optionally, the image acquisition guidance information includes: similarity information between the current image data of the target item and the stored image data about the target item; the second obtaining module 702 may be configured to obtain, according to the video stream information that is captured by the camera and contains the target item, the similarity information between the current image data of the target item and the stored image data about the target item.
Optionally, the image acquisition guidance information includes multiple types of image acquisition guidance information, and the multiple types have a preset precedence order; the second obtaining module 702 may be configured to obtain, according to the order, the top-ranked type of image acquisition guidance information from among the multiple types; and, if the image acquisition operation for the currently acquired type of image acquisition guidance information is completed, to obtain the next-ranked type of image acquisition guidance information according to the order, until the image acquisition operation for each of the multiple types of image acquisition guidance information is completed.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 8 is a block diagram illustrating an apparatus 800 for acquiring image data of an article according to an exemplary embodiment of the disclosure, where the apparatus 800 may be applied to a server, such as the server 300 shown in fig. 1. As shown in fig. 8, the apparatus 800 may include: a first receiving module 801, configured to receive item information of a target item to be acquired, where the item information is sent by a terminal; a determining module 802, configured to determine image acquisition guidance information about the target item, where the image acquisition guidance information is used to guide a user to perform a placing operation of the target item; a sending module 803, configured to send the image acquisition guidance information to the terminal; a second receiving module 804, configured to receive the image data of the target item sent by the terminal; a storage module 805, configured to store the image data in association with the item information.
Optionally, the image acquisition guidance information includes target placement angle information; the determining module 802 may be configured to receive video stream information sent by the terminal and containing the target item; determining the target placement angle information about the target item according to the video stream information.
Optionally, the image acquisition guidance information includes: similarity information between the current image data of the target item and the image data about the target item stored in the server; the determining module 802 may be configured to receive video stream information containing the target item sent by the terminal, and to determine, according to the video stream information, the similarity information between the current image data of the target item and the image data about the target item stored in the server.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 9 is a block diagram illustrating an electronic device 900 in accordance with an example embodiment. As shown in fig. 9, the electronic device 900 may include: a processor 901, a memory 902, and a camera 906. The electronic device 900 may also include one or more of a multimedia component 903, an input/output (I/O) interface 904, and a communications component 905.
The processor 901 is configured to control the overall operation of the electronic device 900, so as to complete all or part of the steps in the above-mentioned method for acquiring image data of an article applied to the terminal side. The memory 902 is used to store various types of data to support operation of the electronic device 900, such as instructions for any application or method operating on the electronic device 900 and application-related data, such as item information, pictures, audio, video, payment information, and so forth. The memory 902 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk. The multimedia component 903 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals. The received audio signal may further be stored in the memory 902 or transmitted through the communication component 905. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 904 provides an interface between the processor 901 and other interface modules, such as a keyboard, a mouse, or buttons. These buttons may be virtual buttons or physical buttons. The communication component 905 is used for wired or wireless communication between the electronic device 900 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC or 5G, or a combination of one or more of them, which is not limited herein. The corresponding communication component 905 may thus include a Wi-Fi module, a Bluetooth module, an NFC module, and the like. The camera 906 may be used to capture image information, video information, and the like about the item, which may be further stored in the memory 902 or transmitted via the communication component 905.
In an exemplary embodiment, the electronic Device 900 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above-described method for acquiring image data of an article applied to the terminal side.
In another exemplary embodiment, there is also provided a computer readable storage medium including program instructions which, when executed by a processor, implement the steps of the above-described acquisition method applied to the image data of an article on the terminal side. For example, the computer readable storage medium may be the memory 902 described above including program instructions executable by the processor 901 of the electronic device 900 to perform the above-described method for capturing image data of an article applied to the terminal side.
Fig. 10 is a block diagram illustrating an electronic device 1000 in accordance with another example embodiment. For example, the electronic device 1000 may be provided as a server. Referring to fig. 10, the electronic device 1000 includes a processor 1022, which may be one or more in number, and a memory 1032 for storing computer programs executable by the processor 1022. The computer programs stored in memory 1032 may include one or more modules that each correspond to a set of instructions. Further, the processor 1022 may be configured to execute the computer program to execute the above-described acquisition method applied to the item image data on the server side.
Additionally, the electronic device 1000 may also include a power component 1026 and a communication component 1050. The power component 1026 may be configured to perform power management for the electronic device 1000, and the communication component 1050 may be configured to enable communication, for example wired or wireless communication, for the electronic device 1000. In addition, the electronic device 1000 may also include an input/output (I/O) interface 1058. The electronic device 1000 may operate based on an operating system stored in the memory 1032, such as Windows Server™, Mac OS X™, Unix™, Linux™, and the like.
In another exemplary embodiment, a computer-readable storage medium is also provided, which comprises program instructions, which when executed by a processor, implement the above-mentioned steps of the acquisition method applied to the server-side image data of the item. For example, the computer readable storage medium may be the memory 1032 comprising program instructions executable by the processor 1022 of the electronic device 1000 to perform the above-described acquisition method applied to the server-side image data of the item.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-mentioned acquisition method applied to image data of an article on a server side when being executed by the programmable apparatus.
The preferred embodiments of the present disclosure are described in detail with reference to the accompanying drawings, however, the present disclosure is not limited to the specific details of the above embodiments, and various simple modifications may be made to the technical solution of the present disclosure within the technical idea of the present disclosure, and these simple modifications all belong to the protection scope of the present disclosure.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the present disclosure. In order to avoid unnecessary repetition, various possible combinations will not be separately described in this disclosure.
In addition, any combination of various embodiments of the present disclosure may be made, and the same should be considered as the disclosure of the present disclosure, as long as it does not depart from the spirit of the present disclosure.

Claims (16)

1. A method of acquiring image data of an article, the method comprising:
acquiring article information of a target article to be acquired;
acquiring image acquisition guide information about the target object, wherein the image acquisition guide information is used for guiding a user to carry out placement operation of the target object;
outputting the image acquisition guide information;
triggering a camera to carry out photographing operation in response to the received photographing instruction;
and executing target operation for storing the image data of the target object shot by the camera in association with the object information.
2. The method of claim 1, wherein the method further comprises:
sending the article information to a server;
the target operation is: and sending the image data to the server so as to be stored by the server in association with the article information.
3. The method of claim 1, wherein the image acquisition guidance information includes target placement angle information;
the acquiring of the image acquisition guidance information about the target item includes:
and acquiring the target placement angle information of the target object according to the video stream information which is shot by the camera and contains the target object.
4. The method of claim 1, wherein the image acquisition guidance information includes target placement location information;
the acquiring of the image acquisition guidance information about the target item includes:
and determining target placing position information about the target object according to the object information.
5. The method of claim 4, wherein the target placement position information indicates at least one target placement position, each target placement position corresponding to a display area of a display screen of the terminal, respectively;
the outputting of the image acquisition guide information includes:
setting a display area corresponding to a target placing position where image acquisition operation is not finished on the display screen to be in a first display state;
and setting a display area corresponding to the target placing position where the image acquisition operation is finished on the display screen to be in a second display state.
6. The method of claim 1, wherein the image acquisition guidance information comprises: similarity information between the current image data of the target item and stored image data about the target item;
the acquiring of the image acquisition guidance information about the target item includes:
and according to the video stream information which is shot by the camera and contains the target object, acquiring the similarity information between the current image data of the target object and the stored image data related to the target object.
7. The method according to claim 1, wherein the image acquisition guide information comprises a plurality of types of image acquisition guide information, and the plurality of types of image acquisition guide information have a preset precedence order;
the acquiring of the image acquisition guidance information about the target item includes:
acquiring the image acquisition guide information with the top rank from the image acquisition guide information of the plurality of types according to the sequence;
and if the image acquisition operation for the currently acquired type of image acquisition guide information is completed, acquiring the next-ranked type of image acquisition guide information according to the sequence, until the image acquisition operation for each of the plurality of types of image acquisition guide information is completed.
8. The method of claim 7, wherein the image acquisition guidance information is selected from one of: target placement angle information, target placement position information, similarity information between current image data of the target item and stored image data related to the target item.
9. A method of acquiring image data of an article, the method comprising:
receiving object information of a target object to be acquired, which is sent by a terminal;
determining image acquisition guiding information about the target object, wherein the image acquisition guiding information is used for guiding a user to carry out placing operation of the target object;
sending the image acquisition guide information to the terminal;
receiving image data of the target object sent by the terminal;
storing the image data in association with the item information.
10. The method of claim 9, wherein the image acquisition guidance information includes target placement angle information;
the determining image capture guidance information about the target item includes:
receiving video stream information which is sent by the terminal and contains the target object;
determining the target placement angle information about the target item according to the video stream information.
11. The method of claim 9, wherein the image acquisition guidance information comprises: similarity information between the current image data of the target item and the image data about the target item stored in the server;
the determining image capture guidance information about the target item includes:
receiving video stream information which is sent by the terminal and contains the target object;
and according to the video stream information, determining similarity information between the current image data of the target object and the image data which is stored in the server and is related to the target object.
12. An apparatus for acquiring image data of an article, the apparatus comprising:
the first acquisition module is used for acquiring the article information of a target article to be acquired;
the second acquisition module is used for acquiring image acquisition guiding information about the target object, and the image acquisition guiding information is used for guiding a user to carry out placing operation of the target object;
the output module is used for outputting the image acquisition guide information;
the triggering module is used for triggering the camera to carry out photographing operation in response to the received photographing instruction;
and the execution module is used for executing target operation for storing the image data of the target object shot by the camera in a manner of being associated with the object information.
13. An apparatus for acquiring image data of an article, the apparatus comprising:
the first receiving module is used for receiving the object information of the target object to be acquired, which is sent by the terminal;
the determining module is used for determining image acquisition guiding information about the target object, and the image acquisition guiding information is used for guiding a user to carry out placing operation of the target object;
the sending module is used for sending the image acquisition guiding information to the terminal;
the second receiving module is used for receiving the image data of the target object sent by the terminal;
and the storage module is used for storing the image data and the article information in a correlation manner.
14. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 11.
15. An electronic device, comprising:
a camera for taking a picture;
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 8.
16. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 9 to 11.
CN201911067130.6A 2019-11-04 2019-11-04 Article image data acquisition method and device, storage medium and electronic equipment Pending CN110827487A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911067130.6A CN110827487A (en) 2019-11-04 2019-11-04 Article image data acquisition method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911067130.6A CN110827487A (en) 2019-11-04 2019-11-04 Article image data acquisition method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN110827487A true CN110827487A (en) 2020-02-21

Family

ID=69552365

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911067130.6A Pending CN110827487A (en) 2019-11-04 2019-11-04 Article image data acquisition method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110827487A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112288356A (en) * 2020-10-26 2021-01-29 胜斗士(上海)科技技术发展有限公司 Material management system
CN113840085A (en) * 2021-09-02 2021-12-24 北京城市网邻信息技术有限公司 Vehicle source information acquisition method and device, electronic equipment and readable medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104346603A (en) * 2013-08-09 2015-02-11 富士施乐株式会社 Image processing apparatus and non-transitory computer readable medium
CN107610381A (en) * 2017-10-19 2018-01-19 安徽小豆网络科技有限公司 A kind of self-service cashier's machine of view-based access control model image recognition
JP6315636B1 (en) * 2017-06-30 2018-04-25 株式会社メルカリ Product exhibition support system, product exhibition support program, and product exhibition support method
CN109005350A (en) * 2018-08-30 2018-12-14 Oppo广东移动通信有限公司 Image repeats shooting reminding method, device, storage medium and mobile terminal
CN110164033A (en) * 2018-02-13 2019-08-23 青岛海尔特种电冰柜有限公司 Merchandise news extracting method, merchandise news extraction element and automatically vending system

Similar Documents

Publication Publication Date Title
CN113132618B (en) Auxiliary photographing method and device, terminal equipment and storage medium
CN104427252B (en) Method and its electronic equipment for composograph
CN110333836B (en) Information screen projection method and device, storage medium and electronic device
EP3188034A1 (en) Display terminal-based data processing method
CN110658731B (en) Intelligent household appliance network distribution method, storage medium and intelligent terminal
CN105229582A (en) Based on the gestures detection of Proximity Sensor and imageing sensor
CN109727411B (en) Book borrowing system based on face recognition, code scanning authentication and human body induction
CN103873959A (en) Control method and electronic device
US10122925B2 (en) Method, apparatus, and computer program product for capturing image data
CN108345907A (en) Recognition methods, augmented reality equipment and storage medium
US20210233529A1 (en) Imaging control method and apparatus, control device, and imaging device
CN112613475A (en) Code scanning interface display method and device, mobile terminal and storage medium
WO2013179985A1 (en) Information processing system, information processing method, communication terminal, information processing device and control method and control program therefor
CN110827487A (en) Article image data acquisition method and device, storage medium and electronic equipment
CN112492201B (en) Photographing method and device and electronic equipment
US20200126253A1 (en) Method of building object-recognizing model automatically
CN112052784B (en) Method, device, equipment and computer readable storage medium for searching articles
CN113794834A (en) Image processing method and device and electronic equipment
US10354242B2 (en) Scanner gesture recognition
EP3929804A1 (en) Method and device for identifying face, computer program, and computer-readable storage medium
CN111104915B (en) Method, device, equipment and medium for peer analysis
CN110533898B (en) Wireless control learning system, method, apparatus, device and medium for controlled device
CN108197620B (en) Photographing and question searching method and system based on eye positioning and handheld photographing equipment
CN108182277B (en) Method and system for searching questions based on dominant points and handheld photographing equipment
CN108280184B (en) Test question extracting method and system based on intelligent pen and intelligent pen

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200221