CN116843375A - Method and device for depicting merchant portrait, electronic equipment, verification method and system - Google Patents

Method and device for depicting merchant portrait, electronic equipment, verification method and system

Info

Publication number
CN116843375A
CN116843375A (application CN202310728364.0A)
Authority
CN
China
Prior art keywords
business
merchant
photograph
management
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310728364.0A
Other languages
Chinese (zh)
Inventor
鲁珊珊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202310728364.0A
Publication of CN116843375A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201Market modelling; Market analysis; Collecting market data
    • G06Q30/0202Market predictions or forecasting for commercial activities
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201Market modelling; Market analysis; Collecting market data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/176Urban or other man-made structures

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Business, Economics & Management (AREA)
  • Computing Systems (AREA)
  • Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Marketing (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present disclosure relate to a method, an apparatus, an electronic device, a verification method, a verification system and a computer-readable storage medium for depicting a merchant portrait. The method comprises: acquiring a merchant business photograph; inputting the merchant business photograph into an image recognition model of the corresponding category according to the category of the photograph, so as to obtain a recognition result of the photograph, wherein, when the merchant business photograph is a merchant storefront photograph, the image recognition model of the corresponding category classifies and recognizes the object in the storefront photograph; when the merchant business photograph is a business interior photograph, the image recognition model of the corresponding category classifies and recognizes the objects in the interior photograph; and when the merchant business photograph is a street-view photograph of the merchant's surroundings, the image recognition model of the corresponding category classifies and recognizes the landmark buildings in the street-view photograph; determining business information of the merchant corresponding to the photograph based on the recognition result; and depicting the merchant portrait according to the business information of the merchant.

Description

Method and device for depicting merchant portrait, electronic equipment, verification method and system
This disclosure is a divisional application of Chinese patent application No. 202010078806.8, entitled "Method and device for depicting merchant portrait, electronic equipment, verification method and system", filed with the Chinese Patent Office on 02/03/2020.
Technical Field
Embodiments of the present disclosure relate to the field of image processing technologies, and more particularly, to a method for depicting a merchant portrait, a method for verifying a merchant portrait, an apparatus for depicting a merchant portrait, an electronic device, a system for verifying a merchant portrait, and a computer-readable storage medium.
Background
In many B2B (Business-to-Business) scenarios, a merchant on one side wants to understand the business conditions of the other side. For example, in vertical-mode B2B, an upstream company wants to understand the business conditions of its downstream merchants; in integrated-mode B2B, a buyer wants to know the relevant business information of a seller, so as to facilitate resource allocation and marketing activities. This usually requires a merchant portrait, which is depicted by collecting information about the merchant.
However, in order to attract more merchants to open accounts, the onboarding threshold for merchants is generally lowered and the account-opening process is simplified, so the information submitted by many merchants when opening an account may be incomplete. The lack of merchant information increases the difficulty of depicting merchant portraits.
Disclosure of Invention
The embodiments of the present specification provide a new technical solution capable of depicting merchant portraits.
According to a first aspect of the present specification, there is provided an embodiment of a method of depicting a merchant portrait, the method comprising:
acquiring a merchant business photograph, wherein the categories of merchant business photographs comprise a merchant storefront photograph, a business interior photograph and/or a surrounding street-view photograph;
inputting the merchant business photograph into an image recognition model of the corresponding category according to the category of the photograph, so as to obtain a recognition result of the photograph, wherein, when the merchant business photograph is a merchant storefront photograph, the image recognition model of the corresponding category is used for classifying and recognizing the object in the storefront photograph; when the merchant business photograph is a business interior photograph, the image recognition model of the corresponding category is used for classifying and recognizing the objects in the interior photograph; and when the merchant business photograph is a surrounding street-view photograph, the image recognition model of the corresponding category is used for classifying and recognizing the landmark buildings in the street-view photograph;
determining business information of the merchant corresponding to the photograph based on the recognition result; and
depicting the merchant portrait according to the business information of the merchant.
Optionally, before acquiring the merchant business photograph, the method further includes:
sending an instruction to submit the merchant business photograph to a target object;
wherein acquiring the merchant business photograph comprises:
acquiring the merchant business photograph submitted by the target object based on the instruction.
Optionally, the merchant business photograph is a merchant storefront photograph, the image recognition model of the corresponding category is a first recognition model, and the first recognition model is used for classifying and recognizing the object in the storefront photograph;
wherein inputting the merchant business photograph into the image recognition model of the corresponding category to obtain the recognition result comprises:
inputting the merchant storefront photograph into a preset first recognition model; and
classifying and recognizing the object in the merchant storefront photograph through the preset first recognition model to obtain the corresponding recognition result;
wherein determining the business information of the merchant corresponding to the photograph based on the recognition result comprises:
determining the business scenario of the merchant corresponding to the storefront photograph according to the recognition result, wherein the business scenario of the merchant serves as business information of the merchant.
Optionally, the categories of the merchant's business scenario comprise fixed store, vending cart and booth, and the object categories classified and recognized by the first recognition model for the storefront photograph comprise fixed store, vending cart, booth and an "other" category;
wherein determining the business scenario of the merchant corresponding to the storefront photograph according to the recognition result comprises:
determining, according to the recognition result obtained by the classification and recognition, that the business scenario of the merchant is a fixed store, a vending cart or a booth, or, when the recognition result is the "other" category, determining that the acquired photograph is not a merchant storefront photograph (a decision sketch is given below).
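The following is a minimal sketch of the storefront classification and rejection step described above. It assumes a PyTorch image classifier with a four-way head standing in for the first recognition model; the disclosure does not name an architecture, and the ResNet-18 backbone, input size and class names are illustrative assumptions (the model shown here is untrained).

import torch
from PIL import Image
from torchvision import models, transforms

SCENE_CLASSES = ["fixed_store", "vending_cart", "booth", "other"]

# A ResNet-18 with a 4-way head stands in for the "first recognition model".
first_recognition_model = models.resnet18(num_classes=len(SCENE_CLASSES))
first_recognition_model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

def business_scene_from_storefront(photo_path):
    """Classify a storefront photo; return None if it is not a storefront photo."""
    image = preprocess(Image.open(photo_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probabilities = torch.softmax(first_recognition_model(image), dim=1)[0]
    label = SCENE_CLASSES[int(probabilities.argmax())]
    # The "other" class means the submitted picture is not a merchant storefront photo.
    return None if label == "other" else label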
Optionally, determining the business information of the merchant corresponding to the photograph based on the recognition result further includes:
acquiring transaction LBS information of the merchant;
determining the transaction geographic location of the merchant according to the transaction LBS information;
determining the business scenario of the merchant based on the transaction geographic location;
verifying the business scenario of the merchant determined from the recognition result against the business scenario of the merchant determined from the transaction geographic location; and
taking the verified business scenario of the merchant as business information of the merchant.
Optionally, determining the business scenario of the merchant based on the transaction geographic location includes (see the sketch below):
determining whether the transaction geographic location changes within a predetermined time period;
if the transaction geographic location does not change, determining that the business scenario of the merchant is a fixed-location business mode; or,
if the transaction geographic location changes, determining that the business scenario of the merchant is a mobile-location business mode.
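A minimal sketch of the fixed-location versus mobile-location decision described above, assuming the merchant's transaction LBS records are available as (timestamp, latitude, longitude) tuples. The 30-day window and 200 m radius are illustrative assumptions; the disclosure only refers to a "predetermined time period".

import math
from datetime import datetime, timedelta

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def business_mode(transactions, window_days=30, radius_m=200.0):
    """transactions: list of (datetime, lat, lon) taken from the merchant's LBS records."""
    cutoff = max(t for t, _, _ in transactions) - timedelta(days=window_days)
    recent = [(lat, lon) for t, lat, lon in transactions if t >= cutoff]
    anchor_lat, anchor_lon = recent[0]
    moved = any(haversine_m(anchor_lat, anchor_lon, lat, lon) > radius_m
                for lat, lon in recent[1:])
    return "mobile-location mode" if moved else "fixed-location mode"

if __name__ == "__main__":
    demo = [(datetime(2020, 1, d), 30.2741, 120.0155) for d in range(1, 10)]
    print(business_mode(demo))  # -> fixed-location mode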
Optionally, the method further comprises:
inputting the merchant storefront photograph into a preset text extraction model to obtain text data corresponding to the merchant storefront photograph;
wherein determining the business information of the merchant corresponding to the photograph based on the recognition result comprises:
determining the business industry of the corresponding merchant according to the text data, wherein the business industry of the merchant serves as business information of the merchant.
Optionally, determining the business industry of the merchant corresponding to the storefront photograph according to the text data includes:
inputting the extracted text data into a preset first industry classification model;
mapping and recognizing the text data through the preset first industry classification model to obtain the industry classification corresponding to the text data; and
determining the industry classification corresponding to the text data as the business industry of the merchant corresponding to the storefront photograph.
Optionally, determining the business industry of the merchant corresponding to the storefront photograph according to the text data includes (a lookup sketch is given below):
acquiring industry classification data from a database;
mapping and comparing the extracted text data with the industry classification data of the database;
determining the industry classification corresponding to the text data according to the mapping comparison result; and
determining the industry classification corresponding to the text data as the business industry of the merchant corresponding to the storefront photograph.
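A minimal sketch of the database-lookup variant described above: the text extracted from the storefront photograph is mapped against industry keyword data. The keyword table and the difflib-based similarity scoring are illustrative assumptions; the disclosure only requires a mapping comparison against industry classification data.

import difflib

INDUSTRY_KEYWORDS = {
    "catering": ["restaurant", "noodle", "cafe", "bakery"],
    "retail": ["supermarket", "grocery", "convenience"],
    "apparel": ["clothing", "fashion", "boutique"],
}

def classify_industry(storefront_text, cutoff=0.6):
    """Return the industry whose keywords best match the extracted storefront text."""
    best_industry, best_score = None, 0.0
    for industry, keywords in INDUSTRY_KEYWORDS.items():
        for word in storefront_text.lower().split():
            match = difflib.get_close_matches(word, keywords, n=1, cutoff=cutoff)
            if match:
                score = difflib.SequenceMatcher(None, word, match[0]).ratio()
                if score > best_score:
                    best_industry, best_score = industry, score
    return best_industry

if __name__ == "__main__":
    print(classify_industry("Sunrise Noodle Restaurant"))  # -> catering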
Optionally, the method further comprises a step of generating the first recognition model, comprising (a training sketch is given below):
obtaining training samples, wherein each training sample comprises an image of the sample and the category of the corresponding sample, and the category of the sample indicates whether the corresponding sample is a fixed store, a vending cart, a booth or the "other" category; and
training the first recognition model according to the training samples.
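A compact training sketch for the first recognition model, under the assumption of a PyTorch classification setup; the disclosure does not prescribe a framework, architecture or hyper-parameters, so those shown here are placeholders. train_dataset is assumed to yield (image_tensor, label) pairs labelled fixed store, vending cart, booth or other.

import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import models

def train_first_recognition_model(train_dataset, num_classes=4, epochs=5):
    """Train a 4-way classifier (fixed store / vending cart / booth / other)."""
    model = models.resnet18(num_classes=num_classes)
    loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
    return model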
Optionally, the method further comprises a step of generating the text extraction model, comprising:
obtaining training samples, wherein each training sample comprises a text image of the sample and the text data of the corresponding sample; and
training the text extraction model according to the training samples.
Optionally, the method further comprises a step of generating the first industry classification model, comprising:
obtaining training samples, wherein each training sample comprises text data of the sample and the industry classification category of the corresponding sample; and
training the first industry classification model according to the training samples.
Optionally, the merchant business photograph is a business interior photograph, the image recognition model of the corresponding category is a second recognition model, and the second recognition model is used for classifying and recognizing the objects in the business interior photograph;
wherein inputting the merchant business photograph into the image recognition model of the corresponding category to obtain the recognition result comprises:
acquiring related information of the merchant;
predicting at least two business industries corresponding to the merchant according to the related information of the merchant;
inputting the business interior photograph into at least two preset second recognition models, wherein the at least two preset second recognition models respectively correspond to the at least two predicted business industries, so as to classify and recognize the objects in the business interior photograph that belong to the corresponding predicted business industry;
classifying and recognizing, through the at least two preset second recognition models, the objects in the business interior photograph that belong to the corresponding predicted business industries, and obtaining at least two recognition similarities corresponding to the at least two preset second recognition models;
determining the maximum recognition similarity among the at least two recognition similarities;
determining whether the maximum recognition similarity is greater than a predetermined threshold; and
taking the maximum recognition similarity as the recognition result when the maximum recognition similarity is greater than the predetermined threshold (a selection sketch is given below).
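A sketch of the selection step described above: each of the at least two second recognition models (one per predicted business industry) returns a recognition similarity for the interior photograph, and the highest similarity is taken as the recognition result only if it exceeds the threshold. The model callables and the 0.7 threshold are illustrative assumptions.

from typing import Callable, Dict, Optional, Tuple

def best_industry_match(
    interior_photo: bytes,
    industry_models: Dict[str, Callable[[bytes], float]],
    threshold: float = 0.7,
) -> Optional[Tuple[str, float]]:
    """Return (predicted industry, similarity) for the best model, or None below threshold."""
    scores = {industry: model(interior_photo) for industry, model in industry_models.items()}
    industry, score = max(scores.items(), key=lambda kv: kv[1])
    return (industry, score) if score > threshold else None

if __name__ == "__main__":
    fake_models = {
        "catering": lambda photo: 0.82,      # stand-in for a trained second recognition model
        "fruit retail": lambda photo: 0.35,  # stand-in for a trained second recognition model
    }
    print(best_industry_match(b"...", fake_models))  # -> ('catering', 0.82)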
Optionally, the merchant business photograph is a business interior photograph, the image recognition model of the corresponding category is a second recognition model, and the second recognition model is used for classifying and recognizing the objects in the business interior photograph;
wherein inputting the merchant business photograph into the image recognition model of the corresponding category to obtain the recognition result comprises:
acquiring related information of the merchant;
predicting at least two business industries corresponding to the merchant according to the related information of the merchant;
inputting the business interior photograph into one of at least two preset second recognition models, wherein the at least two second recognition models respectively correspond to the at least two predicted business industries, so as to classify and recognize the objects in the business interior photograph that belong to the corresponding predicted business industry;
classifying and recognizing, through the one of the at least two second recognition models, the objects in the business interior photograph that belong to the corresponding predicted business industry, and acquiring the recognition similarity of that model;
determining whether the recognition similarity is greater than a predetermined threshold; and
taking the recognition similarity as the recognition result when the recognition similarity is greater than the predetermined threshold.
Optionally, determining the business information of the merchant corresponding to the photograph based on the recognition result includes:
determining the predicted business industry corresponding to the recognition similarity as the business industry of the merchant, wherein the business industry of the merchant serves as business information of the merchant.
Optionally, the merchant business photograph is a business interior photograph, the image recognition model of the corresponding category is a second recognition model, and the second recognition model is used for classifying and recognizing the objects in the business interior photograph;
wherein inputting the merchant business photograph into the image recognition model of the corresponding category to obtain the recognition result comprises:
inputting the business interior photograph into a preset second recognition model; and
classifying and recognizing a plurality of objects in the business interior photograph through the preset second recognition model to obtain the corresponding recognition result.
Optionally, determining the business information of the merchant corresponding to the photograph based on the recognition result includes (an aggregation sketch is given below):
inputting the images of the recognized objects into a preset second industry classification model respectively;
mapping and recognizing the plurality of objects respectively through the preset second industry classification model to obtain the industry classification corresponding to each object;
counting the number of objects belonging to the same industry classification, and sorting by the counts; and
determining the industry classification with the largest number of objects as the business industry of the merchant corresponding to the business interior photograph, wherein the business industry of the merchant serves as business information of the merchant.
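A sketch of the aggregation step described above: each object recognized in the interior photograph is mapped by the second industry classification model to an industry class, the counts per industry are sorted, and the industry with the most objects is taken as the merchant's business industry. The example labels are illustrative.

from collections import Counter

def industry_from_objects(object_industries):
    """object_industries: the industry label produced for each recognized object."""
    counts = Counter(object_industries)
    ranked = counts.most_common()          # sorted by number of objects, descending
    return ranked[0][0] if ranked else None

if __name__ == "__main__":
    labels = ["catering", "catering", "catering", "retail"]  # e.g. tables, tableware, food, shelf
    print(industry_from_objects(labels))   # -> catering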
Optionally, the method further comprises a step of generating the second recognition model, comprising:
obtaining training samples, wherein each training sample comprises an image of the sample and the categories of the objects in the corresponding sample; and
training the second recognition model according to the training samples.
Optionally, the method further comprises a step of generating the second industry classification model, comprising:
obtaining training samples, wherein each training sample comprises an image of the sample and the industry classification category of the corresponding sample; and
training the second industry classification model according to the training samples.
Optionally, the merchant business photograph is a surrounding street-view photograph, the image recognition model of the corresponding category is a third recognition model, and the third recognition model is used for classifying and recognizing the landmark buildings in the street-view photograph;
wherein inputting the merchant business photograph into the image recognition model of the corresponding category to obtain the recognition result comprises:
inputting the street-view photograph into a preset third recognition model for image recognition, and determining a predetermined target in the street-view photograph, wherein the predetermined target serves as the recognition result of the street-view photograph.
Optionally, determining the business information of the merchant corresponding to the photograph based on the recognition result includes (a geolocation sketch is given below):
obtaining the geographic location of the predetermined target;
determining the geographic location of the merchant according to the relative geographic positions of the predetermined target and the merchant in the street-view photograph; and
determining the business address of the merchant according to the geographic location of the merchant, wherein the business address of the merchant serves as business information of the merchant.
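A minimal sketch of the geolocation step described above: the known coordinates of the recognized landmark (the predetermined target) are offset by the landmark-to-merchant displacement estimated from the street-view photograph. The flat-earth metre-to-degree approximation and the example displacement are assumptions for illustration only.

import math

def merchant_position(landmark_lat, landmark_lon, east_m, north_m):
    """Offset the landmark position by (east_m, north_m) metres to estimate the merchant position."""
    lat = landmark_lat + north_m / 111_320.0
    lon = landmark_lon + east_m / (111_320.0 * math.cos(math.radians(landmark_lat)))
    return lat, lon

if __name__ == "__main__":
    # Landmark recognized in the street-view photo, merchant estimated ~50 m east of it.
    print(merchant_position(30.2741, 120.0155, east_m=50.0, north_m=0.0))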
Optionally, the method further comprises a step of generating the third recognition model, comprising:
obtaining training samples, wherein each training sample comprises an image of the sample and the category of the corresponding sample, and the samples are landmark buildings; and
training the third recognition model according to the training samples.
Optionally, the method further comprises:
obtaining transaction information of the merchant corresponding to the merchant business photograph to determine the transaction mode and fund-flow data of the merchant; and
verifying, based on the transaction mode and the fund-flow data, the business information of the merchant determined from the recognition result, wherein the business information determined from the verified recognition result is taken as the business information of the merchant.
Optionally, the merchant business photographs include a merchant storefront photograph and a business interior photograph, and the method further includes:
cross-verifying the recognition result corresponding to the merchant storefront photograph and the recognition result corresponding to the business interior photograph, and taking the business information determined from the cross-verified recognition results as the business information of the merchant.
Optionally, the target object includes at least one of the merchant, a buyer, and a micro-tasker;
wherein, when the target object comprises at least two of the merchant, a buyer, and a micro-tasker, the method further comprises:
cross-verifying the recognition results corresponding to the merchant business photographs submitted by the target objects, and taking the business information determined from the cross-verified recognition results as the business information of the merchant (a cross-verification sketch is given below).
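A sketch of the cross-verification described above: business information derived independently from different photograph categories, or from photographs submitted by different target objects, is retained only where the results agree. The field names are illustrative assumptions.

def cross_verify(info_from_door_photo, info_from_interior_photo):
    """Keep only the business-information fields on which both sources agree."""
    verified = {}
    for field, value in info_from_door_photo.items():
        if info_from_interior_photo.get(field) == value:
            verified[field] = value
    return verified

if __name__ == "__main__":
    door = {"business_industry": "catering", "business_scenario": "fixed_store"}
    interior = {"business_industry": "catering"}
    print(cross_verify(door, interior))  # -> {'business_industry': 'catering'}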
According to a second aspect of the present specification, there is also provided an embodiment of a method of verifying a merchant portrait, the method being implemented by a terminal device and comprising:
in response to a verification request based on a merchant portrait depicted according to information submitted by a merchant, sending an instruction to submit a business photograph of the merchant to a buyer of the merchant;
acquiring the merchant business photograph submitted by the buyer based on the instruction;
performing, based on the merchant business photograph, a setting operation for completing the verification of the merchant portrait, wherein the verification of the merchant portrait comprises the depiction of a merchant portrait, and the depiction of the merchant portrait comprises: inputting the merchant business photograph into an image recognition model of the corresponding category according to the category of the photograph, so as to obtain a recognition result of the photograph; determining business information of the merchant corresponding to the photograph based on the recognition result; and depicting the merchant portrait according to the business information of the merchant; and
comparing whether the merchant portrait depicted from the merchant business photograph submitted by the buyer is consistent with the merchant portrait depicted according to the information submitted by the merchant, so as to verify the merchant portrait depicted according to the information submitted by the merchant.
Optionally, performing the setting operation for completing the verification of the merchant portrait includes:
sending the merchant business photograph submitted by the buyer to a server for verification of the merchant portrait;
wherein the method further comprises: receiving a verification result returned by the server after the verification of the merchant portrait is completed.
According to a third aspect of the present specification, there is also provided an embodiment of an apparatus for depicting a merchant portrait, comprising:
an acquisition module, configured to acquire a merchant business photograph, wherein the categories of merchant business photographs comprise a merchant storefront photograph, a business interior photograph and/or a surrounding street-view photograph;
a recognition module, configured to input the merchant business photograph into an image recognition model of the corresponding category according to the category of the photograph, so as to obtain a recognition result of the photograph, wherein, when the merchant business photograph is a merchant storefront photograph, the image recognition model of the corresponding category is used for classifying and recognizing the object in the storefront photograph; when the merchant business photograph is a business interior photograph, the image recognition model of the corresponding category is used for classifying and recognizing the objects in the interior photograph; and when the merchant business photograph is a surrounding street-view photograph, the image recognition model of the corresponding category is used for classifying and recognizing the landmark buildings in the street-view photograph;
a determination module, configured to determine business information of the merchant corresponding to the photograph based on the recognition result; and
a depiction module, configured to depict the merchant portrait according to the business information of the merchant.
According to a fourth aspect of the present specification, there is also provided an embodiment of an electronic device, comprising the apparatus for depicting a merchant portrait as described in the third aspect of the present specification, or comprising:
a memory for storing executable commands; and
a processor for executing the method according to the first or second aspect of the present specification under control of the executable commands.
According to a fifth aspect of the present specification, there is also provided an embodiment of a merchant portrait verification system, comprising:
a server comprising a memory and a processor, wherein the memory of the server is used for storing executable commands, and the processor of the server is configured to perform the method according to the first aspect of the present specification under control of the executable commands; and
a terminal device comprising a memory and a processor, wherein the memory of the terminal device is used for storing executable commands, and the processor of the terminal device is configured to perform the method according to the second aspect of the present specification under control of the executable commands.
According to a sixth aspect of the present specification, there is also provided an embodiment of a computer-readable storage medium storing executable instructions which, when executed by a processor, perform the method described in the first or second aspect of the present specification.
In one embodiment, the recognition result obtained by performing image recognition on the acquired merchant business photograph can yield the business information of the corresponding merchant and be used to depict the merchant portrait, so that the coverage and accuracy of merchant portraits can be improved without manual review.
Other features of the present specification and its advantages will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the specification and together with the description, serve to explain the principles of the specification.
FIG. 1 is a schematic view of an application scenario of a method for depicting a merchant portrait according to one embodiment;
FIG. 2 is a block diagram of a hardware configuration of a merchant portrait verification system that may be used to implement a method of depicting merchant portraits according to one embodiment;
FIG. 3 is a flow chart of a method of depicting a merchant portrait according to one embodiment;
FIG. 4 is a flow chart of a business industry determination step according to one embodiment;
FIG. 5 is a flow chart of a business scenario determination step according to another embodiment;
FIG. 6 is a flow chart of a method of depicting a merchant portrait according to one example;
FIG. 7 is a flow chart of a merchant portrait verification method according to a first embodiment;
FIG. 8 is a functional block diagram of an apparatus for depicting a merchant portrait according to one embodiment;
FIG. 9 is a functional block diagram of an electronic device according to one embodiment.
Detailed Description
Various exemplary embodiments of the present specification will now be described in detail with reference to the accompanying drawings.
The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the disclosure, its application, or uses.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
Fig. 1 is a schematic view of an application scenario of a method for describing a merchant portrait according to an embodiment of the present disclosure.
Fig. 1 shows an application scenario in which a user performs merchant portrait verification through a terminal device 1200. In this embodiment, the user may be a merchant A in a B2B mode who wants to obtain and verify the merchant portrait of another merchant B. The merchant portrait to be verified may be depicted according to information submitted by merchant B, and the information submitted by merchant B for depicting the merchant portrait may be text information provided when merchant B opens an account or completes its profile later, for example, information filled into forms, or may be business information of merchant B obtained by the method for depicting a merchant portrait according to an embodiment of the present disclosure, where the business information is provided by merchant B itself. In this application scenario, the terminal device 1200 enters the merchant portrait verification interface shown in fig. 1 in response to the user's verification request based on the merchant portrait depicted according to the information submitted by the merchant. The merchant portrait verification interface may provide a button that triggers a command or request (i.e., an instruction) to verify the merchant portrait depicted from the information submitted by the merchant; after the user clicks the button, the terminal device 1200, in response to the command or request, sends an instruction to the buyer of the verified merchant to submit a business photograph of the merchant. The buyer is a user who has purchased goods from the verified merchant. For example, information about a suitable buyer is obtained from the merchant's transaction data: based on the transaction LBS (Location Based Services) information of the buyer and the merchant, it can be determined that the buyer is still located near the merchant or has just transacted with the merchant. An instruction to provide a business photograph of the merchant is then sent to the buyer by short message, questionnaire or e-mail, asking the buyer about relevant information of the merchant. The merchant business photograph may be a storefront photograph of the merchant, a business interior photograph and/or a street-view photograph of the merchant's surroundings, and the category of the photograph provided by the buyer may follow the requirement of the instruction sent by the user; for example, if the instruction sent by the user requires the buyer to provide a storefront photograph of the merchant, the buyer provides the corresponding storefront photograph as required.
The terminal device 1300 is operated by the buyer and receives the instruction for providing the merchant business photograph sent by the terminal device 1200. Based on the instruction, the terminal device 1300 may obtain the corresponding merchant business photograph; for example, the terminal device 1300 may already store the corresponding photograph, or, if no corresponding photograph is stored in the terminal device 1300, the camera of the terminal device 1300 is used to capture the corresponding merchant business photograph.
After obtaining the merchant business photograph, the terminal device 1300 provides it to the terminal device 1200, and the terminal device 1200 thereby obtains the merchant business photograph submitted by the buyer based on the instruction.
The terminal device 1200 performs a setting operation for completing the verification of the merchant portrait based on the merchant business photograph provided by the buyer. In the application scenario embodiment of fig. 1, the terminal device 1200 performs this operation by sending the merchant business photograph provided by the buyer to the server 1100 for merchant portrait verification. After receiving the merchant business photograph, the server 1100 processes it and depicts the merchant portrait according to the method for depicting a merchant portrait of an embodiment of the present disclosure. The depiction performed by the server 1100 includes: inputting the merchant business photograph into an image recognition model of the corresponding category according to the category of the photograph to obtain a recognition result of the photograph; determining business information of the merchant corresponding to the photograph based on the recognition result; and depicting the merchant portrait according to the business information of the merchant. That is, the corresponding merchant portrait is obtained from the merchant business photograph provided by the buyer.
The server 1100 then verifies the merchant portrait depicted from the information submitted by the merchant by comparing whether the merchant portrait depicted from the merchant business photograph submitted by the buyer is consistent with the merchant portrait depicted from the information submitted by the merchant. The merchant portrait depicted from the information submitted by the merchant is generated in advance and may be stored in a storage area of the server or obtained from a storage device of a third party.
If the merchant portrait passes the verification, indicating that the information provided by the merchant is accurate, the server 1100 returns a notification that the verification has passed to the terminal device 1200; the terminal device 1200 may display a prompt such as "verification passed" and then jump to the next interface, or may jump to the next interface directly. If the merchant portrait fails the verification, indicating that the information provided by the merchant may not reflect its real business situation, the server 1100 returns a notification of the verification failure to the terminal device 1200, and the notification may include a message reflecting the cause of the failure, for example, a message indicating that the merchant business information obtained from the photograph provided by the buyer is inconsistent with the information provided by the merchant, so that the verification of the merchant portrait fails. As shown in fig. 1, the terminal device 1200 may prompt the verification failure according to the notification, and may also prompt the cause of the failure.
In the application scenario embodiment of fig. 1 described above, the terminal device 1200 sends the instruction to submit the merchant's business photograph to the terminal device 1300 of the buyer; in another application embodiment, the terminal device 1200 may also send the instruction to provide the merchant's business photograph to the terminal device of a micro-tasker.
In this application scenario, verifying the merchant portrait by means of the depiction of the merchant portrait can improve the accuracy of the merchant portrait.
In addition, the method for depicting a merchant portrait of this embodiment obtains the corresponding merchant business information based on the acquired merchant business photographs and depicts the merchant accordingly, so that, compared with the existing situation in which a merchant fills in information arbitrarily, the merchant depiction can be completed automatically and higher-quality business information can be obtained. Moreover, merchant portraits depicted from the business photographs provided by buyers or micro-taskers are used to cross-verify the merchant portraits depicted from information provided by the merchants themselves, which avoids the time-consuming and labor-intensive problems of conventional manual review. The high cost of merchant portrait review is therefore reduced, and the accuracy of merchant depiction is improved.
In another embodiment, the merchant portrait verification may also be performed on the terminal device 1200 side, which is not limited herein. For example, the operation of depicting the merchant portrait from the business photograph provided by the buyer is performed on the terminal device 1200 side, and the comparison of whether the merchant portrait depicted from the photograph submitted by the buyer is consistent with the merchant portrait depicted from the information submitted by the merchant is also performed on the terminal device 1200 side; the terminal device 1200 may report the verification result to the server 1100 after the verification of the merchant portrait is completed, so that the server 1100 can perform subsequent operations.
< hardware device >
FIG. 2 is a block diagram of a hardware configuration of a merchant portrait verification system to which a method for depicting a merchant portrait according to an embodiment of the present disclosure may be applied.
As shown in fig. 2, the merchant portrait verification system 1000 of this embodiment may include a server 1100, a terminal device 1200, and a network 1300.
The server 1100 may be, for example, a blade server, a rack server, or the like, and may also be a server cluster deployed in the cloud, which is not limited herein.
As shown in fig. 2, the server 1100 may include a processor 1110, a memory 1120, an interface device 1130, a communication device 1140, a display device 1150, and an input device 1160. The processor 1110 is configured to execute program instructions, which may use an instruction set of an architecture such as x86, ARM, RISC, MIPS, SSE, etc. The memory 1120 includes, for example, ROM (read-only memory), RAM (random access memory), and nonvolatile memory such as a hard disk. The interface device 1130 includes, for example, a USB interface, a serial interface, and the like. The communication device 1140 can perform wired or wireless communication. The display device 1150 is, for example, a liquid crystal display. The input device 1160 may include, for example, a touch screen, a keyboard, and the like.
In this embodiment, the memory 1120 of the server 1100 is used to store instructions for controlling the processor 1110 to operate so as to implement or support a method of depicting a merchant portrait according to at least some embodiments of the present specification. A skilled person can design the instructions according to the solution disclosed in the present specification. How the instructions control the processor to operate is well known in the art and will not be described in detail here.
It will be appreciated by those skilled in the art that although several devices of the server 1100 are shown in fig. 2, the server 1100 of the embodiments of the present specification may use only some of them, for example, only the processor 1110, the memory 1120, and the communication device 1140.
As shown in fig. 2, the terminal device 1200 may include a processor 1210, a memory 1220, an interface device 1230, a communication device 1240, a display device 1250, an image pickup device 1260, an audio output device 1270, an audio pickup device 1280, and so on. The processor 1210 may be a central processing unit (CPU), a microcontroller (MCU), or the like. The memory 1220 includes, for example, ROM (read-only memory), RAM (random access memory), and nonvolatile memory such as a hard disk. The interface device 1230 includes, for example, a USB interface, a headphone interface, and the like. The communication device 1240 can perform wired or wireless communication. The display device 1250 is, for example, a liquid crystal display or a touch display. The image pickup device 1260 may include, for example, a camera, and is used to capture RGB images. The terminal device 1200 may output audio information through the audio output device 1270, which includes, for example, a speaker. The terminal device 1200 may pick up voice information input by the user through the audio pickup device 1280, which includes, for example, a microphone.
The terminal device 1200 may further include information input means such as a keyboard, a touch screen, etc., which are not limited herein.
Terminal device 1200 may be a smart phone, a portable computer, a desktop computer, a tablet computer, etc.
In this embodiment, the memory 1220 of the terminal device 1200 is used to store instructions for controlling the processor 1210 to operate so as to implement or support a method of depicting a merchant portrait according to at least some embodiments of the present specification. A skilled person can design the instructions according to the solution disclosed in the present specification. How the instructions control the processor to operate is well known in the art and will not be described in detail here.
It will be appreciated by those skilled in the art that although a plurality of devices of the terminal apparatus 1200 are shown in fig. 2, the terminal apparatus 1200 of the embodiment of the present specification may refer to only some of the devices thereof, for example, only the processor 1210, the memory 1220, the image pickup device 1260, and the like.
The communication network 1300 may be a wireless network or a wired network, or may be a local area network or a wide area network. The terminal device 1200 may communicate with the server 1100 through the communication network 1300.
The merchant portrait verification system 1000 shown in fig. 2 is merely illustrative and is in no way intended to limit the specification, its application or use. For example, although fig. 2 shows only one server 1100 and one terminal device 1200, this does not limit their respective numbers; the merchant portrait verification system 1000 may include a plurality of servers 1100 and/or a plurality of terminal devices 1200.
< method example one >
This embodiment provides a method for depicting a merchant portrait, which may be implemented by the server 1100 in fig. 2 or by the terminal device 1200 in fig. 2, which is not limited herein. Fig. 3 is a flow chart of the method for depicting a merchant portrait of this embodiment. As shown in fig. 3, the method may include the following steps:
Step 102: acquire a merchant business photograph.
In step 102, the merchant business photograph may be acquired on the basis of an instruction, sent to a predetermined target object, to submit the corresponding photograph. That is, before step 102, the method may further include a step of sending an instruction to the target object to submit the merchant business photograph, where the target object may be the merchant corresponding to the photograph, a buyer of the merchant, or a micro-tasker specifically engaged for crowdsourced verification.
In one embodiment, the merchant business photographs include a storefront photograph of the merchant, an interior photograph of the merchant's actual business premises (store or shop), and a street-view photograph of the surroundings of the merchant's actual business premises. The storefront photograph is a photograph of the clearly visible signboard of the merchant's business premises; the storefront is the main external sign of a store or brand, and a standard storefront design can accurately reflect the merchant's category and business characteristics, publicize the merchant's business content and theme, and reflect the characteristics and connotation of its goods. For example, the storefront photograph can reflect whether the merchant is a physical store (e.g., a restaurant, a daily-necessities store, a clothing store), a mobile vending cart (e.g., a breakfast cart, an ice-cream cart, a fruit-and-vegetable cart), or a booth (e.g., a snack stand, an open-air food stall, a fruit stall).
The business interior photograph is a photograph reflecting the situation inside the merchant's business premises. For example, for a physical restaurant, the interior photograph mainly shows the dining area inside the restaurant, such as tables and chairs, tableware and food; for an ice-cream cart operating in a fixed or mobile mode, the interior photograph mainly shows the ice creams and desserts placed in the cart; for a fruit stall, the interior photograph mainly shows the fruits placed on the stall.
The instruction sent to the target object to submit the merchant business photograph specifies the category of photograph to be submitted, for example, whether the target object should submit a merchant storefront photograph, a business interior photograph or a surrounding street-view photograph. The target object then submits the corresponding merchant business photograph based on the photograph category required by the instruction.
Step 104: input the merchant business photograph into an image recognition model of the corresponding category according to the category of the photograph, to obtain a recognition result of the photograph.
Step 106: determine business information of the merchant corresponding to the photograph based on the recognition result.
As described above, in one embodiment, the merchant business photographs include, for example, a merchant storefront photograph, a business interior photograph, and a surrounding street-view photograph. To ensure accuracy, the relevant photograph categories can be specified and each model can be trained on a training sample set of a specific category, so as to obtain image recognition models with higher accuracy; that is, the generality and the accuracy of the image recognition models are traded off against each other.
In the embodiments of the present disclosure, the merchant business photographs required for depicting the merchant portrait are divided into three categories: merchant storefront photographs, business interior photographs and surrounding street-view photographs, which correspond to different image recognition models respectively.
In one embodiment, when the merchant business photograph is a merchant storefront photograph, the image recognition model of the corresponding category is a first recognition model, and the first recognition model is used for classifying and recognizing the object in the storefront photograph. Specifically, inputting the merchant business photograph into the image recognition model of the corresponding category to obtain the recognition result includes: inputting the merchant storefront photograph into a preset first recognition model, and classifying and recognizing the object in the storefront photograph through the preset first recognition model to obtain the corresponding recognition result.
In step 106, in one embodiment, determining the business information of the merchant corresponding to the photograph based on the recognition result includes: determining the business scenario of the merchant corresponding to the storefront photograph according to the recognition result, wherein the business scenario of the merchant serves as the business information of the merchant.
The business scenario is easy to understand: if the submitted merchant storefront photograph shows a physical store, the merchant operates in a fixed-store scenario; if the submitted storefront photograph shows a vehicle, the merchant operates a vending cart; if it shows a stall, the merchant may operate a mobile or fixed booth.
In one embodiment, the categories of the merchant's business scenario include fixed store, vending cart and booth, and the object categories classified and recognized by the first recognition model for the storefront photograph include fixed store, vending cart, booth and an "other" category. The "other" category is used to recognize whether the photograph input into the first recognition model is actually a merchant storefront photograph. Because the picture submitted by the target object may be a false photograph, after the storefront photograph submitted by the target object according to the instruction is obtained, it is input into the first recognition model to recognize whether the submitted photograph is a merchant storefront photograph. If the first recognition model recognizes that the object in the photograph is not a fixed store, a vending cart or a booth, it is recognized as the "other" category, i.e., the photograph is recognized as not being a merchant storefront photograph.
Vending carts can be divided into fixed vending carts and mobile vending carts, and booths can be divided into fixed booths and mobile booths. However, apart from the fixed-store case, the storefront photograph alone cannot determine whether a merchant's vending cart or booth operates in a fixed-location mode or a mobile-location mode; a cross-validation step can therefore be added, which will be described later.
As described above, in step 106, determining the business information of the merchant corresponding to the photograph based on the recognition result includes: determining the business scenario of the merchant corresponding to the storefront photograph according to the recognition result, and taking the business scenario of the merchant as the business information of the merchant.
Specifically, in one embodiment, according to the recognition result obtained by the classification and recognition, the business scenario of the merchant is determined to be a fixed store, a vending cart or a booth, or, if the recognition result is the "other" category, it is determined that the acquired photograph is not a merchant storefront photograph.
To obtain more accurate business management information to characterize the business management scenario, the business management scenario determined by the first recognition model recognition business portal may be cross-validated using transaction LBS (Location Based Services, location-based service) information. The LBS is to acquire the current position of the positioning equipment by utilizing various positioning technologies, and the LBS service integrates various information technologies such as mobile communication, internet, space positioning, position information, big data and the like, and utilizes a mobile internet service platform to update and interact data so that a user can acquire corresponding service through space positioning.
In one embodiment, determining the business information of the merchant in correspondence with the merchant business based on the identification result further comprises: and acquiring the transaction LBS information of the merchant, and determining the transaction geographic position of the merchant according to the transaction LBS information. And then, determining the business scene of the merchant based on the transaction geographic position, verifying the business scene of the merchant determined according to the identification result based on the business scene of the merchant determined by the transaction geographic position, and taking the verified business scene of the merchant as the business information of the merchant.
The transaction LBS information of the merchant may be a single transaction record or a transaction cluster composed of multiple pieces of transaction LBS information. From the transaction LBS information, the transaction location generated by the merchant, i.e., the geographic longitude and latitude, can be obtained. For a merchant operating a fixed store, the transaction LBS information should in theory be confined to a small area. For a vending cart or booth, the transaction LBS information may change across geographic locations: a vending cart is mobile and may not be limited to one fixed transaction location, and for a mobile booth the transaction location may also change. Therefore, the business scenario of the merchant can be determined based on the transaction geographic position derived from the transaction LBS information. Specifically, it is determined whether the transaction geographic position of the merchant changes within a predetermined time period; if it does not change, the business scenario of the merchant is determined to be a fixed-location business mode, including a fixed store, a fixed booth, or a fixed vending cart. If the transaction geographic position changes, the business scenario of the merchant is determined to be a mobile-location business mode, including a mobile vending cart or a mobile booth.
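As an illustration of this fixed-versus-mobile decision, the following minimal Python sketch measures how far the transaction LBS fixes spread within the predetermined period. The function names, the 200-meter radius, and the input format are assumptions for illustration only, not part of the claimed method.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def business_mode_from_lbs(points, radius_m=200):
    """points: [(lat, lon), ...] transaction LBS fixes within the predetermined period.
    Returns 'fixed_location' if every fix stays inside a small radius around the
    mean position, otherwise 'mobile_location'. The radius is an assumed threshold."""
    if not points:
        return None
    ref_lat = sum(p[0] for p in points) / len(points)
    ref_lon = sum(p[1] for p in points) / len(points)
    spread = max(haversine_m(ref_lat, ref_lon, lat, lon) for lat, lon in points)
    return "fixed_location" if spread <= radius_m else "mobile_location"
```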
During cross-validation, if the business scenario determined from the recognition result of the first recognition model is a fixed store while the business scenario determined from the transaction geographic position is a mobile-location mode, the scenario determined from the merchant portal photograph may be inaccurate, and the recognition result is considered not to have passed the cross-validation. If the business scenario determined from the recognition result of the first recognition model is a fixed store and the business scenario determined from the transaction geographic position is a fixed-location mode, the recognition result can be considered to have passed the cross-validation, and the verified fixed-store business scenario is taken as the business information of the merchant.
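The cross-check itself can be expressed as a small comparison between the portal-photograph class and the LBS-derived mode; the class labels and compatibility rules below are illustrative assumptions.

```python
def cross_validate_scenario(portal_class, lbs_mode):
    """portal_class: label from the first recognition model, e.g. 'fixed_store',
    'vending_cart', 'booth'. lbs_mode: 'fixed_location' / 'mobile_location' from
    the transaction-LBS check, or None if no LBS evidence is available."""
    if lbs_mode is None:
        return True                         # nothing to cross-check against
    if portal_class == "fixed_store":
        return lbs_mode == "fixed_location"  # a fixed store must not move
    # vending carts and booths exist in both fixed and mobile variants, so the
    # LBS mode refines rather than contradicts the image-based result
    return True
```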
In one embodiment, the method for describing the merchant portrait according to the embodiment of the present invention further includes a step of generating the first recognition model, where generating the first recognition model includes: obtaining training samples, wherein each training sample comprises an image of the sample and the category of the corresponding sample, the category representing whether the sample is a fixed store, a vending cart, a booth, or the other category; and training the first recognition model according to the training samples.
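The patent does not prescribe a particular architecture or framework for the first recognition model. As one hedged possibility, a four-class image classifier could be fine-tuned roughly as follows; the directory layout, hyperparameters, and ResNet-18 backbone are assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 4  # fixed store, vending cart, booth, other

def train_first_recognition_model(sample_dir, epochs=5):
    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    # expects sample_dir/<class_name>/*.jpg; the folder name serves as the sample label
    data = datasets.ImageFolder(sample_dir, transform=tfm)
    loader = DataLoader(data, batch_size=32, shuffle=True)

    model = models.resnet18(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
    return model
```

At inference time, a photo whose top prediction falls into the "other" class would be treated as not being a merchant portal photograph.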
In addition to determining the business scenario of the merchant by recognizing the merchant portal photograph, the business industry of the merchant can also be determined by recognizing the characters appearing on the merchant portal.
According to industry classifications, business industry classifications include, for example, catering, hotels, telecommunications, real estate, services, clothing, education, retail, wholesale, agriculture, travel, medical services, and the like.
For recognition of the business industry, on the basis of having judged that the merchant business photograph obtained in step 102 is a merchant door photograph, a character extraction function for the door photograph is added. The characters on the door photograph are usually the text that best represents the business content of the merchant, such as "Ruyi Noodle Shop" or "Good Luck Supermarket", so acquiring this information is very important for improving the accuracy and coverage of merchant industry recognition.
A fixed store and a vending cart usually carry signboard characters on the corresponding door photograph, such as the shop name of a fixed store, or text describing the business type or content written on the hood or body of a vending cart. A booth itself has no part for carrying signboard characters, but many vendors put up a sign or billboard beside the booth with text describing the business type or content of the booth, such as "breakfast stall" or "fruit and vegetable stall".
Therefore, in one embodiment, the method for describing the merchant portrait according to the embodiment of the present invention further includes: inputting the merchant portal photograph into a preset text extraction model to obtain text data corresponding to the merchant portal photograph. The text extraction model may be trained from training samples, each training sample including a text image of the sample and the text data of the corresponding sample.
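The text extraction model is not specified further in this document; any trained OCR component could fill the role. Purely as an illustration, an off-the-shelf OCR engine such as Tesseract (via pytesseract, with a simplified-Chinese language pack installed) could be wrapped like this:

```python
from PIL import Image
import pytesseract

def extract_portal_text(photo_path):
    """Extract the signboard text from a merchant door photograph.
    pytesseract is only an example stand-in for the preset text extraction model."""
    image = Image.open(photo_path)
    text = pytesseract.image_to_string(image, lang="chi_sim")
    return " ".join(text.split())  # normalise whitespace
```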
In one embodiment, the determining of the business information of the merchant corresponding to the business photo based on the identification result is determining the business industry of the merchant corresponding to the business photo according to the text data, wherein the business industry of the merchant is used as the business information of the merchant.
In one embodiment, determining the business industry of the merchant corresponding to the merchant door photograph according to the extracted text data is done by inputting the extracted text data into a preset first industry classification model, performing mapping recognition on the text data through the preset industry classification model to obtain the industry classification corresponding to the text data, and then determining that industry classification as the business industry of the merchant corresponding to the merchant door photograph.
The character features extracted from the merchant portal photograph may be sentences. Before being input into the first industry classification model, an extracted sentence can be split into standard Chinese words, each word corresponding to one feature; the split words are then assembled into a group of feature vectors and input into the first industry classification model for mapping recognition.
The method for describing the merchant portrait further comprises a step of generating the first industry classification model, namely obtaining training samples, where each training sample comprises the text data of the sample and the industry classification category of the corresponding sample, and training the first industry classification model according to the training samples. For example, the text data of a sample is "noodle shop" and the sample label is "catering industry"; the text data of another sample is "fruit store" and the sample label is "individual business". By training on such samples, the corresponding first industry classification model can be obtained. The text data may be represented as a vector composed of multiple feature values, and the sample label may be a numerical value identifying the industry classification category, for example the values 0, 1, 2, 3, … assigned to different industry classifications.
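As a hedged sketch of such a first industry classification model (the library choices, tokenizer, and labels are illustrative; the document only requires that signboard text be split into words, turned into features, and mapped to an industry):

```python
import jieba
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

def tokenize_zh(sentence):
    """Split an extracted signboard sentence into standard Chinese words."""
    return jieba.lcut(sentence)

def train_first_industry_classifier(texts, industry_labels):
    """texts: signboard strings such as '如意面馆'; industry_labels: e.g. 'catering'."""
    clf = Pipeline([
        ("tfidf", TfidfVectorizer(tokenizer=tokenize_zh, token_pattern=None)),
        ("model", LogisticRegression(max_iter=1000)),
    ])
    clf.fit(texts, industry_labels)
    return clf

# usage sketch with made-up samples
# clf = train_first_industry_classifier(["如意面馆", "好运来超市"], ["catering", "retail"])
# clf.predict(["水果店"])
```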
In another embodiment, determining the business industry of the merchant corresponding to the merchant door photograph according to the text data comprises: obtaining industry classification data of a database, mapping and comparing the extracted text data with the industry classification data of the database, determining the industry classification corresponding to the text data according to the mapping comparison result, and determining that industry classification as the business industry of the merchant corresponding to the merchant door photograph. For example, the database includes an industry classification field and an industry keyword field; different industry classification data can be written under the industry classification field, and the industry keywords corresponding to each industry classification can be written under the industry keyword field. For example, in the database the industry classification "catering industry" has corresponding industry keywords such as "restaurant" or "home dish". If the extracted text data contains "restaurant", the mapped industry classification, i.e., "catering industry", can be obtained by looking up the industry keyword "restaurant" in the database.
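The database-lookup variant can be sketched as a simple keyword match; the table contents and the in-memory dictionary standing in for the database are illustrative assumptions.

```python
# illustrative in-memory stand-in for the industry classification table
INDUSTRY_KEYWORDS = {
    "catering":  ["restaurant", "noodle", "home dish"],
    "retail":    ["supermarket", "convenience store"],
    "floristry": ["flower", "bouquet"],
}

def classify_by_keyword(extracted_text):
    """Map extracted signboard text onto an industry classification via keyword matching."""
    text = extracted_text.lower()
    for industry, keywords in INDUSTRY_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return industry
    return None  # no keyword matched; fall back to the model-based classifier
```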
In one embodiment, in the case that the merchant business photograph is a business interior photograph of the merchant, the image recognition model of the corresponding category is a second recognition model, and the second recognition model is used for classifying and recognizing objects in the business interior photograph. Specifically, inputting the merchant business photograph into the image recognition model of the corresponding category to obtain the recognition result of the merchant business photograph includes: first, obtaining relevant information of the merchant. Here, the relevant information of the merchant includes, for example, big data corresponding to the merchant, such as transaction information (including purchase transactions and sales transactions), evaluation information about the merchant, and the business industry content described in the business license provided by the merchant.
Then, at least one business industry corresponding to the merchant is predicted according to the relevant information of the merchant. In one embodiment, at least two business industries are predicted. For example, if the transaction information of the merchant includes food sales, the business industry of the merchant is predicted to be the catering industry, and by extracting key information from the evaluation text about the merchant, the business industry of the merchant may also be predicted to be the baking industry. Although the business license specifies the business scope of the merchant, this scope is typically very broad and does not accurately determine the merchant's actual business industry.
In one embodiment, the method for describing the merchant portrayal further comprises the step of generating a second recognition model, comprising: obtaining training samples, wherein each training sample comprises an image of the sample and a category of an object in the corresponding sample; and training to obtain the second recognition model according to the training sample.
The purpose of predicting at least one business industry of the merchant from the merchant's related information is to collect, for each predicted business industry, object samples corresponding to business interior scenes of that industry, and to train the second recognition model on such a sample set so that objects in the business interior photograph can be recognized more accurately.
Therefore, the business interior photograph is correspondingly input into at least one preset second recognition model, and the at least one preset second recognition model respectively corresponds to the at least one predicted business industry, so as to classify and recognize objects in the business interior photograph that belong to the corresponding predicted business industry. That is, if the business industry is predicted to be the catering industry, the corresponding second recognition model is trained on objects likely to appear in a restaurant, for example images of tables, chairs, stools, plates, dishes, and chopsticks, so as to obtain the corresponding classification model, namely the second recognition model.
If the predicted business industry is, for example, small or medium-sized retail such as a fruit supermarket, the corresponding second recognition model is trained on objects likely to appear in a fruit supermarket, for example images of apples, oranges, bananas, and watermelons, so as to obtain the corresponding classification model, namely the second recognition model.
Taking the case where at least two business industries are predicted for the merchant and at least two corresponding second recognition models are trained as an example, the following describes how the embodiment of the present invention inputs the business interior photograph into the second recognition models to obtain the recognition result and determines the business industry of the merchant according to the recognition result (hereinafter referred to as the business industry determination step for convenience of description).
FIG. 4 is a flowchart of the merchant business industry determination steps of one embodiment of the invention, including:
step 202, acquiring relevant information of a merchant;
step 204, predicting at least two business industries corresponding to the merchant according to the relevant information of the merchant;
step 206, inputting the business internal view into at least two preset second recognition models correspondingly, wherein the at least two preset second recognition models correspond to the predicted at least two business industries respectively, so as to classify and recognize objects belonging to the corresponding predicted business industries in the business internal view;
Step 208, classifying and recognizing, through the at least two preset second recognition models, the objects in the business interior photograph that belong to the corresponding predicted business industries, and obtaining the recognition similarity corresponding to each second recognition model.
The recognition similarity is the average of the similarities of the individual objects recognized by a second recognition model for its corresponding predicted business industry. For example, taking predicted industries corresponding to a restaurant, a fruit store, and a flower store, when the second recognition model trained on restaurant object images recognizes the interior scene, if the similarity for the recognized table is 80%, for the recognized bowl 90%, and for the recognized chopsticks 100%, then the average similarity is 90%, that is, the recognition similarity of the restaurant second recognition model for the business interior photograph is 90%.
Similarly, the recognition similarity of the second recognition model corresponding to the fruit store may be 30%, and that of the second recognition model corresponding to the flower store may be 20%.
Step 210, determining a maximum value of the at least two recognition similarities, namely, a maximum similarity.
Still taking the restaurant, fruit store and flower store corresponding to the prediction industry as an example, the maximum similarity is 90%.
Step 212, determining whether the recognition similarity of the maximum value is greater than a predetermined threshold.
The predetermined threshold is the average similarity over the objects recognized by each second recognition model on its training samples for the corresponding business industry, measured when the trained model recognizes those samples essentially correctly. Taking the restaurant object samples as an example, if the similarity for the table samples is 70%, for the bowl samples 100%, and for the chopstick samples 70%, the average similarity is 80%; if at this 80% average similarity the recognition accuracy on these samples is 100%, the predetermined threshold may be set to 80%.
Step 214, in the case that the maximum recognition similarity is greater than the predetermined threshold, taking the maximum recognition similarity as the recognition result.
Step 216, determining the predicted business industry corresponding to the maximum recognition similarity as the business information of the merchant.
Step 218, if the maximum recognition similarity is not greater than the predetermined threshold, this indicates that the acquired business interior photograph may not meet the requirements and needs to be re-acquired. In one embodiment, if the predicted business industries are insufficient, the business industry can also be predicted again, and a second recognition model trained on objects corresponding to the newly predicted industry can be used to perform recognition and determination again.
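A compact sketch of the FIG. 4 flow (steps 206 to 218) follows. Each candidate second recognition model is assumed to expose a recognize() call returning per-object similarities in [0, 1]; this interface is invented here for illustration.

```python
def determine_industry_parallel(interior_photo, models_by_industry, threshold=0.8):
    """models_by_industry: {'catering': model, 'fruit_store': model, ...}, one second
    recognition model per predicted business industry."""
    similarities = {}
    for industry, model in models_by_industry.items():
        per_object = model.recognize(interior_photo) or [0.0]   # e.g. [0.8, 0.9, 1.0]
        similarities[industry] = sum(per_object) / len(per_object)

    best_industry = max(similarities, key=similarities.get)     # step 210
    if similarities[best_industry] > threshold:                  # steps 212-216
        return best_industry, similarities[best_industry]
    return None, None  # step 218: re-acquire the interior photo or re-predict industries
```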
FIG. 5 is a flowchart of a merchant business industry determination step according to another embodiment of the invention, comprising:
step 242, obtaining relevant information of the merchant;
step 244, predicting at least two business industries corresponding to the merchant according to the relevant information of the merchant;
step 246, inputting the business internal view into one of at least two preset second identification models, wherein the at least two second identification models respectively correspond to the at least two predicted business industries so as to classify and identify objects belonging to the corresponding predicted business industries in the business internal view;
step 248, classifying and identifying the object belonging to the corresponding predicted business industry in the business internal scene through one of the at least two second identification models, and obtaining the identification similarity of the second identification models. The identification similarity may be described with reference to the embodiment of fig. 4, and will not be described here.
Step 250 determines whether the recognition similarity is greater than a predetermined threshold. The predetermined threshold may be described with reference to the embodiment of fig. 4, and will not be described again here.
In step 252, in the case where the recognition similarity is greater than the predetermined threshold, the recognition similarity is taken as a recognition result.
Step 254, determining the predicted business industry corresponding to the identification similarity as the business information of the merchant.
Step 256, if the recognition similarity is not greater than the predetermined threshold, determining whether the second recognition model that currently recognizes the business interior photograph is the last of the at least two second recognition models, that is, determining whether all the second recognition models have been traversed;
if not, returning to step 246, and continuing to input the business internal scene into another second recognition model for recognition; or if it is determined that all the second recognition models have been traversed, that is, it is determined that all the second recognition models have recognized the business internal scene and the recognition similarity corresponding to all the second recognition models is smaller than the predetermined threshold, which indicates that the business internal scene may not meet the requirement, step 258 is entered;
step 258, re-acquiring the business interior photograph; or predicting the business industry again and inputting the business interior photograph into a second recognition model trained with objects of the newly predicted industry to perform recognition and determination again.
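The FIG. 5 variant traverses the same candidate models one at a time and stops at the first confident match. The sketch below reuses the assumed recognize() interface from the previous example.

```python
def determine_industry_sequential(interior_photo, models_by_industry, threshold=0.8):
    """Traverse the candidate second recognition models one at a time and stop at the
    first predicted industry whose average recognition similarity exceeds the threshold."""
    for industry, model in models_by_industry.items():           # steps 246-248
        per_object = model.recognize(interior_photo) or [0.0]
        avg = sum(per_object) / len(per_object)
        if avg > threshold:                                       # steps 250-254
            return industry, avg
    return None, None  # step 258: all models traversed without a confident match
```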
When only one business industry is predicted for the merchant, only one second recognition model is used to classify and recognize the objects that may appear in the business interior photograph of a merchant in that industry. When two or more industries are predicted for the merchant, there are correspondingly two or more second recognition models, each classifying and recognizing objects that may appear in the business interior photograph of merchants in its industry. When there are two or more second recognition models, each model yields a recognition similarity for its corresponding objects, so there are two or more recognition similarities, which may differ in value.
When the recognition results are used to determine the business industry corresponding to the business interior photograph, the predicted business industry whose recognition similarity exceeds the predetermined threshold is taken as the business industry of the merchant, and the business industry of the merchant is taken as the business information of the merchant.
In the following, another embodiment of inputting the merchant business photograph into the second recognition model to obtain its recognition result and determining the business industry of the merchant according to the recognition result is described.
Inputting the business internal view into a preset second recognition model, and classifying and recognizing a plurality of objects in the business internal view through the preset second recognition model to obtain a corresponding recognition result.
Then, determining the business information of the merchant corresponding to the merchant business photograph based on the recognition result includes: inputting the images of the recognized objects respectively into a preset second industry classification model, and performing mapping recognition on the objects through the preset second industry classification model to obtain the industry classification corresponding to each object; counting the number of objects belonging to each industry classification and sorting by the counts; and determining the industry classification with the largest number of objects as the business industry of the merchant corresponding to the business interior photograph, where the business industry of the merchant is taken as the business information of the merchant.
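A minimal sketch of this object-voting variant, assuming the second recognition model returns object labels (or crops) and the second industry classification model is available as a callable; both interfaces are assumptions made for illustration.

```python
from collections import Counter

def industry_by_object_vote(detected_objects, industry_classifier):
    """detected_objects: labels returned by the second recognition model,
    e.g. ['table', 'chair', 'plate', 'vase'].
    industry_classifier(obj) -> industry classification for that single object
    (stands in for the preset second industry classification model)."""
    if not detected_objects:
        return None
    votes = Counter(industry_classifier(obj) for obj in detected_objects)
    industry, _count = votes.most_common(1)[0]  # industry with the most objects wins
    return industry
```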
In one embodiment, the method for describing the merchant portrayal further comprises the step of generating a second recognition model, comprising: obtaining training samples, wherein each training sample comprises an image of the sample and a category of an object in the corresponding sample; and training to obtain the second recognition model according to the training sample.
Although there is currently no fully universal image recognition capability, recognition of specific objects is already relatively mature, and artificial intelligence (AI) can recognize more than 100,000 common objects from photographs. Therefore, the objects in the merchant interior scene can be recognized, and more accurate industry information can be obtained by mapping the recognized objects onto the business industry classification categories. For example, if tables, chairs, dinner plates, and paper towels are recognized in the interior photograph of a merchant, the merchant is a restaurant; if vases and various flowers and plants are recognized, the merchant is a flower shop.
In one embodiment, the method for describing the merchant portrait further comprises a step of generating the second industry classification model, which includes: acquiring training samples, wherein each training sample comprises an image of the sample and the industry classification category of the corresponding sample; and training the second industry classification model according to the training samples. Similar to the first industry classification model described above, if the sample object images are "table and chair" or "tableware", the sample label is defined as "catering"; if the sample object images are "apple", "banana", and the like, the sample label is "individual business". By training on such samples, the corresponding second industry classification model can be obtained.
In one embodiment, in the case that the merchant business photograph is a surrounding street photograph of the merchant, the image recognition model of the corresponding category is a third recognition model, and the third recognition model is used for classifying and recognizing landmark buildings in the street photograph. A step of generating the third recognition model includes: obtaining training samples, wherein each training sample comprises an image of the sample and the category of the corresponding sample, the sample being a landmark building; and training the third recognition model according to the training samples. Training samples may be taken, for example, from map software with a picture library that stores pictures of landmark buildings. The sample category is, for example, a label representing the landmark building category in the corresponding sample picture.
In one embodiment, step 104 of inputting the merchant management photograph into the image recognition model of the corresponding category to obtain the recognition result of the merchant management photograph includes: and inputting the business street view into a preset third recognition model for image recognition, and determining a preset target in the business street view, wherein the preset target is used as a recognition result of the business street view.
The predetermined target is a landmark building in the street view around the merchant; by inputting the street view into the third recognition model, the specific landmark building to which the predetermined target belongs can be recognized.
For example, after the surrounding street view provided by the merchant is input into the third recognition model, the recognized landmark building is the Oriental Pearl TV Tower in Shanghai.
In one embodiment, step 106 of determining the business information of the merchant in correspondence with the merchant in the business photograph based on the identification result includes: the geographic position of the preset target is obtained, the geographic position of the merchant is determined according to the relative geographic position of the preset target and the merchant in the business street view, and the business address of the merchant is determined according to the geographic position of the merchant, wherein the business address of the merchant is used as business information of the merchant.
For example, still taking the Oriental Pearl TV Tower as the predetermined target, once the geographic position of the tower is known, the shooting position of the street view, that is, the actual geographic position of the merchant, can be located from the angular orientation, relative distance, and other cues of the tower in the captured street view.
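As an illustration of locating the shooting position from a recognized landmark, the standard spherical destination-point formula can be applied once the landmark's coordinates and the estimated bearing/distance from the landmark toward the camera are known. The coordinates and numbers in the usage comment are purely illustrative.

```python
from math import radians, degrees, sin, cos, asin, atan2

def offset_position(lat, lon, bearing_deg, distance_m):
    """Move from a known landmark position (lat, lon) along a bearing (degrees,
    clockwise from north, pointing from the landmark toward the camera) for a
    given distance in meters on a spherical Earth. Returns the estimated
    shooting position, i.e. the merchant's approximate location."""
    R = 6_371_000
    d = distance_m / R
    brg = radians(bearing_deg)
    lat1, lon1 = radians(lat), radians(lon)
    lat2 = asin(sin(lat1) * cos(d) + cos(lat1) * sin(d) * cos(brg))
    lon2 = lon1 + atan2(sin(brg) * sin(d) * cos(lat1),
                        cos(d) - sin(lat1) * sin(lat2))
    return degrees(lat2), degrees(lon2)

# usage sketch with illustrative values: landmark coordinates plus a bearing and
# distance estimated from the landmark's apparent angle and size in the street view
# merchant_lat, merchant_lon = offset_position(31.24, 121.50, bearing_deg=225, distance_m=800)
```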
For merchants of a mobile business scenario, such as a mobile vending truck or a mobile booth, the business address of the merchant may vary, but the business address determined from the surrounding street view may also represent the active area or range of its business.
The street photograph can serve as an important supplement to the merchant door photograph and the business interior photograph, and recognizing landmark buildings in the street view also helps to judge whether the merchant operates in a business district, effectively characterizing the business address of the merchant. Although the specific business address of the merchant can be obtained from the background database through the transaction LBS information, the LBS information itself may drift by about 500 meters, and it may be obtained late or not at all. Recognizing surrounding landmark buildings from the merchant's street view with the third recognition model therefore allows the geographic position of the merchant, including its POI (Point of Interest) position, to be located quickly.
According to the embodiment, the method for describing the merchant portraits can determine the actual business scene of the merchant through the acquired merchant portal photos, and can determine the business industry category of the merchant according to the characters extracted from the merchant portal photos; the business industry category of the merchant can be determined through the acquired internal view of the merchant business; the business address of the merchant can be determined through the acquired surrounding street view of the merchant.
When only one of the three types of merchant business photographs exists, the business information determined from it is single-dimensional, independent information. In some cases, information from a single dimension may not be fully trustworthy. For example, the business photograph provided by a merchant may be forged: when submitting the door photograph, the merchant may falsely submit the door photograph of another merchant. Although that door photograph does not belong to the submitting merchant, it is still a genuine door photograph, so no anomaly may be found in the business information determined from it.
In addition, if the photo is submitted through C-side authentication (for example by a buyer corresponding to the merchant), its accuracy is high and the possibility of cheating is low, but some uncertainty may remain because the buyer has only limited contact with the merchant.
Therefore, the merchant portrait method provided by the embodiment of the present invention can cross-validate the determined business information against other big data information, including merchant business photographs of different categories, so as to characterize a trusted and complete merchant portrait.
In one embodiment, the merchant portrait method of the embodiment of the present invention further comprises: obtaining transaction information of the merchant corresponding to the merchant business photograph to determine the transaction pattern and funds flow data of the merchant (i.e., obtaining big data information of the merchant); and verifying, based on the transaction pattern and the funds flow data, the business information of the merchant determined from the recognition result, where the business information that passes verification is taken as the business information of the merchant.
By acquiring the transaction information of the merchant, the times and amounts of the transactions generated by the merchant can be known; for example, a certain merchant may steadily generate a large amount of transaction data in the morning, at noon, and in the evening, with relatively large amounts of funds. If the recognized door photograph indicates that the merchant is a street stall, and this does not match the transaction pattern and funds flow reflected in the merchant's big data, the submitted door photograph may be a false photo.
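A rough, heavily simplified plausibility check of this kind might look as follows; the scenario labels and thresholds are placeholders, since the document does not define concrete limits.

```python
def transaction_pattern_consistent(scenario, daily_txn_count, daily_amount,
                                   count_limit=500, amount_limit=50_000):
    """Plausibility check between the image-derived business scenario and the
    merchant's transaction big data. All thresholds are illustrative placeholders."""
    if scenario in ("mobile_booth", "fixed_booth"):
        # a street stall steadily producing very large volumes/amounts is suspicious
        return daily_txn_count <= count_limit and daily_amount <= amount_limit
    return True
```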
In one embodiment, in the case that the merchant business photographs obtained in step 102 include both a merchant door photograph and a business interior photograph, the merchant portrait method according to the embodiment of the present invention may further cross-validate the recognition result corresponding to the merchant door photograph against the recognition result corresponding to the business interior photograph. If the business industries recognized from the two types of business photographs are inconsistent, at least one of the merchant door photograph and the business interior photograph is a false photo. If they are consistent, both photographs can be considered genuine, the verification is passed, and the business information determined by the cross-validated recognition results is taken as the business information of the merchant.
In one embodiment, the target object that submits the merchant business photograph according to the instruction includes at least one of the merchant, a buyer, and a micro-customer; that is, the merchant business photograph can be submitted by the merchant itself, or by an authenticated buyer or micro-customer. When the target object includes at least two of the merchant, a buyer, and a micro-customer, that is, when the instruction to submit a merchant business photograph is sent to two or more target objects at the same time, the recognition results corresponding to the merchant business photographs submitted by the respective target objects can be cross-validated. If the business information determined from the recognition results is consistent, the business information that passes cross-validation is taken as the business information of the merchant.
Step 108, characterizing the merchant portrait according to the determined merchant business information.
The merchant business information here may be determined directly from the acquired merchant business photographs, or determined after cross-validation.
< example >
Referring now to fig. 6, fig. 6 is a flowchart illustrating an example of a method for characterizing a merchant portrait according to an embodiment of the present invention; each step of the flowchart may be implemented by the server in fig. 2 or by the terminal device 1200 in fig. 2.
As shown in FIG. 6, the method for describing the merchant portrait of the embodiment comprises the following steps:
step 302, determining the type of merchant management photograph required.
As described above, in one embodiment, the types of merchant business shots include merchant door shots, business interior shots, and surrounding street shots.
Determining the required type may start from obtaining relevant information of the merchant, for example by mining the merchant's big data and the business information already provided by the merchant, identifying which business information is missing, and then determining the required merchant business photograph according to the category of information to be supplemented.
For example, if the acquired big data shows that the merchant lacks business scenario information, the merchant can be asked to supplement a merchant door photograph for determining the business scenario; if the merchant lacks business industry information, the merchant can be asked to supplement a door photograph and/or a business interior photograph for determining the business industry; if the merchant lacks business address information, the merchant can be asked to supplement a surrounding street photograph for determining the business address.
Step 304, determining the submitting mode of the merchant management photograph.
As described above, in one embodiment, the merchant business photograph may be submitted by the merchant, a buyer, and/or a micro-customer. Buyer submission can be done via a C-side questionnaire or a short message, and micro-customer submission is done through micro-customer authentication.
The submission manner can be determined according to the business photographs the merchant has already submitted. If the merchant has not submitted any business photograph before, an instruction can be sent asking the merchant to submit one, or an instruction can be sent to a buyer or micro-customer so that the photograph submitted by the merchant can be cross-validated. If a business photograph has been submitted before, an instruction can be sent asking a buyer or micro-customer to submit a business photograph of the corresponding type, so as to cross-validate the business information determined from the photograph submitted by the merchant.
Step 306, obtaining merchant management photographs uploaded/submitted by merchants, buyers and/or microguests.
Step 308, determining whether the submitted merchant management photograph passes the model verification.
In one embodiment, the model check includes at least the following:
(1) It is determined whether the submitted photo is a merchant door photograph; if not, the verification fails, and the submitting party needs to upload the merchant door photograph again.
In one embodiment, this verification may be performed by the first recognition model used to classify and recognize objects in the merchant door photograph; if the submitted photo is recognized as the other category by the first recognition model, the verification fails and the submitting party needs to upload a merchant door photograph again.
(2) It is determined whether the submitted photo is a business interior photograph; if not, the verification fails, and the submitting party needs to upload the business interior photograph again.
(3) It is determined whether the submitted photo is a surrounding street photograph; if not, the verification fails, and the submitting party needs to upload the surrounding street photograph again.
In one embodiment, the cases (2) and (3) can be implemented by software or manually checked.
Step 310, whether the cross-validation is passed.
As described above, in one embodiment, cross-validation includes at least the following:
(1) Verifying the business information determined from the business photograph submitted by the merchant against the merchant's transaction information big data.
(2) Cross-validating two or more merchant business photographs submitted by the merchant itself against one another.
(3) Cross-validating the business information determined from the business photograph submitted by the merchant against the business photograph submitted by a buyer or micro-customer.
If any of these checks fails the cross-validation, the process returns to step 302, where the required type of merchant business photograph is redetermined. If the cross-validation passes, the process proceeds to step 312, where the business information of the merchant, that is, the business industry, business address, and business scenario, is determined according to the category of the obtained business photograph, so as to characterize or refine the merchant portrait.
According to the merchant portrait method of this embodiment, the merchant business photographs are recognized by image recognition to determine the merchant business information, and the merchant portrait is characterized based on that information. Merchant information can be acquired comprehensively and completely, improving both the coverage and the accuracy of the merchant portrait. Meanwhile, the data required for the portrait can be recognized and collected automatically without manual review, reducing the cost of portrait characterization.
In addition, existing C-side (buyer) authentication collects merchant-related information through buyer authentication, but the information submitted by buyers is poorly normalized. For example, for the same small shop selling sundries in a residential district, some buyers may write "small seller", some "food", and some only "cigarettes", which greatly increases the difficulty of subsequent industry recognition.
According to the merchant portrait method provided by the embodiment of the present invention, the buyer provides the merchant's business photograph and the business information corresponding to that photograph is recognized and acquired automatically, so that the merchant's business information can be cross-validated. This simplifies the way the buyer provides authentication information, improves the buyer's cooperation in submitting verification information, yields more standard and unified merchant business information, and improves the accuracy and convenience of verifying it.
< method example two >
The present embodiment provides a method for verifying a merchant representation, which is implemented by a terminal device, such as terminal device 1200 in fig. 2. FIG. 7 is a flow chart showing a verification method of a merchant portrait of this embodiment. As shown in FIG. 7, the verification method of the merchant portrait of this embodiment may include the following steps:
Step 502, in response to a verification request for a merchant portrait characterized according to information submitted by the merchant, sending an instruction to a buyer of the merchant to submit a business photograph of the merchant.
The information submitted by the merchant for characterizing the merchant portrait can be information provided by the merchant when opening an account or when supplementing information later, such as filled-in text information, or the merchant business photographs provided in the merchant portrait characterization method of the embodiment of the present invention.
The buyer has conducted a transaction with the merchant; to keep the merchant fresh in the buyer's memory and ensure the timeliness of the questionnaire, a short message or email is usually pushed on the buyer's successful-payment page, asking the buyer to provide a specific business photograph of the merchant. The content of the photograph provided by the buyer follows the requirements of the pushed content, such as a merchant door photograph, a business interior photograph, or both.
Step 504, obtaining the merchant management photograph submitted by the buyer based on the instruction.
Step 506, executing a setting operation for completing the verification of the business portrait based on the business management photograph, wherein the verification of the business portrait comprises the depiction of the business portrait, and the depiction of the business portrait comprises: inputting the merchant management photograph into an image recognition model of a corresponding category according to the category of the merchant management photograph to obtain a recognition result of the merchant management photograph; determining business information of the merchant corresponding to the merchant management photograph based on the identification result; and describing the merchant portrait according to the business information of the merchant.
In one embodiment, in this step 506, performing the set-up operation for completing verification of the merchant representation may include: the verification of the merchant representation is performed, i.e., in this embodiment, the verification of the merchant representation may be performed by terminal device 1200.
In another embodiment, in the step 506, performing the setting operation for completing the verification of the merchant portrait may also include: and sending the merchant management photograph submitted by the buyer to a server, and verifying the merchant portrait by the server.
Step 508, comparing whether the merchant portrait characterized based on the photograph submitted by the buyer is consistent with the merchant portrait characterized according to the information submitted by the merchant, so as to verify the latter.
If the two merchant portraits are consistent, the merchant portrait characterized according to the information submitted by the merchant passes the verification; if they are inconsistent, it does not pass the verification, that is, its accuracy is low, and the merchant can be asked to resubmit the corresponding business information, such as text information or business photographs.
In the embodiment of verifying the merchant portrait by the server, the method for verifying the merchant portrait further comprises: and receiving a verification result returned by the server after the verification of the merchant portrait is completed.
According to the method of this embodiment, image recognition is introduced to recognize the merchant business photograph provided by the buyer, the corresponding business information is determined, and cross-validation is performed. This effectively avoids the verification difficulty and inaccuracy of the merchant portrait caused by irregular text verification information submitted by buyers, simplifies the way the buyer provides verification information, and improves the buyer's cooperation in submitting it.
< device example >
The present embodiment provides a device for describing a merchant portrait, for example, a device 2000 for describing a merchant portrait shown in fig. 8, where the device 2000 for describing a merchant portrait includes an acquisition module 2200, an identification module 2400, a determination module 2600, and a description module 2800.
The obtaining module 2200 is configured to obtain a merchant management photograph, and the identifying module 2400 is configured to input the merchant management photograph into an image identifying model of the corresponding category according to the category of the merchant management photograph, so as to obtain the identifying result of the merchant management photograph. The determining module 2600 is configured to determine the business information of the merchant corresponding to the merchant management photograph based on the identification result, and the characterizing module 2800 is configured to characterize the merchant portrait according to the business information of the merchant.
In one embodiment, the apparatus 2000 for describing a business portrait further includes a sending module (not shown in the figure) configured to send an instruction for submitting the business operation photograph to the target object before the business operation photograph is obtained. The obtaining module 2200 may be configured to obtain a merchant management photograph submitted by the target object based on the instruction.
In one embodiment, the merchant management photograph is a merchant portal photograph, the image recognition model of the corresponding category is a first recognition model, the first recognition model is used for classifying and recognizing the object of the merchant portal photograph, and the recognition module 2400 is used for inputting the merchant portal photograph into a preset first recognition model; and classifying and identifying the objects in the merchant portal through the preset first identification model to obtain a corresponding identification result. The determining module 2600 is configured to determine, according to the identification result, a business scenario of the merchant corresponding to the merchant portal, where the business scenario of the merchant is used as business information of the merchant.
In one embodiment, the categories of the business scenario of the merchant include fixed store, vending cart, and booth, and the object categories recognized by the first recognition model for the merchant door photograph include fixed store, vending cart, booth, and an other category. The determining module 2600 is configured to determine, according to the recognition result obtained by classification, that the business scenario of the merchant is a fixed store, a vending cart, or a booth, or, if the recognition result is the other category, to determine that the acquired business photograph is not a merchant door photograph.
In one embodiment, the determination module 2600 is further configured to: acquiring transaction LBS information of the merchant; determining the transaction geographic position of the merchant according to the transaction LBS information; determining an operation scene of the merchant based on the transaction geographic position; verifying the business scene of the merchant determined by the identification result according to the business scene of the merchant determined based on the transaction geographic position; and taking the verified business scene of the merchant as the business information of the merchant.
In one embodiment, the determination module 2600 is further configured to: determining whether the transaction geographic location has changed within a predetermined time period; if the transaction geographic position is unchanged, determining that the business scene of the merchant is a fixed-position business mode; or if the transaction geographic position changes, determining that the business scene of the merchant is a position flow business mode.
In one embodiment, the apparatus 2000 for describing a merchant portrait further includes an extraction module (not shown in the figure) for: and inputting the merchant portal photograph into a preset character extraction model to obtain character data corresponding to the merchant portal photograph. Wherein, the determining module 2600 is configured to: and determining the business industry of the corresponding merchant according to the text data, wherein the business industry of the merchant is used as the business information of the merchant.
In one embodiment, the determination module 2600 is configured to: inputting the extracted text data into a preset first industry classification model; mapping and identifying the text data through the preset industry classification model to obtain industry classification corresponding to the text data; and determining the industry classification corresponding to the text data as the business industry of the merchant corresponding to the merchant portal.
In one embodiment, the determination module 2600 is configured to: acquiring industry classification data of a database; mapping and comparing the extracted text data with industry classification data of the database; determining industry classification corresponding to the text data according to the mapping comparison result; and determining the industry classification corresponding to the text data as the business industry of the merchant corresponding to the merchant portal.
In one embodiment, the apparatus 2000 for describing a merchant portrait further includes a first recognition model generating module (not shown in the figure) for generating a first recognition model, where the first generating module is configured to: obtaining training samples, wherein each training sample comprises an image of the sample and a category of an object in the corresponding sample, and the category of the object represents whether the corresponding object is a fixed store, a vending car, a stall or other types; and training to obtain a first recognition model according to the training sample.
In one embodiment, the apparatus 2000 for describing a merchant portrait further includes a text extraction model generating module (not shown in the figure) for generating a text extraction model, where the text extraction model generating module is configured to: obtaining training samples, wherein each training sample comprises a text image of the sample and text data of the corresponding sample; and training to obtain the text extraction model according to the training sample.
In one embodiment, the apparatus 2000 for describing a merchant portrait further includes: the first industry classification model generation module is used for generating a first industry classification model, and the first industry classification model generation module is used for: acquiring training samples, wherein each training sample comprises text data of the sample and industry classification categories of the corresponding sample; and training to obtain the first industry classification model according to the training sample.
In one embodiment, the merchant business photograph is a business interior photograph of the merchant, the image recognition model of the corresponding category is a second recognition model, and the second recognition model is used for classifying and recognizing objects in the business interior photograph. The recognition module 2400 is configured to: acquire relevant information of the merchant; predict at least one business industry corresponding to the merchant according to that information; correspondingly input the business interior photograph into at least one preset second recognition model, the at least one preset second recognition model respectively corresponding to the at least one predicted business industry, so as to classify and recognize objects in the business interior photograph belonging to the corresponding predicted business industry; and classify and recognize those objects through the at least one preset second recognition model to obtain at least one recognition similarity corresponding to the at least one preset second recognition model, the at least one recognition similarity being used as the recognition result.
In one embodiment, the determining module 2600 is configured to: determine the predicted business industry corresponding to the maximum of the at least one recognition similarity as the business industry of the merchant, where the business industry of the merchant is used as the business information of the merchant.
In one embodiment, the merchant business photograph is a business interior photograph of the merchant, the image recognition model of the corresponding category is a second recognition model, and the second recognition model is used for classifying and recognizing objects in the business interior photograph. The recognition module 2400 is configured to: input the business interior photograph into a preset second recognition model; and classify and recognize a plurality of objects in the business interior photograph through the preset second recognition model to obtain the corresponding recognition result.
In one embodiment, the determination module 2600 is configured to: respectively inputting the images of the identified objects into a preset second industry classification model; mapping and identifying the objects respectively through the preset second industry classification model to obtain industry classifications corresponding to the objects respectively; counting the number of corresponding objects belonging to the same industry classification, and sorting according to the number and the size; and determining the industry classification corresponding to the object which belongs to the same industry classification and has the largest number of objects as the business industry corresponding to the business internal scene, wherein the business industry of the business is used as the business information of the business.
In one embodiment, the apparatus 2000 for characterizing a merchant portrait further includes a second recognition model generating module (not shown in the figure) for generating a second recognition model, where the second recognition model generating module is configured to: obtaining training samples, wherein each training sample comprises an image of the sample and a category of an object in the corresponding sample; and training to obtain the second recognition model according to the training sample.
In one embodiment, the apparatus 2000 for characterizing a merchant portrait further includes a second industry classification model generating module (not shown in the figure) for generating the second industry classification model, where the second industry classification model generating module is configured to: acquire training samples, each training sample comprising an image of the sample and the industry classification category of the corresponding sample; and train to obtain the second industry classification model according to the training samples.
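As a non-authoritative sketch of training such an image classification model from (image, label) pairs: the PyTorch/torchvision backbone, batch size, and learning rate below are assumptions chosen for illustration and are not specified by these embodiments.

```python
# Illustrative training loop for a category/industry classifier. Assumes image tensors of
# shape [N, 3, 224, 224] and integer class labels are already prepared; all hyperparameters
# and the ResNet-18 backbone are arbitrary choices for this sketch.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models


def train_classifier(images: torch.Tensor, labels: torch.Tensor, num_classes: int,
                     epochs: int = 5, lr: float = 1e-3) -> nn.Module:
    model = models.resnet18(weights=None)                     # small CNN backbone, randomly initialized
    model.fc = nn.Linear(model.fc.in_features, num_classes)   # classification head over the sample categories

    loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):
        for batch_images, batch_labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(batch_images), batch_labels)
            loss.backward()
            optimizer.step()
    return model
```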
In one embodiment, the merchant management photograph is a business street photograph of the merchant, the image recognition model of the corresponding category is a third recognition model, and the third recognition model is used for classifying and recognizing landmark buildings in the business street photograph. The identification module 2400 is configured to: input the business street photograph into a preset third recognition model for image recognition, and determine a preset target in the business street photograph, where the preset target is used as the recognition result of the business street photograph.
In one embodiment, the determination module 2600 is configured to: obtain the geographic position of the preset target; determine the geographic position of the merchant according to the relative geographic position of the preset target and the merchant in the business street photograph; and determine the business address of the merchant according to the geographic position of the merchant, where the business address of the merchant is used as the business information of the merchant.
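Purely to illustrate the geometric idea, assuming the landmark's coordinates are known and an east/north offset of the merchant relative to the landmark has somehow been estimated from the street photograph (both assumptions; the embodiment does not fix how the offset is obtained), the merchant's position could be derived as follows.

```python
# Hypothetical sketch: shift a recognized landmark's coordinates by the merchant's
# estimated relative offset to obtain the merchant's geographic position.
import math


def offset_position(landmark_lat: float, landmark_lon: float,
                    east_m: float, north_m: float) -> tuple[float, float]:
    """Shift WGS-84 coordinates by a small east/north offset in meters
    (flat-earth approximation, adequate at street scale)."""
    meters_per_deg_lat = 111_320.0
    meters_per_deg_lon = 111_320.0 * math.cos(math.radians(landmark_lat))
    return (landmark_lat + north_m / meters_per_deg_lat,
            landmark_lon + east_m / meters_per_deg_lon)


# Example: the preset target sits at (30.2741, 120.1551); the merchant is estimated to be
# roughly 25 m east and 10 m south of it in the business street photograph.
merchant_lat, merchant_lon = offset_position(30.2741, 120.1551, east_m=25.0, north_m=-10.0)
```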
In one embodiment, the apparatus 2000 for characterizing a merchant portrait further includes a third recognition model generating module (not shown in the figure) for generating the third recognition model, where the third recognition model generating module is configured to: obtain training samples, each training sample comprising an image of the sample and the category of the corresponding sample, the sample being a landmark building; and train to obtain the third recognition model according to the training samples.
In one embodiment, the apparatus 2000 for characterizing a merchant portrait further includes a first verification module (not shown in the figure), where the first verification module is configured to: acquire transaction information of the merchant corresponding to the merchant management photograph to determine a transaction mode and fund flow data of the merchant; and verify, based on the transaction mode and the fund flow data, the business information of the merchant determined based on the identification result, where the business information determined by the identification result that passes the verification is taken as the business information of the merchant.
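A minimal sketch of such a consistency check, assuming a hypothetical table mapping business industries to expected transaction patterns (the table, field names, and thresholds are invented for illustration only):

```python
# Hypothetical check that the industry inferred from the merchant management photograph is
# consistent with the observed transaction mode and fund flow volume.
EXPECTED_PATTERNS = {
    "catering": {"mode": "offline_scan", "min_daily_txns": 20},
    "retail":   {"mode": "offline_scan", "min_daily_txns": 10},
}


def verify_business_info(industry: str, transaction_mode: str, avg_daily_txns: float) -> bool:
    """Return True when the image-derived business industry matches the transaction evidence."""
    expected = EXPECTED_PATTERNS.get(industry)
    if expected is None:
        return False          # no reference pattern: treat as failing the verification
    return (transaction_mode == expected["mode"]
            and avg_daily_txns >= expected["min_daily_txns"])
```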
In one embodiment, the merchant management photograph includes a merchant portal photograph and a business interior photograph, and the apparatus 2000 for characterizing a merchant portrait further includes a second verification module (not shown in the figure), where the second verification module is configured to: cross-verify the identification result corresponding to the merchant portal photograph and the identification result corresponding to the business interior photograph; and take the business information determined by the cross-verified identification results as the business information of the merchant.
In one embodiment, the target object includes at least one of a merchant, a buyer, and a micro-customer. In an embodiment where the target object includes at least two of a merchant, a buyer, and a micro-customer, the apparatus 2000 for characterizing a merchant portrait further includes a third verification module (not shown in the figure), where the third verification module is configured to: cross-verify the identification results corresponding to the merchant management photographs submitted by the target objects; and take the business information determined by the cross-verified identification results as the business information of the merchant.
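Under the simplifying assumption that each identification result reduces to a business-industry label (an assumption made only for this sketch), the cross-verification of the second and third verification modules amounts to an agreement check:

```python
# Hypothetical cross-verification: recognition results from different sources (portal photograph
# vs. interior photograph, or photographs submitted by different target objects) are accepted
# only when they agree on the same business information.
from typing import Optional, Sequence


def cross_verify(results: Sequence[str]) -> Optional[str]:
    """Return the agreed business information if all results match, otherwise None."""
    distinct = set(results)
    if len(distinct) == 1:
        return distinct.pop()      # consistent: use it as the merchant's business information
    return None                    # inconsistent: cross-verification fails


# Example: industry inferred from the merchant portal photograph vs. the interior photograph.
verified = cross_verify(["catering", "catering"])   # -> "catering"
```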
< Device example >
In this embodiment, there is also provided an electronic device including the apparatus 2000 for characterizing a merchant portrait described in the embodiments of the present specification.
In further embodiments, as shown in fig. 9, the electronic device 3000 may include a memory 3200 and a processor 3400. The memory 3200 is used to store executable commands. The processor 3400 is configured to perform the methods described in any of the method embodiments of the present specification under the control of executable commands stored in the memory 3200.
The electronic device 3000 may be a server or a terminal device, depending on the execution subject of the method embodiment to be performed, which is not limited herein.
In one embodiment, any of the modules of the apparatus embodiments above may be implemented by the processor 3400.
< System example >
In this embodiment, there is also provided a merchant portrait verification system, for example the merchant portrait verification system shown in fig. 2, which includes a server 1100 for executing the method according to the first method embodiment and a terminal device 1200 for executing the method according to the second method embodiment.
The server comprises a memory and a processor, where the memory of the server is used for storing executable commands, and the processor of the server is configured to perform, under the control of the executable commands, the method according to any embodiment of the first method embodiment of the present specification.
The terminal device comprises a memory and a processor, where the memory of the terminal device is used for storing executable commands, and the processor of the terminal device is configured to perform, under the control of the executable commands, the method according to any embodiment of the second method embodiment of the present specification.
< Computer-readable storage medium embodiment >
The present embodiment provides a computer-readable storage medium having stored therein executable instructions that, when executed by a processor, perform the method described in any of the method embodiments of the present specification.
One or more embodiments of the present description may be a system, method, and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement aspects of the present description.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber optic cable), or an electrical signal transmitted through a wire.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of embodiments of the present description may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present description are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), with state information of the computer readable program instructions, the electronic circuitry being able to execute the computer readable program instructions.
Various aspects of the present description are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the specification. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present description. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, implementation by software, and implementation by a combination of software and hardware are all equivalent.
The embodiments of the present specification have been described above, and the above description is illustrative, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the technical improvement in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the application is defined by the appended claims.

Claims (31)

1. A method for describing a merchant portrait, comprising:
acquiring a merchant management photograph; wherein the categories of the merchant management photograph comprise a merchant portal photograph, a business interior photograph and/or a business street photograph;
inputting the merchant management photograph into an image recognition model of a corresponding category according to the category of the merchant management photograph to obtain a recognition result of the merchant management photograph; in the case that the merchant management photograph is the merchant portal photograph, the image recognition model of the corresponding category is used for classifying and recognizing objects of the merchant portal photograph; in the case that the merchant management photograph is the business interior photograph of the merchant, the image recognition model of the corresponding category is used for classifying and recognizing objects of the business interior photograph; and in the case that the merchant management photograph is the business street photograph of the merchant, the image recognition model of the corresponding category is used for classifying and recognizing landmark buildings of the business street photograph;
determining business information of the merchant corresponding to the merchant management photograph based on the recognition result;
and describing the merchant portrait according to the business information of the merchant.
2. The method of claim 1, prior to the acquiring the merchant management photograph, the method further comprising:
sending an instruction for submitting the merchant management photograph to a target object;
wherein, the obtaining the merchant management photograph includes:
and acquiring the merchant management photograph submitted by the target object based on the instruction.
3. The method according to claim 1 or 2, wherein the merchant management photograph is a merchant portal photograph, the image recognition model of the corresponding category is a first recognition model, and the first recognition model is used for classifying and recognizing objects of the merchant portal photograph;
the step of inputting the merchant management photograph into the image recognition model of the corresponding category to obtain the recognition result of the merchant management photograph comprises the following steps:
inputting the merchant portal photograph into a preset first recognition model;
classifying and recognizing the objects in the merchant portal photograph through the preset first recognition model to obtain a corresponding recognition result;
wherein the determining, based on the recognition result, the business information of the merchant corresponding to the merchant management photograph comprises:
and determining the business scenario of the merchant corresponding to the merchant portal photograph according to the recognition result, wherein the business scenario of the merchant is used as the business information of the merchant.
4. The method of claim 3, wherein the categories of business scenarios of the merchant include a fixed store, a vending cart, and a stall, and the categories of objects in the merchant portal photograph recognized by the first recognition model include fixed store, vending cart, stall, and other categories;
wherein the determining, according to the recognition result, the business scenario of the merchant corresponding to the merchant portal photograph comprises:
and determining that the business scenario of the merchant is a fixed store, a vending cart or a stall according to the recognition result obtained by the classification recognition, or determining that the acquired merchant management photograph of the merchant is not a merchant portal photograph when the recognition result is of the other category.
5. The method of claim 4, wherein the determining, based on the recognition result, the business information of the merchant corresponding to the merchant management photograph further comprises:
acquiring transaction LBS information of the merchant;
determining the transaction geographic position of the merchant according to the transaction LBS information;
determining a business scenario of the merchant based on the transaction geographic position;
verifying, according to the business scenario of the merchant determined based on the transaction geographic position, the business scenario of the merchant determined from the recognition result;
and taking the verified business scenario of the merchant as the business information of the merchant.
6. The method of claim 5, wherein the determining a business scenario of the merchant based on the transaction geographic position comprises:
determining whether the transaction geographic position has changed within a predetermined time period;
if the transaction geographic position is unchanged, determining that the business scenario of the merchant is a fixed-position business mode;
or,
if the transaction geographic position changes, determining that the business scenario of the merchant is a position-flowing business mode.
7. The method of claim 3, further comprising:
inputting the merchant portal photograph into a preset character extraction model to obtain character data corresponding to the merchant portal photograph;
wherein the determining, based on the recognition result, the business information of the merchant corresponding to the merchant management photograph comprises:
and determining the business industry of the corresponding merchant according to the text data, wherein the business industry of the merchant is used as the business information of the merchant.
8. The method of claim 7, wherein the determining, according to the text data, the business industry of the merchant corresponding to the merchant portal photograph comprises:
inputting the extracted text data into a preset first industry classification model;
mapping and identifying the text data through the preset first industry classification model to obtain the industry classification corresponding to the text data;
and determining the industry classification corresponding to the text data as the business industry of the merchant corresponding to the merchant portal photograph.
9. The method of claim 7, wherein the determining, according to the text data, the business industry of the merchant corresponding to the merchant portal photograph comprises:
acquiring industry classification data of a database;
mapping and comparing the extracted text data with the industry classification data of the database;
determining the industry classification corresponding to the text data according to the mapping comparison result;
and determining the industry classification corresponding to the text data as the business industry of the merchant corresponding to the merchant portal photograph.
10. The method of claim 4, further comprising the step of generating the first recognition model, comprising:
obtaining training samples, wherein each training sample comprises an image of the sample and a category of the corresponding sample, and the category of the sample represents whether the corresponding sample is a fixed store, a vending cart, a stall, or another category;
And training to obtain the first recognition model according to the training sample.
11. The method of claim 7, further comprising the step of generating the text extraction model, comprising:
obtaining training samples, wherein each training sample comprises a text image of the sample and text data of the corresponding sample;
and training to obtain the text extraction model according to the training sample.
12. The method of claim 8, further comprising the step of generating the first industry classification model, comprising:
acquiring training samples, wherein each training sample comprises text data of the sample and industry classification categories of the corresponding sample;
and training to obtain the first industry classification model according to the training sample.
13. The method of claim 1, wherein the merchant management photograph is a business interior photograph of the merchant, the image recognition model of the corresponding category is a second recognition model, and the second recognition model is used for classifying and recognizing objects of the business interior photograph;
the step of inputting the merchant management photograph into the image recognition model of the corresponding category to obtain the recognition result of the merchant management photograph comprises the following steps:
acquiring relevant information of the merchant;
predicting at least two business industries corresponding to the merchant according to the relevant information of the merchant;
inputting the business interior photograph into at least two preset second recognition models correspondingly, wherein the at least two preset second recognition models respectively correspond to the at least two predicted business industries, so as to classify and recognize objects in the business interior photograph that belong to the corresponding predicted business industries;
classifying and recognizing, through the at least two preset second recognition models, the objects in the business interior photograph that belong to the corresponding predicted business industries, and obtaining at least two recognition similarities corresponding to the at least two preset second recognition models, wherein each recognition similarity is an average value of the recognition similarities of the respective objects;
determining the maximum recognition similarity among the at least two recognition similarities;
determining whether the maximum recognition similarity is greater than a predetermined threshold;
and taking the maximum recognition similarity as the recognition result when the maximum recognition similarity is greater than the predetermined threshold.
14. The method of claim 1, wherein the merchant management photograph is a business interior photograph of the merchant, the image recognition model of the corresponding category is a second recognition model, and the second recognition model is used for classifying and recognizing objects of the business interior photograph;
The step of inputting the merchant management photograph into the image recognition model of the corresponding category to obtain the recognition result of the merchant management photograph comprises the following steps:
acquiring relevant information of the merchant;
predicting at least two business industries corresponding to the merchant according to the relevant information of the merchant;
inputting the business interior photograph into one of at least two preset second recognition models, wherein the at least two second recognition models respectively correspond to the at least two predicted business industries, so as to classify and recognize objects in the business interior photograph that belong to the corresponding predicted business industry;
classifying and recognizing, through the one of the at least two second recognition models, the objects in the business interior photograph that belong to the corresponding predicted business industry, and acquiring the recognition similarity of the one of the at least two second recognition models, wherein the recognition similarity is an average value of the recognition similarities of the respective objects;
determining whether the recognition similarity is greater than a predetermined threshold;
and taking the recognition similarity as the recognition result when the recognition similarity is greater than the predetermined threshold.
15. The method of claim 13 or 14, wherein the determining, based on the recognition result, the business information of the merchant corresponding to the merchant management photograph comprises:
and determining the predicted business industry corresponding to the recognition similarity as the business industry of the merchant, wherein the business industry of the merchant is used as the business information of the merchant.
16. The method of claim 1, wherein the merchant management photograph is a business interior photograph of the merchant, the image recognition model of the corresponding category is a second recognition model, and the second recognition model is used for classifying and recognizing objects of the business interior photograph;
the step of inputting the merchant management photograph into the image recognition model of the corresponding category to obtain the recognition result of the merchant management photograph comprises the following steps:
inputting the business interior photograph into a preset second recognition model;
and classifying and recognizing a plurality of objects in the business interior photograph through the preset second recognition model to obtain a corresponding recognition result.
17. The method of claim 16, wherein the determining, based on the recognition result, the business information of the merchant corresponding to the merchant management photograph comprises:
respectively inputting images of the recognized objects into a preset second industry classification model;
respectively mapping and identifying the objects through the preset second industry classification model to obtain the industry classification corresponding to each object;
counting the number of objects belonging to each industry classification, and sorting the industry classifications by the counted number;
and determining the industry classification with the largest number of objects as the business industry corresponding to the business interior photograph, wherein the business industry of the merchant is used as the business information of the merchant.
18. The method of claim 13, 14 or 16, further comprising the step of generating the second recognition model, comprising:
obtaining training samples, wherein each training sample comprises an image of the sample and categories of a plurality of objects in the corresponding sample;
and training to obtain the second recognition model according to the training sample.
19. The method of claim 17, further comprising the step of generating the second industry classification model, comprising:
acquiring training samples, wherein each training sample comprises an image of the sample and an industry classification category of the corresponding sample;
and training to obtain the second industry classification model according to the training sample.
20. The method according to claim 1 or 2, wherein the merchant management photograph is a business street photograph of the merchant, the image recognition model of the corresponding category is a third recognition model, and the third recognition model is used for classifying and recognizing landmark buildings of the business street photograph;
The step of inputting the merchant management photograph into the image recognition model of the corresponding category to obtain the recognition result of the merchant management photograph comprises the following steps:
and inputting the business street photograph into a preset third recognition model for image recognition, and determining a preset target in the business street photograph, wherein the preset target is used as the recognition result of the business street photograph.
21. The method of claim 20, wherein the determining, based on the recognition result, the business information of the merchant corresponding to the merchant management photograph comprises:
obtaining the geographic position of the preset target;
determining the geographic position of the merchant according to the relative geographic position of the preset target and the merchant in the business street photograph;
and determining the business address of the merchant according to the geographic position of the merchant, wherein the business address of the merchant is used as business information of the merchant.
22. The method of claim 20, further comprising the step of generating the third recognition model, comprising:
obtaining training samples, wherein each training sample comprises an image of the sample and a category of the corresponding sample, and the sample is a landmark building;
and training to obtain the third recognition model according to the training sample.
23. The method of claim 1 or 2, further comprising:
acquiring transaction information of the merchant corresponding to the merchant management photograph to determine a transaction mode and fund flow data of the merchant;
and verifying, based on the transaction mode and the fund flow data, the business information of the merchant determined based on the identification result, wherein the business information determined by the identification result that passes the verification is taken as the business information of the merchant.
24. The method of claim 1 or 2, wherein the merchant management photograph comprises a merchant portal photograph and a business interior photograph, and the method further comprises:
cross-verifying the identification result corresponding to the merchant portal photograph and the identification result corresponding to the business interior photograph; and taking the business information determined by the cross-verified identification results as the business information of the merchant.
25. The method of claim 2, the target object comprising at least one of a merchant, a buyer, and a micro-customer;
wherein when the target object comprises at least two of a merchant, a buyer, and a micro-customer, the method further comprises:
cross-verifying the identification results corresponding to the merchant management photographs submitted by the target objects; and taking the business information determined by the cross-verified identification results as the business information of the merchant.
26. A verification method of a merchant portrait is implemented by a terminal device and comprises the following steps:
responding to a verification request for a merchant portrait depicted according to information submitted by a merchant, and sending an instruction for submitting a merchant management photograph of the merchant to a buyer of the merchant;
acquiring the merchant management photograph submitted by the buyer based on the instruction;
performing a setting operation for completing verification of the merchant portrait based on the merchant management photograph, wherein the verification of the merchant portrait comprises a depiction of the merchant portrait, and the depiction of the merchant portrait comprises: inputting the merchant management photograph into an image recognition model of a corresponding category according to the category of the merchant management photograph to obtain a recognition result of the merchant management photograph; determining business information of the merchant corresponding to the merchant management photograph based on the recognition result; and depicting the merchant portrait according to the business information of the merchant;
and comparing whether the merchant portrait depicted according to the merchant management photograph submitted by the buyer is consistent with the merchant portrait depicted according to the information submitted by the merchant, so as to verify the merchant portrait depicted according to the information submitted by the merchant.
27. The method of claim 26, the performing a setup operation to complete verification of the merchant representation comprising:
sending the merchant management photograph submitted by the buyer to a server for verification of the merchant portrait;
the method further comprises the steps of: and receiving a verification result returned by the server after the verification of the merchant portrait is completed.
28. A device for describing a merchant portrait, comprising:
the acquisition module is used for acquiring a merchant management photograph; wherein the categories of the merchant management photograph comprise a merchant portal photograph, a business interior photograph and/or a business street photograph;
the identification module is used for inputting the merchant management photograph into an image recognition model of a corresponding category according to the category of the merchant management photograph, so as to obtain a recognition result of the merchant management photograph; in the case that the merchant management photograph is the merchant portal photograph, the image recognition model of the corresponding category is used for classifying and recognizing objects of the merchant portal photograph; in the case that the merchant management photograph is the business interior photograph of the merchant, the image recognition model of the corresponding category is used for classifying and recognizing objects of the business interior photograph; and in the case that the merchant management photograph is the business street photograph of the merchant, the image recognition model of the corresponding category is used for classifying and recognizing landmark buildings of the business street photograph;
the determining module is used for determining business information of the merchant corresponding to the merchant management photograph based on the recognition result;
and the depicting module is used for depicting the merchant portrait according to the business information of the merchant.
29. An electronic device comprising the characterization device of a business representation according to claim 28, or comprising:
a memory for storing executable commands;
a processor for performing the method of any of claims 1-27 under control of the executable command.
30. A merchant portrayal verification system comprising:
a server comprising a memory and a processor, wherein the memory of the server is used for storing executable commands, and the processor of the server is used for performing the method of any of claims 1-25 under control of the executable commands; and
the terminal equipment comprises a memory and a processor, wherein the memory of the terminal equipment is used for storing executable commands; the processor of the terminal device being adapted to perform the method of any of claims 26 to 27 under control of the executable command.
31. A computer readable storage medium storing executable instructions which, when executed by a processor, perform the method of any one of claims 1-27.
CN202310728364.0A 2020-02-03 2020-02-03 Method and device for depicting merchant portrait, electronic equipment, verification method and system Pending CN116843375A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310728364.0A CN116843375A (en) 2020-02-03 2020-02-03 Method and device for depicting merchant portrait, electronic equipment, verification method and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202310728364.0A CN116843375A (en) 2020-02-03 2020-02-03 Method and device for depicting merchant portrait, electronic equipment, verification method and system
CN202010078806.8A CN111311316B (en) 2020-02-03 2020-02-03 Method and device for depicting merchant portrait, electronic equipment, verification method and system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010078806.8A Division CN111311316B (en) 2020-02-03 2020-02-03 Method and device for depicting merchant portrait, electronic equipment, verification method and system

Publications (1)

Publication Number Publication Date
CN116843375A true CN116843375A (en) 2023-10-03

Family

ID=71159878

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202010078806.8A Active CN111311316B (en) 2020-02-03 2020-02-03 Method and device for depicting merchant portrait, electronic equipment, verification method and system
CN202310728364.0A Pending CN116843375A (en) 2020-02-03 2020-02-03 Method and device for depicting merchant portrait, electronic equipment, verification method and system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202010078806.8A Active CN111311316B (en) 2020-02-03 2020-02-03 Method and device for depicting merchant portrait, electronic equipment, verification method and system

Country Status (1)

Country Link
CN (2) CN111311316B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111753496B (en) * 2020-06-22 2023-06-23 平安付科技服务有限公司 Industry category identification method and device, computer equipment and readable storage medium
CN115271879A (en) * 2020-07-01 2022-11-01 支付宝(杭州)信息技术有限公司 Method and system for processing geographical position
CN112990939A (en) * 2020-11-27 2021-06-18 中国银联股份有限公司 Method, apparatus and computer readable medium for verifying user data
CN112581271B (en) * 2020-12-21 2022-11-15 上海浦东发展银行股份有限公司 Merchant transaction risk monitoring method, device, equipment and storage medium
CN114298774B (en) * 2022-03-09 2022-06-07 广州鹰云信息科技有限公司 Business complex analysis method and system based on brand knowledge graph

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108734184B (en) * 2017-04-17 2022-06-07 苏宁易购集团股份有限公司 Method and device for analyzing sensitive image
CN108717636A (en) * 2018-03-19 2018-10-30 杭州祐全科技发展有限公司 A kind of network ordering intelligent supervision method
CN108520058B (en) * 2018-03-30 2021-09-24 维沃移动通信有限公司 Merchant information recommendation method and mobile terminal
CN108876465B (en) * 2018-06-28 2022-02-01 创新先进技术有限公司 Method, device and server for business mode grouping of merchants
CN109214280B (en) * 2018-07-27 2021-10-01 北京三快在线科技有限公司 Shop identification method and device based on street view, electronic equipment and storage medium
CN109377240B (en) * 2018-08-21 2023-10-20 中国平安人寿保险股份有限公司 Commercial tenant management method and device based on neural network, computer equipment and storage medium
CN110009364B (en) * 2019-01-08 2021-08-24 创新先进技术有限公司 Industry identification model determining method and device
CN110046959B (en) * 2019-03-27 2021-04-06 拉扎斯网络科技(上海)有限公司 Method, device, equipment and storage medium for determining business category of merchant
CN110163204A (en) * 2019-04-15 2019-08-23 平安国际智慧城市科技股份有限公司 Businessman's monitoring and managing method, device and storage medium based on image recognition
CN110223050A (en) * 2019-06-24 2019-09-10 广东工业大学 A kind of verification method and relevant apparatus of merchant store fronts title
CN110728198B (en) * 2019-09-20 2021-02-19 北京三快在线科技有限公司 Image processing method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN111311316A (en) 2020-06-19
CN111311316B (en) 2023-05-23

Similar Documents

Publication Publication Date Title
CN111311316B (en) Method and device for depicting merchant portrait, electronic equipment, verification method and system
JP6693502B2 (en) Information processing apparatus, information processing method, and program
CN107465741B (en) Information pushing method and device
KR101856120B1 (en) Discovery of merchants from images
WO2020077877A1 (en) Platform commodity stationing method and apparatus, and computer device and storage medium
CN109492122B (en) Method and device for acquiring merchant information, terminal and computer-readable storage medium
US20130141586A1 (en) System and method for associating an order with an object in a multiple lane environment
US20240046643A1 (en) Augmented Reality, Computer Vision, and Digital Ticketing Systems
WO2017066543A1 (en) Systems and methods for automatically analyzing images
US20170031952A1 (en) Method and system for identifying a property for purchase using image processing
US20220036443A1 (en) Method and system of electronic bartering
US10248982B2 (en) Automated extraction of product data from production data of visual media content
US20180075327A1 (en) Automated location based comparative market analysis
WO2020037762A1 (en) Product information identification method and system
KR102459466B1 (en) Integrated management method for global e-commerce based on metabus and nft and integrated management system for the same
Hatim et al. E-FoodCart: An Online Food Ordering Service
WO2009024990A1 (en) System of processing portions of video stream data
KR102467616B1 (en) Personal record integrated management service connecting to repository
Kaźmierczak Geoinformation support system for real estate market
JP6773346B1 (en) Tenant recommendation company proposal program, real estate transaction price proposal program
US20160034911A1 (en) Expert System for Rating Credence Goods&#39; Claims to Being Environmentally Friendly
CN112613891A (en) Shop registration information verification method, device and equipment
US20190213614A1 (en) Information integration displaying platform and displaying method thereof
KR20210155213A (en) Restaurant Information Service Providing System and Method
CN112348607A (en) Image-based shopping guide method, terminal, device, medium and system in commercial place

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination