CN113409041B - Electronic card selection method, device, terminal and storage medium


Info

Publication number
CN113409041B
Authority
CN
China
Prior art keywords
scene
electronic card
candidate
card
selecting
Prior art date
Legal status
Active
Application number
CN202010187020.XA
Other languages
Chinese (zh)
Other versions
CN113409041A (en)
Inventor
万磊
王强
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202010187020.XA
Priority to PCT/CN2021/080488 (published as WO2021185174A1)
Publication of CN113409041A
Application granted
Publication of CN113409041B


Classifications

    • G06Q20/34 Payment architectures, schemes or protocols characterised by the use of specific devices or networks using cards, e.g. integrated circuit [IC] cards or magnetic cards
    • G06Q20/351 Virtual cards
    • G06Q20/356 Aspects of software for card payments
    • G06Q20/401 Transaction verification
    • G06Q20/4014 Identity check for transactions
    • Y02D30/70 Reducing energy consumption in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Computer Security & Cryptography (AREA)
  • Finance (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)

Abstract

The application is applicable to the technical field of information processing and provides an electronic card selection method, device, terminal and storage medium. The method comprises the following steps: acquiring current scene information, and determining a scene type according to the scene information; and selecting the candidate electronic card matched with the scene type as the target electronic card. Under this technical scheme, when an electronic card needs to be invoked for authentication, payment and similar operations, the terminal device collects the current scene information, determines the scene type according to the scene objects contained in the scene information, and selects the electronic card associated with that scene type from all candidate electronic cards as the target electronic card. The selection of the electronic card is thus automated, and the operating efficiency and response speed of the electronic card are improved.

Description

Electronic card selection method, device, terminal and storage medium
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to a method and an apparatus for selecting an electronic card, a terminal, and a storage medium.
Background
In daily life, a user can perform operations such as payment and authentication with a physical card. As service types keep increasing, however, so does the number of corresponding physical cards; with the development of electronic technology, a physical card can therefore be converted into an electronic card, bound to an intelligent terminal, and used for the related payment and authentication operations. In existing electronic card technology, though, when a user performs operations such as authentication and payment, the electronic card associated with the current operation must be selected manually, which increases the difficulty of the operation and keeps operation efficiency low.
Disclosure of Invention
Embodiments of the present application provide an electronic card selection method, device, terminal and storage medium, which can solve the problem in existing electronic card technology that the electronic card related to the current operation must be selected manually, increasing operation difficulty and lowering operation efficiency.
In a first aspect, an embodiment of the present application provides a method for selecting an electronic card, including:
acquiring current scene information, and determining a scene type according to the scene information;
and selecting the candidate electronic card matched with the scene type as a target electronic card.
In a possible implementation manner of the first aspect, the obtaining current scene information and determining a scene type according to the scene information include:
receiving a scene image fed back by the intelligent glasses;
identifying the photographed subjects contained in the scene image;
and determining the scene type according to all the photographed subjects.
In a possible implementation manner of the first aspect, the obtaining current scene information and determining a scene type according to the scene information include:
collecting environmental sound in the current scene;
acquiring a frequency-domain spectrum of the environmental sound, and determining the sounding subjects contained in the current scene according to the frequency values contained in the spectrum;
and determining the scene type according to all the sounding subjects.
In a possible implementation manner of the first aspect, the obtaining current scene information and determining a scene type according to the scene information include:
acquiring current position information, and extracting scene keywords contained in the position information;
according to the confidence degrees of the candidate scenes associated with all the scene keywords, respectively calculating the confidence probabilities of the candidate scenes;
and selecting the candidate scene with the highest confidence probability as the scene type corresponding to the position information.
In a possible implementation manner of the first aspect, the selecting, as the target electronic card, the candidate electronic card that matches the scene type includes:
respectively calculating the matching degree between each candidate electronic card and the scene type;
and selecting the candidate electronic card with the highest matching degree as the target electronic card.
In a possible implementation manner of the first aspect, after the selecting, as the target electronic card, the candidate electronic card that matches the scene type, the method further includes:
executing a card swiping authentication operation through the target electronic card and the card swiping device;
if the card swiping authentication fails, selecting the candidate electronic card with the highest matching degree from all candidate electronic cards other than the target electronic card as the new target electronic card, and returning to the step of executing the card swiping authentication operation through the target electronic card and the card swiping device, until the card swiping authentication succeeds.
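To make the retry logic above concrete, the following minimal Java sketch (illustrative only, with hypothetical ElectronicCard and CardReaderLink types standing in for the terminal's card store and its near field communication link) tries candidates in descending order of matching degree until one authenticates:

    import java.util.Comparator;
    import java.util.List;

    // Minimal sketch of the fallback loop described above; ElectronicCard and
    // CardReaderLink are hypothetical stand-ins, not the patent's implementation.
    final class CardRetrySketch {
        interface ElectronicCard { double matchingDegree(); }
        interface CardReaderLink { boolean authenticate(ElectronicCard card); }

        /** Tries candidates in descending matching degree until one authenticates. */
        static ElectronicCard swipeWithFallback(List<ElectronicCard> candidates,
                                                CardReaderLink reader) {
            candidates.sort(Comparator.comparingDouble(ElectronicCard::matchingDegree).reversed());
            for (ElectronicCard card : candidates) {
                if (reader.authenticate(card)) {
                    return card; // card swiping authentication succeeded
                }
                // authentication failed: fall back to the next-best candidate
            }
            return null; // every candidate failed
        }
    }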
In a possible implementation manner of the first aspect, the selecting, as the target electronic card, the candidate electronic card that matches the scene type includes:
obtaining standard scenes of the candidate electronic cards;
and matching the scene type with each standard scene, and determining the target electronic card according to a matching result.
In a second aspect, an embodiment of the present application provides an electronic card selecting device, including:
the scene type determining unit is used for acquiring current scene information and determining the scene type according to the scene information;
and the electronic card selecting unit is used for selecting the candidate electronic card matched with the scene type as a target electronic card.
In a third aspect, an embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the electronic card selection method according to any one of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium, where a computer program is stored, where the computer program when executed by a processor implements the method for selecting an electronic card according to any one of the first aspects.
In a fifth aspect, embodiments of the present application provide a computer program product, which when run on a terminal device, causes the terminal device to perform the method for selecting an electronic card according to any one of the first aspects.
It will be appreciated that the advantages of the second to fifth aspects may be found in the relevant description of the first aspect, and are not described here again.
When an electronic card needs to be invoked for operations such as authentication and payment, the terminal device collects the current scene information, determines the scene type according to the scene objects contained in the scene information, and selects the electronic card associated with that scene type from all candidate electronic cards as the target electronic card, thereby automating the selection of the electronic card and improving the operating efficiency and response speed of the electronic card.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required for the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a block diagram of a part of the structure of a mobile phone according to an embodiment of the present application;
fig. 2 is a schematic software structure of a mobile phone according to an embodiment of the present application;
fig. 3 is a flowchart of an implementation of a method for selecting an electronic card according to a first embodiment of the present application;
FIG. 4 is a schematic diagram of scene type identification based on a scene image according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating selection of an electronic card according to an embodiment of the present application;
fig. 6 is a flowchart of a specific implementation of a method S301 for selecting an electronic card according to a second embodiment of the present application;
fig. 7 is a schematic diagram of a shooting scene range of a terminal device in a card swiping process according to an embodiment of the present application;
fig. 8 is a schematic diagram of a shooting range of smart glasses in a card swiping process according to another embodiment of the present application;
fig. 9 is a flowchart of a specific implementation of a method S301 for selecting an electronic card according to a third embodiment of the present application;
fig. 10 is a flowchart of a specific implementation of a method S301 for selecting an electronic card according to a fourth embodiment of the present application;
fig. 11 is a flowchart of a specific implementation of a method S302 for selecting an electronic card according to a fifth embodiment of the present application;
fig. 12 is a flowchart of a specific implementation of a method S302 for selecting an electronic card according to a sixth embodiment of the present application;
FIG. 13 is a schematic diagram of a system for selecting an electronic card according to an embodiment of the present disclosure;
fig. 14 is a block diagram of a selecting device for an electronic card according to an embodiment of the present application;
fig. 15 is a schematic diagram of a terminal device according to another embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
In addition, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Aiming at the problems in current electronic card technology that the electronic card related to the current operation must be selected manually, which increases operation difficulty and lowers operation efficiency, embodiments of the present application provide an electronic card selection method, device, equipment and storage medium. When an electronic card needs to be invoked for operations such as authentication and payment, the terminal device collects the current scene information, determines the scene type according to the scene objects contained in the scene information, and selects the electronic card associated with that scene type from all candidate electronic cards as the target electronic card, thereby automating the selection of the electronic card and improving the operating efficiency and response speed of the electronic card.
The technical scheme of the present application is described in detail below with specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
The method for selecting the electronic card provided by the embodiment of the application can be applied to terminal devices such as mobile phones, tablet computers, wearable devices, vehicle-mounted devices, augmented reality (augmented reality, AR)/Virtual Reality (VR) devices, notebook computers, ultra-mobile personal computer (UMPC), netbooks, personal digital assistants (personal digital assistant, PDA) and the like, and can also be applied to databases, servers and service response systems based on terminal artificial intelligence.
For example, the terminal device may be a station (ST) in a WLAN, a cellular telephone, a cordless telephone, a session initiation protocol (Session Initiation Protocol, SIP) telephone, a wireless local loop (Wireless Local Loop, WLL) station, a personal digital assistant (Personal Digital Assistant, PDA) device, a handheld device with wireless communication capability, a computing device or other processing device connected to a wireless modem, a computer, a laptop computer, a handheld communication device, a handheld computing device, and/or another device for communicating over a wireless system, as well as a mobile terminal in a next generation communication system, for example a mobile terminal in a 5G network or in a future evolved public land mobile network (Public Land Mobile Network, PLMN).
By way of example and not limitation, when the terminal device is a wearable device, the wearable device may be a general term for devices that apply wearable technology to the intelligent design of everyday wear, such as gloves or watches configured with near field communication modules. A wearable device is a portable device that is worn directly on the body or integrated into the user's clothes or accessories and, being carried on the user's person, performs operations such as payment and authentication through a pre-bound electronic card. A wearable device is more than a piece of hardware: it can realize powerful functions through software support, data interaction and cloud interaction. In a broad sense, wearable intelligent devices include full-featured, large-size devices that can realize complete or partial functions without relying on a smart phone, such as smart watches or smart glasses, as well as devices that focus only on a certain kind of application function and must be used together with other devices such as a smart phone, for example various smart watches and smart bands with display screens.
In this embodiment, the terminal device may be a mobile phone 100 having a hardware structure as shown in fig. 1, and as shown in fig. 1, the mobile phone 100 may specifically include: radio Frequency (RF) circuitry 110, memory 120, input unit 130, display unit 140, sensor 150, audio circuitry 160, short-range wireless communication module 170, processor 180, and power supply 190. It will be appreciated by those skilled in the art that the structure of the handset 100 shown in fig. 1 does not constitute a limitation of the terminal device, and the terminal device may include more or less components than illustrated, or may combine certain components, or may have a different arrangement of components.
The following describes the components of the mobile phone in detail with reference to fig. 1:
the RF circuit 110 may be used for receiving and transmitting signals during the process of receiving and transmitting information or communication, specifically, after receiving downlink information of the base station, the downlink information is processed by the processor 180; in addition, the data of the design uplink is sent to the base station. Typically, RF circuitry includes, but is not limited to, antennas, at least one amplifier, transceivers, couplers, low noise amplifiers (Low Noise Amplifier, LNAs), diplexers, and the like. In addition, RF circuit 110 may also communicate with networks and other devices via wireless communications. The wireless communications may use any communication standard or protocol including, but not limited to, global system for mobile communications (Global System of Mobile communication, GSM), general packet radio service (General Packet Radio Service, GPRS), code division multiple access (Code Division Multiple Access, CDMA), wideband code division multiple access (Wideband Code Division Multiple Access, WCDMA), long term evolution (Long Term Evolution, LTE)), email, short message service (Short Messaging Service, SMS), and the like.
The memory 120 may be used to store software programs and modules, and the processor 180 performs the various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data created according to the use of the mobile phone (such as audio data or a phonebook), and the like. In addition, the memory 120 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Specifically, the memory 120 may store the card information of the electronic cards and the correspondence between each electronic card and its associated scene type, so that the mobile phone can determine, through the memory 120, the target electronic card associated with the current scene.
The input unit 130 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone 100. In particular, the input unit 130 may include a touch panel 131 and other input devices 132. The touch panel 131, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (e.g., operations of the user on the touch panel 131 or thereabout by using any suitable object or accessory such as a finger, a stylus, etc.), and drive the corresponding connection device according to a predetermined program. Alternatively, the touch panel 131 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch azimuth of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device and converts it into touch point coordinates, which are then sent to the processor 180, and can receive commands from the processor 180 and execute them. In addition, the touch panel 131 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 130 may include other input devices 132 in addition to the touch panel 131. In particular, other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, mouse, joystick, etc.
The display unit 140 may be used to display information input by a user or information provided to the user and various menus of the mobile phone. The display unit 140 may include a display panel 141, and alternatively, the display panel 141 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 131 may cover the display panel 141, and when the touch panel 131 detects a touch operation thereon or thereabout, the touch panel is transferred to the processor 180 to determine the type of the touch event, and then the processor 180 provides a corresponding visual output on the display panel 141 according to the type of the touch event. Although in fig. 1, the touch panel 131 and the display panel 141 are two independent components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 131 and the display panel 141 may be integrated to implement the input and output functions of the mobile phone.
The handset 100 may also include at least one sensor 150, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel 141 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 141 and/or the backlight when the mobile phone is moved to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used in applications that recognize the posture of the mobile phone (such as switching between landscape and portrait, related games, and magnetometer posture calibration) and in vibration-recognition-related functions (such as a pedometer or tapping). Other sensors that may also be configured on the mobile phone, such as a gyroscope, barometer, hygrometer, thermometer and infrared sensor, are not described in detail here. Optionally, the mobile phone can learn, through a learning algorithm, the sensor measurements produced when the user performs a card swiping action, so that before the mobile phone approaches the card swiping device it can determine in advance whether the user needs to perform a card swiping operation, collect the current scene information and determine the scene type, further improving the efficiency of electronic card selection.
The audio circuit 160, the speaker 161 and the microphone 162 may provide an audio interface between the user and the mobile phone. The audio circuit 160 may transmit an electrical signal converted from received audio data to the speaker 161, which converts it into a sound signal for output; on the other hand, the microphone 162 converts a collected sound signal into an electrical signal, which the audio circuit 160 receives and converts into audio data; the audio data is then output to the processor 180 for processing and sent via the RF circuit 110 to, for example, another mobile phone, or output to the memory 120 for further processing.
Communication technologies such as WiFi, bluetooth, and near field communication (Near Field Communication, NFC) belong to short-range wireless transmission technologies, and a mobile phone can help a user to send and receive e-mails, browse web pages, access streaming media, and the like through a short-range wireless module 170, so that wireless broadband internet access is provided for the user. The short-distance wireless module 170 may include a WiFi chip, a bluetooth chip, and an NFC chip, through which a function of WiFi Direct connection between the mobile phone 100 and other terminal devices may be implemented, or the mobile phone 100 may be made to operate in an AP mode (Access Point mode) capable of providing a wireless Access service and allowing other wireless devices to Access, or in a STA mode (Station mode) capable of being connected to an AP and not accepting Access of the wireless devices, so as to establish peer-to-peer communication between the mobile phone 100 and other WiFi devices; the mobile phone can establish a short-distance communication link with the card swiping device through the NFC chip, send the card information of the electronic card written in advance to the card swiping device according to the short-distance communication link, execute the subsequent card swiping operation, feed back the card swiping result to the mobile phone, and output the card swiping result through the display module of the mobile phone.
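As a hedged illustration of how such a short-distance link could serve up the automatically selected card on Android, the sketch below uses the platform's real HostApduService host-card-emulation API; the CardStore holder and the response payload are assumptions, not the patent's implementation:

    import android.nfc.cardemulation.HostApduService;
    import android.os.Bundle;

    // Sketch only: replies to reader APDUs with data for the automatically
    // selected target card. CardStore is a hypothetical in-app holder.
    public class ElectronicCardHceService extends HostApduService {
        static final class CardStore {
            static volatile byte[] targetCardData; // set once the target card is selected
        }

        @Override
        public byte[] processCommandApdu(byte[] commandApdu, Bundle extras) {
            byte[] cardData = CardStore.targetCardData;
            if (cardData == null) {
                return new byte[] {0x6A, (byte) 0x82}; // ISO 7816: file not found
            }
            byte[] response = new byte[cardData.length + 2];
            System.arraycopy(cardData, 0, response, 0, cardData.length);
            response[cardData.length] = (byte) 0x90;   // SW1
            response[cardData.length + 1] = 0x00;      // SW2: success
            return response;
        }

        @Override
        public void onDeactivated(int reason) {
            // the card swiping device dropped the link or selected another AID
        }
    }

A real service would additionally be declared in the application manifest together with the application identifiers (AIDs) it handles.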
The processor 180 is a control center of the mobile phone, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions and processes data of the mobile phone by running or executing software programs and/or modules stored in the memory 120 and calling data stored in the memory 120, thereby performing overall monitoring of the mobile phone. Optionally, the processor 180 may include one or more processing units; preferably, the processor 180 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 180.
The handset 100 further includes a power supply 190 (e.g., a battery) for powering the various components, which may preferably be logically connected to the processor 180 via a power management system so as to provide for managing charging, discharging, and power consumption by the power management system.
The handset 100 may also include a camera. Optionally, the position of the camera on the mobile phone may be front or rear, which is not limited in this embodiment of the present application. The mobile phone can acquire a scene image of a current scene through the camera, and determine scene information and scene types through analyzing the scene image.
The software system of the mobile phone 100 may employ a layered architecture, an event driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. In the embodiment of the invention, taking an Android system with a layered architecture as an example, a software structure of the mobile phone 100 is illustrated.
Fig. 2 is a software configuration block diagram of the mobile phone 100 according to the embodiment of the present application. The Android system is divided into four layers, namely an application program layer, an application framework layer (FWK), a system layer and a hardware abstraction layer, and the layers are communicated through software interfaces.
The layered architecture divides the software into several layers, each with a distinct role and division of labour. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, the application layer, the application framework layer, the Android Runtime and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications for cameras, gallery, calendar, phone calls, maps, navigation, WLAN, bluetooth, music, video, short messages, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device 100. Such as the management of call status (including on, hung-up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows the application to display notification information in a status bar, can be used to communicate notification type messages, can automatically disappear after a short dwell, and does not require user interaction. Such as notification manager is used to inform that the download is complete, message alerts, etc. The notification manager may also be a notification in the form of a chart or scroll bar text that appears on the system top status bar, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, a text message is prompted in a status bar, a prompt tone is emitted, the electronic device vibrates, and an indicator light blinks, etc.
The Android Runtime includes a core library and virtual machines, and is responsible for the scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
The media libraries support playback and recording of a variety of commonly used audio and video formats, as well as still image files, and the like. The media libraries can support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver. In some embodiments, the kernel layer further includes PCIE drivers.
In the embodiment of the present application, the execution body of the flow is a device configured with a near field communication module. As an example and not by way of limitation, the above-mentioned device configured with a near field communication module may specifically be a terminal device, which may be a mobile terminal such as a smart phone, a tablet computer, a notebook, etc. used by a user. Fig. 3 shows a flowchart of an implementation of the method for selecting an electronic card according to the first embodiment of the present application, which is described in detail below:
In S301, current scene information is acquired, and a scene type is determined according to the scene information.
In this embodiment, the terminal device may acquire current scene information through an information acquisition module such as an internal sensor, or may establish a data link with an external information acquisition device to receive scene information acquired by other information acquisition devices.
In one possible implementation manner, the terminal device has a built-in camera module, which may be a front camera module and/or a rear camera module; the camera module collects a scene image of the current scene, the scene image is taken as the scene information, and the scene information is analyzed to determine the scene type. The terminal device may also have a built-in microphone module, which collects scene audio of the current scene; the scene audio is taken as the scene information and subjected to audio analysis to determine the scene type. The terminal device may further have a built-in positioning module, through which positioning information is acquired; the positioning information is taken as the scene information, and the associated scene type is determined according to the positioning information.
In one possible implementation manner, the terminal device may determine the scene type from the scene image as follows. The terminal device may be configured with corresponding standard images for different scene types; it matches the currently collected scene image against each standard image and determines the scene type associated with the scene image according to the matching result. Specifically, the matching process may be: the terminal device performs grayscale processing on the scene image to convert it into a monochrome image; generates an image array for the monochrome image from the pixel value and pixel coordinates of each pixel point; imports the image array into a preset convolutional neural network; performs convolution and pooling dimension-reduction operations on the image array through preset convolution kernels to generate an image feature vector; calculates the vector distance between this image feature vector and the standard feature vector of each standard image, taking the vector distance as the matching probability value for that standard image; and selects the scene type associated with the standard image with the best matching probability value as the scene type of the scene image. The standard feature vector of a standard image can be obtained through a self-learning algorithm, which may work as follows: when each electronic card is first bound, the terminal device collects standard images of the card's usage scene and generates the standard feature vector from them; whenever the electronic card is later used for a card swiping operation, the scene image of that operation is imported into the neural network to adjust the generated standard feature vector. Because the configured standard feature vector receives this posterior adjustment on every use, its accuracy improves over time.
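Read as an algorithm, the matching step is a nearest-vector search. A minimal sketch, under the assumption that a smaller vector distance means a higher matching probability (the feature extraction itself is taken as given):

    import java.util.Map;

    // Minimal sketch of the matching step: compare the scene image's feature
    // vector against each stored standard feature vector and keep the best match.
    // The feature extraction (grayscale + CNN) is assumed to exist elsewhere.
    final class SceneMatcherSketch {
        static String matchSceneType(float[] sceneVector, Map<String, float[]> standards) {
            String best = null;
            double bestDistance = Double.MAX_VALUE;
            for (Map.Entry<String, float[]> e : standards.entrySet()) {
                double d = euclidean(sceneVector, e.getValue());
                if (d < bestDistance) { // smaller distance = better match
                    bestDistance = d;
                    best = e.getKey();
                }
            }
            return best; // scene type of the closest standard image
        }

        private static double euclidean(float[] a, float[] b) {
            double sum = 0;
            for (int i = 0; i < a.length; i++) {
                double diff = a[i] - b[i];
                sum += diff * diff;
            }
            return Math.sqrt(sum);
        }
    }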
In one possible implementation, each electronic card corresponds to a cloud server. The cloud server may be used to store operational records of the electronic card and scene images associated with each operational record. The cloud server extracts historical scene images from each operation record and generates the standard feature vectors through all the historical scene images. The cloud server can send the standard feature vector to each terminal device in a preset updating period, the standard feature vector can be associated with an electronic card identifier, the terminal device stores the received electronic card identifier and the standard feature vector in a storage unit, and in a subsequent matching operation, the standard feature vector can be extracted to execute the matching operation.
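Purely as an illustration of the pushed update described above (all field names are assumptions), the payload might carry little more than the card identifier and the regenerated vector:

    // Illustrative record for a standard-feature-vector update pushed by the
    // cloud server; field names are assumptions based on the description above.
    final class StandardVectorUpdate {
        final String electronicCardId;  // identifies the electronic card
        final float[] standardVector;   // regenerated from historical scene images

        StandardVectorUpdate(String electronicCardId, float[] standardVector) {
            this.electronicCardId = electronicCardId;
            this.standardVector = standardVector;
        }
    }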
Illustratively, fig. 4 shows a schematic diagram of identifying the scene type based on a scene image according to an embodiment of the present application. Referring to fig. 4, the system includes a terminal device 41 and a card swiping device 42, and the terminal device 41 is configured with a camera module 411 and a near field communication module 412. When the terminal device 41 approaches the card swiping device 42, the near field communication module 412 detects the near field communication signal sent by the card swiping device 42 and establishes a communication connection with it; at this time, the terminal device may activate the camera module 411, collect a scene image of the current scene through it, and determine the scene type from the scene image.
In one possible implementation, the terminal device collects a plurality of different types of scene information and determines the current scene type from all of them together. Specifically, the terminal device may collect both a scene image and scene audio of the current scene, identify a plurality of candidate object types from the scene image, screen target object types out of the candidates according to the scene audio, and determine the scene type from the target object types. Screening out invalid candidate object types with the scene audio calibrates the scene type recognition process and improves recognition efficiency. Conversely, candidate sounding objects in the current scene can be recognized from the scene audio, target sounding objects screened out of the candidates according to the scene image, and the scene type determined from the target sounding objects. For example, when the terminal device obtains a scene image through the camera module, some scene objects may not be recognizable from the image because the shooting distance is too great or obstacles block the view, which lowers the accuracy of scene type recognition. To solve this, the terminal device may collect the environmental sound of the current scene through the microphone module while collecting the scene image, determine the sounding subjects from the environmental sound, determine the photographed objects by image recognition of the scene image, and determine the scene type from both the sounding subjects and the photographed objects.
Specifically, in one possible implementation manner, the scene type may be determined from the sounding subjects and the photographed objects as follows: the terminal device determines a first confidence degree of each candidate scene type from all the sounding subjects and a second confidence degree of each candidate scene type from all the photographed objects, weights the first confidence degree by a voice weight and the second confidence degree by an image weight, calculates the matching degree of each candidate scene type from the weighted first and second confidence degrees, and selects the candidate scene type with the highest matching degree as the scene type of the current scene.
For example, the scene types associated with the electronic cards stored in the terminal device may fall into three different scene types: a bank type, a traffic type and an access control type. When the terminal device detects that an electronic card needs to be invoked, it can collect a scene image of the current scene through the camera module and, using image recognition, identify three photographed objects in it: a teller machine, a bank sign and a security door. The first confidence degrees of the three candidate scene types are therefore: (bank type, 80%), (access control type, 50%), (traffic type, 20%). By collecting the environmental sound, it determines that the sounding subjects contained in the scene include a cash box and mechanical operation sounds, so the second confidence degrees of the three candidate scene types are: (bank type, 60%), (access control type, 50%), (traffic type, 60%). With a preset image weight of 1 and a voice weight of 0.8, the matching degrees of the three candidate scene types are (bank type, 80%×1 + 60%×0.8 = 1.28), (access control type, 50%×1 + 50%×0.8 = 0.9), (traffic type, 20%×1 + 60%×0.8 = 0.68). The candidate scene with the highest matching degree is the bank type, so the bank type is taken as the current scene type.
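The fusion in this example is a single weighted sum per candidate scene. The sketch below reproduces the arithmetic above with assumed data structures; running it prints the bank type with matching degree 1.28:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Reproduces the worked example:
    // matching degree = imageWeight * imageConfidence + voiceWeight * voiceConfidence.
    final class ConfidenceFusionSketch {
        public static void main(String[] args) {
            double imageWeight = 1.0, voiceWeight = 0.8;
            Map<String, double[]> confidences = new LinkedHashMap<>();
            confidences.put("bank",           new double[] {0.80, 0.60});
            confidences.put("access control", new double[] {0.50, 0.50});
            confidences.put("traffic",        new double[] {0.20, 0.60});

            String bestScene = null;
            double bestDegree = -1;
            for (Map.Entry<String, double[]> e : confidences.entrySet()) {
                double degree = imageWeight * e.getValue()[0] + voiceWeight * e.getValue()[1];
                if (degree > bestDegree) { bestDegree = degree; bestScene = e.getKey(); }
            }
            System.out.println(bestScene + " -> " + bestDegree); // bank -> 1.28
        }
    }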
It should be noted that the above description takes the combination of two types of scene information, voice and image, as an example; in actual use, more than two types of scene information may be combined to determine the scene type, and the scene information is not limited to the above two types, which is not described in detail here.
In one possible implementation manner, the user may trigger the selection process of the electronic card by clicking an electronic card activation control or opening an electronic card application, and the terminal device may also trigger the selection process of the electronic card when it detects, through the near field communication module, that a near field communication signal exists.
In a possible implementation manner, the terminal device may learn, through a built-in learning algorithm, the motion track produced when the user performs a card swiping operation with the terminal device, so that when it detects that the current movement track of the terminal matches the learned track it automatically activates the electronic card selection procedure, selecting the electronic card in advance and improving the subsequent response speed. A specific implementation flow is as follows: the terminal device continuously collects the parameter values of the motion sensor, stores the parameter values of each collection moment in a motion parameter queue in order of collection time, and continuously updates the queue on a first-in, first-out basis. If the terminal device detects that the user has performed a card swiping operation, it records the card swiping time and all parameter values in the motion parameter queue, and generates the motion track of the queue with respect to that swiping time. The terminal device may import the motion tracks of historical card swiping operations into a machine learning model to generate a recognition model for the card swiping action. During use, each parameter value in the motion parameter queue is imported into this recognition model to judge whether a card swiping action is occurring; if so, the electronic card selection procedure is executed; otherwise, the terminal continues collecting motion sensor values and updating the motion parameter queue. It should be noted that each time the terminal device performs a card swiping operation, the recognition model can be updated to improve recognition accuracy.
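A minimal sketch of the first-in-first-out motion parameter queue described above; SwipeGestureModel is a hypothetical stand-in for the learned recognition model:

    import java.util.ArrayDeque;

    // Sketch of the motion parameter queue: a fixed-size FIFO window of motion
    // sensor samples fed to a (hypothetical) learned card-swipe recognizer.
    final class MotionQueueSketch {
        interface SwipeGestureModel { boolean looksLikeCardSwipe(float[][] window); }

        private final ArrayDeque<float[]> queue = new ArrayDeque<>();
        private final int capacity;
        private final SwipeGestureModel model;

        MotionQueueSketch(int capacity, SwipeGestureModel model) {
            this.capacity = capacity;
            this.model = model;
        }

        /** Called for every new motion sensor sample. */
        void onSensorSample(float[] values) {
            if (queue.size() == capacity) {
                queue.removeFirst();       // first in, first out
            }
            queue.addLast(values.clone());
            if (queue.size() == capacity
                    && model.looksLikeCardSwipe(queue.toArray(new float[0][]))) {
                // movement matches the learned swiping track:
                // activate the electronic card selection procedure in advance
            }
        }
    }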
In S302, a candidate electronic card matching the scene type is selected as a target electronic card.
In this embodiment, a user may bind a plurality of electronic cards on the terminal device, and each bound electronic card is a candidate electronic card. A card may be bound as follows: the user inputs the identity of the physical card into the terminal device and, through the electronic card control of the terminal device, sends the authorization information of the physical card (for example, the bound mobile phone number or user identity information) to the physical card's cloud server; after verifying that the authorization information is legal, the cloud server feeds the corresponding authorization code back to the terminal device, and the terminal device associates the authorization code with the electronic card generated in the terminal for that physical card, thereby creating the electronic card corresponding to the physical card on the terminal device.
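The binding handshake can be pictured as below; CloudServerClient and CardStoreSketch are hypothetical stand-ins for the issuer's server interface and the terminal's card store, not a real API:

    // Illustrative binding flow only; all interfaces here are assumptions.
    final class CardBindingSketch {
        interface CloudServerClient {
            /** Returns an authorization code if the authorization info is legal, else null. */
            String verify(String physicalCardId, String authorizationInfo);
        }
        interface CardStoreSketch { void createElectronicCard(String cardId, String authCode); }

        static boolean bindCard(CloudServerClient cloud, CardStoreSketch store,
                                String physicalCardId, String phoneNumber) {
            String authCode = cloud.verify(physicalCardId, phoneNumber);
            if (authCode == null) {
                return false; // authorization info rejected by the cloud server
            }
            // associate the code with the electronic card generated on the terminal
            store.createElectronicCard(physicalCardId, authCode);
            return true;
        }
    }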
In this embodiment, the terminal device may configure an associated scene type for each candidate electronic card. After the scene type corresponding to the scene information is determined, the terminal can judge whether the current scene type matches the scene type of each candidate electronic card, that is, whether the scene type associated with the electronic card is consistent with the current scene type, take the candidate electronic card with the consistent scene type as the target electronic card, and execute the subsequent card swiping operation.
In one possible implementation manner, if the scene types associated with several candidate electronic cards are the same, the candidate electronic card with the highest priority may be selected as the target electronic card according to the priority of each candidate electronic card. For example, fig. 5 shows a schematic selection diagram of an electronic card according to an embodiment of the present application. Four electronic cards are bound on the terminal device: bank card A, bank card B, a bus card and an access card. By collecting the current scene information, the terminal device determines that the current scene type is the bank type. The scene types associated with bank card A and bank card B are both the bank type, that is, both cards match the current scene type; in this case, the priorities of the two bank cards can be obtained, and if the priority of bank card A is higher than that of bank card B, bank card A is selected as the target electronic card.
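A sketch of the tie-break illustrated by fig. 5, under the assumption that priority is a simple integer per card:

    import java.util.List;

    // Sketch of the tie-break: among candidates whose associated scene type
    // equals the current scene type, pick the one with the highest priority.
    final class PrioritySelectionSketch {
        record Candidate(String name, String sceneType, int priority) {}

        static Candidate selectTarget(List<Candidate> cards, String currentSceneType) {
            Candidate target = null;
            for (Candidate c : cards) {
                if (!c.sceneType().equals(currentSceneType)) continue; // scene mismatch
                if (target == null || c.priority() > target.priority()) target = c;
            }
            return target; // e.g. bank card A when it outranks bank card B
        }
    }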
In one possible implementation manner, if the scene types associated with several candidate electronic cards are the same, the matching degree of each of those cards can instead be calculated from the current card swiping time and place, and the candidate electronic card with the highest matching degree selected as the target electronic card. Specifically, different electronic cards have their own usage habits; for example, a user may swipe electronic card A in the morning and electronic card B in the afternoon. The terminal device can calculate each card's matching degree with the current scene from the historical times and places in its historical card swiping records, and select the candidate electronic card with the higher matching degree as the target electronic card.
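The embodiment does not fix a formula for this matching degree; one possible reading, sketched below, scores a card by the fraction of its historical swipes that fall in the same hour of day and near the current place:

    import java.util.List;

    // Hypothetical usage-habit score: share of a card's historical swipes in
    // the same hour of day and within a radius of the current position.
    final class HabitMatchSketch {
        record SwipeRecord(int hourOfDay, double lat, double lon) {}

        static double matchDegree(List<SwipeRecord> history, int nowHour,
                                  double lat, double lon, double radiusMeters) {
            if (history.isEmpty()) return 0;
            long hits = history.stream()
                    .filter(r -> r.hourOfDay() == nowHour
                              && distanceMeters(r.lat(), r.lon(), lat, lon) <= radiusMeters)
                    .count();
            return (double) hits / history.size();
        }

        // Equirectangular approximation; adequate over card-swiping distances.
        static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
            double x = Math.toRadians(lon2 - lon1) * Math.cos(Math.toRadians((lat1 + lat2) / 2));
            double y = Math.toRadians(lat2 - lat1);
            return Math.sqrt(x * x + y * y) * 6371000;
        }
    }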
As can be seen from the above, with the electronic card selection method provided by this embodiment of the present application, when an electronic card needs to be invoked for operations such as authentication and payment, the terminal device collects the current scene information, determines the scene type according to the scene objects contained in the scene information, and selects the electronic card associated with that scene type from all candidate electronic cards as the target electronic card, thereby automating the selection of the electronic card and improving the operating efficiency and response speed of the electronic card.
Fig. 6 shows a flowchart of a specific implementation of S301 of the electronic card selection method according to the second embodiment of the present application. Referring to fig. 6, relative to the embodiment described in fig. 3, S301 of the electronic card selection method provided in this embodiment includes S601 to S603, detailed as follows:
Further, the obtaining the current scene information and determining the scene type according to the scene information includes:
In S601, a scene image fed back by the smart glasses is received.
In this embodiment, the terminal device establishes a communication connection with external smart glasses and acquires a scene image of the current scene through a camera module built into the smart glasses. Because the smart glasses are worn near the user's eyes, the captured view is clear and highly consistent with the scene the user is watching, compared with collecting scene images through a camera module built into the terminal device; this reduces the chance that the main scene subject is blocked by other objects during shooting, thereby improving the accuracy of scene type identification. In some scenes, for example, when a user takes a bus under the traffic scene type, the user often takes the mobile phone out of a clothes or trousers pocket and directly performs the card swiping operation; along the moving path from the pocket to a position close to the card swiping machine, the camera module built into the terminal device is, with high probability, unable to acquire a scene image containing the card swiping machine.
Fig. 7 is a schematic diagram illustrating the shooting range of a terminal device during a card swiping process according to an embodiment of the present application. Referring to fig. 7, the initial position of the terminal device is in the pocket; when a card needs to be swiped, the terminal device is taken out of the pocket and moved close to the card swiping machine, i.e., the target position is near the card swiping machine. During this movement, the photographed area is shown as the sector area in fig. 7. Therefore, the captured images contain the card swiping device only when the terminal device is already close to the card swiping machine, and even then only partial images of the device, so the recognition accuracy is low.
Fig. 8 is a schematic diagram illustrating the shooting range of smart glasses during a card swiping process according to another embodiment of the present application. Referring to fig. 8, the smart glasses are worn over the user's eye area, so their shooting range is basically consistent with the visual range of the user's eyes. The smart glasses can continuously record images along the user's direction of travel, i.e., during the process of the user approaching the card swiping device; therefore, compared with the camera module built into the terminal device, the environment images acquired by the smart glasses yield a better recognition effect.
In one possible implementation manner, when the terminal device detects that a preset scene information acquisition condition is met, it can send an acquisition instruction to the smart glasses; after receiving the acquisition instruction, the smart glasses perform the image acquisition operation and feed the acquired image back to the terminal device, so that the terminal device obtains the scene image. Specifically, the scene information acquisition condition may be: the terminal device recognizes that the condition is met when it detects a near field communication signal in the current scene; or the terminal device records a number of card swiping places from historical card swiping operations and recognizes that the condition is met when the current position reaches one of the stored card swiping places. A sketch of the second trigger condition appears below.
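Purely as an illustrative sketch of the second trigger condition (the radius, coordinates and helper names are assumptions, not values from the patent), the check against stored card swiping places could look like this:

```python
# Assumed sketch: the terminal keeps historical card-swiping locations and
# fires an acquisition instruction when the current position comes within a
# radius of one of them. Radius and coordinates are illustrative.
import math

SWIPE_LOCATIONS = [(113.300562, 23.143292), (113.264435, 23.129163)]
TRIGGER_RADIUS_M = 50.0

def haversine_m(lon1, lat1, lon2, lat2):
    # Great-circle distance between two (lon, lat) points, in meters.
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_collect_scene_info(lon, lat):
    # True if the current position is near any stored card-swiping place.
    return any(haversine_m(lon, lat, slon, slat) <= TRIGGER_RADIUS_M
               for slon, slat in SWIPE_LOCATIONS)
```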
In one possible implementation manner, the smart glasses may acquire the current scene image at a preset acquisition period and feed the acquired scene image back to the terminal device. The terminal device may determine, by identifying the shooting subjects in the scene image, whether they include a target subject related to the card swiping operation, and if so, perform the operation of S603.
In this embodiment, wireless communication may be established between the terminal device and the smart glasses. Specifically, the smart glasses include a wireless communication module, such as a WiFi module, a Bluetooth module or a ZigBee module, and the terminal device includes a corresponding wireless communication module. The terminal device establishes a wireless communication link with the smart glasses by searching for and joining the wireless network of the smart glasses.
In S602, a subject included in the scene image is identified.
In this embodiment, the terminal device may determine, through an image analysis algorithm, the shooting subjects included in the scene image. Specifically, the scene image may be divided into a plurality of subject areas by identifying the contour lines contained in it, and the subject type of the shooting subject corresponding to each subject area may be determined according to the contour shape and color characteristics of that area.
In one possible implementation, the terminal device may be configured with a list of subject types, each subject type being associated with a corresponding subject model. The terminal device can match each subject area against each subject model and select the subject type of the best-matching subject model as the shooting subject corresponding to that area.
In one possible implementation manner, the terminal device may perform a preprocessing operation on the scene image before parsing it, so as to improve the accuracy of shooting subject recognition. Specifically, the preprocessing operation may be as follows: the terminal device performs grayscale processing on the scene image, that is, converts the color image into a monochrome image; adjusts the monochrome image through a filter according to the actual light intensity of the shooting scene, for example, raising the pixel values of highlight regions and lowering the pixel values of shadow regions; determines the contour lines contained in the scene image through a contour recognition algorithm; and deepens the contour-line regions, making it easy to separate the shooting subjects and determine the contour characteristics of each. A minimal sketch of such a pipeline is given below.
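A minimal sketch of such a preprocessing pipeline, assuming OpenCV as the image library and illustrative thresholds (the patent does not prescribe a specific implementation):

```python
# Assumed sketch: grayscale conversion, lighting compensation, and contour
# extraction to separate candidate subject regions. All thresholds are
# illustrative, not values from the patent.
import cv2

def extract_subject_regions(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # Histogram equalization as a stand-in for the filter step in the text
    # (brighten shadow regions, tame highlight regions).
    gray = cv2.equalizeHist(gray)
    # Edge map emphasizes the contour-line regions.
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep reasonably large regions as candidate shooting subjects.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
```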
In S603, the scene type is determined according to all the shooting subjects.
In this embodiment, the terminal device may calculate, for each identified shooting subject, a matching factor with respect to each candidate type, superimpose the matching factors of all shooting subjects to determine the matching degree of each candidate type, and select the candidate scene with the highest matching degree as the scene type corresponding to the scene image.
In one possible implementation manner, the terminal device may determine a weight value for each shooting subject according to the area it occupies in the scene image: the larger the occupied area, the higher the weight value; conversely, the smaller the occupied area, the lower the weight value. The matching degree of each candidate type is then determined by weighted superposition of the matching factors between each shooting subject and that candidate type.
For example, suppose the shooting subjects in a scene image include a cash dispenser, a security door, a bank sign and a person, occupying 25%, 30%, 8% and 15% of the whole scene image respectively. The terminal device may convert these area ratios into weight values of 2, 2, 1 and 1.5 respectively. If the matching factors between the four shooting subjects and the bank scene type are 100%, 80%, 100% and 30% respectively, the matching degree between the scene image and the bank scene type is: 2 x 100% + 2 x 80% + 1 x 100% + 1.5 x 30% = 5.05. The computation is sketched below.
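The worked example can be reproduced as follows; the mapping from area ratios to the weight values 2, 2, 1 and 1.5 is taken as given rather than computed:

```python
# Reproducing the worked example: per-subject weight values combined with
# matching factors for the "bank" candidate type. Subject names are
# illustrative labels for the objects described in the text.
weights = {"cash dispenser": 2.0, "security door": 2.0,
           "bank sign": 1.0, "person": 1.5}
match_factor = {"cash dispenser": 1.00, "security door": 0.80,
                "bank sign": 1.00, "person": 0.30}

# Weighted superposition of matching factors.
score = sum(weights[s] * match_factor[s] for s in weights)
print(round(score, 2))  # -> 5.05
```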
In this embodiment of the application, the scene image is acquired through the smart glasses, and the shooting subjects contained in the scene image are analyzed to determine the current scene type, thereby realizing automatic identification of the scene type, improving the accuracy of scene type identification, and in turn improving the accuracy of electronic card selection.
Fig. 9 shows a flowchart of a specific implementation of S301 in a method for selecting an electronic card according to a third embodiment of the present application. Referring to fig. 9, with respect to the embodiment described in fig. 3, in the method for selecting an electronic card provided in this embodiment, S301 includes S901 to S903, detailed as follows:
Further, the obtaining the current scene information and determining the scene type according to the scene information includes:
In S901, ambient sound in the current scene is collected.
In this embodiment, the terminal device may collect the ambient sound of the current scene through a built-in or external microphone module. Specifically, when the terminal device detects that a preset scene information acquisition condition is met, it can send a scene information acquisition instruction to the microphone module. For the process of triggering the scene type identification operation based on the scene information acquisition condition, reference may be made to the related description in the previous embodiment, which is not repeated here.
In one possible implementation, the user wears a headset control that includes a first microphone module, and a communication link is established between the terminal device and the headset control. In this case, the terminal device may control the first microphone module of the headset control and its own built-in second microphone module to collect ambient sound, and determine the ambient sound of the current scene based on the two captures. Specifically, the terminal device detects a first signal-to-noise ratio of the first ambient sound collected by the first microphone module, determines a second signal-to-noise ratio of the second ambient sound collected by the second microphone module, compares the two signal-to-noise ratios, and selects the ambient sound with the larger signal-to-noise ratio as the ambient sound of the current scene. The larger the signal-to-noise ratio, the smaller the influence of noise when the ambient sound was acquired, so the subsequent determination of the sounding bodies is more accurate; a sketch follows.
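A hedged sketch of the signal-to-noise comparison; the patent does not specify how SNR is estimated from a raw capture, so the noise-floor estimate below (quietest frames taken as noise) is an assumption:

```python
# Assumed sketch: pick the cleaner of two microphone captures by estimated
# SNR. Inputs are numpy arrays of PCM samples; the estimation method is an
# illustrative assumption, not the patent's method.
import numpy as np

def estimate_snr_db(samples, frame=1024):
    frames = samples[: len(samples) // frame * frame].reshape(-1, frame)
    power = (frames.astype(np.float64) ** 2).mean(axis=1)
    noise = np.percentile(power, 10) + 1e-12  # quietest frames ~ noise floor
    signal = power.mean()
    return 10.0 * np.log10(signal / noise)

def pick_ambient_sound(first_mic, second_mic):
    # Keep whichever capture has the larger signal-to-noise ratio.
    if estimate_snr_db(first_mic) >= estimate_snr_db(second_mic):
        return first_mic
    return second_mic
```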
In S902, a frequency domain spectrum of the ambient sound is acquired, and a sounding body included in the current scene is determined according to a frequency value included in the frequency domain spectrum.
In this embodiment, the terminal device may perform a Fourier transform on the ambient sound to convert the time domain signal into a frequency domain signal, obtain the frequency domain spectrum corresponding to the ambient sound, and determine the sounding bodies contained in the scene based on the frequency values in the spectrum and the amplitude corresponding to each frequency value. Since different objects have characteristic sounding frequencies, the terminal device can distinguish different sounding bodies by their frequency values. For example, in this embodiment the sounding frequency of the human body is taken as 8-10 kHz, while the sounding frequency of a buzzer is fixed at 2 kHz. Therefore, by converting the ambient sound into a frequency domain signal, the sounding bodies producing it can be determined; a sketch is given below.
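An illustrative frequency-domain analysis under the stated assumptions; the characteristic frequency bands follow the example in the text, and the peak threshold is arbitrary:

```python
# Assumed sketch: FFT the capture, keep prominent spectral peaks, and map
# them to sounding bodies via characteristic frequency bands. The band
# table mirrors the example above and is an illustrative assumption.
import numpy as np

BANDS = {"buzzer": (1900.0, 2100.0), "human": (8000.0, 10000.0)}

def sounding_bodies(samples, rate):
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    found = set()
    threshold = spectrum.max() * 0.3  # keep prominent peaks only
    for f, amp in zip(freqs, spectrum):
        if amp < threshold:
            continue
        for body, (lo, hi) in BANDS.items():
            if lo <= f <= hi:
                found.add(body)
    return found
```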
In one possible implementation manner, the terminal device may determine a weight value for each sounding body. The weight value may be determined as follows: the terminal device identifies the amplitude of each sounding body in the frequency domain spectrum and sets the weight value of each sounding body based on that amplitude.
In S903, the scene type is determined from all the sound emission subjects.
In this embodiment, after determining the sounding bodies, the terminal device may calculate, for each identified sounding body, a matching factor with respect to each candidate type, superimpose the matching factors of all sounding bodies to determine the matching degree of each candidate type, and select the candidate scene with the highest matching degree as the scene type corresponding to the ambient sound.
In this embodiment of the application, the microphone collects the ambient sound and the sounding bodies contained in it are analyzed to determine the current scene type, thereby realizing automatic identification of the scene type and improving the accuracy of electronic card selection.
Fig. 10 shows a flowchart of a specific implementation of S301 in a method for selecting an electronic card according to a fourth embodiment of the present application. Referring to fig. 10, with respect to the embodiment described in fig. 3, in the method for selecting an electronic card provided in this embodiment, S301 includes S1001 to S1003, specifically described as follows:
Further, the obtaining the current scene information and determining the scene type according to the scene information includes:
In S1001, current location information is acquired, and scene keywords contained in the location information are extracted.
In this embodiment, the terminal device has a built-in positioning module through which its current positioning coordinates can be determined, and the position information associated with those coordinates can be obtained through a third-party map server, a map application or the like. For example, if the current positioning coordinates obtained by the terminal device are (113.300562, 23.143292), they may be input into a corresponding map application to obtain the associated position information, for example: Bank G, Street XX, District B, City A. The current scene type can then be determined from the textual content of this position information.
In this embodiment, the terminal device may extract the scene keywords from the position information through a semantic recognition algorithm. In one possible implementation manner, the terminal device may delete the characters related to the region and retain the characters related to the scene as the scene keywords. For example, given the position information determined above, "City A", "District B" and "Street XX" are identified as region-related characters through the semantic recognition algorithm and deleted, leaving the scene-related characters, namely "Bank G", which is used as the scene keyword.
In S1002, confidence probabilities of the candidate scenes are calculated according to the confidence degrees of the candidate scenes associated with all the scene keywords.
In this embodiment, the terminal device may calculate the confidence between each scene keyword and each candidate scene, and then calculate the confidence probability between the position information and each candidate scene from the confidences of all scene keywords. For example, if the position information contains scene keyword A and scene keyword B, whose confidences with respect to the first candidate scene are 80% and 60% respectively, the terminal device may superimpose the two confidences, or calculate their mean value, and use the result as the confidence probability of the first candidate scene.
In one possible implementation manner, the terminal device may be configured with a corresponding keyword list for each candidate scene. The terminal device may judge whether a scene keyword is in the keyword list of a candidate scene, and determine the confidence based on the judgment result. Specifically, if the scene keyword is in the keyword list of the candidate scene, the confidence between the scene keyword and that candidate scene is taken as 100%; otherwise, the terminal device judges whether any characters of the scene keyword appear in the keyword list of the candidate scene, and determines the confidence between the scene keyword and the candidate scene based on the number of such characters. A sketch follows.
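A sketch of this per-keyword confidence rule and the averaging into a confidence probability; the exact formulas are assumptions consistent with the examples above:

```python
# Assumed sketch: 100% confidence for an exact keyword-list hit, otherwise
# a score proportional to how many of the keyword's characters appear in
# the list entries. The averaging follows the earlier example.
def keyword_confidence(keyword, keyword_list):
    if keyword in keyword_list:
        return 1.0
    listed_chars = set("".join(keyword_list))
    hits = sum(1 for ch in keyword if ch in listed_chars)
    return hits / len(keyword) if keyword else 0.0

def scene_confidence(keywords, keyword_list):
    # Mean per-keyword confidence as the candidate scene's confidence probability.
    scores = [keyword_confidence(k, keyword_list) for k in keywords]
    return sum(scores) / len(scores) if scores else 0.0
```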
In S1003, the candidate scene with the highest confidence probability is selected as the scene type corresponding to the location information.
In this embodiment, the terminal device may select the candidate scene with the highest confidence probability as the scene type matched with the position information.
In this embodiment of the application, the position information is determined and semantically analyzed to extract the scene keywords, and the confidence probability of each candidate scene is determined based on the scene keywords, so that the current scene type is determined, automatic identification of the scene type is realized, and the accuracy of electronic card selection is improved.
Fig. 11 is a flowchart illustrating a specific implementation of S302 in a method for selecting an electronic card according to a fifth embodiment of the present application. Referring to fig. 11, with respect to any of the embodiments shown in fig. 3, 6, 9 and 10, S302 in the method for selecting an electronic card provided in this embodiment includes S1101 to S1102, specifically described as follows:
Further, the selecting the candidate electronic card matched with the scene type as the target electronic card includes:
In S1101, the matching degree between each candidate electronic card and the scene type is calculated respectively.
In this embodiment, after determining the scene type, the terminal device may calculate the matching degree between the scene type and each candidate electronic card stored in the terminal device. Specifically, the terminal device may store a standard scene for each candidate electronic card; each standard scene may correspond to at least one scene tag, and a tag tree is established based on the scope of each scene tag. For example, a certain traffic electronic card may be associated with the scene tags "regional bus", "public transportation" and "traffic". According to the scope covered by each tag, "public transportation" is a general term covering various bus types such as "regional bus" and "city bus"; that is, the scope of "public transportation" is larger than that of "regional bus", so "public transportation" is the parent node of "regional bus", and so on, from which a tag tree can be constructed. The terminal device may configure a matching degree for each tag according to its scope: the smaller the scope, the higher the matching degree. The terminal device can then judge whether the current scene type matches any scene tag of a candidate electronic card, and take the matching degree associated with the matched scene tag as the matching degree between the scene type and that candidate electronic card. A sketch of this tag-tree lookup is given below.
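As a sketch of the tag-tree lookup (the depth-based scoring stands in for the configured matching degrees, and the tag names follow the example above):

```python
# Assumed sketch: narrower tags sit deeper in the tag tree and score
# higher. The parent table and depth-based scores are illustrative.
TAG_PARENT = {"regional bus": "public transportation",
              "city bus": "public transportation",
              "public transportation": "traffic",
              "traffic": None}

def tag_depth(tag):
    # Depth in the tag tree: 0 for the root, larger for narrower tags.
    depth = 0
    while TAG_PARENT.get(tag) is not None:
        tag = TAG_PARENT[tag]
        depth += 1
    return depth

def card_match_degree(scene_type, card_tags):
    # Higher degree for a hit on a narrower (deeper) tag; 0 if no tag matches.
    hits = [tag_depth(t) + 1 for t in card_tags if t == scene_type]
    return max(hits, default=0)

degree = card_match_degree("regional bus",
                           ["regional bus", "public transportation", "traffic"])
print(degree)  # -> 3, the narrowest possible match in this tree
```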
In S1102, the candidate electronic card with the highest matching degree is selected as the target electronic card.
In this embodiment, since the matching degree identifies the strength of the association between each candidate electronic card and the current scene, the higher the matching degree, the stronger the association between the candidate electronic card and the current scene; conversely, the lower the matching degree, the weaker the association. Based on this, the terminal device can select the candidate electronic card with the highest matching degree as the target electronic card, realizing automatic selection of the electronic card.
In the embodiment of the application, the matching degree between each candidate electronic card and the scene type is calculated, and the candidate electronic card with the highest matching degree is selected as the target electronic card, so that the accuracy of selecting the target electronic card is improved.
Further, as another embodiment of the present application, after S302, S1103 and S1104 may also be included:
In S1103, a card swiping authentication operation is performed through the target electronic card and the card swiping device.
In this embodiment, after determining the target electronic card, the terminal device may send the card information of the target electronic card to the card swiping device through the near field communication link between them, so as to perform card swiping authentication on the target electronic card and determine whether the target electronic card matches the card swiping device. If the match succeeds, subsequent operations such as authentication, authorization and fee deduction are performed; these depend on the operation type initiated by the user. For example, if the target electronic card is of the traffic type, the fare can be paid through it; if the target electronic card is of the access control type, door-opening authorization can be performed through it. If card swiping authentication fails, the operation of S1104 is performed.
In S1104, if the card swiping authentication fails, selecting the candidate electronic card with the highest matching degree from all the candidate electronic cards except the target electronic card as a new target electronic card, and returning to execute the card swiping operation through the target electronic card and the card swiping device until the card swiping authentication is successful.
In this embodiment, if the terminal device receives authentication failure information fed back by the card swiping device, it indicates that the currently selected target electronic card does not match the current scene type, so the target electronic card needs to be redetermined from the candidate electronic cards. The terminal device can therefore select the candidate electronic card with the next highest matching degree as the new target electronic card and re-execute the card swiping authentication operation, repeating until the card swiping authentication succeeds; a sketch appears below.
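A minimal sketch of this retry flow; `authenticate` is a hypothetical callable standing in for the near field communication exchange with the card swiping device:

```python
# Assumed sketch: try cards in descending matching degree until the
# card-swiping device accepts one.
def swipe_until_success(cards_by_degree, authenticate):
    # cards_by_degree: candidates sorted from highest matching degree down.
    for card in cards_by_degree:
        if authenticate(card):
            return card  # card swiping authentication succeeded
    return None          # no bound card matched the device
```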
In the embodiment of the application, when the card swiping fails, the candidate electronic card with the next highest matching degree is automatically selected as the target electronic card, so that the purpose of automatically replacing the electronic card is realized, and the operation of a user is reduced.
Fig. 12 is a flowchart showing a specific implementation of S302 in a method for selecting an electronic card according to a sixth embodiment of the present application. Referring to fig. 12, with respect to any of the embodiments shown in fig. 3, 6, 9 and 10, S302 in the method for selecting an electronic card provided in this embodiment includes S1201 to S1202, specifically described as follows:
Further, the selecting the candidate electronic card matched with the scene type as the target electronic card includes:
In S1201, the standard scenes of the respective candidate electronic cards are acquired.
In this embodiment, when the terminal device stores each candidate electronic card, it may determine the associated standard scene according to a user setting or based on the type of the electronic card, and establish a standard scene index table. After determining the scene type of the current scene, the terminal device obtains the pre-associated standard scene of each candidate electronic card based on this standard scene index table.
In S1202, the scene type is matched with each standard scene, and the target electronic card is determined according to the matching result.
In this embodiment, the terminal device may match the currently identified scene type against each standard scene, judge whether there is any candidate electronic card whose standard scene is consistent with the current scene type, and if so, identify that candidate electronic card as the target electronic card. A minimal sketch of this lookup follows.
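A minimal sketch of the standard scene index table lookup, with illustrative table contents:

```python
# Assumed sketch: each candidate card was associated with a standard scene
# when it was stored; selection is a direct match against the identified
# scene type. Table contents are illustrative.
STANDARD_SCENE_INDEX = {"bank card A": "bank",
                        "bus card": "traffic",
                        "access card": "access_control"}

def select_by_standard_scene(scene_type):
    for card, standard_scene in STANDARD_SCENE_INDEX.items():
        if standard_scene == scene_type:
            return card  # first card whose standard scene matches
    return None

print(select_by_standard_scene("traffic"))  # -> bus card
```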
In the embodiment of the application, the standard scenes are associated with different candidate electronic cards, the standard scenes are matched with the scene types, and the target electronic card is determined, so that the automatic selection of the target electronic card is realized, the operation difficulty of a user is reduced, and the card swiping efficiency is improved.
Fig. 13 is a schematic structural diagram of an electronic card selecting system according to an embodiment of the present application. Referring to fig. 13, the electronic card selecting system includes a mobile terminal 131, smart glasses 132, an external microphone 133 and a card swiping device 134. Communication connections are established between the mobile terminal 131 and the smart glasses 132 and between the mobile terminal 131 and the external microphone 133, and a communication connection is established between the mobile terminal 131 and the card swiping device 134 through a near field communication module. The mobile terminal 131 is provided with a built-in camera module 1311, a positioning module 1312 and a built-in microphone module 1313, through which it can collect different types of scene information. It should be noted that the mobile terminal 131 can call any one module or external device to collect one kind of scene information, or collect multiple kinds of scene information through two or more modules and external devices, determine the scene type based on the scene information, and select the target electronic card based on the scene type.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Corresponding to the method for selecting an electronic card described in the above embodiments, fig. 14 shows a structural block diagram of an apparatus for selecting an electronic card according to an embodiment of the present application; for convenience of explanation, only the portions related to this embodiment of the present application are illustrated.
Referring to fig. 14, the electronic card selecting device includes:
a scene type determining unit 141, configured to obtain current scene information, and determine a scene type according to the scene information;
and an electronic card selecting unit 142, configured to select, as a target electronic card, a candidate electronic card that matches the scene type.
Alternatively, the scene type determining unit 141 includes:
the scene image acquisition unit is used for receiving the scene image fed back by the smart glasses;
a scene image analysis unit configured to identify a subject included in the scene image;
and the shooting subject analysis unit is used for determining the scene type according to all the shooting subjects.
Alternatively, the scene type determining unit 141 includes:
the ambient sound collection unit is used for collecting ambient sound in the current scene;
the sounding body determining unit is used for acquiring the frequency domain spectrum of the ambient sound and determining the sounding bodies contained in the current scene according to the frequency values contained in the frequency domain spectrum;
and the sounding body analysis unit is used for determining the scene type according to all the sounding bodies.
Alternatively, the scene type determining unit 141 includes:
the scene keyword extraction unit is used for acquiring current position information and extracting scene keywords contained in the position information;
the confidence probability calculation unit is used for calculating the confidence probabilities of the candidate scenes according to the confidence degrees of the candidate scenes associated with all the scene keywords;
and the scene type selection unit is used for selecting the candidate scene with the highest confidence probability as the scene type corresponding to the position information.
Optionally, the electronic card selecting unit 142 includes:
the matching degree calculating unit is used for calculating the matching degree between each candidate electronic card and the scene type;
and the matching degree selecting unit is used for selecting the candidate electronic card with the highest matching degree as the target electronic card.
Optionally, the selecting device of the electronic card further includes:
the card swiping authentication unit is used for executing card swiping authentication operation through the target electronic card and the card swiping equipment;
and the authentication failure response unit is used for selecting the candidate electronic card with the highest matching degree from all candidate electronic cards except the target electronic card as a new target electronic card if the card swiping authentication fails, and returning to execute the card swiping operation through the target electronic card and the card swiping equipment until the card swiping authentication is successful.
Optionally, the electronic card selecting unit 142 includes:
a standard scene acquisition unit, configured to acquire standard scenes of the candidate electronic cards;
and the standard scene matching unit is used for matching the scene type with each standard scene and determining the target electronic card according to a matching result.
As can be seen from the above, the selecting device of the electronic card provided by this embodiment of the application collects the current scene information, determines the scene type according to the scene information, and selects the candidate electronic card matched with the scene type as the target electronic card, thereby achieving automatic selection of the electronic card without manual switching by the user and improving the operation efficiency and response speed of the electronic card.
Fig. 15 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 15, the terminal device 15 of this embodiment includes: at least one processor 150 (only one shown in fig. 15), a memory 151, and a computer program 152 stored in the memory 151 and executable on the at least one processor 150, the processor 150 implementing the steps in any of the various electronic card selection method embodiments described above when executing the computer program 152.
The terminal device 15 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server. The terminal device may include, but is not limited to, the processor 150 and the memory 151. It will be appreciated by those skilled in the art that fig. 15 is merely an example of the terminal device 15 and does not limit it; the terminal device 15 may include more or fewer components than shown, combine certain components, or have different components, and may, for example, also include input-output devices, network access devices, etc.
The processor 150 may be a central processing unit (Central Processing Unit, CPU), and the processor 150 may also be another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 151 may in some embodiments be an internal storage unit of the terminal device 15, such as a hard disk or a memory of the terminal device 15. The memory 151 may also be an external storage device of the terminal device 15 in other embodiments, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal device 15. Further, the memory 151 may also include both an internal storage unit and an external storage device of the terminal device 15. The memory 151 is used to store an operating system, application programs, boot loader (BootLoader), data, and other programs, such as program codes of the computer programs. The memory 151 may also be used to temporarily store data that has been output or is to be output.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
The embodiment of the application also provides a network device, which comprises: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor, which when executed by the processor performs the steps of any of the various method embodiments described above.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps that may implement the various method embodiments described above.
Embodiments of the present application provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to perform steps that may be performed in the various method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application implements all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program, which may be stored in a computer readable storage medium; when executed by a processor, the computer program implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing device/terminal apparatus, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a U-disk, a removable hard disk, a magnetic disk or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer readable media may not include electrical carrier signals and telecommunications signals.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not detailed or illustrated in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (8)

1. A method of selecting an electronic card, comprising:
acquiring current scene information, and determining a scene type according to the scene information;
selecting a candidate electronic card matched with the scene type as a target electronic card;
The selecting the candidate electronic card matched with the scene type as the target electronic card comprises the following steps:
obtaining standard scenes of each candidate electronic card, wherein the obtaining comprises: determining the associated standard scene according to a user setting or based on the electronic card type and establishing a standard scene index table, and, after determining the scene type of the current scene, acquiring the pre-associated standard scene of each candidate electronic card based on the standard scene index table;
matching the scene type with each standard scene, and determining the target electronic card according to a matching result;
the obtaining the current scene information and determining the scene type according to the scene information comprises the following steps:
collecting ambient sound in a current scene, including: controlling a first microphone module of a headset control and a second microphone module built into the terminal device to collect ambient sound, and using the ambient sound collected by the first microphone module and the second microphone module as the ambient sound in the current scene;
acquiring a frequency domain spectrum of the ambient sound, and determining the sounding bodies contained in the current scene according to the frequency values contained in the frequency domain spectrum;
and determining the scene type according to all the sounding bodies.
2. The selection method according to claim 1, wherein the obtaining current scene information and determining a scene type according to the scene information includes:
receiving a scene image fed back by smart glasses;
identifying a subject contained within the scene image;
and determining the scene type according to all the shooting subjects.
3. The selection method according to claim 1, wherein the obtaining current scene information and determining a scene type according to the scene information includes:
acquiring current position information, and extracting scene keywords contained in the position information;
according to the confidence degrees of the candidate scenes associated with all the scene keywords, respectively calculating the confidence probabilities of the candidate scenes;
and selecting the candidate scene with the highest confidence probability as the scene type corresponding to the position information.
4. The selection method according to any one of claims 1 to 3, wherein the selecting the candidate electronic card matched with the scene type as the target electronic card comprises:
respectively calculating the matching degree between each candidate electronic card and the scene type;
and selecting the candidate electronic card with the highest matching degree as the target electronic card.
5. The selection method according to claim 4, further comprising, after the selecting the candidate electronic card matched with the scene type as the target electronic card:
executing card swiping authentication operation through the target electronic card and the card swiping equipment;
if the card swiping authentication fails, selecting the candidate electronic card with the highest matching degree from all candidate electronic cards except the target electronic card as a new target electronic card, and returning to execute the card swiping operation through the target electronic card and the card swiping equipment until the card swiping authentication is successful.
6. An electronic card selecting device, comprising:
the scene type determining unit is used for acquiring current scene information and determining the scene type according to the scene information;
the electronic card selecting unit is used for selecting candidate electronic cards matched with the scene type as target electronic cards;
the electronic card selecting unit includes:
a standard scene acquisition unit, configured to acquire the standard scenes of the candidate electronic cards, wherein the acquiring comprises: determining the associated standard scene according to a user setting or based on the electronic card type and establishing a standard scene index table, and, after determining the scene type of the current scene, acquiring the pre-associated standard scene of each candidate electronic card based on the standard scene index table;
the standard scene matching unit is used for matching the scene type with each standard scene and determining the target electronic card according to a matching result;
the scene type determination unit includes:
the ambient sound collection unit is used for collecting ambient sound in the current scene, including: controlling a first microphone module of a headset control and a second microphone module built into the terminal device to collect ambient sound, and using the ambient sound collected by the first microphone module and the second microphone module as the ambient sound in the current scene;
the sounding body determining unit is used for acquiring the frequency domain spectrum of the ambient sound and determining the sounding bodies contained in the current scene according to the frequency values contained in the frequency domain spectrum;
and the sounding body analysis unit is used for determining the scene type according to all the sounding bodies.
7. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 5 when executing the computer program.
8. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the method according to any one of claims 1 to 5.
CN202010187020.XA 2020-03-17 2020-03-17 Electronic card selection method, device, terminal and storage medium Active CN113409041B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010187020.XA CN113409041B (en) 2020-03-17 2020-03-17 Electronic card selection method, device, terminal and storage medium
PCT/CN2021/080488 WO2021185174A1 (en) 2020-03-17 2021-03-12 Electronic card selection method and apparatus, terminal, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010187020.XA CN113409041B (en) 2020-03-17 2020-03-17 Electronic card selection method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN113409041A CN113409041A (en) 2021-09-17
CN113409041B true CN113409041B (en) 2023-08-04

Family

ID=77677276

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010187020.XA Active CN113409041B (en) 2020-03-17 2020-03-17 Electronic card selection method, device, terminal and storage medium

Country Status (2)

Country Link
CN (1) CN113409041B (en)
WO (1) WO2021185174A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116703391B * 2022-09-23 2024-04-26 Honor Device Co., Ltd. Electronic card activation method and device
TWI833519B (en) * 2022-12-23 2024-02-21 華南商業銀行股份有限公司 Electronic payment system and method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101593522A (en) * 2009-07-08 2009-12-02 清华大学 A kind of full frequency domain digital hearing aid method and apparatus
CN102204231A (en) * 2010-02-04 2011-09-28 华为终端有限公司 Method and device for controlling working mode of data card and data card
CN103456301A (en) * 2012-05-28 2013-12-18 中兴通讯股份有限公司 Ambient sound based scene recognition method and device and mobile terminal
WO2018010337A1 (en) * 2016-07-15 2018-01-18 乐视控股(北京)有限公司 Display method and device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102007048976A1 (en) * 2007-06-29 2009-01-02 Voice.Trust Ag Virtual prepaid or credit card and method and system for providing such and for electronic payments
TWI476718B (en) * 2012-12-12 2015-03-11 Insyde Software Corp Automatic Screening Method and Device for Electronic Card of Handheld Mobile Device
KR101330962B1 (en) * 2012-12-27 2013-11-18 신한카드 주식회사 Payment device control method for selecting card settlement
CN107330687A (en) * 2017-06-06 2017-11-07 深圳市金立通信设备有限公司 A kind of near field payment method and terminal
CN108600634B (en) * 2018-05-21 2020-07-21 Oppo广东移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN109919600A (en) * 2019-03-04 2019-06-21 出门问问信息科技有限公司 A kind of virtual card call method, device, equipment and storage medium
CN110536274B (en) * 2019-08-06 2022-11-25 拉卡拉支付股份有限公司 NFC device control method and device, NFC device and storage medium
CN110795949A (en) * 2019-09-25 2020-02-14 维沃移动通信(杭州)有限公司 Card swiping method and device, electronic equipment and medium
CN110557742A (en) * 2019-09-26 2019-12-10 珠海市魅族科技有限公司 Default binding card switching method, device, equipment and storage medium for near field communication

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
An Audio Scene Analysis Method Based on Vector Quantization; Han Jiqing et al.; Audio Engineering (Issue 3); pp. 8-10 *

Also Published As

Publication number Publication date
WO2021185174A1 (en) 2021-09-23
CN113409041A (en) 2021-09-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant