WO2021185174A1 - Electronic card selection method and apparatus, terminal, and storage medium


Info

Publication number
WO2021185174A1
WO2021185174A1 · PCT/CN2021/080488 · CN2021080488W
Authority
WO
WIPO (PCT)
Prior art keywords
scene
electronic card
candidate
card
type
Application number
PCT/CN2021/080488
Other languages
French (fr)
Chinese (zh)
Inventor
万磊
王强
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2021185174A1

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
          • G06Q20/00 Payment architectures, schemes or protocols
            • G06Q20/30 Payment architectures, schemes or protocols characterised by the use of specific devices or networks
              • G06Q20/34 using cards, e.g. integrated circuit [IC] cards or magnetic cards
                • G06Q20/351 Virtual cards
                • G06Q20/356 Aspects of software for card payments
            • G06Q20/38 Payment protocols; Details thereof
              • G06Q20/40 Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
                • G06Q20/401 Transaction verification
                  • G06Q20/4014 Identity check for transactions
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
      • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
        • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
          • Y02D30/00 Reducing energy consumption in communication networks
            • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Definitions

  • This application relates to the field of information processing technology, and in particular to an electronic card selection method, device, terminal, and storage medium.
  • the embodiments of the application provide an electronic card selection method, device, terminal, and storage medium, which can solve the problem in existing electronic card technology that the electronic card associated with the current operation must be selected manually, which increases operation difficulty and lowers operation efficiency.
  • the above-mentioned solutions can be implemented using any combination of the above-mentioned techniques.
  • an embodiment of the present application provides a method for selecting an electronic card, including:
  • the candidate electronic card matching the scene type is selected as the target electronic card.
  • the acquiring current scene information and determining the scene type according to the scene information includes:
  • the scene type is determined according to all the photographing subjects.
  • the acquiring current scene information and determining the scene type according to the scene information includes:
  • the scene type is determined according to all the speaking subjects.
  • the acquiring current scene information and determining the scene type according to the scene information includes:
  • the candidate scene with the highest confidence probability is selected as the scene type corresponding to the location information.
  • the selecting a candidate electronic card matching the scene type as a target electronic card includes:
  • the candidate electronic card with the highest matching degree is selected as the target electronic card.
  • the method further includes:
  • the target electronic card and the card swiping device perform a card swiping operation until the swiping authentication is successful.
  • the selecting a candidate electronic card matching the scene type as a target electronic card includes:
  • the scene type is matched with each of the standard scenes, and the target electronic card is determined according to the matching result.
  • an electronic card selection device including:
  • a scene type determining unit configured to obtain current scene information, and determine the scene type according to the scene information
  • the electronic card selection unit is used to select a candidate electronic card matching the scene type as the target electronic card.
  • the embodiments of the present application provide a terminal device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the method for selecting an electronic card described in any one of the above-mentioned first aspects.
  • an embodiment of the present application provides a computer-readable storage medium that stores a computer program, wherein, when the computer program is executed by a processor, the method for selecting an electronic card described in any one of the above-mentioned first aspects is implemented.
  • the embodiments of the present application provide a computer program product that, when the computer program product runs on a terminal device, causes the terminal device to execute the method for selecting an electronic card in any one of the above-mentioned first aspects.
  • the embodiment of the application collects current scene information through the terminal device, determines the scene type according to the scene objects contained in the scene information, and selects the electronic card associated with the scene type from all candidate electronic cards as the target electronic card, which realizes the purpose of automatically selecting the electronic card and improves the operation efficiency and response speed of the electronic card.
  • FIG. 1 is a block diagram of a part of the structure of a mobile phone provided by an embodiment of the present application
  • FIG. 2 is a schematic diagram of the software structure of a mobile phone according to an embodiment of the present application.
  • FIG. 3 is an implementation flowchart of an electronic card selection method provided by the first embodiment of the present application.
  • FIG. 4 is a schematic diagram of scene type recognition based on scene images provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of selecting an electronic card provided by an embodiment of the present application.
  • FIG. 6 is a specific implementation flowchart of an electronic card selection method S301 provided by the second embodiment of the present application.
  • FIG. 7 is a schematic diagram of a shooting scene range of a terminal device during a card swiping process according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a photographing range of smart glasses during a card swiping process according to another embodiment of the present application.
  • FIG. 10 is a specific implementation flowchart of an electronic card selection method S301 provided by the fourth embodiment of the present application.
  • FIG. 11 is a specific implementation flowchart of an electronic card selection method S302 provided by the fifth embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of an electronic card selection system provided by an embodiment of the present application.
  • FIG. 14 is a structural block diagram of an electronic card selection device provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of a terminal device according to another embodiment of the present application.
  • the term “if” can be construed as “when”, “once”, “in response to determining”, or “in response to detecting”.
  • the phrase “if determined” or “if [the described condition or event] is detected” can be interpreted, depending on the context, as meaning “once determined”, “in response to determining”, “once [the described condition or event] is detected”, or “in response to detecting [the described condition or event]”.
  • the embodiments of the present application provide an electronic card selection method, device, equipment and storage medium.
  • when an electronic card needs to be invoked for authentication, payment, or other operations, the current scene information is collected through the terminal device, the scene type is determined according to the scene objects contained in the scene information, and the electronic card associated with the scene type is selected from all candidate electronic cards as the target electronic card.
  • the purpose of automatically selecting the electronic card is thus realized, and the operation efficiency and response speed of the electronic card are improved.
  • the method for selecting an electronic card provided by the embodiments of this application can be applied to mobile phones, tablet computers, wearable devices, in-vehicle devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, personal digital assistants (PDA), and other terminal devices, and can also be applied to databases, servers, and service response systems based on terminal artificial intelligence. The embodiments of this application place no restrictions on the specific types of terminal equipment.
  • the terminal device may be a station (STATION, ST) in a WLAN, a cellular phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a wireless local loop (WLL) station, a Personal Digital Assistant (PDA) device, a handheld device with wireless communication capabilities, a computing device or other processing device connected to a wireless modem, a computer, a laptop, a handheld communication device, a handheld computing device, and/or another device used for communication on wireless systems and next-generation communication systems, for example, a mobile terminal in a 5G network or a mobile terminal in the future evolved Public Land Mobile Network (PLMN), etc.
  • a wearable device can also be a general term for devices developed by applying wearable technology to the intelligent design of daily wear, such as gloves and watches equipped with near-field communication modules.
  • a wearable device is a portable device that is directly worn on the body or integrated into the user's clothes or accessories. It is attached to the user's body and performs operations such as payment and authentication through a pre-bound electronic card. Wearable devices are not only a kind of hardware device, but also realize powerful functions through software support, data interaction, and cloud interaction.
  • wearable smart devices include full-featured, large-sized devices whose complete or partial functions can be implemented without relying on a smart phone, such as smart watches or smart glasses, as well as devices that focus on only a certain type of application function and need to be used in conjunction with other devices such as smart phones, for example various types of smart watches with display screens, smart bracelets, etc.
  • the aforementioned terminal device may be a mobile phone 100 having a hardware structure as shown in FIG. 1.
  • the mobile phone 100 may specifically include: a radio frequency (RF) circuit 110, a memory 120, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a short-range wireless communication module 170, a processor 180, a power supply 190, and other components.
  • the terminal device may include more or fewer components than those shown in the figure, a combination of certain components, or a different arrangement of components.
  • the RF circuit 110 can be used for receiving and sending signals during information transmission or communication. In particular, after receiving the downlink information of the base station, it is processed by the processor 180; in addition, the designed uplink data is sent to the base station.
  • the RF circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like.
  • the RF circuit 110 can also communicate with the network and other devices through wireless communication.
  • the above-mentioned wireless communication can use any communication standard or protocol, including but not limited to Global System of Mobile Communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (Code Division) Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), Email, Short Messaging Service (SMS), etc.
  • the memory 120 may be used to store software programs and modules.
  • the processor 180 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 120.
  • the memory 120 may mainly include a program storage area and a data storage area.
  • the program storage area may store an operating system, an application program required by at least one function (such as a sound playback function, an image playback function, etc.), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data, a phone book, etc.), and the like.
  • the memory 120 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
  • the memory 120 may store the card information of the electronic card and the corresponding relationship between each electronic card and the associated scene type.
  • the mobile phone may determine the target electronic card associated with the current scene through the memory 120.
  • the input unit 130 may be used to receive inputted numeric or character information, and generate key signal input related to user settings and function control of the mobile phone 100.
  • the input unit 130 may include a touch panel 131 and other input devices 132.
  • the touch panel 131, also known as a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 131 with a finger, a stylus, or any other suitable object or accessory), and drive the corresponding connection device according to a preset program.
  • the touch panel 131 may include two parts: a touch detection device and a touch controller.
  • the touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and then sends it to the processor 180, and can receive and execute the commands sent by the processor 180.
  • the touch panel 131 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave.
  • the input unit 130 may also include other input devices 132.
  • the other input device 132 may include, but is not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackball, mouse, and joystick.
  • the display unit 140 may be used to display information input by the user or information provided to the user and various menus of the mobile phone.
  • the display unit 140 may include a display panel 141.
  • the display panel 141 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), etc.
  • the touch panel 131 can cover the display panel 141. When the touch panel 131 detects a touch operation on or near it, it transmits the operation to the processor 180 to determine the type of the touch event, and then the processor 180 provides corresponding visual output on the display panel 141 according to the type of the touch event.
  • the touch panel 131 and the display panel 141 are used as two independent components to realize the input and output functions of the mobile phone, but in some embodiments, the touch panel 131 and the display panel 141 can be integrated to realize the input and output functions of the mobile phone.
  • the mobile phone 100 may also include at least one sensor 150, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor.
  • the ambient light sensor can adjust the brightness of the display panel 141 according to the brightness of the ambient light.
  • the proximity sensor can turn off the display panel 141 and/or the backlight when the mobile phone is moved to the ear.
  • the accelerometer sensor can detect the magnitude of acceleration in various directions (usually three-axis), and can detect the magnitude and direction of gravity when it is stationary.
  • the mobile phone can use a learning algorithm to obtain the measured value of each sensor when the user performs the card swiping action, so as to determine in advance, before the mobile phone approaches the card swiping device, whether the user needs to perform a card swiping operation, and collect the current scene information to determine the scene type, thereby further improving the selection efficiency of electronic cards.
  • the audio circuit 160, the speaker 161, and the microphone 162 can provide an audio interface between the user and the mobile phone.
  • the audio circuit 160 can transmit the electrical signal converted from the received audio data to the speaker 161, which is converted into a sound signal for output by the speaker 161; on the other hand, the microphone 162 converts the collected sound signal into an electrical signal, and the audio circuit 160 After being received, it is converted into audio data, and then processed by the audio data output processor 180, and then sent to, for example, another mobile phone via the RF circuit 110, or the audio data is output to the memory 120 for further processing.
  • through the short-range wireless module 170, the mobile phone can help users send and receive emails, browse web pages, and access streaming media, providing users with wireless broadband Internet access.
  • the aforementioned short-range wireless module 170 may include a WiFi chip, a Bluetooth chip, and an NFC chip. Through the WiFi chip, the mobile phone 100 can establish a WiFi Direct connection with other terminal devices, and the mobile phone 100 can also work in an AP mode that provides wireless access services and allows other wireless devices to access it, or in an STA (Station) mode that can connect to an AP and does not accept access from other wireless devices, so as to establish point-to-point communication between the mobile phone 100 and other WiFi devices.
  • the mobile phone can use the NFC chip to establish a short-distance communication link with the card swiping device, send the pre-written card information of the electronic card to the card swiping device over the above-mentioned short-distance communication link, and perform the subsequent swiping operation; the swiping result is fed back to the mobile phone and output through the display module of the mobile phone.
  • the processor 180 is the control center of the mobile phone. It uses various interfaces and lines to connect the various parts of the entire mobile phone, executes the various functions of the mobile phone and processes data, and thereby monitors the mobile phone as a whole.
  • the processor 180 may include one or more processing units; preferably, the processor 180 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, etc., and the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may not be integrated into the processor 180.
  • the mobile phone 100 also includes a power source 190 (such as a battery) for supplying power to various components.
  • the power source can be logically connected to the processor 180 through a power management system, so that functions such as charging, discharging, and power consumption management can be managed through the power management system.
  • the mobile phone 100 may also include a camera.
  • the position of the camera on the mobile phone may be front-mounted or rear-mounted, which is not limited in the embodiment of the present application.
  • the mobile phone can collect the scene image of the current scene through the camera, and determine the scene information and the scene type by analyzing the scene image.
  • the software system of the mobile phone 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of the present invention takes the layered Android system as an example to illustrate the software structure of the mobile phone 100.
  • FIG. 2 is a block diagram of the software structure of the mobile phone 100 according to an embodiment of the present application.
  • the Android system is divided into four layers, which are application layer, application framework layer (framework, FWK), system layer, and hardware abstraction layer.
  • the layers communicate through software interfaces.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Communication between layers through software interface.
  • the Android system is divided into four layers, from top to bottom, the application layer, the application framework layer, the Android runtime and system library, and the kernel layer.
  • the application layer can include a series of application packages.
  • the application package can include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer can include a window manager, a content provider, a view system, a phone manager, a resource manager, and a notification manager.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, take a screenshot, etc.
  • the content provider is used to store and retrieve data and make these data accessible to applications.
  • the data may include videos, images, audios, phone calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, and so on.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface that includes a short message notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide the communication function of the electronic device 100. For example, the management of the call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and it can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, and so on.
  • the notification manager can also be a notification that appears in the status bar at the top of the system in the form of a chart or a scroll bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window. For example, text messages are prompted in the status bar, prompt sounds, electronic devices vibrate, and indicator lights flash.
  • Android Runtime includes core libraries and virtual machines. Android runtime is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and application framework layer run in a virtual machine.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), three-dimensional graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem, and provides a combination of 2D and 3D graphics for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, synthesis, and layer processing.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • the above-mentioned kernel layer further includes a PCIE driver.
  • the execution subject of the process is a device equipped with a near field communication module.
  • the above-mentioned device equipped with a near field communication module may specifically be a terminal device, and the terminal device may be a mobile terminal such as a smart phone, a tablet computer, and a laptop used by the user.
  • Fig. 3 shows the implementation flow chart of the method for selecting an electronic card provided by the first embodiment of the present application, and the details are as follows:
  • the current scene information is acquired, and the scene type is determined according to the scene information.
  • the terminal device can obtain current scene information through built-in sensors and other information collection modules, or it can establish a data link with an external information collection device to receive scene information collected by other information collection devices.
  • the terminal device has a built-in camera module.
  • the camera module can be a front camera module and/or a rear camera module.
  • the camera module collects the scene image of the current scene, recognizes the above scene image as the scene information, and analyzes the scene information to determine the scene type;
  • the terminal device has a built-in microphone module, which can collect the scene audio of the current scene, and recognize the above scene audio as scene information, perform audio analysis on the scene audio, and determine the scene type;
  • the terminal device has a built-in positioning module, and obtains positioning information through the positioning module, uses the positioning information as scene information, and determines the associated scene type according to the positioning information.
  • the specific implementation manner for the terminal device to determine the scene type through the scene image may be: the terminal device may be configured with corresponding standard images for different scene types. The terminal device can match the currently acquired scene image with each standard image, and determine the scene type associated with the scene image according to the matching result.
  • the process of matching the scene image with a standard image can specifically be as follows: the terminal device can perform grayscale processing on the scene image to convert it into a monochrome image and generate the image array corresponding to the monochrome image; the image array is imported into a preset convolutional neural network, where it is pooled and dimension-reduced by preset convolution kernels to generate the image feature vector corresponding to the image array; the vector distance between the image feature vector and the standard feature vector corresponding to each standard image is then calculated and used as the matching probability value for that standard image, and the scene type associated with the standard image with the largest probability value is selected as the scene type of the scene image.
  • the standard feature vector of the above-mentioned standard image can be collected by a self-learning algorithm.
  • the implementation of the above-mentioned self-learning algorithm can be as follows: when each electronic card is initially bound, the terminal device collects a standard image of the electronic card's corresponding use scenario and generates the above standard feature vector based on that standard image; in the process of subsequent use, every time the electronic card is used to perform a card swiping operation, the scene image corresponding to that card swiping operation is imported into the above-mentioned neural network to adjust the generated standard feature vector, so that the configured standard feature vector can be adjusted a posteriori during each use, thereby improving the accuracy of the standard feature vector.
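  • a minimal sketch (not taken from the patent text) of the distance-based matching step described above, assuming the convolutional network has already produced the feature vectors; the function name, the example scene types, and the small feature vectors are purely illustrative.

```python
import numpy as np

def match_scene_type(scene_vec, standard_vecs):
    """Pick the scene type whose standard feature vector is closest to the
    scene image's feature vector (smaller vector distance = better match)."""
    best_type, best_dist = None, float("inf")
    for scene_type, std_vec in standard_vecs.items():
        dist = np.linalg.norm(scene_vec - std_vec)   # Euclidean vector distance
        if dist < best_dist:
            best_type, best_dist = scene_type, dist
    return best_type, best_dist

# Illustrative usage with made-up 4-dimensional standard feature vectors.
standard_vecs = {
    "bank":    np.array([0.9, 0.1, 0.0, 0.2]),
    "transit": np.array([0.1, 0.8, 0.3, 0.0]),
    "access":  np.array([0.0, 0.2, 0.9, 0.4]),
}
scene_vec = np.array([0.85, 0.15, 0.05, 0.25])
print(match_scene_type(scene_vec, standard_vecs))   # ('bank', small distance)
```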
  • each electronic card corresponds to a cloud server.
  • the cloud server can be used to store the operation record of the electronic card and the scene image associated with each operation record.
  • the cloud server extracts historical scene images from each operation record, and generates the aforementioned standard feature vector from all historical scene images.
  • the cloud server can send the aforementioned standard feature vector to each terminal device at a preset update cycle.
  • the aforementioned standard feature vector can be associated with an electronic card identifier.
  • the terminal device stores the received electronic card identifier and standard feature vector in the storage unit. In the subsequent matching operation, the above-mentioned standard feature vector can be extracted to perform the matching operation.
  • FIG. 4 shows a schematic diagram of scene type recognition based on scene images provided by an embodiment of the present application.
  • a terminal device 41 and a card swiping device 42 are included.
  • the mobile terminal 41 is configured with a camera module 411 and a near field communication module 412.
  • the near field communication module 412 will detect the near field communication signal sent by the swiping device 42 and establish a communication connection with the card swiping device 42.
  • the terminal device can activate the camera module 411, collect a scene image of the current scene through the camera module 411, and determine the scene type according to the scene image.
  • the terminal device collects multiple different types of scene information, and determines the current scene type according to the different types of scene information.
  • the terminal device can collect scene images and scene audio of the current scene, identify multiple candidate object types from the scene images, filter out the target object types from the candidate object types according to the scene audio, and determine the scene type according to the target object types. Screening invalid candidate object types through scene audio can calibrate the recognition process of scene types, thereby improving the recognition efficiency.
  • the target utterance object is filtered out of the objects, and the scene type is determined according to the target utterance object.
  • the terminal device obtains a scene image through the camera module. Due to factors such as too far a shooting distance or obstruction by obstacles, some scene objects cannot be recognized through the scene image, thereby reducing the accuracy of scene type recognition.
  • the terminal device can collect the ambient sound in the current scene through the microphone module while acquiring the scene image, determine the sound subject through the ambient sound, and determine the subject through image recognition of the scene image, through the sound subject and shooting The object determines the type of scene.
  • the method of determining the scene type from the sounding subjects and the shooting objects may be: the terminal device determines a first confidence level of each candidate scene type from all the sounding subjects and a second confidence level of each candidate scene type from all the shooting objects, weights each confidence level by its corresponding weight (a voice weight for the confidence derived from the sound and an image weight for the confidence derived from the image), calculates the matching degree of each candidate scene type based on the weighted first and second confidence levels, and selects the candidate scene type with the highest matching degree as the scene type of the current scene.
  • the scene type associated with the electronic card stored in the terminal device can be divided into three different scene types: bank type, bus type, and access control type.
  • when the terminal device detects that the electronic card needs to be called, it can obtain the scene image of the current scene through the camera module.
  • the first confidence level corresponding to each candidate scene type is then, for example: (bank type, 80%), (access control type, 50%), (traffic type, 20%); by collecting environmental sounds, the sounding subjects contained in the scene (for example, mechanical operation sounds) are determined, so that the second confidence levels corresponding to the three candidate scene types are: (bank type, 60%), (access control type, 50%), (traffic type, 60%); the preset image weight value is 1 and the voice weight value is 0.8, as worked through in the sketch below.
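  • a short, hedged completion of this numeric example, under the assumption that "weighted" here means a simple weighted sum of the two confidence levels (the patent text does not spell out the exact fusion formula):

```python
# First confidence from the scene image, second confidence from the environmental sound,
# with the weights stated in the example above.
image_conf = {"bank": 0.80, "access": 0.50, "traffic": 0.20}
sound_conf = {"bank": 0.60, "access": 0.50, "traffic": 0.60}
image_weight, voice_weight = 1.0, 0.8

match = {t: image_weight * image_conf[t] + voice_weight * sound_conf[t]
         for t in image_conf}
# match is roughly {'bank': 1.28, 'access': 0.90, 'traffic': 0.68}
scene_type = max(match, key=match.get)   # -> 'bank'
```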
  • the user can trigger the selection process of the electronic card by clicking the electronic card activation control or opening the electronic card application.
  • the terminal device can also use the near field communication module to detect a near field communication signal, Trigger the selection process of the electronic card.
  • the terminal device can learn, through a built-in learning algorithm, the user's movement trajectory when performing a card swiping operation with the terminal device, so that when it is detected that the terminal's current movement trajectory is consistent with the aforementioned learned movement trajectory, the electronic card selection process is automatically activated, so as to achieve the purpose of selecting the electronic card in advance and improve the subsequent response speed.
  • the specific implementation process is as follows: the terminal device continuously obtains the parameter values of the motion sensor and, in the order of collection time, stores the parameter value corresponding to each collection time in a motion parameter queue, continuously updating the above-mentioned motion parameter queue in first-in-first-out order.
  • when the terminal device detects that the user performs a card swiping operation, it acquires all the parameter values in the motion parameter queue at the time of the card swiping operation, and generates the motion trajectory corresponding to the motion parameter queue at that time.
  • the terminal device can import the movement trajectory corresponding to the historical card swiping operation into the machine learning model, so that a recognition model for the card swiping operation can be generated.
  • when the terminal device is in use, it imports each parameter value in the motion parameter queue into the above-mentioned card swiping operation recognition model to determine whether there is a card swiping action; if so, the electronic card selection process is executed; otherwise, the terminal continues to collect motion sensor values and update the above-mentioned motion parameter queue. It should be noted that each time the terminal device performs a card swiping operation, it can update the above-mentioned card swiping operation recognition model, thereby improving the accuracy of recognition; a sketch of the queue handling follows.
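  • a minimal sketch of the first-in-first-out motion parameter queue and the model check described above; the window length, the predict() interface of the recognition model, and the start_card_selection() hook are assumptions of this sketch, not the patent's API.

```python
from collections import deque

WINDOW = 128                               # number of recent sensor samples kept (illustrative)
motion_queue = deque(maxlen=WINDOW)        # first-in-first-out motion parameter queue

def start_card_selection():
    print("electronic card selection process triggered")   # placeholder hook

def on_sensor_sample(sample, swipe_model):
    """Append the newest motion-sensor sample and ask a previously trained
    recognition model whether the recent trajectory looks like a card swipe."""
    motion_queue.append(sample)
    if len(motion_queue) < WINDOW:
        return False                       # not enough history yet
    is_swipe = swipe_model.predict(list(motion_queue))   # assumed model interface
    if is_swipe:
        start_card_selection()
    return is_swipe
```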
  • a candidate electronic card matching the scene type is selected as the target electronic card.
  • the user can bind multiple electronic cards to the terminal device, and each bound electronic card is the aforementioned candidate electronic card.
  • the method of binding the electronic card may be: the user can input the identification of the physical card into the terminal device and, through the electronic card control of the terminal device, send the authorization information of the physical card, for example a bound mobile phone number or user identity information, to the cloud server of the physical card owner; after the cloud server verifies that the authorization information is legal, it can feed the corresponding authorization code back to the terminal device, and the terminal device associates the authorization code with the electronic card corresponding to the physical card generated in the terminal device, so that an electronic card corresponding to the physical card is created in the terminal device.
  • the terminal device can configure associated scene types for different candidate electronic cards. After the scene type corresponding to the scene information is determined, it can be judged whether the current scene type matches the scene type of each candidate electronic card, that is, whether the scene type associated with the electronic card is consistent with the current scene type; the candidate electronic card whose scene type is the same is used as the target electronic card, and the subsequent swiping operation is performed.
  • FIG. 5 shows a schematic diagram of selecting an electronic card provided by an embodiment of the present application.
  • the terminal equipment is bound with four electronic cards: bank card A, bank card B, bus card, and access card.
  • the terminal device determines that the current scene type is the bank type by collecting the current scene information, and the scene types associated with bank card A and bank card B are both the bank type, that is, the scene types of the above two electronic cards are the same as the current scene type. In this case, the priorities corresponding to the above two bank cards can be obtained; if the priority of bank card A is higher than that of bank card B, bank card A can be selected as the target electronic card.
  • the matching degree of multiple candidate electronic cards with the same scene type can also be calculated according to the current card swiping time and location, and the candidate electronic card with the highest matching degree is selected as the target electronic card.
  • different electronic cards have corresponding usage habits. For example, a user uses electronic card A for swiping operations in the morning, and electronic card B for swiping operations in the afternoon.
  • the terminal device can use the historical times and historical locations in each electronic card's card swiping history to calculate its matching degree with the current scene, and select the candidate electronic card with the highest matching degree as the target electronic card, for example as in the sketch below.
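  • a hedged sketch of such a habit-based matching degree; the exponential scoring, the scale constants, and the rough degree-to-metre conversion are assumptions of this sketch, not the patent's formula.

```python
import math

def habit_score(now, here, history, time_scale=3.0, dist_scale=500.0):
    """Score one candidate card by how close the current time of day and location
    are to that card's historical swipes.

    history: list of (datetime, (lat, lon)) tuples of past swipes for this card."""
    best = 0.0
    for t, (lat, lon) in history:
        dt_hours = abs(now.hour + now.minute / 60 - (t.hour + t.minute / 60))
        dt_hours = min(dt_hours, 24 - dt_hours)                       # wrap around midnight
        dist_m = math.hypot(lat - here[0], lon - here[1]) * 111_000   # rough degrees -> metres
        best = max(best, math.exp(-dt_hours / time_scale) * math.exp(-dist_m / dist_scale))
    return best

# The candidate electronic card with the highest habit_score would be chosen as the target card.
```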
  • the method for selecting an electronic card collects current scene information through the terminal device when the electronic card needs to be invoked for authentication, payment, or other operations, determines the scene type according to the scene objects contained in the scene information, and selects the electronic card associated with the scene type from all candidate electronic cards as the target electronic card, which realizes the purpose of automatically selecting the electronic card and improves the operation efficiency and response speed of the electronic card.
  • FIG. 6 shows a specific implementation flowchart of an electronic card selection method S301 provided by the second embodiment of the present application.
  • S301 in an electronic card selection method provided in this embodiment includes: S601 to S603, which are detailed as follows:
  • the acquiring current scene information and determining the scene type according to the scene information includes:
  • in S601, a scene image fed back by the smart glasses is received.
  • the terminal device establishes a communication connection with external smart glasses and collects scene images of the current scene through the camera module built into the smart glasses. Since the smart glasses are worn near the user's eyes, compared with capturing scene images through the camera module built into the terminal device, the line of sight is clearer, the consistency with the scene viewed by the user is higher, and the chance that the main scene subjects are blocked by other objects during shooting is reduced, which improves the accuracy of scene type recognition. In some scenes, such as traffic scenes, when users use an electronic card to take a bus, they will often take the mobile phone out of a clothes or trouser pocket and then directly perform the card swiping operation; on the moving path from the pocket to the card machine, the camera module built into the terminal device is likely to be unable to collect a scene image containing the card reader.
  • FIG. 7 shows a schematic diagram of a shooting scene range of a terminal device during a card swiping process according to an embodiment of the present application.
  • the initial position of the terminal device is in the pocket.
  • when the card needs to be swiped, the terminal device needs to be taken out of the pocket and brought close to the card swiping machine, that is, the target location is near the card swiping machine.
  • the captured area is shown as the fan-shaped area in Fig. 7. It can be seen that only when the terminal device is close to the card swiping machine will the captured scene image contain the card swiping device, and even then only a partial image of the swiping device, so the recognition accuracy is low.
  • FIG. 8 shows a schematic diagram of a photographing range of smart glasses provided by another embodiment of the present application during a card swiping process.
  • since the smart glasses are worn over the user's eyes, their photographable range is basically the same as the visual range of the human eye, and the user swipes the card while facing forward, that is, when the user approaches the card swiping device, the card swiping device can be continuously captured by the smart glasses, so that compared with using the built-in camera module of the terminal device, collecting environmental images through the smart glasses gives a better recognition effect.
  • when the terminal device detects that the preset scene information collection conditions are met, it can send a collection instruction to the smart glasses; the smart glasses can perform the image collection operation after receiving the collection instruction and feed the collected images back to the terminal device, so that the terminal device can obtain the above-mentioned scene image.
  • the aforementioned scene information collection condition may be: when the terminal device detects that the current scene contains a near field communication signal, it recognizes that the scene information collection condition is satisfied; or, the terminal device records multiple card swiping operations based on historical card swiping operations Location, when it is detected that the current location reaches the above-mentioned stored card swiping location, it is recognized that the scene information collection condition is satisfied.
  • the smart glasses can acquire the current scene image in a preset collection period, and feed the collected scene image back to the terminal device, and the terminal device can recognize the subject in the scene image, It is determined whether the photographing subject contains a target subject related to the card swiping operation, and if it exists, the operation of S603 is executed.
  • wireless communication can be established between the terminal device and the smart glasses.
  • the smart glasses have built-in wireless communication modules, such as a WiFi module, a Bluetooth module, and a ZigBee module.
  • the terminal device can also have a corresponding wireless communication module built in. The terminal device searches for the wireless network of the smart glasses and joins that wireless network, thereby establishing a wireless communication link with the smart glasses.
  • the terminal device can analyze the shooting subject contained in the scene image through an image analysis algorithm.
  • the method of determining the shooting subject can specifically be as follows: by identifying the contour lines contained in the scene image, the scene image is divided into multiple subject areas, and the subject type of the shooting subject corresponding to each subject area is determined according to the contour shape and color characteristics of that subject area.
  • the terminal device may be configured with a list of subject types, and a corresponding subject model is associated with each subject type.
  • the terminal device may match each subject area with each subject model, and select the subject type of the subject model with the highest matching degree as the photographing subject corresponding to the subject area.
  • the terminal device may perform a preprocessing operation on the scene image before analyzing the scene image, so that the accuracy of the subject identification can be improved.
  • the preprocessing operation may be as follows: the terminal device performs grayscale processing on the scene image, that is, converts the color image into a monochrome image, adjusts the monochrome image through a filter according to the actual light intensity of the shooting scene, for example increasing the pixel values of highlight areas and reducing the pixel values of shadow areas, and uses a contour recognition algorithm to determine the contour lines contained in the scene image and deepen the contour areas, so as to facilitate separating each subject and determining the contour features of each shooting subject.
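  • a minimal illustrative sketch of this preprocessing and contour-based segmentation using OpenCV (the patent does not name a library; the thresholds and the uniform contrast adjustment are simplifications of the per-area adjustment described above).

```python
import cv2

def find_subject_regions(scene_bgr):
    """Grayscale the scene image, apply a simple contrast adjustment, emphasise
    contour lines and return candidate subject regions as bounding boxes."""
    gray = cv2.cvtColor(scene_bgr, cv2.COLOR_BGR2GRAY)        # colour -> monochrome
    gray = cv2.convertScaleAbs(gray, alpha=1.3, beta=-20)     # uniform contrast tweak
    edges = cv2.Canny(gray, 50, 150)                          # contour-line detection
    edges = cv2.dilate(edges, None, iterations=2)             # "deepen" the contour areas
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # keep only reasonably large regions as candidate shooting subjects
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
```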
  • the scene type is determined according to all the photographing subjects.
  • the terminal device may calculate the matching factors of each candidate type according to the identified photographing subject, and superimpose the matching factors of all photographing subjects to determine the matching degree of each candidate type.
  • the candidate scene with the highest matching degree is selected as the scene type corresponding to the scene image.
  • the terminal device may determine the weight value corresponding to each photographic subject according to the area occupied by each photographic subject in the scene image, where the larger the area occupied by the subject in the scene image, the higher the corresponding weight value, and conversely, the smaller the occupied area, the lower the corresponding weight value; the matching factor between each subject and each candidate type is then weighted by the subject's weight value and superimposed to determine the matching degree of each candidate type.
  • for example, the subjects captured in a scene image include cash machines, screen doors, bank signs, and persons, and these subjects occupy 25%, 30%, and 8% of the entire scene image.
  • the terminal device can convert the above-mentioned area ratios into the corresponding weight values: 2, 2, 1, and 1.5, respectively; a worked sketch follows.
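  • a hedged, worked sketch of the weighted superposition with these weight values; the per-subject matching factors below are invented for illustration only.

```python
# Weight values follow the example above; the matching factors are hypothetical.
subject_weights = {"cash machine": 2, "screen door": 2, "bank sign": 1, "person": 1.5}

matching_factor = {
    "cash machine": {"bank": 0.9, "transit": 0.1, "access": 0.1},
    "screen door":  {"bank": 0.4, "transit": 0.6, "access": 0.5},
    "bank sign":    {"bank": 1.0, "transit": 0.0, "access": 0.0},
    "person":       {"bank": 0.3, "transit": 0.3, "access": 0.3},
}

match_degree = {
    scene: sum(subject_weights[s] * matching_factor[s][scene] for s in subject_weights)
    for scene in ("bank", "transit", "access")
}
# match_degree is roughly {'bank': 4.05, 'transit': 1.85, 'access': 1.65}
best_scene = max(match_degree, key=match_degree.get)   # candidate with the highest matching degree
```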
  • the scene image is collected through the smart glasses, and the subjects contained in the scene image are analyzed to determine the current scene type, which realizes automatic recognition of the scene type and further improves the accuracy of scene type recognition, thereby improving the accuracy of electronic card selection.
  • FIG. 9 shows a specific implementation flowchart of an electronic card selection method S301 provided by the third embodiment of the present application.
  • S301 in an electronic card selection method provided in this embodiment includes: S901 to S903, and the details are as follows:
  • the acquiring current scene information and determining the scene type according to the scene information includes:
  • the terminal device can collect the ambient sound of the current scene through a built-in or external microphone module. Specifically, when the terminal device detects that a preset scene information collection condition is met, it can send a scene information collection instruction to the microphone module.
  • the process of triggering the scene type recognition operation based on the scene information collection condition can refer to the related description of the previous embodiment, which is not repeated here.
  • the user wears a headset control
  • the headset control includes a first microphone module
  • a communication link is established between the terminal device and the headset control.
  • the terminal device can control the first microphone module of the headset control and its own built-in second microphone module to collect ambient sound, and determine the ambient sound of the current scene based on the ambient sounds collected by the two microphone modules.
  • the method for determining the environmental sound of the current scene based on the two environmental sounds may be: the terminal device detects the first signal-to-noise ratio of the first environmental sound collected by the first microphone module, determines the second signal-to-noise ratio of the second environmental sound collected by the second microphone module, compares the magnitudes of the two signal-to-noise ratios, and selects the environmental sound with the larger signal-to-noise ratio as the environmental sound of the current scene.
  • when the signal-to-noise ratio is larger, the influence of the noise signal when collecting the environmental sound is smaller, so the accuracy is higher in the subsequent process of determining the sounding subject; a small sketch of this selection follows.
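  • a small sketch of choosing the cleaner recording by signal-to-noise ratio; how the signal and noise power are estimated is outside this sketch, and the dB formula is a standard convention rather than a quote from the patent.

```python
import numpy as np

def pick_cleaner_ambient(signal_a, noise_a, signal_b, noise_b):
    """Keep whichever microphone's recording has the larger signal-to-noise ratio.

    signal_*/noise_* are power estimates for the headset (first) and
    terminal (second) microphones."""
    snr_a = 10 * np.log10(signal_a / noise_a)   # SNR in dB, first microphone
    snr_b = 10 * np.log10(signal_b / noise_b)   # SNR in dB, second microphone
    return ("first microphone" if snr_a >= snr_b else "second microphone"), max(snr_a, snr_b)
```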
  • the frequency domain spectrum of the environmental sound is acquired, and the sounding subject contained in the current scene is determined according to the frequency value contained in the frequency domain spectrum.
  • the terminal device can perform a Fourier transform on the environmental sound, convert the time domain signal into a frequency domain signal, obtain the frequency domain spectrum corresponding to the environmental sound, and determine the sounding subjects contained in the scene based on the frequency values contained in the frequency domain spectrum and the frequency domain amplitude corresponding to each frequency value. Since different objects have fixed sounding frequencies, the terminal device can determine different sounding subjects through different frequency values. For example, the sound frequency of the human body is 8-10KHz, while the sound frequency of a buzzer is fixed at 2KHz. Therefore, by converting the environmental sound into a frequency domain signal, the sounding subject corresponding to the environmental sound can be determined.
  • the terminal device can determine the weight value corresponding to each sounding subject, where the method of determining the weight value may be: the terminal device recognizes the amplitude of each sounding subject in the frequency domain spectrum and determines the weight value of each sounding subject based on the amplitude, for example as in the sketch below.
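  • a hedged sketch of this frequency-domain step; the frequency bands below simply mirror the examples quoted above, and the presence test is an assumption of the sketch.

```python
import numpy as np

# Characteristic sounding-frequency bands (Hz) per subject, mirroring the examples above.
SUBJECT_BANDS = {"buzzer": (1900, 2100), "human body": (8000, 10000)}

def sounding_subject_weights(ambient, sample_rate):
    """Fourier-transform the ambient sound and report, for each known subject,
    the peak amplitude inside its characteristic band; that amplitude can then
    serve as the subject's weight value."""
    spectrum = np.abs(np.fft.rfft(ambient))
    freqs = np.fft.rfftfreq(len(ambient), d=1.0 / sample_rate)
    weights = {}
    for subject, (lo, hi) in SUBJECT_BANDS.items():
        band = spectrum[(freqs >= lo) & (freqs <= hi)]
        if band.size and band.max() > 0.1 * spectrum.max():   # crude presence test
            weights[subject] = float(band.max())
    return weights
```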
  • The scene type is then determined according to all of the sounding subjects.
  • Specifically, the terminal device may calculate, for each identified sounding subject, a matching factor with each candidate scene type, and superimpose the matching factors of all sounding subjects to obtain the matching degree of each candidate type.
  • the candidate scene with the highest matching degree is selected as the scene type corresponding to the environmental sound.
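  • As a non-limiting illustration, the Python sketch below superimposes the matching factors contributed by the detected sounding subjects and selects the candidate type with the highest matching degree; the matching-factor table is an assumed configuration used only for this example.

```python
# Assumed per-subject matching factors for each candidate scene type; in practice
# this table would come from the terminal device's configuration.
MATCHING_FACTORS = {
    "bus_station": {"human_voice": 0.4, "buzzer": 0.6},
    "office": {"human_voice": 0.7, "buzzer": 0.1},
}

def infer_scene_type(subject_weights: dict) -> str:
    """Superimpose (sum) the matching factors contributed by every detected
    sounding subject and return the candidate type with the highest matching degree."""
    scores = {
        scene: sum(factors.get(subject, 0.0) * weight
                   for subject, weight in subject_weights.items())
        for scene, factors in MATCHING_FACTORS.items()
    }
    return max(scores, key=scores.get)

print(infer_scene_type({"human_voice": 0.3, "buzzer": 0.7}))  # -> "bus_station"
```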
  • In this embodiment, the ambient sound is collected by the microphone and the sounding subjects contained in it are analyzed to determine the current scene type, which realizes automatic recognition of the scene type and improves the accuracy of electronic card selection.
  • FIG. 10 shows a specific implementation flowchart of an electronic card selection method S301 provided by the fourth embodiment of the present application.
  • S301 in an electronic card selection method provided in this embodiment includes: S1001 to S1003, which are detailed as follows:
  • the acquiring current scene information and determining the scene type according to the scene information includes:
  • the terminal device has a built-in positioning module, through which the current positioning coordinates of the terminal device can be determined, and the location information associated with the positioning coordinates can be obtained through a third-party map server or map application.
  • For example, if the current positioning coordinates obtained by the terminal device are (113.300562, 23.143292), they can be input into the corresponding map application to obtain the location information associated with those coordinates, for example: Bank A in District B of City A; the current scene type can then be determined from the textual content of the location information.
  • the terminal device can extract scene keywords from the location information through a semantic recognition algorithm.
  • Specifically, the terminal device may delete the region-related characters, retain the scene-related characters, and use the scene-related characters as the aforementioned scene keywords.
  • For example, if the location information determined above is: Bank G, No. XX, Street C, Area A, City B, a semantic recognition algorithm can determine that "No. XX, Street C, Area A, City B" is region-related text and delete it.
  • The remaining scene-related characters, namely "Bank G", are used as the scene keyword, as in the sketch below.
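  • A minimal Python sketch of this keyword extraction is shown below; the comma-splitting rule and the list of region markers are simplifying assumptions standing in for the semantic recognition algorithm mentioned above, which is not specified here.

```python
# Assumed markers of region-related text; a real implementation would rely on the
# semantic recognition algorithm mentioned above rather than this simple rule.
REGION_MARKERS = ("city", "area", "district", "street", "road", "no.")

def extract_scene_keyword(location_info: str) -> str:
    """Drop region-related fragments of the location text and keep the
    scene-related remainder, e.g. 'Bank G, No. XX, Street C, ...' -> 'Bank G'."""
    fragments = [fragment.strip() for fragment in location_info.split(",")]
    scene_fragments = [f for f in fragments
                       if not any(marker in f.lower() for marker in REGION_MARKERS)]
    return " ".join(scene_fragments)

print(extract_scene_keyword("Bank G, No. XX, Street C, Area A, City B"))  # -> "Bank G"
```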
  • the confidence probability of each candidate scene is calculated according to the confidence of all candidate scenes associated with the scene keywords.
  • Specifically, the terminal device may calculate the confidence level between each scene keyword and each candidate scene, and then calculate the confidence probability between the location information and each candidate scene from the confidence levels of all the scene keywords. For example, if the location information contains scene keyword A and scene keyword B, whose confidence levels with the first candidate scene are 80% and 60% respectively, the terminal device can superimpose the two confidence levels, or calculate the mean of the two confidence levels, and use the result as the confidence probability of the first candidate scene.
  • the terminal device may be configured with corresponding keyword lists for different candidate scenes.
  • In this case, the terminal device may determine whether a scene keyword appears in the keyword list of a candidate scene and determine the above confidence level based on the result. Specifically, if the scene keyword is in the keyword list of the candidate scene, the confidence between the scene keyword and that candidate scene is 100%; otherwise, the terminal device determines whether any characters of the scene keyword appear in the keyword list of the candidate scene and, based on the number of matching characters, determines the confidence level with the candidate scene, as in the sketch below.
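  • As a non-limiting illustration, the Python sketch below computes a per-keyword confidence from the keyword list (using character overlap when there is no exact match), aggregates the confidences into a confidence probability using the mean, and selects the candidate scene with the highest probability; the candidate scenes and keyword lists are assumed example data.

```python
def keyword_confidence(keyword: str, keyword_list: list) -> float:
    """Confidence between one scene keyword and one candidate scene: 100% when the
    keyword is in the candidate's keyword list, otherwise a value proportional to
    how many characters of the keyword appear in that list (an assumed
    interpretation of the character-based rule described above)."""
    if keyword in keyword_list:
        return 1.0
    listed_chars = set("".join(keyword_list))
    matched = sum(1 for char in keyword if char in listed_chars)
    return matched / max(len(keyword), 1)

def confidence_probability(keywords: list, keyword_list: list) -> float:
    """Aggregate the per-keyword confidences into the candidate scene's confidence
    probability; the mean is used here, the text also allows a simple sum."""
    scores = [keyword_confidence(keyword, keyword_list) for keyword in keywords]
    return sum(scores) / len(scores) if scores else 0.0

# Assumed candidate scenes and their keyword lists, for illustration only.
CANDIDATE_KEYWORD_LISTS = {"bank": ["Bank", "ATM"], "bus_station": ["Bus", "Station"]}
scene_keywords = ["Bank G"]
best = max(CANDIDATE_KEYWORD_LISTS,
           key=lambda scene: confidence_probability(scene_keywords,
                                                    CANDIDATE_KEYWORD_LISTS[scene]))
print(best)  # -> "bank"
```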
  • the candidate scene with the highest confidence probability is selected as the scene type corresponding to the location information.
  • Specifically, the terminal device may select the candidate scene with the highest confidence probability as the scene type matching the location information.
  • In this embodiment, scene keywords are extracted from the location information and the confidence probability of each candidate scene is determined from those keywords, thereby determining the current scene type; this realizes automatic recognition of the scene type and improves the accuracy of electronic card selection.
  • FIG. 11 shows a specific implementation flowchart of S302 of an electronic card selection method provided by the fifth embodiment of the present application. Referring to FIG. 11, with respect to any one of the embodiments described in FIG. 3, FIG. 6, FIG. 9 and FIG. 10,
  • the selecting the candidate electronic card matching the scene type as the target electronic card includes:
  • Specifically, the matching degree between each candidate electronic card stored in the terminal device and the scene type can be calculated separately.
  • the terminal device may store standard scenes of each candidate electronic card, and each standard scene may correspond to at least one scene tag, and a corresponding tag tree is established based on the range of the scene tag.
  • For example, a candidate electronic card may be associated with the following scene tags: "District Bus", "City Bus", "Bus" and "Traffic". From the scope of each scene tag it can be determined that "Bus" covers several regional bus types such as "District Bus" and "City Bus"; that is, the scope of "Bus" is larger than that of "District Bus", so "Bus" is the parent node of "District Bus", and so on, so that a tag tree can be constructed.
  • the terminal device can configure the corresponding matching degree according to the size of the range, where the smaller the range, the higher the corresponding matching degree.
  • the terminal device can determine whether the current scene type matches any scene tag of the candidate electronic card, and use the matching degree associated with the matched scene tag as the matching degree between the scene type and the candidate electronic card.
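  • The Python sketch below illustrates one possible tag-tree lookup of this kind: it walks from the recognized scene type up to broader parent tags until a tag of the candidate card is hit and returns the matching degree configured for that tag; the tags, parent links and degree values are assumptions taken from the example above.

```python
# Tag tree built from the example above: a parent tag covers a broader scope, and
# the narrower a tag's scope, the higher its assumed matching degree.
TAG_PARENT = {"District Bus": "Bus", "City Bus": "Bus", "Bus": "Traffic"}
TAG_MATCH_DEGREE = {"District Bus": 0.9, "City Bus": 0.9, "Bus": 0.7, "Traffic": 0.5}

def card_matching_degree(scene_type: str, card_tags: list) -> float:
    """Walk up the tag tree from the recognized scene type until one of the candidate
    card's scene tags is hit, and return the matching degree configured for that tag."""
    tag = scene_type
    while tag is not None:
        if tag in card_tags:
            return TAG_MATCH_DEGREE.get(tag, 0.0)
        tag = TAG_PARENT.get(tag)
    return 0.0

# A card tagged only with the broad "Bus" tag still matches a "District Bus" scene,
# but with the lower degree configured for "Bus".
print(card_matching_degree("District Bus", ["Bus"]))  # -> 0.7
```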
  • the candidate electronic card with the highest matching degree is selected as the target electronic card.
  • Specifically, the terminal device can select the candidate electronic card with the highest matching degree as the target electronic card, realizing automatic selection of the electronic card and improving the accuracy with which the target electronic card is selected.
  • S1103 and S1104 may also be included:
  • a card swiping authentication operation is performed through the target electronic card and the card swiping device.
  • After the terminal device determines the target electronic card, it can send the card information of the target electronic card to the card-swiping device through the near-field communication link established with the card-swiping device, so as to perform card-swiping authentication on the target electronic card and subsequent operations such as authentication, authorization and deduction, where the subsequent operations depend on the type of operation initiated by the user.
  • For example, if the target electronic card is a transportation electronic card, it can be used to pay the transportation fare; if the target electronic card is an access-control electronic card, door-opening authorization can be performed through it. If it is detected that the card-swiping authentication fails, the operation of S1104 is executed.
  • the terminal device can select the candidate electronic card with the second highest matching degree value as the target electronic card, and re-execute the card swipe authentication operation until the swipe authentication succeeds.
  • the candidate electronic card with the second highest matching degree is automatically selected as the target electronic card, thereby achieving the purpose of automatically replacing the electronic card and reducing user operations.
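  • A minimal Python sketch of this fallback behaviour is given below; the callback name try_swipe and the example card identifiers are placeholders for the actual NFC card-swiping authentication, which is not specified here.

```python
from typing import Callable, List, Optional

def authenticate_best_card(cards_by_degree: List[str],
                           try_swipe: Callable[[str], bool]) -> Optional[str]:
    """Try the candidate cards in descending order of matching degree; if swipe
    authentication of the current target card fails, fall back to the card with
    the next-highest degree, mirroring S1103/S1104 above."""
    for card in cards_by_degree:            # list is assumed sorted, highest degree first
        if try_swipe(card):                 # try_swipe: stand-in for the NFC authentication
            return card                     # authentication succeeded with this card
    return None                             # no candidate card was accepted

# Usage with a stand-in reader callback that only accepts the transit card.
reader = lambda card: card == "transit_card"
print(authenticate_best_card(["bank_card", "transit_card", "access_card"], reader))
```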
  • FIG. 12 shows a specific implementation flowchart of S302 of an electronic card selection method provided by the sixth embodiment of the present application. Referring to FIG. 12, with respect to any one of the embodiments described in FIG. 3, FIG. 6, FIG. 9 and FIG. 10,
  • the selecting the candidate electronic card matching the scene type as the target electronic card includes:
  • When storing each candidate electronic card, the terminal device can determine the associated standard scene according to user settings or based on the electronic card type and establish a standard scene index table; after determining the scene type of the current scene, the standard scenes pre-associated with each candidate electronic card are obtained based on the above-mentioned standard scene index table.
  • the scene type is matched with each of the standard scenes, and the target electronic card is determined according to the matching result.
  • Specifically, the terminal device can match the currently recognized scene type against each standard scene, determine whether there is a candidate electronic card whose standard scene is consistent with the current scene type, and, if so, identify that candidate electronic card as the target electronic card, as in the sketch below.
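  • The following Python sketch shows one possible form of the standard-scene index table and the lookup it enables; the card names and standard scenes are assumed example data, not part of the original disclosure.

```python
from typing import Optional

# Assumed standard-scene index table built when each candidate card was stored.
STANDARD_SCENE_INDEX = {
    "transit_card": ["Bus", "Metro"],
    "access_card": ["Office Entrance"],
    "bank_card": ["Bank"],
}

def select_by_standard_scene(scene_type: str) -> Optional[str]:
    """Return the first candidate card whose pre-associated standard scenes contain
    the recognized scene type, or None when no card matches."""
    for card, standard_scenes in STANDARD_SCENE_INDEX.items():
        if scene_type in standard_scenes:
            return card
    return None

print(select_by_standard_scene("Bank"))  # -> "bank_card"
```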
  • In this way, automatic selection of the target electronic card is realized, which reduces the user's operational difficulty and improves card-swiping efficiency.
  • FIG. 13 shows a schematic structural diagram of an electronic card selection system provided by an embodiment of the present application.
  • In this embodiment, the electronic card selection system includes a mobile terminal 131, smart glasses 132, an external microphone 133 and a card-swiping device 134, wherein communication connections are established between the mobile terminal 131 and the smart glasses 132 and between the mobile terminal 131 and the external microphone 133, and the mobile terminal 131 and the card-swiping device 134 establish a communication connection through the near-field communication module.
  • the mobile terminal 131 has a camera module 1311, a positioning module 1312, and a built-in microphone module 1313.
  • the mobile terminal 131 can collect different types of scene information through the above multiple modules.
  • The mobile terminal 131 can call any one of the modules or external devices to collect one kind of scene information, or collect multiple kinds of scene information through two or more modules and external devices, determine the scene type based on the collected scene information, and select the target electronic card based on the scene type, as in the sketch below.
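  • As a non-limiting illustration, the Python sketch below fuses scene-type guesses from several collection sources by simple majority voting; the voting rule and the stand-in collector callbacks are assumptions made only to show how multiple kinds of scene information could be combined, and the text itself does not prescribe a particular fusion rule.

```python
from typing import Callable, Dict

def determine_scene_type(sources: Dict[str, Callable[[], str]]) -> str:
    """Collect a scene-type guess from every available module or external device and
    fuse the guesses by simple majority voting (assumed fusion rule)."""
    votes = {}
    for collect in sources.values():
        scene = collect()                       # each callback returns a scene type
        if scene:
            votes[scene] = votes.get(scene, 0) + 1
    return max(votes, key=votes.get) if votes else "unknown"

# Stand-in collectors for the camera module 1311, positioning module 1312 and
# built-in microphone module 1313 of the mobile terminal 131.
sources = {
    "camera": lambda: "Bank",
    "positioning": lambda: "Bank",
    "microphone": lambda: "Bus",
}
print(determine_scene_type(sources))  # -> "Bank"
```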
  • FIG. 14 shows a structural block diagram of the electronic card selection device provided in the embodiment of the present application. For ease of description, only the parts related to the embodiment of the present application are shown.
  • the selection device of the electronic card includes:
  • the scene type determining unit 141 is configured to obtain current scene information, and determine the scene type according to the scene information;
  • the electronic card selection unit 142 is configured to select a candidate electronic card matching the scene type as the target electronic card.
  • the scene type determining unit 141 includes:
  • the scene image acquisition unit is used to receive the scene image fed back by the smart glasses;
  • a scene image analysis unit for identifying the shooting subject contained in the scene image
  • the photographing subject analysis unit is configured to determine the scene type according to all the photographing subjects.
  • the scene type determining unit 141 includes:
  • the environmental sound collection unit is used to collect the environmental sound in the current scene
  • the utterance subject determining unit is configured to obtain the frequency domain spectrum of the environmental sound, and determine the utterance subject contained in the current scene according to the frequency value contained in the frequency domain spectrum;
  • the utterance subject analysis unit is configured to determine the scene type according to all the utterance subjects.
  • the scene type determining unit 141 includes:
  • the scene keyword extraction unit is used to obtain the current location information and extract the scene keywords contained in the location information
  • a confidence probability calculation unit configured to calculate the confidence probability of each candidate scene according to the confidence of all candidate scenes associated with the scene keywords
  • the scene type selection unit is configured to select the candidate scene with the highest confidence probability as the scene type corresponding to the location information.
  • the electronic card selection unit 142 includes:
  • a matching degree calculation unit configured to calculate the matching degree between each candidate electronic card and the scene type
  • the matching degree selecting unit is configured to select the candidate electronic card with the highest matching degree as the target electronic card.
  • the electronic card selection device further includes:
  • a card swiping authentication unit configured to perform a card swiping authentication operation through the target electronic card and the card swiping device;
  • the authentication failure response unit is configured to, if the card-swiping authentication fails, select the candidate electronic card with the highest matching degree from all the candidate electronic cards except the target electronic card as the new target electronic card, and return to perform the card-swiping operation through the target electronic card and the card-swiping device until the card-swiping authentication succeeds.
  • the electronic card selection unit 142 includes:
  • a standard scene acquiring unit configured to acquire the standard scene of each candidate electronic card
  • the standard scene matching unit is configured to match the scene type with each of the standard scenes, and determine the target electronic card according to the matching result.
  • The electronic card selection device provided by the embodiments of the present application acquires the current scene information, determines the scene type according to the scene objects contained in the scene information, and selects the electronic card associated with that scene type from all candidate electronic cards as the target electronic card, thereby realizing automatic selection of the electronic card and improving the operation efficiency and response speed of the electronic card.
  • FIG. 15 is a schematic structural diagram of a terminal device provided by an embodiment of this application.
  • The terminal device 15 of this embodiment includes: at least one processor 150 (only one is shown in FIG. 15), a memory 151, and a computer program 152 stored in the memory 151 and capable of running on the at least one processor 150. When the processor 150 executes the computer program 152, the steps in any of the above-mentioned electronic card selection method embodiments are implemented.
  • the terminal device 15 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the terminal device may include, but is not limited to, a processor 150 and a memory 151.
  • FIG. 15 is only an example of the terminal device 15 and does not constitute a limitation on the terminal device 15; it may include more or fewer components than shown in the figure, or combine certain components, or have different components, and may, for example, also include input and output devices, network access devices, and so on.
  • The processor 150 may be a central processing unit (Central Processing Unit, CPU), and may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the memory 151 may be an internal storage unit of the terminal device 15 in some embodiments, such as a hard disk or a memory of the terminal device 15. In other embodiments, the memory 151 may also be an external storage device of the terminal device 15, for example, a plug-in hard disk equipped on the terminal device 15, a smart media card (SMC), a secure digital (Secure Digital, SD) card, Flash Card, etc. Further, the memory 151 may also include both an internal storage unit of the terminal device 15 and an external storage device.
  • the memory 151 is used to store an operating system, an application program, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 151 can also be used to temporarily store data that has been output or will be output.
  • An embodiment of the present application also provides a network device, which includes: at least one processor, a memory, and a computer program stored in the memory and running on the at least one processor, and the processor executes The computer program implements the steps in any of the foregoing method embodiments.
  • the embodiments of the present application also provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in each of the foregoing method embodiments can be realized.
  • The embodiments of the present application also provide a computer program product; when the computer program product runs on a mobile terminal, the mobile terminal can implement the steps in the foregoing method embodiments.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the computer program can be stored in a computer-readable storage medium. When executed by the processor, the steps of the foregoing method embodiments can be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file, or some intermediate forms.
  • The computer-readable medium may at least include: any entity or device capable of carrying the computer program code to the photographing device/terminal device, a recording medium, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunication signal, and a software distribution medium, for example, a USB flash drive, a mobile hard disk, a floppy disk, or an optical disc. In some jurisdictions, the computer-readable medium cannot be an electrical carrier signal or a telecommunication signal.
  • the disclosed apparatus/network equipment and method may be implemented in other ways.
  • the device/network device embodiments described above are only illustrative.
  • The division of the modules or units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Computer Security & Cryptography (AREA)
  • Finance (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)

Abstract

An electronic card selection method and apparatus, a terminal, and a storage medium. The method comprises: obtaining current scene information, and determining a scene type according to the scene information (S301); and selecting a candidate electronic card matching the scene type as a target electronic card (S302). According to the method, when there is a need to call an electronic card for an operation such as authentication and payment, current scene information is acquired by a terminal device, the scene type is determined according to scene objects comprised in the scene information, and an electronic card associated with the scene type is selected from all candidate electronic cards as a target electronic card; thus, automatic electronic card selection is implemented, and electronic card operation efficiency and response speed are improved.

Description

一种电子卡的选取方法、装置、终端以及存储介质Method, device, terminal and storage medium for selecting electronic card
本申请要求于2020年03月17日提交国家知识产权局、申请号为202010187020.X、申请名称为“一种电子卡的选取方法、装置、终端以及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。This application requires the priority of a Chinese patent application filed with the State Intellectual Property Office, the application number is 202010187020.X, and the application title is "A method, device, terminal, and storage medium for selecting an electronic card" on March 17, 2020. The entire content is incorporated into this application by reference.
技术领域Technical field
本申请涉及信息处理技术领域,尤其涉及一种电子卡的选取方法、装置、终端以及存储介质。This application relates to the field of information processing technology, and in particular to an electronic card selection method, device, terminal, and storage medium.
背景技术Background technique
在日常生活中,用户可以通过实体卡进行支付、认证等操作,但随着服务种类的不断增加,对应的实体卡的数量也随之增加,而得益于电子技术的发展,实体卡可以转换为电子卡,并与智能终端进行绑定,从而执行相关的支付、认证操作。然而现有的电子卡技术,用户在进行认证、支付等操作时,需要手动选取当前操作关联的电子卡,从而增加了操作难度,操作效率较低。In daily life, users can use physical cards for payment, authentication and other operations, but as the types of services continue to increase, the number of corresponding physical cards also increases, and thanks to the development of electronic technology, physical cards can be converted It is an electronic card and is bound to a smart terminal to perform related payment and authentication operations. However, with the existing electronic card technology, users need to manually select the electronic card associated with the current operation when performing operations such as authentication and payment, which increases the difficulty of operation and lowers the operation efficiency.
发明内容Summary of the invention
本申请实施例提供了一种电子卡的选取方法、装置、终端以及存储介质,可以解决现有的电子卡技术,需要手动选取当前操作关联的电子卡,从而增加了操作难度,操作效率较低的问题。The embodiments of the application provide an electronic card selection method, device, terminal, and storage medium, which can solve the existing electronic card technology and need to manually select the electronic card associated with the current operation, thereby increasing the operation difficulty and lower operation efficiency. The problem.
第一方面,本申请实施例提供了一种电子卡的选取方法,包括:In the first aspect, an embodiment of the present application provides a method for selecting an electronic card, including:
获取当前的场景信息,并根据所述场景信息确定场景类型;Acquiring current scene information, and determining a scene type according to the scene information;
选取与所述场景类型匹配的候选电子卡作为目标电子卡。The candidate electronic card matching the scene type is selected as the target electronic card.
在第一方面的一种可能的实现方式中,所述获取当前的场景信息,并根据所述场景信息确定场景类型,包括:In a possible implementation manner of the first aspect, the acquiring current scene information and determining the scene type according to the scene information includes:
接收智能眼镜反馈的场景图像;Receive scene images fed back by smart glasses;
识别所述场景图像内包含的拍摄主体;Identifying the shooting subject contained in the scene image;
根据所有所述拍摄主体确定所述场景类型。The scene type is determined according to all the photographing subjects.
在第一方面的一种可能的实现方式中,所述获取当前的场景信息,并根据所述场景信息确定场景类型,包括:In a possible implementation manner of the first aspect, the acquiring current scene information and determining the scene type according to the scene information includes:
采集当前场景下的环境声;Collect the ambient sound in the current scene;
获取所述环境声的频域频谱,并根据所述频域频谱内包含的频率值确定当前场景内包含的发声主体;Acquiring a frequency domain spectrum of the environmental sound, and determining a sounding subject contained in the current scene according to the frequency value contained in the frequency domain spectrum;
根据所有所述发声主体确定所述场景类型。The scene type is determined according to all the speaking subjects.
在第一方面的一种可能的实现方式中,所述获取当前的场景信息,并根据所述场景信息确定场景类型,包括:In a possible implementation manner of the first aspect, the acquiring current scene information and determining the scene type according to the scene information includes:
获取当前的位置信息,并提取所述位置信息内包含的场景关键词;Acquiring current location information, and extracting scene keywords contained in the location information;
根据所有所述场景关键词关联的候选场景的置信度,分别计算各个候选场景的置信概率;Calculate the confidence probability of each candidate scene according to the confidence of all candidate scenes associated with the scene keywords;
选取所述置信概率最高的候选场景作为所述位置信息对应的场景类型。The candidate scene with the highest confidence probability is selected as the scene type corresponding to the location information.
在第一方面的一种可能的实现方式中,所述选取与所述场景类型匹配的候选电子卡作为目标电子卡,包括:In a possible implementation manner of the first aspect, the selecting a candidate electronic card matching the scene type as a target electronic card includes:
分别计算各个所述候选电子卡与所述场景类型之间的匹配度;Respectively calculating the matching degree between each of the candidate electronic cards and the scene type;
选取所述匹配度最高的所述候选电子卡作为所述目标电子卡。The candidate electronic card with the highest matching degree is selected as the target electronic card.
在第一方面的一种可能的实现方式中,在所述选取与所述场景类型匹配的候选电子卡作为目标电子卡之后,还包括:In a possible implementation manner of the first aspect, after the selecting the candidate electronic card matching the scene type as the target electronic card, the method further includes:
通过所述目标电子卡与刷卡设备执行刷卡认证操作;Perform a card swiping authentication operation through the target electronic card and the card swiping device;
若刷卡认证失败,则从除所述目标电子卡外的所有所述候选电子卡中,选取所述匹配度最高的所述候选电子卡作为新的所述目标电子卡,并返回执行所述通过所述目标电子卡与刷卡设备执行刷卡操作,直到刷卡认证成功。If the credit card authentication fails, from all the candidate electronic cards except the target electronic card, select the candidate electronic card with the highest matching degree as the new target electronic card, and return to execute the pass The target electronic card and the card swiping device perform a card swiping operation until the swiping authentication is successful.
在第一方面的一种可能的实现方式中,所述选取与所述场景类型匹配的候选电子卡作为目标电子卡,包括:In a possible implementation manner of the first aspect, the selecting a candidate electronic card matching the scene type as a target electronic card includes:
获取各个所述候选电子卡的标准场景;Acquiring the standard scene of each candidate electronic card;
将所述场景类型与各个所述标准场景进行匹配,并根据匹配结果确定所述目标电子卡。The scene type is matched with each of the standard scenes, and the target electronic card is determined according to the matching result.
第二方面,本申请实施例提供了一种电子卡的选取装置,包括:In the second aspect, an embodiment of the present application provides an electronic card selection device, including:
场景类型确定单元,用于获取当前的场景信息,并根据所述场景信息确定场景类型;A scene type determining unit, configured to obtain current scene information, and determine the scene type according to the scene information;
电子卡选取单元,用于选取与所述场景类型匹配的候选电子卡作为目标电子卡。The electronic card selection unit is used to select a candidate electronic card matching the scene type as the target electronic card.
第三方面,本申请实施例提供了一种终端设备,存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机程序,其特征在于,所述处理器执行所述计算机程序时实现上述第一方面中任一项所述电子卡的选取方法。In a third aspect, the embodiments of the present application provide a terminal device, a memory, a processor, and a computer program stored in the memory and running on the processor, wherein the processor executes the The computer program implements the method for selecting the electronic card described in any one of the above-mentioned first aspects.
第四方面,本申请实施例提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现上述第一方面中任一项所述电子卡的选取方法。In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium that stores a computer program, and is characterized in that, when the computer program is executed by a processor, any of the above-mentioned aspects of the first aspect is implemented. One method for selecting the electronic card.
第五方面,本申请实施例提供了一种计算机程序产品,当计算机程序产品在终端设备上运行时,使得终端设备执行上述第一方面中任一项所述电子卡的选取方法。In a fifth aspect, the embodiments of the present application provide a computer program product that, when the computer program product runs on a terminal device, causes the terminal device to execute the method for selecting an electronic card in any one of the above-mentioned first aspects.
可以理解的是,上述第二方面至第五方面的有益效果可以参见上述第一方面中的相关描述,在此不再赘述。It is understandable that, for the beneficial effects of the second aspect to the fifth aspect described above, reference may be made to the relevant description in the first aspect described above, and details are not repeated here.
本申请实施例在需要调用电子卡进行认证、支付等操作时,通过终端设备采集当前的场景信息,并根据场景信息内包含的场景对象确定场景类型,并从所有候选电子卡中选取与该场景类型关联的电子卡作为目标电子卡,实现了自动选取电子卡的目的,提高了电子卡的操作效率以及响应速度。When an electronic card needs to be invoked for authentication, payment and other operations, the embodiment of the application collects current scene information through the terminal device, determines the scene type according to the scene object contained in the scene information, and selects the scene type from all candidate electronic cards. The type-associated electronic card is used as the target electronic card, which realizes the purpose of automatically selecting the electronic card, and improves the operation efficiency and response speed of the electronic card.
附图说明Description of the drawings
图1是本申请实施例提供的手机的部分结构的框图;FIG. 1 is a block diagram of a part of the structure of a mobile phone provided by an embodiment of the present application;
图2是本申请实施例的手机的软件结构示意图;FIG. 2 is a schematic diagram of the software structure of a mobile phone according to an embodiment of the present application;
图3是本申请第一实施例提供的一种电子卡的选取方法的实现流程图;FIG. 3 is an implementation flowchart of an electronic card selection method provided by the first embodiment of the present application;
图4是本申请一实施例提供的基于场景图像的场景类型的识别示意图;4 is a schematic diagram of scene type recognition based on scene images provided by an embodiment of the present application;
图5是本申请一实施例提供的电子卡的选取示意图;FIG. 5 is a schematic diagram of selecting an electronic card provided by an embodiment of the present application;
图6是本申请第二实施例提供的一种电子卡的选取方法S301的具体实现流程图;6 is a specific implementation flowchart of an electronic card selection method S301 provided by the second embodiment of the present application;
图7是本申请一实施例提供的终端设备在刷卡过程中的拍摄场景范围的示意图;FIG. 7 is a schematic diagram of a shooting scene range of a terminal device during a card swiping process according to an embodiment of the present application;
图8是本申请另一实施例提供的智能眼镜在刷卡过程中的拍摄范围的示意图;FIG. 8 is a schematic diagram of a photographing range of smart glasses during a card swiping process according to another embodiment of the present application;
图9是本申请第三实施例提供的一种电子卡的选取方法S301的具体实现流程图;9 is a specific implementation flowchart of an electronic card selection method S301 provided by the third embodiment of the present application;
图10是本申请第四实施例提供的一种电子卡的选取方法S301的具体实现流程图;10 is a specific implementation flowchart of an electronic card selection method S301 provided by the fourth embodiment of the present application;
图11是本申请第五实施例提供的一种电子卡的选取方法S302的具体实现流程图;11 is a specific implementation flowchart of an electronic card selection method S302 provided by the fifth embodiment of the present application;
图12是本申请第六实施例提供的一种电子卡的选取方法S302的具体实现流程图;12 is a specific implementation flowchart of an electronic card selection method S302 provided by the sixth embodiment of the present application;
图13是本申请一实施例提供的电子卡的选取系统的结构示意图;FIG. 13 is a schematic structural diagram of an electronic card selection system provided by an embodiment of the present application;
图14是本申请一实施例提供的一种电子卡的选取设备的结构框图;14 is a structural block diagram of an electronic card selection device provided by an embodiment of the present application;
图15本申请另一实施例提供的一种终端设备的示意图。FIG. 15 is a schematic diagram of a terminal device according to another embodiment of the present application.
具体实施方式Detailed ways
以下描述中,为了说明而不是为了限定,提出了诸如特定系统结构、技术之类的具体细节,以便透彻理解本申请实施例。然而,本领域的技术人员应当清楚,在没有这些具体细节的其它实施例中也可以实现本申请。在其它情况中,省略对众所周知的系统、装置、电路以及方法的详细说明,以免不必要的细节妨碍本申请的描述。In the following description, for the purpose of illustration rather than limitation, specific details such as a specific system structure and technology are proposed for a thorough understanding of the embodiments of the present application. However, it should be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits, and methods are omitted to avoid unnecessary details from obstructing the description of this application.
应当理解,当在本申请说明书和所附权利要求书中使用时,术语“包括”指示所描述特征、整体、步骤、操作、元素和/或组件的存在,但并不排除一个或多个其它特征、整体、步骤、操作、元素、组件和/或其集合的存在或添加。It should be understood that when used in the specification and appended claims of this application, the term "comprising" indicates the existence of the described features, wholes, steps, operations, elements and/or components, but does not exclude one or more other The existence or addition of features, wholes, steps, operations, elements, components, and/or collections thereof.
还应当理解,在本申请说明书和所附权利要求书中使用的术语“和/或”是指相关联列出的项中的一个或多个的任何组合以及所有可能组合,并且包括这些组合。It should also be understood that the term "and/or" used in the specification and appended claims of this application refers to any combination of one or more of the associated listed items and all possible combinations, and includes these combinations.
如在本申请说明书和所附权利要求书中所使用的那样,术语“如果”可以依据上下文被解释为“当...时”或“一旦”或“响应于确定”或“响应于检测到”。类似地,短语“如果确定”或“如果检测到[所描述条件或事件]”可以依据上下文被解释为意指“一旦确定”或“响应于确定”或“一旦检测到[所描述条件或事件]”或“响应于检测到[所描述条件或事件]”。As used in the description of this application and the appended claims, the term "if" can be construed as "when" or "once" or "in response to determination" or "in response to detecting ". Similarly, the phrase "if determined" or "if detected [described condition or event]" can be interpreted as meaning "once determined" or "in response to determination" or "once detected [described condition or event]" depending on the context ]" or "in response to detection of [condition or event described]".
另外,在本申请说明书和所附权利要求书的描述中,术语“第一”、“第二”、“第三”等仅用于区分描述,而不能理解为指示或暗示相对重要性。In addition, in the description of the specification of this application and the appended claims, the terms "first", "second", "third", etc. are only used to distinguish the description, and cannot be understood as indicating or implying relative importance.
在本申请说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此,在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例,而是意味着“一个或多个但不是所有的实施例”,除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”,除非是以其他方式另外特别强调。Reference to "one embodiment" or "some embodiments" described in the specification of this application means that one or more embodiments of this application include a specific feature, structure, or characteristic described in combination with the embodiment. Therefore, the sentences "in one embodiment", "in some embodiments", "in some other embodiments", "in some other embodiments", etc. appearing in different places in this specification are not necessarily All refer to the same embodiment, but mean "one or more but not all embodiments" unless it is specifically emphasized otherwise. The terms "including", "including", "having" and their variations all mean "including but not limited to", unless otherwise specifically emphasized.
针对目前电子卡技术,需要手动选取当前操作关联的电子卡,从而增加了操作难度,操作效率较低的问题,本申请实施例提供一种电子卡的选取方法、装置、设备及存储介质,在需要调用电子卡进行认证、支付等操作时,通过终端设备采集当前的场景信息,并根据场景信息内包含的场景对象确定场景类型,并从所有候选电子卡中选 取与该场景类型关联的电子卡作为目标电子卡,实现了自动选取电子卡的目的,提高了电子卡的操作效率以及响应速度。In view of the current electronic card technology, it is necessary to manually select the electronic card associated with the current operation, which increases the difficulty of the operation and the problem of low operation efficiency. The embodiments of the present application provide an electronic card selection method, device, equipment and storage medium. When you need to call an electronic card for authentication, payment, etc., collect the current scene information through the terminal device, determine the scene type according to the scene object contained in the scene information, and select the electronic card associated with the scene type from all candidate electronic cards As the target electronic card, the purpose of automatically selecting the electronic card is realized, and the operation efficiency and response speed of the electronic card are improved.
下面以具体地实施例对本申请的技术方案进行详细说明。下面这几个具体的实施例可以相互结合,对于相同或相似的概念或过程可能在某些实施例不再赘述。The technical solution of the present application will be described in detail below with specific embodiments. The following specific embodiments can be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments.
本申请实施例提供的电子卡的选取方法可以应用于手机、平板电脑、可穿戴设备、车载设备、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)等终端设备上,还可以应用于数据库、服务器以及基于终端人工智能的服务响应系统,本申请实施例对终端设备的具体类型不作任何限制。The method for selecting an electronic card provided by the embodiments of this application can be applied to mobile phones, tablet computers, wearable devices, in-vehicle devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, and ultra mobile devices. Personal computers (ultra-mobile personal computers, UMPC), netbooks, personal digital assistants (personal digital assistants, PDAs) and other terminal devices can also be applied to databases, servers, and service response systems based on terminal artificial intelligence. Examples of this application There are no restrictions on the specific types of terminal equipment.
例如,所述终端设备可以是WLAN中的站点(STAION,ST),可以是蜂窝电话、无绳电话、会话启动协议(Session InitiationProtocol,SIP)电话、无线本地环路(Wireless Local Loop,WLL)站、个人数字处理(Personal Digital Assistant,PDA)设备、具有无线通信功能的手持设备、计算设备或连接到无线调制解调器的其它处理设备、电脑、膝上型计算机、手持式通信设备、手持式计算设备、和/或用于在无线系统上进行通信的其它设备以及下一代通信系统,例如,5G网络中的移动终端或者未来演进的公共陆地移动网络(Public Land Mobile Network,PLMN)网络中的移动终端等。For example, the terminal device may be a station (STAION, ST) in a WLAN, a cellular phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a wireless local loop (Wireless Local Loop, WLL) station, Personal Digital Assistant (PDA) devices, handheld devices with wireless communication capabilities, computing devices or other processing devices connected to wireless modems, computers, laptops, handheld communication devices, handheld computing devices, and /Or other devices used to communicate on the wireless system and next-generation communication systems, for example, mobile terminals in 5G networks or mobile terminals in the future evolved Public Land Mobile Network (PLMN) network, etc.
作为示例而非限定,当所述终端设备为可穿戴设备时,该可穿戴设备还可以是应用穿戴式技术对日常穿戴进行智能化设计、开发出可以穿戴的设备的总称,如配置有近场通信模块的手套、手表等。可穿戴设备即直接穿在身上,或是整合到用户的衣服或配件的一种便携式设备,通过附着与用户身上,通过预先绑定的电子卡,执行支付、认证等操作。可穿戴设备不仅仅是一种硬件设备,更是通过软件支持以及数据交互、云端交互来实现强大的功能。广义穿戴式智能设备包括功能全、尺寸大、可不依赖智能手机实现完整或者部分的功能,如智能手表或智能眼镜等,以及只专注于某一类应用功能,需要和其它设备如智能手机配合使用,如各类进行具有显示屏的智能手表、智能手环等。As an example and not a limitation, when the terminal device is a wearable device, the wearable device can also be a general term for the application of wearable technology to intelligently design daily wear and develop wearable devices, such as near-field devices. Gloves, watches, etc. for communication modules. A wearable device is a portable device that is directly worn on the body or integrated into the user's clothes or accessories. It is attached to the user's body and performs operations such as payment and authentication through a pre-bound electronic card. Wearable devices are not only a kind of hardware device, but also realize powerful functions through software support, data interaction, and cloud interaction. In a broad sense, wearable smart devices include full-featured, large-sized, complete or partial functions that can be implemented without relying on smart phones, such as smart watches or smart glasses, and only focus on a certain type of application function, and need to be used in conjunction with other devices such as smart phones. , Such as various types of smart watches with display screens, smart bracelets, etc.
在本实施例中,上述终端设备可以是具备如图1所示的硬件结构的手机100,如图1所示,手机100具体可以包括:射频(Radio Frequency,RF)电路110、存储器120、输入单元130、显示单元140、传感器150、音频电路160、短距离无线通信模块170、处理器180、以及电源190等部件。本领域技术人员可以理解,图1中示出的手机100的结构并不构成对终端设备的限定,终端设备可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。In this embodiment, the aforementioned terminal device may be a mobile phone 100 having a hardware structure as shown in FIG. 1. As shown in FIG. 1, the mobile phone 100 may specifically include: a radio frequency (RF) circuit 110, a memory 120, and an input The unit 130, the display unit 140, the sensor 150, the audio circuit 160, the short-range wireless communication module 170, the processor 180, and the power supply 190 and other components. Those skilled in the art can understand that the structure of the mobile phone 100 shown in FIG. 1 does not constitute a limitation on the terminal device. The terminal device may include more or less components than those shown in the figure, or a combination of certain components, or different components. Component arrangement.
下面结合图1对手机的各个构成部件进行具体的介绍:The following describes the components of the mobile phone in detail with reference to Figure 1:
RF电路110可用于收发信息或通话过程中,信号的接收和发送,特别地,将基站的下行信息接收后,给处理器180处理;另外,将设计上行的数据发送给基站。通常,RF电路包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器(Low Noise Amplifier,LNA)、双工器等。此外,RF电路110还可以通过无线通信与网络和其他设备通信。上述无线通信可以使用任一通信标准或协议,包括但不限于全球移动通讯系统(Global System of Mobile communication,GSM)、通用分组无线服务 (General Packet Radio Service,GPRS)、码分多址(Code Division Multiple Access,CDMA)、宽带码分多址(Wideband Code Division Multiple Access,WCDMA)、长期演进(Long Term Evolution,LTE))、电子邮件、短消息服务(Short Messaging Service,SMS)等。The RF circuit 110 can be used for receiving and sending signals during information transmission or communication. In particular, after receiving the downlink information of the base station, it is processed by the processor 180; in addition, the designed uplink data is sent to the base station. Generally, the RF circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 110 can also communicate with the network and other devices through wireless communication. The above-mentioned wireless communication can use any communication standard or protocol, including but not limited to Global System of Mobile Communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (Code Division) Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), Email, Short Messaging Service (SMS), etc.
存储器120可用于存储软件程序以及模块,处理器180通过运行存储在存储器120的软件程序以及模块,从而执行手机的各种功能应用以及数据处理。存储器120可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据手机的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器120可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。具体地,该存储器120可以存储有电子卡的卡信息,以及各个电子卡与关联的场景类型之间的对应关系,手机可以通过存储器120确定当前场景关联的目标电子卡。The memory 120 may be used to store software programs and modules. The processor 180 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playback function, an image playback function, etc.), etc.; Data created by the use of mobile phones (such as audio data, phone book, etc.), etc. In addition, the memory 120 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices. Specifically, the memory 120 may store the card information of the electronic card and the corresponding relationship between each electronic card and the associated scene type. The mobile phone may determine the target electronic card associated with the current scene through the memory 120.
输入单元130可用于接收输入的数字或字符信息,以及产生与手机100的用户设置以及功能控制有关的键信号输入。具体地,输入单元130可包括触控面板131以及其他输入设备132。触控面板131,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板131上或在触控面板131附近的操作),并根据预先设定的程式驱动相应的连接装置。可选的,触控面板131可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器180,并能接收处理器180发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板131。除了触控面板131,输入单元130还可以包括其他输入设备132。具体地,其他输入设备132可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆等中的一种或多种。The input unit 130 may be used to receive inputted numeric or character information, and generate key signal input related to user settings and function control of the mobile phone 100. Specifically, the input unit 130 may include a touch panel 131 and other input devices 132. The touch panel 131, also known as a touch screen, can collect user touch operations on or near it (for example, the user uses any suitable objects or accessories such as fingers, stylus, etc.) on the touch panel 131 or near the touch panel 131. Operation), and drive the corresponding connection device according to the preset program. Optionally, the touch panel 131 may include two parts: a touch detection device and a touch controller. Among them, the touch detection device detects the user's touch position, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and then sends it To the processor 180, and can receive and execute the commands sent by the processor 180. In addition, the touch panel 131 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 131, the input unit 130 may also include other input devices 132. Specifically, the other input device 132 may include, but is not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackball, mouse, and joystick.
显示单元140可用于显示由用户输入的信息或提供给用户的信息以及手机的各种菜单。显示单元140可包括显示面板141,可选的,可以采用液晶显示器(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板141。进一步的,触控面板131可覆盖显示面板141,当触控面板131检测到在其上或附近的触摸操作后,传送给处理器180以确定触摸事件的类型,随后处理器180根据触摸事件的类型在显示面板141上提供相应的视觉输出。虽然在图1中,触控面板131与显示面板141是作为两个独立的部件来实现手机的输入和输入功能,但是在某些实施例中,可以将触控面板131与显示面板141集成而实现手机的输入和输出功能。The display unit 140 may be used to display information input by the user or information provided to the user and various menus of the mobile phone. The display unit 140 may include a display panel 141. Optionally, the display panel 141 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), etc. Further, the touch panel 131 can cover the display panel 141. When the touch panel 131 detects a touch operation on or near it, it transmits it to the processor 180 to determine the type of the touch event, and then the processor 180 responds to the touch event. The type provides corresponding visual output on the display panel 141. Although in FIG. 1, the touch panel 131 and the display panel 141 are used as two independent components to realize the input and input functions of the mobile phone, but in some embodiments, the touch panel 131 and the display panel 141 can be integrated. Realize the input and output functions of the mobile phone.
手机100还可包括至少一种传感器150,比如光传感器、运动传感器以及其他传感器。具体地,光传感器可包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板141的亮度,接近传感器可在手机移动到耳边时,关闭显示面板141和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别手 机姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于手机还可配置的陀螺仪、气压计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。可选地,手机可以通过学习算法,获取得到的用户执行刷卡动作时,各个传感器的测量值,从而在手机接近刷卡设备之前,提前确定用户是否需要执行刷卡操作,并采集当前的场景信息,确定场景类型,从而进一步提高了电子卡的选取效率。The mobile phone 100 may also include at least one sensor 150, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor. The ambient light sensor can adjust the brightness of the display panel 141 according to the brightness of the ambient light. The proximity sensor can close the display panel 141 and/or when the mobile phone is moved to the ear. Or backlight. As a kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in various directions (usually three-axis), and can detect the magnitude and direction of gravity when it is stationary. It can be used to identify mobile phone posture applications (such as horizontal and vertical screen switching, related Games, magnetometer posture calibration), vibration recognition related functions (such as pedometer, percussion), etc.; as for other sensors such as gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which can also be configured in mobile phones, I will not here Go into details. Optionally, the mobile phone can use the learning algorithm to obtain the measured value of each sensor when the user performs the card swiping action, so as to determine in advance whether the user needs to perform the card swiping operation before the mobile phone approaches the card swiping device, and collect the current scene information to determine Scene type, thereby further improving the selection efficiency of electronic cards.
音频电路160、扬声器161,传声器162可提供用户与手机之间的音频接口。音频电路160可将接收到的音频数据转换后的电信号,传输到扬声器161,由扬声器161转换为声音信号输出;另一方面,传声器162将收集的声音信号转换为电信号,由音频电路160接收后转换为音频数据,再将音频数据输出处理器180处理后,经RF电路110以发送给比如另一手机,或者将音频数据输出至存储器120以便进一步处理。The audio circuit 160, the speaker 161, and the microphone 162 can provide an audio interface between the user and the mobile phone. The audio circuit 160 can transmit the electrical signal converted from the received audio data to the speaker 161, which is converted into a sound signal for output by the speaker 161; on the other hand, the microphone 162 converts the collected sound signal into an electrical signal, and the audio circuit 160 After being received, it is converted into audio data, and then processed by the audio data output processor 180, and then sent to, for example, another mobile phone via the RF circuit 110, or the audio data is output to the memory 120 for further processing.
WiFi、蓝牙以及近距离无线通信(Near Field Communication,NFC)等通信技术属于短距离无线传输技术,手机通过短距离无线模块170可以帮助用户收发电子邮件、浏览网页和访问流式媒体等,它为用户提供了无线的宽带互联网访问。上述短距离无线模块170可以包括WiFi芯片、蓝牙芯片以及NFC芯片,通过该WiFi芯片可以实现手机100与其他终端设备进行WiFi Direct连接的功能,也可以使手机100工作在能够提供无线接入服务,允许其它无线设备接入的AP模式(Access Point模式)或工作在可以连接到AP不接受无线设备接入的STA模式(Station模式),从而建立手机100与其他WiFi设备的点对点通信;手机可以通过NFC芯片与刷卡设备建立短距离通信链路,并根据上述的短距离通信链路将预先写入的电子卡的卡信息发送给刷卡设备,并执行后续的刷卡操作,并将刷卡结果反馈给手机,通过手机的显示模块输出刷卡结果。Communication technologies such as WiFi, Bluetooth, and Near Field Communication (NFC) are short-range wireless transmission technologies. The mobile phone can help users send and receive emails, browse web pages, and access streaming media through the short-range wireless module 170. The user provides wireless broadband Internet access. The aforementioned short-range wireless module 170 may include a WiFi chip, a Bluetooth chip, and an NFC chip. Through the WiFi chip, the function of the mobile phone 100 to perform WiFi Direct connection with other terminal devices can also be realized, and the mobile phone 100 can also work to provide wireless access services. The AP mode that allows other wireless devices to access (Access Point mode) or the STA mode that can connect to the AP and does not accept wireless device access (Station mode) to establish point-to-point communication between the mobile phone 100 and other WiFi devices; the mobile phone can use The NFC chip establishes a short-distance communication link with the card swiping device, and sends the pre-written card information of the electronic card to the card swiping device according to the above-mentioned short-distance communication link, performs subsequent swiping operations, and feeds back the swiping result to the mobile phone , Output the credit card result through the display module of the mobile phone.
处理器180是手机的控制中心,利用各种接口和线路连接整个手机的各个部分,通过运行或执行存储在存储器120内的软件程序和/或模块,以及调用存储在存储器120内的数据,执行手机的各种功能和处理数据,从而对手机进行整体监控。可选的,处理器180可包括一个或多个处理单元;优选的,处理器180可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器180中。The processor 180 is the control center of the mobile phone. It uses various interfaces and lines to connect various parts of the entire mobile phone. Various functions and processing data of the mobile phone can be used to monitor the mobile phone as a whole. Optionally, the processor 180 may include one or more processing units; preferably, the processor 180 may integrate an application processor and a modem processor, where the application processor mainly processes the operating system, user interface, application programs, etc. , The modem processor mainly deals with wireless communication. It can be understood that the foregoing modem processor may not be integrated into the processor 180.
手机100还包括给各个部件供电的电源190(比如电池),优选的,电源可以通过电源管理系统与处理器180逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。The mobile phone 100 also includes a power source 190 (such as a battery) for supplying power to various components. Preferably, the power source can be logically connected to the processor 180 through a power management system, so that functions such as charging, discharging, and power consumption management can be managed through the power management system.
手机100还可以包括摄像头。可选地,摄像头在手机上的位置可以为前置的,也可以为后置的,本申请实施例对此不作限定。其中,手机可以通过摄像头采集当前场景的场景图像,并通过对场景图像进行解析,确定场景信息以及场景类型。The mobile phone 100 may also include a camera. Optionally, the position of the camera on the mobile phone may be front-mounted or rear-mounted, which is not limited in the embodiment of the present application. Among them, the mobile phone can collect the scene image of the current scene through the camera, and determine the scene information and the scene type by analyzing the scene image.
手机100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本发明实施例以分层架构的Android系统为例,示例性说明手机100的软件结构。The software system of the mobile phone 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. The embodiment of the present invention takes the layered Android system as an example to illustrate the software structure of the mobile phone 100.
FIG. 2 is a block diagram of the software structure of the mobile phone 100 according to an embodiment of the present application. The Android system is divided into four layers: the application layer, the application framework layer (framework, FWK), the system layer, and the hardware abstraction layer, and the layers communicate with each other through software interfaces.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers, which are, from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.
The application layer may include a series of application packages.
As shown in FIG. 2, the application packages may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, and short message.
The application framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer. The application framework layer includes some predefined functions.
As shown in FIG. 2, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, and so on.
The content provider is used to store and retrieve data and make the data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, the phone book, and the like.
The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system can be used to build applications. A display interface may be composed of one or more views. For example, a display interface that includes a short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide the communication functions of the electronic device 100, for example, management of the call state (including connecting, hanging up, and so on).
The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files.
The notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages, and the notification can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify that a download is complete, to deliver message reminders, and so on. The notification manager may also present notifications that appear in the status bar at the top of the system in the form of a chart or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt sound is emitted, the electronic device vibrates, or the indicator light flashes.
The Android runtime includes the core libraries and the virtual machine. The Android runtime is responsible for scheduling and management of the Android system.
The core libraries consist of two parts: one part is the function libraries that the Java language needs to call, and the other part is the core libraries of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system libraries may include multiple functional modules, for example, a surface manager, media libraries, a three-dimensional graphics processing library (for example, OpenGL ES), and a 2D graphics engine (for example, SGL).
The surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
The media libraries support playback and recording of a variety of commonly used audio and video formats, as well as still image files. The media libraries can support multiple audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphics processing library is used to implement three-dimensional graphics drawing, image rendering, compositing, layer processing, and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is the layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver. In some embodiments, the kernel layer further contains a PCIE driver.
In the embodiments of the present application, the execution subject of the process is a device equipped with a near field communication module. As an example and not a limitation, the device equipped with a near field communication module may specifically be a terminal device, and the terminal device may be a mobile terminal used by the user, such as a smartphone, a tablet computer, or a laptop computer. FIG. 3 shows an implementation flowchart of the method for selecting an electronic card provided by the first embodiment of the present application, detailed as follows:
In S301, current scene information is acquired, and the scene type is determined according to the scene information.
In this embodiment, the terminal device can acquire the current scene information through built-in information collection modules such as sensors, or it can establish a data link with an external information collection device and receive scene information collected by that device.
In a possible implementation manner, the terminal device has a built-in camera module, which may be a front camera module and/or a rear camera module; the camera module collects a scene image of the current scene, the scene image is taken as the scene information, and the scene information is analyzed to determine the scene type. Alternatively, the terminal device has a built-in microphone module that collects scene audio of the current scene; the scene audio is taken as the scene information, audio analysis is performed on it, and the scene type is determined. Alternatively, the terminal device has a built-in positioning module through which positioning information is obtained; the positioning information is taken as the scene information, and the associated scene type is determined from the positioning information.
In a possible implementation manner, the specific way in which the terminal device determines the scene type from the scene image may be as follows: the terminal device may be configured with a corresponding standard image for each scene type. The terminal device can match the currently acquired scene image against each standard image and determine the scene type associated with the scene image according to the matching result. Specifically, the matching process may be: the terminal device performs grayscale processing on the scene image to convert it into a monochrome image, generates an image array corresponding to the monochrome image according to the pixel value and pixel coordinates of each pixel, imports the image array into a preset convolutional neural network, performs pooling and dimensionality-reduction operations on the array through preset convolution kernels to generate the image feature vector corresponding to the image array, calculates the vector distance between this image feature vector and the standard feature vector of each standard image, takes the vector distances as the matching probability values with respect to the standard images, and selects the scene type associated with the standard image with the highest probability value as the scene type of the scene image. The standard feature vector of a standard image can be obtained through a self-learning algorithm, which may be implemented as follows: when an electronic card is initially bound, the terminal device collects standard images of the usage scene corresponding to that card and generates the standard feature vector from them; during subsequent use, each time the electronic card is used for a card-swiping operation, the scene image corresponding to that operation is imported into the above neural network to adjust the generated standard feature vector. In this way, the configured standard feature vector can be adjusted a posteriori during every use, which improves its accuracy.
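Purely as an editorial illustration of the matching step described above, the following Python sketch shows the grayscale-conversion, feature-extraction, and nearest-standard-vector selection; the convolutional network is replaced here by a simple downsampling placeholder, and all identifiers (to_grayscale, feature_extractor, match_scene_type) are hypothetical names, not part of the claimed method.

import numpy as np

def to_grayscale(image_rgb: np.ndarray) -> np.ndarray:
    # Convert an H x W x 3 color image into a single-channel monochrome image.
    return image_rgb.mean(axis=2)

def feature_extractor(gray: np.ndarray, size: int = 8) -> np.ndarray:
    # Placeholder for the convolutional network and pooling described above:
    # here the grayscale image is simply downsampled into a fixed-length vector.
    h, w = gray.shape
    ys = np.linspace(0, h - 1, size).astype(int)
    xs = np.linspace(0, w - 1, size).astype(int)
    vec = gray[np.ix_(ys, xs)].reshape(-1)
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def match_scene_type(scene_image: np.ndarray, standard_vectors: dict) -> str:
    # standard_vectors maps each scene type to its standard feature vector.
    query = feature_extractor(to_grayscale(scene_image))
    # A smaller vector distance is treated as a higher matching probability.
    distances = {t: np.linalg.norm(query - v) for t, v in standard_vectors.items()}
    return min(distances, key=distances.get)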
In a possible implementation manner, each electronic card corresponds to a cloud server. The cloud server can store the operation records of the electronic card and the scene images associated with each operation record. The cloud server extracts the historical scene images from the operation records and generates the above standard feature vector from all the historical scene images. The cloud server can send the standard feature vector to each terminal device at a preset update cycle; the standard feature vector can be associated with an electronic card identifier, and the terminal device stores the received electronic card identifier and standard feature vector in its storage unit, so that in subsequent matching operations the stored standard feature vector can be retrieved to perform the matching.
Exemplarily, FIG. 4 shows a schematic diagram of scene type recognition based on scene images provided by an embodiment of the present application. As shown in FIG. 4, the system includes a terminal device 41 and a card-swiping device 42, and the terminal device 41 is configured with a camera module 411 and a near field communication module 412. When the terminal device 41 approaches the card-swiping device 42, the near field communication module 412 detects the near field communication signal sent by the card-swiping device 42 and establishes a communication connection with it. At this time, the terminal device can activate the camera module 411, collect a scene image of the current scene through the camera module 411, and determine the scene type according to the scene image.
In a possible implementation manner, the terminal device collects multiple different types of scene information and determines the current scene type from them together. Specifically, the terminal device can collect a scene image and scene audio of the current scene, identify multiple candidate object types from the scene image, filter the target object types out of the candidate object types according to the scene audio, and determine the scene type from the target object types. Screening out invalid candidate object types with the scene audio calibrates the scene type recognition process and thus improves recognition efficiency. Of course, the terminal device may instead identify candidate sounding objects in the current scene from the scene audio, filter the target sounding objects out of the candidate sounding objects according to the scene image, and determine the scene type from the target sounding objects. For example, the terminal device obtains a scene image through the camera module, but because the shooting distance is too far or obstacles block the view, some scene objects cannot be recognized from the scene image, which reduces the accuracy of scene type recognition. To solve this problem, the terminal device can, while acquiring the scene image, collect the ambient sound of the current scene through the microphone module, determine the sounding subjects from the ambient sound, determine the photographed objects by performing image recognition on the scene image, and determine the scene type from the sounding subjects and the photographed objects together.
Specifically, in a possible implementation manner, the way of determining the scene type from the sounding subjects and the photographed objects may be: the terminal device determines a first confidence level of each candidate scene type from all the sounding subjects and a second confidence level of each candidate scene type from all the photographed objects, weights the first confidence level by a voice weight and the second confidence level by an image weight, calculates the matching degree of each candidate scene type from the weighted first and second confidence levels, and selects the candidate scene type with the highest matching degree as the scene type of the current scene.
For example, according to the scene types associated with the electronic cards stored in the terminal device, three different scene types can be distinguished: a bank type, a transit type, and an access-control type. When the terminal device detects that an electronic card needs to be invoked, it can obtain a scene image of the current scene through the camera module. Through image recognition, it determines that the scene image contains three photographed objects: a teller machine, a bank sign, and a screen door, so the confidence levels derived from the image for the three candidate scene types are: (bank type, 80%), (access-control type, 50%), (transit type, 20%). By collecting the ambient sound, it determines that the sounding subjects in the scene include keypress sounds and the sound of mechanical operation, so the confidence levels derived from the audio for the three candidate scene types are: (bank type, 60%), (access-control type, 50%), (transit type, 60%). The preset image weight value is 1 and the voice weight value is 0.8. Therefore, the matching degrees of the three candidate scene types are: (bank type, 80%×1 + 60%×0.8 = 1.28), (access-control type, 50%×1 + 50%×0.8 = 0.9), (transit type, 20%×1 + 60%×0.8 = 0.68). The candidate scene with the highest matching degree is the bank type, so the bank type is taken as the current scene type.
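As a minimal Python sketch of the weighted fusion in the example above (the weights and confidence values are taken from that example; the identifiers are illustrative assumptions only):

# Hypothetical sketch of the weighted image/audio confidence fusion.
IMAGE_WEIGHT = 1.0
VOICE_WEIGHT = 0.8

image_confidence = {"bank": 0.80, "access_control": 0.50, "transit": 0.20}
voice_confidence = {"bank": 0.60, "access_control": 0.50, "transit": 0.60}

def fuse(image_conf, voice_conf):
    # Weight each confidence by its modality weight and sum per candidate scene.
    scores = {scene: IMAGE_WEIGHT * image_conf[scene] + VOICE_WEIGHT * voice_conf[scene]
              for scene in image_conf}
    return max(scores, key=scores.get), scores

best_scene, scores = fuse(image_confidence, voice_confidence)
# scores is approximately {"bank": 1.28, "access_control": 0.9, "transit": 0.68};
# best_scene == "bank", matching the worked example above.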
It should be noted that the above uses the combination of two kinds of scene information, audio and image, to determine the scene type merely as an example. In actual use, more than two kinds of scene information may be used, or other kinds of scene information not limited to the above two types may be combined to determine the scene type, which will not be described one by one here.
In a possible implementation manner, the user can trigger the electronic card selection process by tapping an electronic card activation control or opening an electronic card application. The terminal device can also trigger the electronic card selection process through the near field communication module when a near field communication signal is detected.
In a possible implementation manner, the terminal device can learn, through a built-in learning algorithm, the motion trajectory of the user when performing a card-swiping operation with the terminal device, so that when it detects that the current movement trajectory of the terminal is consistent with the learned trajectory, it automatically activates the electronic card selection process, thereby selecting the electronic card in advance and improving the subsequent response speed. The specific implementation process is as follows: the terminal device continuously acquires the parameter values of the motion sensor and, in the order of the collection moments, stores the parameter value of each collection moment in a motion parameter queue, which is continuously updated in a first-in-first-out order. If the terminal device detects that the user performs a card-swiping operation, it acquires all the parameter values in the motion parameter queue at the moment of the card-swiping operation and generates the motion trajectory corresponding to the queue at that moment. The terminal device can import the motion trajectories corresponding to historical card-swiping operations into a machine learning model, so that a card-swiping-operation recognition model is generated. During use, the terminal device imports the parameter values in the motion parameter queue into this recognition model to determine whether a card-swiping action exists; if so, the electronic card selection process is executed; otherwise, the terminal device continues to collect the parameter values of the motion sensor and update the motion parameter queue. It should be noted that each time the terminal device performs a card-swiping operation, the recognition model can be updated, thereby improving the recognition accuracy.
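A minimal Python sketch of the first-in-first-out motion parameter queue and the gesture check described above follows; predict_swipe() stands in for the card-swiping-operation recognition model, and all names, the queue length, and the sample format are illustrative assumptions.

from collections import deque

QUEUE_LENGTH = 128
motion_queue = deque(maxlen=QUEUE_LENGTH)  # oldest samples are discarded automatically (FIFO)

def predict_swipe(trajectory) -> bool:
    # Placeholder for the recognition model learned from historical swipe trajectories;
    # a real implementation would score the trajectory against that model.
    return False

def start_card_selection():
    pass  # trigger the electronic-card selection flow (S301/S302)

def on_sensor_sample(sample):
    # sample: e.g. a (timestamp, ax, ay, az) tuple from the motion sensor
    motion_queue.append(sample)
    if len(motion_queue) == QUEUE_LENGTH and predict_swipe(list(motion_queue)):
        start_card_selection()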
In S302, a candidate electronic card matching the scene type is selected as the target electronic card.
In this embodiment, the user can bind multiple electronic cards on the terminal device, and each bound electronic card is one of the above candidate electronic cards. The binding may proceed as follows: the user inputs the identifier of a physical card into the terminal device and sends the authorization information of the physical card, such as a bound mobile phone number or user identity information, to the cloud server of the party that issued the physical card through the electronic card control of the terminal device. After the cloud server verifies that the authorization information is legitimate, it can feed back the corresponding authorization code to the terminal device, and the terminal device associates the authorization code with the electronic card generated in the terminal device for that physical card, so that an electronic card corresponding to the physical card is created on the terminal device.
In this embodiment, the terminal device can configure an associated scene type for each candidate electronic card. After the scene type corresponding to the scene information is determined, it can be judged whether the current scene type matches the scene type of each candidate electronic card, that is, whether the scene type associated with the electronic card is consistent with the current scene type; the candidate electronic card whose scene type is consistent is taken as the target electronic card, and the subsequent card-swiping operation is performed.
In a possible implementation manner, if the scene types associated with multiple candidate electronic cards are the same, the candidate electronic card with the highest priority can be selected as the target electronic card according to the priority of each candidate electronic card. For example, FIG. 5 shows a schematic diagram of electronic card selection provided by an embodiment of the present application. The terminal device is bound with four electronic cards: bank card A, bank card B, a transit card, and an access-control card. By collecting the current scene information, the terminal device determines that the current scene type is the bank type, and the scene types associated with bank card A and bank card B are both the bank type, that is, both of these electronic cards match the current scene type. In this case, the priorities of the two bank cards can be obtained; if the priority of bank card A is higher than that of bank card B, bank card A is selected as the target electronic card.
In a possible implementation manner, if the scene types associated with multiple candidate electronic cards are the same, the matching degrees of these candidate electronic cards can be calculated according to the current card-swiping time and card-swiping location, and the candidate electronic card with the highest matching degree is selected as the target electronic card. Specifically, different electronic cards have corresponding usage habits; for example, a user uses electronic card A for card-swiping operations in the morning and electronic card B in the afternoon. The terminal device can calculate the matching degree with the current scene from the historical times and historical locations in the historical card-swiping records of each electronic card, and select the candidate electronic card with the higher matching degree as the target electronic card.
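As an illustrative Python sketch of the selection step in S302: candidates are first filtered by the detected scene type and a tie between cards of the same scene type is then broken by priority (the history-based matching degree described above could replace the priority key); the card names and priority values are hypothetical.

candidate_cards = [
    {"name": "bank_card_A", "scene": "bank", "priority": 2},
    {"name": "bank_card_B", "scene": "bank", "priority": 1},
    {"name": "transit_card", "scene": "transit", "priority": 1},
    {"name": "door_card", "scene": "access_control", "priority": 1},
]

def select_target_card(scene_type, cards):
    # Keep only candidates whose associated scene type matches the current scene type.
    matched = [c for c in cards if c["scene"] == scene_type]
    if not matched:
        return None
    # Break ties among matching candidates by priority, highest first.
    return max(matched, key=lambda c: c["priority"])

target = select_target_card("bank", candidate_cards)  # -> the bank_card_A entry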
It can be seen from the above that, in the electronic card selection method provided by the embodiments of this application, when an electronic card needs to be invoked for operations such as authentication and payment, the terminal device collects the current scene information, determines the scene type according to the scene objects contained in the scene information, and selects the electronic card associated with that scene type from all candidate electronic cards as the target electronic card. This achieves automatic selection of the electronic card and improves the operation efficiency and response speed of the electronic card.
FIG. 6 shows a specific implementation flowchart of S301 in the electronic card selection method provided by the second embodiment of the present application. Referring to FIG. 6, compared with the embodiment described in FIG. 3, S301 in the electronic card selection method provided by this embodiment includes S601 to S603, detailed as follows:
Further, acquiring the current scene information and determining the scene type according to the scene information includes:
In S601, a scene image fed back by smart glasses is received.
In this embodiment, the terminal device has established a communication connection with external smart glasses, and the scene image of the current scene is collected through the camera module built into the smart glasses. Since the smart glasses are worn near the user's eyes, compared with collecting scene images using the camera module built into the terminal device, the line of sight is clearer and more consistent with the scene the user actually sees, which reduces the chance that the main scene subject is blocked by other objects during shooting and thus improves the accuracy of scene type recognition. In some scenes, for example the transit scene type, when the user uses an electronic card to take a bus, the user often takes the mobile phone out of a clothes or trouser pocket and swipes it directly; along the movement path from the pocket to the card reader, the camera module built into the terminal device is very likely unable to capture a scene image that contains the card reader.
Exemplarily, FIG. 7 shows a schematic diagram of the shooting range of a terminal device during a card-swiping process provided by an embodiment of the present application. As shown in FIG. 7, the initial position of the terminal device is inside a pocket; when the card needs to be swiped, the device has to be taken out of the pocket and brought close to the card reader, that is, the target position is near the card reader. During this movement, the captured area is shown as the fan-shaped region in FIG. 7. It can be seen that only when the terminal device is close to the card reader does the captured scene image contain the card-swiping device, and even then only part of it, so the recognition accuracy is low.
Exemplarily, FIG. 8 shows a schematic diagram of the shooting range of smart glasses during a card-swiping process provided by another embodiment of the present application. As shown in FIG. 8, since the smart glasses are worn in the user's eye area, their shooting range is basically the same as the visual range of the human eye, and as the user moves forward, that is, as the user approaches the card-swiping device, the card reader can be continuously recorded by the smart glasses. Therefore, compared with using the built-in camera module of the terminal device, collecting environment images through the smart glasses yields a better recognition effect.
In a possible implementation manner, when the terminal device detects that a preset scene information collection condition is met, it can send a collection instruction to the smart glasses; after receiving the instruction, the smart glasses perform an image collection operation and feed the collected image back to the terminal device, so that the terminal device obtains the above scene image. Specifically, the scene information collection condition may be: when the terminal device detects that the current scene contains a near field communication signal, it recognizes that the condition is met; or, the terminal device records multiple card-swiping locations based on historical card-swiping operations, and when it detects that the current position has reached one of the stored card-swiping locations, it recognizes that the condition is met.
In a possible implementation manner, the smart glasses can acquire the current scene image at a preset collection period and feed the collected scene image back to the terminal device. The terminal device can recognize the photographed subjects in the scene image and determine whether they include a target subject related to the card-swiping operation; if so, the operation of S603 is executed.
In this embodiment, wireless communication can be established between the terminal device and the smart glasses. Specifically, the smart glasses have a built-in wireless communication module, such as a WiFi module, a Bluetooth module, or a ZigBee module; correspondingly, the terminal device may also have a corresponding built-in wireless communication module. The terminal device searches for the wireless network of the smart glasses and joins that network, thereby establishing a wireless communication link with the smart glasses.
In S602, the photographed subjects contained in the scene image are identified.
In this embodiment, the terminal device can use an image analysis algorithm to parse the photographed subjects contained in the scene image. The photographed subjects may be determined as follows: by identifying the contour lines contained in the scene image, the scene image is divided into multiple subject regions, and the subject type of the photographed subject corresponding to each region is determined according to the contour shape and color characteristics of that region.
In a possible implementation manner, the terminal device may be configured with a list of subject types, with a corresponding subject model associated with each subject type. The terminal device can match each subject region against each subject model and select the subject type of the subject model with the highest matching degree as the photographed subject corresponding to that region.
In a possible implementation manner, before parsing the scene image, the terminal device can perform a preprocessing operation on it, which improves the accuracy of photographed-subject recognition. Specifically, the preprocessing operation may be: the terminal device performs grayscale processing on the scene image, that is, converts the color image into a monochrome image, and adjusts the monochrome image by means of filters and the actual light intensity of the shooting scene, for example increasing the pixel values of highlight areas and decreasing the pixel values of shadow areas; it then determines the contour lines contained in the scene image through a contour recognition algorithm and deepens the contour regions, which makes it easier to separate the photographed subjects and determine the contour features of each subject.
In S603, the scene type is determined according to all the photographed subjects.
In this embodiment, the terminal device can calculate the matching factor of each candidate type from the identified photographed subjects and superimpose the matching factors of all photographed subjects to determine the matching degree of each candidate type. The candidate scene with the highest matching degree is selected as the scene type corresponding to the scene image.
In a possible implementation manner, the terminal device can determine the weight value of each photographed subject according to the area that subject occupies in the scene image: the larger the occupied area, the higher the weight value; conversely, the smaller the occupied area, the lower the weight value. The matching degree of each candidate type is then determined by a weighted superposition of the matching factors between the photographed subjects and that candidate type and the corresponding weight values.
For example, the photographed subjects captured in a scene image include a cash machine, a screen door, a bank sign, and a person, and these subjects occupy 25%, 30%, 8%, and 15% of the area of the entire scene image, respectively. The terminal device can convert these area proportions into corresponding weight values of 2, 2, 1, and 1.5. The matching factors between the four photographed subjects and the bank scene type are 100%, 80%, 100%, and 30%, respectively, so the matching degree between the scene image and the bank scene type is: 2×100% + 2×80% + 1×100% + 1.5×30% = 5.05.
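A short Python sketch reproducing the worked example above follows; the subject names, area shares, weights, and matching factors are the example's values and the identifiers are illustrative only.

# Area-weighted superposition of matching factors against the bank scene type.
subjects = [
    # (subject, area share, weight derived from the area share, matching factor)
    ("cash_machine", 0.25, 2.0, 1.00),
    ("screen_door",  0.30, 2.0, 0.80),
    ("bank_sign",    0.08, 1.0, 1.00),
    ("person",       0.15, 1.5, 0.30),
]

bank_matching_degree = sum(weight * factor for _, _, weight, factor in subjects)
# bank_matching_degree is approximately 5.05, as in the example above.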
In the embodiments of the present application, the scene image is collected through the smart glasses, and the photographed subjects contained in the scene image are parsed to determine the current scene type, which realizes automatic recognition of the scene type, further improves the accuracy of scene type recognition, and thus improves the accuracy of electronic card selection.
FIG. 9 shows a specific implementation flowchart of S301 in the electronic card selection method provided by the third embodiment of the present application. Referring to FIG. 9, compared with the embodiment described in FIG. 3, S301 in the electronic card selection method provided by this embodiment includes S901 to S903, detailed as follows:
Further, acquiring the current scene information and determining the scene type according to the scene information includes:
In S901, the ambient sound in the current scene is collected.
In this embodiment, the terminal device can collect the ambient sound of the current scene through a built-in or external microphone module. Specifically, when the terminal device detects that a preset scene information collection condition is met, it can send a scene information collection instruction to the microphone module. For the process of triggering the scene type recognition operation based on the scene information collection condition, reference may be made to the relevant description in the previous embodiment, which will not be repeated here.
In a possible implementation manner, the user wears an earphone device that contains a first microphone module, and a communication link is established between the terminal device and the earphone device. In this case, the terminal device can control the first microphone module of the earphone device and its own built-in second microphone module to collect ambient sound, and determine the ambient sound of the current scene based on the sound collected by the two microphone modules. Specifically, the ambient sound of the current scene may be determined from the two recordings as follows: the terminal device determines the first signal-to-noise ratio of the first ambient sound collected by the first microphone module and the second signal-to-noise ratio of the second ambient sound collected by the second microphone module, compares the two signal-to-noise ratios, and selects the ambient sound with the larger signal-to-noise ratio as the ambient sound of the current scene. The larger the signal-to-noise ratio, the smaller the influence of noise when the ambient sound was collected, so the accuracy of the subsequent determination of the sounding subjects is higher.
In S902, the frequency-domain spectrum of the ambient sound is acquired, and the sounding subjects contained in the current scene are determined according to the frequency values contained in the frequency-domain spectrum.
In this embodiment, the terminal device can perform a Fourier transform on the ambient sound to convert the time-domain signal into a frequency-domain signal and obtain the frequency-domain spectrum corresponding to the ambient sound, and determine the sounding subjects contained in the scene from the frequency values in the spectrum and the frequency-domain amplitude at each frequency value. Since different objects have fixed sounding frequencies, the terminal device can distinguish different sounding subjects by their different frequency values. For example, the sounding frequency of the human body is in the range of 8-10 kHz, while the sounding frequency of a buzzer is fixed at 2 kHz. Therefore, by converting the ambient sound into a frequency-domain signal, the sounding subjects corresponding to the ambient sound can be determined.
In a possible implementation manner, the terminal device can determine a weight value for each sounding subject. The weight value may be determined as follows: the terminal device identifies the amplitude of each sounding subject in the frequency-domain spectrum and determines the weight value of each sounding subject based on that amplitude.
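The following Python sketch illustrates S902 under stated assumptions: the recorded ambient sound is transformed to the frequency domain and dominant frequency bands are mapped to candidate sounding subjects, with the band amplitude reused as the subject weight. The sample rate, threshold, and band table (using the 2 kHz buzzer and 8-10 kHz figures mentioned above) are illustrative assumptions, not part of the claimed method.

import numpy as np

SAMPLE_RATE = 16000  # Hz, assumed

FREQUENCY_BANDS = {
    # (low Hz, high Hz): sounding subject
    (1800, 2200): "buzzer",
    (8000, 10000): "human",
}

def sounding_subjects(ambient: np.ndarray, threshold: float = 0.1) -> dict:
    spectrum = np.abs(np.fft.rfft(ambient))
    freqs = np.fft.rfftfreq(len(ambient), d=1.0 / SAMPLE_RATE)
    peak = spectrum.max() if spectrum.size else 1.0
    subjects = {}
    for (low, high), subject in FREQUENCY_BANDS.items():
        band = spectrum[(freqs >= low) & (freqs <= high)]
        amplitude = band.max() if band.size else 0.0
        if amplitude >= threshold * peak:
            # The band amplitude can also serve as the weight value of the subject.
            subjects[subject] = float(amplitude)
    return subjects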
In S903, the scene type is determined according to all the sounding subjects.
In this embodiment, after determining the sounding subjects, the terminal device can calculate the matching factor of each candidate type from the identified sounding subjects and superimpose the matching factors of all sounding subjects to determine the matching degree of each candidate type. The candidate scene with the highest matching degree is selected as the scene type corresponding to the ambient sound.
In the embodiments of the present application, the ambient sound is collected through the microphone, and the sounding subjects contained in the ambient sound are parsed to determine the current scene type, which realizes automatic recognition of the scene type and improves the accuracy of electronic card selection.
FIG. 10 shows a specific implementation flowchart of S301 in the electronic card selection method provided by the fourth embodiment of the present application. Referring to FIG. 10, compared with the embodiment described in FIG. 3, S301 in the electronic card selection method provided by this embodiment includes S1001 to S1003, detailed as follows:
Further, acquiring the current scene information and determining the scene type according to the scene information includes:
In S1001, current location information is acquired, and the scene keywords contained in the location information are extracted.
In this embodiment, the terminal device has a built-in positioning module, through which the current positioning coordinates of the terminal device can be determined, and the location information associated with the positioning coordinates can be obtained through a third-party map server, a map application, or similar means. For example, if the current positioning coordinates obtained by the terminal device are (113.300562, 23.143292), they can be input into the corresponding map application to obtain the location information associated with these coordinates, for example: Bank A in area B of city A. The current scene type can thus be determined from location information containing textual content.
In this embodiment, the terminal device can extract scene keywords from the location information through a semantic recognition algorithm. In a possible implementation manner, the terminal device can delete the characters related to the administrative area, retain the characters related to the scene, and use the scene-related characters as the scene keywords. For example, if the determined location information is "Bank G, No. XX, street C, area B, city A", the semantic recognition algorithm can determine that "No. XX, street C, area B, city A" is area-related text and delete it; the remaining scene-related text, namely "Bank G", is taken as the aforementioned scene keyword.
In S1002, the confidence probability of each candidate scene is calculated according to the confidence levels of the candidate scenes associated with all the scene keywords.
In this embodiment, the terminal device can calculate the confidence level between each scene keyword and each candidate scene, and then calculate the confidence probability between the location information and each candidate scene from the confidence levels of all scene keywords. For example, if the location information contains scene keyword A and scene keyword B, and their confidence levels with respect to the first candidate scene are 80% and 60% respectively, the terminal device can superimpose the two confidence levels, or calculate their mean, and use the result as the confidence probability of the first candidate scene.
In a possible implementation manner, the terminal device may be configured with a corresponding keyword list for each candidate scene. The terminal device can judge whether a scene keyword is in the keyword list of a candidate scene and determine the above confidence level based on the judgment result. Specifically, if the scene keyword is in the keyword list of the candidate scene, the confidence level between the scene keyword and that candidate scene is taken as 100%; otherwise, it is judged whether any character of the scene keyword appears in the keyword list of the candidate scene, and the confidence level with respect to the candidate scene is determined from the number of characters that do appear.
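The sketch below illustrates one way to compute S1002 under the rules just described: a keyword found in a candidate's keyword list scores 100%, otherwise the score is the share of its characters that appear in the list, and the per-scene confidence probability is the mean over all keywords. All lists, scores, and names here are illustrative assumptions.

candidate_keywords = {
    "bank": ["bank", "ATM", "counter"],
    "transit": ["bus", "metro", "station"],
}

def keyword_confidence(keyword, keyword_list):
    if keyword in keyword_list:
        return 1.0
    joined = "".join(keyword_list)
    hits = sum(1 for ch in keyword if ch in joined)
    return hits / len(keyword) if keyword else 0.0

def scene_confidence(scene_keywords, candidates):
    scores = {}
    for scene, kw_list in candidates.items():
        values = [keyword_confidence(kw, kw_list) for kw in scene_keywords]
        scores[scene] = sum(values) / len(values) if values else 0.0  # mean, as above
    return scores

scores = scene_confidence(["bank"], candidate_keywords)
best_scene = max(scores, key=scores.get)  # -> "bank"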
In S1003, the candidate scene with the highest confidence probability is selected as the scene type corresponding to the location information.
In this embodiment, the terminal device can select the candidate scene with the highest confidence probability as the scene type matching the location information.
In the embodiments of the present application, the location information is determined, semantic analysis is performed on it to determine the scene keywords, and the confidence probability of each candidate scene is determined from the scene keywords, so as to determine the current scene type, which realizes automatic recognition of the scene type and improves the accuracy of electronic card selection.
FIG. 11 shows a specific implementation flowchart of S302 in the electronic card selection method provided by the fifth embodiment of the present application. Referring to FIG. 11, compared with any one of the embodiments described in FIG. 3, FIG. 6, FIG. 9, and FIG. 10, S302 in the electronic card selection method provided by this embodiment includes S1101 to S1102, detailed as follows:
Further, selecting the candidate electronic card matching the scene type as the target electronic card includes:
In S1101, the matching degree between each candidate electronic card and the scene type is calculated.
In this embodiment, after determining the scene type, the terminal device can calculate the matching degree between the scene type and each existing candidate electronic card in the terminal device. Specifically, the terminal device can store the standard scene of each candidate electronic card; each standard scene can correspond to at least one scene label, and a label tree is built based on the scope of each scene label. For example, a certain transit electronic card is associated with the following scene labels: "district bus", "bus", "public transit", and "traffic". According to the scope covered by each scene label, it can be determined that "bus" covers multiple regional bus types such as "district bus" and "city bus" and is the general term for bus types; that is, the scope of "bus" is larger than that of "district bus", so "bus" is the parent node of "district bus", and by analogy a label tree can be constructed. The terminal device can configure a corresponding matching degree according to the scope: the smaller the scope, the higher the matching degree. The terminal device can judge whether the current scene type matches any scene label of a candidate electronic card, and take the matching degree associated with the matched scene label as the matching degree between the scene type and that candidate electronic card.
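As a simplified Python sketch of S1101 under stated assumptions: each card carries scene labels ordered from most specific to most general (a flattened view of the label tree above), and a narrower matched label yields a higher matching degree. The labels and the scoring formula are illustrative only.

card_labels = {
    "transit_card": ["district_bus", "bus", "public_transit", "traffic"],
    "bank_card_A": ["bank_counter", "bank", "finance"],
}

def matching_degree(scene_type, labels):
    # Labels earlier in the list are narrower in scope and therefore score higher.
    for depth, label in enumerate(labels):
        if label == scene_type:
            return 1.0 - depth / len(labels)
    return 0.0

degrees = {card: matching_degree("bus", labels) for card, labels in card_labels.items()}
target_card = max(degrees, key=degrees.get)  # -> "transit_card"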
In S1102, the candidate electronic card with the highest matching degree is selected as the target electronic card.
In this embodiment, since the matching degree characterizes the association between each candidate electronic card and the current scene, the higher the matching degree, the stronger the association between the candidate electronic card and the current scene; conversely, the lower the matching degree, the weaker the association. Based on this, the terminal device can select the candidate electronic card with the highest matching degree as the target electronic card, realizing automatic selection of the electronic card.
In the embodiments of the present application, the matching degree between each candidate electronic card and the scene type is calculated, and the candidate electronic card with the highest matching degree is selected as the target electronic card, which improves the accuracy of selecting the target electronic card.
Further, as another embodiment of the present application, after S302, S1103 and S1104 may also be included:
In S1103, a card-swiping authentication operation is performed between the target electronic card and the card-swiping device.
In this embodiment, after the terminal device determines the target electronic card, it can send the card information of the target electronic card to the card-swiping device through the near field communication link between them, so as to perform card-swiping authentication on the target electronic card and judge whether the target electronic card matches the card-swiping device. If the match is successful, subsequent operations such as authentication, authorization, and fee deduction are performed, where the subsequent operations are related to the type of operation initiated by the user. For example, if the target electronic card is a transit-type electronic card, the transportation fee can be paid through the transit electronic card; if the target electronic card is an access-control-type electronic card, door-opening authorization can be performed through the access-control electronic card. If it is detected that the card-swiping authentication fails, the operation of S1104 is executed.
In S1104, if the card-swiping authentication fails, the candidate electronic card with the highest matching degree among all the candidate electronic cards other than the target electronic card is selected as the new target electronic card, and the process returns to performing the card-swiping operation between the target electronic card and the card-swiping device, until the card-swiping authentication succeeds.
In this embodiment, if the terminal device receives authentication failure information fed back by the card-swiping device, it means that the currently selected target electronic card does not match the current scene type, so the target electronic card needs to be determined again from the candidate electronic cards. Therefore, the terminal device can select the candidate electronic card with the next highest matching degree as the target electronic card and perform the card-swiping authentication operation again, until the card-swiping authentication succeeds.
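A minimal Python sketch of the S1103/S1104 retry behavior follows: candidates are tried in descending order of matching degree until one is accepted. The authenticate callable stands in for the NFC authentication exchange with the card-swiping device and is a hypothetical placeholder.

def swipe_with_fallback(candidates_by_degree, authenticate):
    # candidates_by_degree: candidate cards sorted by matching degree, highest first.
    # authenticate(card): performs the card-swiping authentication with the card-swiping
    # device and returns True on success; its implementation is outside this sketch.
    for card in candidates_by_degree:
        if authenticate(card):
            return card  # card-swiping authentication succeeded with this card
    return None  # no bound candidate card was accepted by the card-swiping device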
In the embodiments of the present application, when card swiping fails, the candidate electronic card with the next highest matching degree is automatically selected as the target electronic card, thereby achieving automatic replacement of the electronic card and reducing user operations.
FIG. 12 shows a specific implementation flowchart of S302 in the electronic card selection method provided by the sixth embodiment of the present application. Referring to FIG. 12, compared with any one of the embodiments described in FIG. 3, FIG. 6, FIG. 9, and FIG. 10, S302 in the electronic card selection method provided by this embodiment includes S1201 to S1202, detailed as follows:
Further, selecting the candidate electronic card matching the scene type as the target electronic card includes:
In S1201, the standard scene of each candidate electronic card is acquired.
在本实施例中,终端设备在存储各个候选电子卡时,可以根据用户设置或者基于电子卡类型确定关联的标准场景,并建立有标准场景索引表,并在确定了当前场景的场景类型后,基于上述的标准场景索引表,获取各个候选电子卡预先关联的标准场景。In this embodiment, when storing each candidate electronic card, the terminal device can determine the associated standard scene according to user settings or based on the electronic card type, and establish a standard scene index table, and after determining the scene type of the current scene, Based on the above-mentioned standard scene index table, the standard scenes pre-associated with each candidate electronic card are obtained.
在S1202中,将所述场景类型与各个所述标准场景进行匹配,并根据匹配结果确定所述目标电子卡。In S1202, the scene type is matched with each of the standard scenes, and the target electronic card is determined according to the matching result.
在本实施例中,终端设备可以将当前识别得到的场景类型与各个标准场景进行匹配,判断是否存在任一候选电子卡的标准场景与当前的场景类型一致,若存在,则识别该候选电子卡为目标电子卡。In this embodiment, the terminal device can match the currently recognized scene type with each standard scene, determine whether there is any candidate electronic card whose standard scene is consistent with the current scene type, and if so, identify the candidate electronic card For the target electronic card.
在本申请实施例中,通过为不同的候选电子卡关联标准场景,将标准场景与场景类型相匹配,确定目标电子卡,实现了目标电子卡的自动选取,减少了用户的操作难度,提高了刷卡效率。In the embodiment of this application, by associating standard scenes for different candidate electronic cards, matching the standard scenes with the scene types, and determining the target electronic card, the automatic selection of the target electronic card is realized, which reduces the user's operational difficulty and improves Swipe efficiency.
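Purely as an illustration, the standard scene index table of S1201 and the matching of S1202 could be sketched as a dictionary lookup; the scene names and card identifiers below are assumptions, not values defined by this application.

```python
# Illustrative sketch only: the standard scene index table is modeled as a
# plain dictionary; all entries are assumed values.

STANDARD_SCENE_INDEX = {
    "bus_001": "bus_station",       # transport card pre-associated with a transit scene
    "door_001": "office_entrance",  # access card pre-associated with a door scene
    "pay_001": "supermarket",       # bank card pre-associated with a shopping scene
}

def select_by_standard_scene(scene_type, index=STANDARD_SCENE_INDEX):
    """Return the card whose pre-associated standard scene equals the current scene type."""
    for card_id, standard_scene in index.items():
        if standard_scene == scene_type:
            return card_id
    return None  # no candidate card is associated with this scene type

print(select_by_standard_scene("office_entrance"))  # -> "door_001"
```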
Illustratively, FIG. 13 shows a schematic structural diagram of an electronic card selection system provided by an embodiment of the present application. As shown in FIG. 13, the electronic card selection system includes a mobile terminal 131, smart glasses 132, an external microphone 133, and a card-swiping device 134. Communication connections are established between the mobile terminal 131 and both the smart glasses 132 and the external microphone 133, and the mobile terminal 131 establishes a communication connection with the card-swiping device 134 through a near field communication module. The mobile terminal 131 has a built-in camera module 1311, a positioning module 1312, and a built-in microphone module 1313, through which it can collect different types of scene information. It should be noted that the mobile terminal 131 may call any single module or external device to collect one piece of scene information, or may collect multiple pieces of scene information through two or more modules or external devices, determine the scene type based on the scene information, and select the target electronic card based on the scene type.
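As a rough illustration of the multi-source collection shown in FIG. 13, the sketch below gathers scene information from whichever sources are available and applies a simple fusion rule; every collector and the fusion priority are assumptions made for this example.

```python
# Illustrative sketch only: the collectors stand in for the camera module 1311,
# the positioning module 1312, the built-in microphone 1313, and external devices.

def collect_scene_info(collectors):
    """Gather scene information from whichever sources are currently available."""
    info = {}
    for name, collect in collectors.items():
        try:
            info[name] = collect()   # e.g. an image, coordinates, or audio samples
        except RuntimeError:
            continue                 # a source may be unavailable; simply skip it
    return info

def determine_scene_type(scene_info):
    """Assumed fusion rule: prefer the camera hint, then location, then sound."""
    for source in ("camera", "positioning", "microphone"):
        hint = scene_info.get(source, {}).get("scene_hint")
        if hint:
            return hint
    return "unknown"

# Dummy collectors standing in for the real modules.
collectors = {
    "positioning": lambda: {"scene_hint": "bus_station"},
    "microphone": lambda: {"scene_hint": "street"},
}
print(determine_scene_type(collect_scene_info(collectors)))  # -> "bus_station"
```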
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Corresponding to the electronic card selection method described in the foregoing embodiments, FIG. 14 shows a structural block diagram of an electronic card selection apparatus provided by an embodiment of the present application. For ease of description, only the parts related to the embodiments of the present application are shown.
Referring to FIG. 14, the electronic card selection apparatus includes:
a scene type determining unit 141, configured to obtain current scene information and determine a scene type according to the scene information;
an electronic card selection unit 142, configured to select a candidate electronic card matching the scene type as the target electronic card.
Optionally, the scene type determining unit 141 includes:
a scene image acquisition unit, configured to receive a scene image fed back by smart glasses;
a scene image analysis unit, configured to identify the shooting subjects contained in the scene image;
a shooting subject analysis unit, configured to determine the scene type according to all the shooting subjects.
Optionally, the scene type determining unit 141 includes:
an ambient sound collection unit, configured to collect the ambient sound in the current scene;
a sounding subject determining unit, configured to obtain the frequency-domain spectrum of the ambient sound, and determine the sounding subjects contained in the current scene according to the frequency values contained in the frequency-domain spectrum;
a sounding subject analysis unit, configured to determine the scene type according to all the sounding subjects.
Optionally, the scene type determining unit 141 includes:
a scene keyword extraction unit, configured to obtain current location information and extract the scene keywords contained in the location information;
a confidence probability calculation unit, configured to calculate the confidence probability of each candidate scene according to the confidences of the candidate scenes associated with all the scene keywords;
a scene type selection unit, configured to select the candidate scene with the highest confidence probability as the scene type corresponding to the location information.
Optionally, the electronic card selection unit 142 includes:
a matching degree calculation unit, configured to calculate the matching degree between each candidate electronic card and the scene type;
a matching degree selection unit, configured to select the candidate electronic card with the highest matching degree as the target electronic card.
Optionally, the electronic card selection apparatus further includes:
a card-swiping authentication unit, configured to perform a card-swiping authentication operation between the target electronic card and the card-swiping device;
an authentication failure response unit, configured to: if the card-swiping authentication fails, select the candidate electronic card with the highest matching degree from all the candidate electronic cards other than the target electronic card as the new target electronic card, and return to performing the card-swiping operation between the target electronic card and the card-swiping device until the card-swiping authentication succeeds.
Optionally, the electronic card selection unit 142 includes:
a standard scene acquisition unit, configured to acquire the standard scene of each candidate electronic card;
a standard scene matching unit, configured to match the scene type against each of the standard scenes, and determine the target electronic card according to the matching result.
Therefore, the electronic card selection apparatus provided by the embodiments of the present application can likewise obtain the current scene information, determine the scene type according to the scene information, and select the candidate electronic card matching the scene type as the target electronic card, thereby realizing automatic selection of the electronic card without requiring the user to search for and switch cards manually, reducing user operations and improving card-swiping efficiency.
FIG. 15 is a schematic structural diagram of a terminal device provided by an embodiment of the present application. As shown in FIG. 15, the terminal device 15 of this embodiment includes: at least one processor 150 (only one is shown in FIG. 15), a memory 151, and a computer program 152 stored in the memory 151 and executable on the at least one processor 150. When the processor 150 executes the computer program 152, the steps in any of the foregoing electronic card selection method embodiments are implemented.
The terminal device 15 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The terminal device may include, but is not limited to, the processor 150 and the memory 151. Those skilled in the art can understand that FIG. 15 is merely an example of the terminal device 15 and does not constitute a limitation on the terminal device 15, which may include more or fewer components than shown, or combine certain components, or have different components; for example, it may also include input/output devices, network access devices, and so on.
The processor 150 may be a central processing unit (Central Processing Unit, CPU), and may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
In some embodiments, the memory 151 may be an internal storage unit of the terminal device 15, such as a hard disk or memory of the terminal device 15. In other embodiments, the memory 151 may also be an external storage device of the terminal device 15, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) equipped on the terminal device 15. Further, the memory 151 may include both an internal storage unit of the terminal device 15 and an external storage device. The memory 151 is used to store an operating system, application programs, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 151 may also be used to temporarily store data that has been output or is to be output.
It should be noted that, because the information exchange and execution processes between the foregoing apparatuses/units are based on the same concept as the method embodiments of the present application, for their specific functions and technical effects, reference may be made to the method embodiment section; details are not repeated here.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division of the foregoing functional units and modules is merely used as an example. In practical applications, the foregoing functions may be allocated to different functional units and modules as required; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from one another and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the foregoing system, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here.
An embodiment of the present application further provides a network device, including: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor. When the processor executes the computer program, the steps in any of the foregoing method embodiments are implemented.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, the steps in each of the foregoing method embodiments are implemented.
An embodiment of the present application provides a computer program product. When the computer program product runs on a mobile terminal, the mobile terminal is enabled to implement the steps in each of the foregoing method embodiments.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the foregoing embodiments of the present application may be implemented by a computer program instructing relevant hardware. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, the steps of the foregoing method embodiments may be implemented. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or apparatus capable of carrying the computer program code to the photographing apparatus/terminal device, a recording medium, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example, a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc. In some jurisdictions, according to legislation and patent practice, the computer-readable medium may not be an electrical carrier signal or a telecommunications signal.
In the foregoing embodiments, the description of each embodiment has its own emphasis. For parts that are not described or recorded in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described in combination with the embodiments disclosed herein can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative; for example, the division into modules or units is merely a logical function division, and there may be other division manners in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the displayed or discussed mutual couplings, direct couplings, or communication connections may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
The foregoing embodiments are merely used to describe the technical solutions of the present application, and are not intended to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions recorded in the foregoing embodiments, or equivalent replacements may be made to some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all fall within the protection scope of the present application.

Claims (10)

  1. A method for selecting an electronic card, characterized by comprising:
    obtaining current scene information, and determining a scene type according to the scene information;
    selecting a candidate electronic card matching the scene type as a target electronic card.
  2. The selection method according to claim 1, wherein the obtaining current scene information and determining a scene type according to the scene information comprises:
    receiving a scene image fed back by smart glasses;
    identifying shooting subjects contained in the scene image;
    determining the scene type according to all the shooting subjects.
  3. The selection method according to claim 1, wherein the obtaining current scene information and determining a scene type according to the scene information comprises:
    collecting ambient sound in the current scene;
    obtaining a frequency-domain spectrum of the ambient sound, and determining sounding subjects contained in the current scene according to frequency values contained in the frequency-domain spectrum;
    determining the scene type according to all the sounding subjects.
  4. The selection method according to claim 1, wherein the obtaining current scene information and determining a scene type according to the scene information comprises:
    obtaining current location information, and extracting scene keywords contained in the location information;
    calculating a confidence probability of each candidate scene according to confidences of candidate scenes associated with all the scene keywords;
    selecting the candidate scene with the highest confidence probability as the scene type corresponding to the location information.
  5. The selection method according to any one of claims 1 to 4, wherein the selecting a candidate electronic card matching the scene type as a target electronic card comprises:
    calculating a matching degree between each candidate electronic card and the scene type respectively;
    selecting the candidate electronic card with the highest matching degree as the target electronic card.
  6. The selection method according to claim 5, wherein after the selecting a candidate electronic card matching the scene type as a target electronic card, the method further comprises:
    performing a card-swiping authentication operation between the target electronic card and a card-swiping device;
    if the card-swiping authentication fails, selecting the candidate electronic card with the highest matching degree from all the candidate electronic cards other than the target electronic card as a new target electronic card, and returning to performing the card-swiping operation between the target electronic card and the card-swiping device until the card-swiping authentication succeeds.
  7. The selection method according to any one of claims 1 to 4, wherein the selecting a candidate electronic card matching the scene type as a target electronic card comprises:
    acquiring a standard scene of each candidate electronic card;
    matching the scene type against each of the standard scenes, and determining the target electronic card according to the matching result.
  8. An electronic card selection apparatus, characterized by comprising:
    a scene type determining unit, configured to obtain current scene information and determine a scene type according to the scene information;
    an electronic card selection unit, configured to select a candidate electronic card matching the scene type as a target electronic card.
  9. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein when the processor executes the computer program, the method according to any one of claims 1 to 7 is implemented.
  10. A computer-readable storage medium storing a computer program, wherein when the computer program is executed by a processor, the method according to any one of claims 1 to 7 is implemented.
PCT/CN2021/080488 2020-03-17 2021-03-12 Electronic card selection method and apparatus, terminal, and storage medium WO2021185174A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010187020.XA CN113409041B (en) 2020-03-17 2020-03-17 Electronic card selection method, device, terminal and storage medium
CN202010187020.X 2020-03-17

Publications (1)

Publication Number Publication Date
WO2021185174A1 true WO2021185174A1 (en) 2021-09-23

Family

ID=77677276

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/080488 WO2021185174A1 (en) 2020-03-17 2021-03-12 Electronic card selection method and apparatus, terminal, and storage medium

Country Status (2)

Country Link
CN (1) CN113409041B (en)
WO (1) WO2021185174A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116703391A (en) * 2022-09-23 2023-09-05 荣耀终端有限公司 Electronic card activation method and device

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115987935A (en) * 2022-11-30 2023-04-18 海尔优家智能科技(北京)有限公司 Notification message push method, device, storage medium and electronic device
TWI833519B (en) * 2022-12-23 2024-02-21 華南商業銀行股份有限公司 Electronic payment system and method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014104430A1 (en) * 2012-12-27 2014-07-03 신한카드 주식회사 Method for controlling payment device for selecting payment means
CN107330687A (en) * 2017-06-06 2017-11-07 深圳市金立通信设备有限公司 A kind of near field payment method and terminal
CN109919600A (en) * 2019-03-04 2019-06-21 出门问问信息科技有限公司 A kind of virtual card call method, device, equipment and storage medium
CN110557742A (en) * 2019-09-26 2019-12-10 珠海市魅族科技有限公司 Default binding card switching method, device, equipment and storage medium for near field communication
CN110795949A (en) * 2019-09-25 2020-02-14 维沃移动通信(杭州)有限公司 Card swiping method and device, electronic equipment and medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102007048976A1 (en) * 2007-06-29 2009-01-02 Voice.Trust Ag Virtual prepaid or credit card and method and system for providing such and for electronic payments
CN101593522B (en) * 2009-07-08 2011-09-14 清华大学 Method and equipment for full frequency domain digital hearing aid
CN102405673A (en) * 2010-02-04 2012-04-04 华为终端有限公司 Method and wireless terminal device for reducing power consumption of wireless terminal device
CN103456301B (en) * 2012-05-28 2019-02-12 中兴通讯股份有限公司 A kind of scene recognition method and device and mobile terminal based on ambient sound
TWI476718B (en) * 2012-12-12 2015-03-11 Insyde Software Corp Automatic Screening Method and Device for Electronic Card of Handheld Mobile Device
CN106547533A (en) * 2016-07-15 2017-03-29 乐视控股(北京)有限公司 A kind of display packing and device
CN108600634B (en) * 2018-05-21 2020-07-21 Oppo广东移动通信有限公司 Image processing method and device, storage medium, electronic device
CN110536274B (en) * 2019-08-06 2022-11-25 拉卡拉支付股份有限公司 NFC device control method and device, NFC device and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014104430A1 (en) * 2012-12-27 2014-07-03 신한카드 주식회사 Method for controlling payment device for selecting payment means
CN107330687A (en) * 2017-06-06 2017-11-07 深圳市金立通信设备有限公司 A kind of near field payment method and terminal
CN109919600A (en) * 2019-03-04 2019-06-21 出门问问信息科技有限公司 A kind of virtual card call method, device, equipment and storage medium
CN110795949A (en) * 2019-09-25 2020-02-14 维沃移动通信(杭州)有限公司 Card swiping method and device, electronic equipment and medium
CN110557742A (en) * 2019-09-26 2019-12-10 珠海市魅族科技有限公司 Default binding card switching method, device, equipment and storage medium for near field communication

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116703391A (en) * 2022-09-23 2023-09-05 荣耀终端有限公司 Electronic card activation method and device
CN116703391B (en) * 2022-09-23 2024-04-26 荣耀终端有限公司 Electronic card activation method and device

Also Published As

Publication number Publication date
CN113409041A (en) 2021-09-17
CN113409041B (en) 2023-08-04

Similar Documents

Publication Publication Date Title
US12190878B2 (en) Voice interaction method and apparatus
US11582337B2 (en) Electronic device and method of executing function of electronic device
EP4064276A1 (en) Method and device for speech recognition, terminal and storage medium
WO2020181988A1 (en) Speech control method and electronic device
KR20210092795A (en) Voice control method and electronic device
WO2021185174A1 (en) Electronic card selection method and apparatus, terminal, and storage medium
CN112269853B (en) Retrieval processing method, device and storage medium
CN110795007B (en) Method and device for acquiring screenshot information
CN112130714B (en) Keyword search method capable of learning and electronic equipment
CN113220848B (en) Automatic question and answer method and device for man-machine interaction and intelligent equipment
CN111881315A (en) Image information input method, electronic device, and computer-readable storage medium
WO2021135578A1 (en) Page processing method and apparatus, and storage medium and terminal device
CN116311388A (en) Fingerprint identification method and device
WO2022194190A1 (en) Method and apparatus for adjusting numerical range of recognition parameter of touch gesture
CN113806469B (en) Statement intention recognition method and terminal equipment
CN112740148A (en) Method for inputting information into input box and electronic equipment
CN113742460B (en) Method and device for generating virtual roles
CN115131789A (en) Character recognition method, character recognition equipment and storage medium
WO2021242820A1 (en) Media request system
CN117631939A (en) Touch input method, system, electronic equipment and storage medium
CN116049347B (en) A sequence annotation method and related equipment based on word fusion
WO2024093993A1 (en) Simulated click method and electronic device
WO2023236908A1 (en) Image description method, electronic device and computer-readable storage medium
WO2023236801A1 (en) Graphic code recognition method and electronic device
CN119088484A (en) Method and electronic device for detecting page service

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21771738

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21771738

Country of ref document: EP

Kind code of ref document: A1