CN110138999B - Certificate scanning method and device for mobile terminal - Google Patents

Certificate scanning method and device for mobile terminal

Info

Publication number
CN110138999B
CN110138999B (application CN201910461705.6A)
Authority
CN
China
Prior art keywords
certificate
angle
pattern
frame
shooting interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910461705.6A
Other languages
Chinese (zh)
Other versions
CN110138999A (en)
Inventor
Zhang Yu (张宇)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Star Map Financial Services Group Co.,Ltd.
Original Assignee
Suning Financial Services Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suning Financial Services Shanghai Co ltd filed Critical Suning Financial Services Shanghai Co ltd
Priority to CN201910461705.6A priority Critical patent/CN110138999B/en
Publication of CN110138999A publication Critical patent/CN110138999A/en
Application granted granted Critical
Publication of CN110138999B publication Critical patent/CN110138999B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/60: Type of objects
    • G06V20/62: Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/63: Scene text, e.g. street names
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00: Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/04: Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Telephone Function (AREA)

Abstract

The embodiment of the invention discloses a certificate scanning method and device for a mobile terminal, relating to the technical field of intelligent terminals, which can simplify user operation and improve scanning efficiency. The method comprises the following steps: identifying, in the captured image, a pattern that matches a preset image identifier; obtaining the position and angle of the card object from the identified pattern; and displaying a finder frame in the shooting interface according to the position and angle of the card object, where the display position of the finder frame matches the position of the card object and the angle of the finder frame matches the angle of the card object. The invention is suitable for certificate scanning.

Description

Certificate scanning method and device for mobile terminal
Technical Field
The invention relates to the technical field of intelligent terminals, in particular to a certificate scanning method and device for a mobile terminal.
Background
As the online financial services industry has entered a period of rapid development, many business schemes and supporting technologies have emerged, among which certificate and card recognition based on image acquisition has become one of the important conveniences for end users. For example, as shown in fig. 1, the current way of scanning an identity card mainly works as follows: using image recognition technology, the app presents a horizontal or vertical finder frame, the user aligns the edges of the certificate within that frame and shoots with the phone camera, and the certificate is then separated from the photo background according to the frame. This is currently the most common way to scan certificates with a smartphone and is widely used in the apps of major e-commerce companies and banks.
Although image recognition is a mature technology that can recognize the elements in a captured image and perform a certain degree of angle calibration, this "maturity" does not make the resulting scanning mode easy to use. The current scheme still has defects and shortcomings, in particular that the user must align the edges of the certificate with a finder frame on the phone screen during use.
In practical application, the operation is constrained by the varying performance of different smartphones and the varying photography skills of different users, and is not easy to perform: the user must adjust both the distance and the rotation angle between the phone and the certificate. If the phone is too far from the certificate, or held askew, the scan fails; if the user's hand shakes while aligning, the scanned picture is blurred, and the later pixel and tilt-angle calibration can then produce recognition errors. As a result, most users place the certificate on a desktop and hold the phone with both hands to shoot; some instead hold the certificate in one hand and the phone in the other, adjusting both hands continuously before shooting, which likewise occupies both hands. This undoubtedly increases the operational complexity for the user, and the actual time consumed by the whole process offers no particular efficiency advantage over simply typing in the numbers.
Disclosure of Invention
The embodiment of the invention provides a certificate scanning method and device for a mobile terminal, which can simplify user operation and improve scanning efficiency.
In order to achieve the above purpose, the embodiment of the invention adopts the following technical scheme:
identifying, in the captured image, a pattern that matches a preset image identifier; obtaining the position and angle of the card object from the identified pattern; and displaying a finder frame in the shooting interface according to the position and angle of the card object, where the display position of the finder frame matches the position of the card object and the angle of the finder frame matches the angle of the card object.
Wherein obtaining the position and angle of the card object from the identified pattern includes: determining a contour template for the card object according to the type of the card object, the contour template comprising the default shape of the card object and the position and angle of the preset image identifier within the card object; and deriving the position and angle of the card object by comparing the position and angle of the pattern in the shooting interface against the position and angle of the preset image identifier within the card object.
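This template comparison can be sketched numerically. In the sketch below, the card dimensions, the logo's placement and width inside the card, and all function names are illustrative assumptions (an ID-1-sized card with a logo at a fixed, known offset), not values taken from the patent:

```python
import math

# Assumed contour template: default card shape plus the position of the
# preset image identifier (logo) inside the card. All numbers are examples.
TEMPLATE = {
    "card_w_mm": 85.6, "card_h_mm": 54.0,    # ID-1 card default shape
    "logo_w_mm": 20.0,                       # logo width on the card
    "logo_cx_mm": 70.0, "logo_cy_mm": 12.0,  # logo centre inside the card
}

def card_pose_from_logo(logo_cx, logo_cy, logo_w_px, angle_deg):
    """Derive the card's centre and angle in the shooting interface from the
    detected logo's centre (px), on-screen width (px) and tilt angle (deg)."""
    scale = logo_w_px / TEMPLATE["logo_w_mm"]  # pixels per millimetre
    # Offset from logo centre to card centre, in the card's own axes (mm).
    dx_mm = TEMPLATE["card_w_mm"] / 2 - TEMPLATE["logo_cx_mm"]
    dy_mm = TEMPLATE["card_h_mm"] / 2 - TEMPLATE["logo_cy_mm"]
    # Rotate that offset by the card's tilt and convert to pixels.
    a = math.radians(angle_deg)
    card_cx = logo_cx + scale * (dx_mm * math.cos(a) - dy_mm * math.sin(a))
    card_cy = logo_cy + scale * (dx_mm * math.sin(a) + dy_mm * math.cos(a))
    return card_cx, card_cy, angle_deg  # the card shares the logo's angle
```

Because the identifier's position and size relative to the card are fixed, the single detected pattern is enough to pin down the whole card's pose.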
Specifically, after recognizing the pattern that matches the preset image identifier, the method further includes: obtaining the tilt angle of the pattern from the pattern's position and orientation in the shooting interface; and determining the boundary points of the pattern, then using those boundary points and the tilt angle of the pattern to obtain a rectangular frame covering the pattern.
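One way to realise this step is to de-rotate the boundary points by the measured tilt angle, take the axis-aligned bounding box in that frame, and rotate its centre back. The helper below is an illustrative sketch of that idea, not the patent's implementation:

```python
import math

def covering_rect(points, angle_deg):
    """Return (centre, width, height) of the rotated rectangle at the given
    tilt angle that covers the pattern's boundary points."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    # De-rotate the boundary points so the pattern becomes axis-aligned.
    derot = [(x * cos_a + y * sin_a, -x * sin_a + y * cos_a) for x, y in points]
    xs, ys = zip(*derot)
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    cx, cy = (max(xs) + min(xs)) / 2, (max(ys) + min(ys)) / 2
    # Rotate the rectangle's centre back into image coordinates.
    centre = (cx * cos_a - cy * sin_a, cx * sin_a + cy * cos_a)
    return centre, w, h
```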
Specifically, displaying a finder frame in the shooting interface according to the position and angle of the card object includes: determining the outline of the card object from the contour template and the rectangular frame covering the pattern; generating a finder frame according to the current outline of the card object in the shooting interface; and covering that outline in the shooting interface with the generated finder frame.
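Once the card's centre, size and angle are known, the finder frame is simply the card outline's four corners drawn in the shooting interface. A minimal sketch (the function name and corner ordering are assumptions):

```python
import math

def finder_frame_corners(card_cx, card_cy, card_w, card_h, angle_deg):
    """Four corners of a finder frame covering the card's current outline,
    clockwise from the top-left corner."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    half = [(-card_w / 2, -card_h / 2), (card_w / 2, -card_h / 2),
            (card_w / 2, card_h / 2), (-card_w / 2, card_h / 2)]
    return [(card_cx + dx * cos_a - dy * sin_a,
             card_cy + dx * sin_a + dy * cos_a) for dx, dy in half]
```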
In this embodiment, an image identifier that appears at a fixed location on the card object (for example, a UnionPay mark on a certificate) is locked onto through image recognition. The identifier is scanned, the edge boundary points of its image position are found, and its tilt angle and the size of its rectangular outer frame are determined. Because the position, size and proportional relationship of such image identifiers (for example, the UnionPay mark) relative to the card object are fixed, the outer contour of the certificate can be calculated from them and the certificate separated from the scanned image. The generated finder frame actively matches the photographed card object, so the user is spared the step of aligning a rectangular frame and can complete the scanning, separation and upload of card objects such as bank cards simply by following the normal photographing flow. This both simplifies the user's operation and improves scanning efficiency.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of one implementation in the prior art;
FIG. 2 is a diagram of a hardware device provided by an embodiment of the present invention;
FIG. 3 is a diagram illustrating a software environment in a hardware device according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of a method provided by an embodiment of the present invention;
FIGS. 5a, 5b, 5c, 5d, and 5e are schematic diagrams of an embodiment of the present invention;
FIGS. 6a, 6b, and 6c are schematic diagrams of another embodiment according to the present invention;
FIG. 7 is a schematic flow chart illustrating another embodiment of the present invention;
fig. 8 is a schematic structural diagram of an apparatus according to an embodiment of the present invention.
Detailed Description
Referring to fig. 2, fig. 2 is a schematic structural diagram of an implementation of a user equipment main body (e.g., a mobile terminal such as a smartphone) according to an embodiment of the present disclosure. As can be seen from fig. 2, the user equipment main body 100 may include a processor 103, an external memory interface 104, an internal memory 105, a Universal Serial Bus (USB) interface 106, a charging management module 107, a power management module 108, a battery 109, an antenna 1, an antenna 2, a mobile communication module 110, a wireless communication module 111, an audio module 112, a speaker 113, a receiver 114, a microphone 115, an earphone interface 116, a sensor module 117, a button 118, a motor 119, an indicator 120, a camera 121, a Subscriber Identity Module (SIM) card interface 122, and the like. The sensor module 117 may include a pressure sensor 1171, a gyroscope sensor 1172, an air pressure sensor 1173, a magnetic sensor 1174, an acceleration sensor 1175, a distance sensor 1176, a proximity light sensor 1177, a fingerprint sensor 1178, a temperature sensor 1179, a touch sensor 1180, an ambient light sensor 1181, a bone conduction sensor 1182, and the like.
The input means of the user equipment main body 100 shown in the above embodiments may include: display screen 102, sensor module 117, keys 118, motor 119, camera 121, and the like.
Those skilled in the art will appreciate that the structure of the user equipment main body shown in fig. 2 does not constitute a limitation on the user equipment main body of the present application, which may include more or fewer components than those shown, combine some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Among other things, processor 103 may include one or more processing units, such as: the processor 103 may include an Application Processor (AP), a modem processor, a Graphics Processor (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors. The controller may be a neural center and a command center of the user equipment, among others. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 103 for storing instructions and data. In some embodiments, the memory in the processor 103 is a cache memory. It may hold instructions or data that the processor 103 has just used or reuses frequently; if the processor 103 needs them again, it can fetch them directly from this memory. Avoiding repeated accesses reduces the latency of the processor 103 and thereby increases system efficiency.
In some embodiments, the processor 103 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 103 may include multiple sets of I2C buses. The processor 103 may be coupled to the touch sensor 1180, the charger, the flash, the camera 121, and the like through different I2C bus interfaces. For example: the processor 103 may be coupled to the touch sensor 1180 through an I2C interface, so that the processor 103 and the touch sensor 1180 communicate through an I2C bus interface to implement a touch function of the user equipment.
The I2S interface may be used for audio communication. In some embodiments, processor 103 may include multiple sets of I2S buses. The processor 103 may be coupled to the audio module 112 via an I2S bus to enable communication between the processor 103 and the audio module 112. In some embodiments, the audio module 112 can transmit audio signals to the wireless communication module 111 through the I2S interface, so as to receive phone calls through the bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, audio module 112 and wireless communication module 111 may be coupled by a PCM bus interface. In some embodiments, the audio module 112 may also transmit the audio signal to the wireless communication module 111 through the PCM interface, so as to implement the function of answering a call through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus, and it converts transmitted data between serial and parallel forms. In some embodiments, a UART interface is generally used to connect the processor 103 with the wireless communication module 111. For example: the processor 103 communicates with the Bluetooth module in the wireless communication module 111 through the UART interface to implement the Bluetooth function. In some embodiments, the audio module 112 may transmit an audio signal to the wireless communication module 111 through the UART interface, so as to realize the function of playing music through a Bluetooth headset.
The MIPI interface may be used to connect the processor 103 with peripheral devices such as the display screen 102, the camera 121, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a display screen serial interface (DSI), and the like. In some embodiments, processor 103 and camera 121 communicate through a CSI interface to implement the shooting function of the user equipment. The processor 103 and the display screen 102 communicate through the DSI interface to implement the display function of the user equipment.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 103 with the camera 121, the display screen 102, the wireless communication module 111, the audio module 112, the sensor module 117, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 106 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 106 may be used to connect a charger to charge the user device, and may also be used to transfer data between the user device and a peripheral device. And the earphone can also be used for connecting an earphone and playing audio through the earphone. The interface may also be used to connect other electronic devices, such as AR devices and the like.
It should be understood that the interface connection relationship between the modules illustrated in this embodiment is only an exemplary illustration, and does not constitute a structural limitation on the user equipment. In other embodiments, the user equipment may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 107 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 107 may receive charging input from a wired charger via the USB interface 106. In some wireless charging embodiments, the charging management module 107 may receive a wireless charging input through a wireless charging coil of the user device. The charging management module 107 may also provide power to the user device through the power management module 108 while charging the battery 109.
The power management module 108 is used for connecting the battery 109, the charging management module 107 and the processor 103. The power management module 108 receives input from the battery 109 and/or the charging management module 107, and provides power to the processor 103, the internal memory 105, the external memory, the display screen 102, the camera 121, and the wireless communication module 111. The power management module 108 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In other embodiments, the power management module 108 may also be disposed in the processor 103. In other embodiments, the power management module 108 and the charging management module 107 may be disposed in the same device.
The wireless communication function of the user equipment can be realized by the antenna 1, the antenna 2, the mobile communication module 110, the wireless communication module 111, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the user equipment may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 110 may provide a solution including 2G/3G/4G/5G wireless communication applied on the user equipment. The mobile communication module 110 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 110 can receive the electromagnetic wave from the antenna 1, and filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 110 can also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least part of the functional modules of the mobile communication module 110 may be provided in the processor 103. In some embodiments, at least part of the functional modules of the mobile communication module 110 may be provided in the same device as at least part of the modules of the processor 103.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 113, the receiver 114, etc.) or displays an image or video through the display screen 102. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be separate from the processor 103 and may be disposed in the same device as the mobile communication module 110 or other functional modules.
The wireless communication module 111 may provide a solution for wireless communication applied to the user equipment, including Wireless Local Area Networks (WLANs) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite Systems (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 111 may be one or more devices integrating at least one communication processing module. The wireless communication module 111 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 103. The wireless communication module 111 may also receive a signal to be transmitted from the processor 103, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, the antenna 1 of the user equipment is coupled to the mobile communication module 110 and the antenna 2 is coupled to the wireless communication module 111, so that the user equipment can communicate with networks and other devices through wireless communication technologies. These may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The user device implements the display function via the GPU, the display screen 102, and the application processor, etc. The GPU is a microprocessor for image processing, and is connected to the display screen 102 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 103 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 102 is used to display images, video, and the like. The display screen 102 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the user device may include 1 or N display screens 102, N being a positive integer greater than 1.
The user equipment may implement a shooting function through the ISP, the camera 121, the video codec, the GPU, the display screen 102, the application processor, and the like.
The ISP is used to process data fed back by the camera 121. For example, when a photo is taken, the shutter opens, light is transmitted through the lens to the camera's photosensitive element, the optical signal is converted into an electrical signal, and the photosensitive element passes the electrical signal to the ISP to be processed into an image visible to the naked eye. The ISP can also run algorithmic optimization on the noise, brightness and skin tone of the image, and can optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 121.
The camera 121 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the user device may include 1 or N cameras 121, N being a positive integer greater than 1.
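As a concrete illustration of the last conversion step, a DSP turning a digital image signal into standard RGB might apply a YUV-to-RGB transform such as the one below. The BT.601 full-range constants are the standard ones; the function itself is an illustrative sketch, not code from the patent:

```python
def yuv_to_rgb(y, u, v):
    """Convert one 8-bit YUV sample to an 8-bit RGB triple using BT.601
    full-range constants (one standard conversion a DSP might apply)."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda c: max(0, min(255, round(c)))  # keep values in [0, 255]
    return clamp(r), clamp(g), clamp(b)
```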
The digital signal processor is used to process digital signals; besides digital image signals, it can process other digital signals. For example, when the user equipment selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency-point energy, and the like.
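As a toy illustration of that Fourier-transform step, the energy at a single DFT bin can be computed directly from the standard definition (this sketch is not code from the patent):

```python
import math

def bin_energy(samples, k):
    """Energy at DFT bin k of a real sample block: |X[k]|^2, with X[k]
    computed from the standard discrete Fourier transform definition."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
    im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
    return re * re + im * im
```

A pure cosine at bin k concentrates its energy there, so comparing bin energies is a simple way to evaluate a frequency point.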
Video codecs are used to compress or decompress digital video. The user equipment may support one or more video codecs. In this way, the user equipment can play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. The NPU can realize applications such as intelligent cognition of user equipment, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 104 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the user equipment. The external memory card communicates with the processor 103 through the external memory interface 104, implementing a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
Internal memory 105 may be used to store computer-executable program code, including instructions. The processor 103 executes various functional applications of the user equipment and data processing by executing instructions stored in the internal memory 105. The internal memory 105 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data created during use of the user device (e.g., audio data, a phonebook, etc.), and the like. Further, the internal memory 105 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The user equipment may implement audio functions through the audio module 112, the speaker 113, the receiver 114, the microphone 115, the headphone interface 116, and the application processor. Such as music playing, recording, etc.
The audio module 112 is used for converting digital audio information into an analog audio signal output and also for converting an analog audio input into a digital audio signal. The audio module 112 may also be used to encode and decode audio signals. In some embodiments, the audio module 112 may be disposed in the processor 103, or some functional modules of the audio module 112 may be disposed in the processor 103.
The speaker 113, also called a "horn", is used to convert an audio electrical signal into an acoustic signal. The user equipment can play music or conduct a hands-free call through the speaker 113.
The receiver 114, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the user equipment answers a call or voice information, the voice can be answered by placing the receiver 114 close to the ear.
The microphone 115, also called a "mouthpiece", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a sound signal by speaking with the mouth close to the microphone 115. The user equipment may be provided with at least one microphone 115. In other embodiments, the user equipment may be provided with two microphones 115 to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the user equipment may further be provided with three, four, or more microphones 115 to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on.
The earphone interface 116 is used to connect a wired earphone. The earphone interface 116 may be the USB interface 106, or may be a 3.5 mm Open Mobile Terminal Platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor 1171 is configured to sense a pressure signal, which may be converted into an electrical signal. In some embodiments, the pressure sensor 1171 may be disposed on the display screen 102. There are many types of pressure sensors 1171, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates of electrically conductive material. When a force acts on the pressure sensor 1171, the capacitance between the electrodes changes, and the user equipment determines the intensity of the pressure from the change in capacitance. When a touch operation acts on the display screen 102, the user equipment detects the intensity of the touch operation through the pressure sensor 1171, and can also calculate the touch position from the detection signal of the pressure sensor 1171. In some embodiments, touch operations applied to the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is smaller than a first pressure threshold acts on the short message application icon, an instruction for viewing the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction for creating a new short message is executed.
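The pressure-threshold dispatch described above can be sketched as follows. This is an illustrative assumption, not the patent's implementation: the threshold value, the normalized intensity scale, and the function and instruction names are all hypothetical.

```python
# Hypothetical sketch: same touch position, different instruction depending on
# whether the touch intensity is below or at/above a first pressure threshold.
FIRST_PRESSURE_THRESHOLD = 0.5  # assumed normalized intensity, illustrative only

def dispatch_sms_icon_touch(intensity: float) -> str:
    """Return the instruction for a touch on the short message application icon."""
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_sms"   # light press: view the short message
    return "new_sms"        # firm press: create a new short message

print(dispatch_sms_icon_touch(0.2))
print(dispatch_sms_icon_touch(0.8))
```

The same dispatch pattern extends to more than two pressure levels by adding further thresholds.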
The gyro sensor 1172 may be used to determine the motion posture of the user equipment. In some embodiments, the angular velocity of the user equipment about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 1172. The gyro sensor 1172 may be used for image stabilization during shooting. Illustratively, when the shutter is pressed, the gyro sensor 1172 detects the shaking angle of the user equipment, calculates the distance that the lens module needs to compensate for according to the angle, and lets the lens counteract the shaking of the user equipment through reverse movement, thereby achieving image stabilization. The gyro sensor 1172 may also be used in navigation and somatosensory gaming scenarios.
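The compensation distance mentioned above can be sketched with small-angle lens geometry: for an angular shake θ, the image shifts on the sensor by roughly f·tan(θ), so the lens is moved that amount in the opposite direction. The focal length value below is an assumed placeholder, not a figure from the patent.

```python
import math

def lens_compensation_mm(shake_deg: float, focal_length_mm: float = 4.0) -> float:
    # Image shift on the sensor caused by an angular shake of the body:
    # shift ~= f * tan(theta). The OIS actuator moves the lens by the same
    # amount in the reverse direction to cancel the shake.
    return focal_length_mm * math.tan(math.radians(shake_deg))

shift = lens_compensation_mm(0.5)  # 0.5 degree shake, assumed 4 mm lens
```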
The air pressure sensor 1173 is used to measure air pressure. In some embodiments, the user device calculates altitude, aiding in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 1173.
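The altitude calculation from barometric pressure can be sketched with the standard barometric formula of the International Standard Atmosphere; the patent does not specify which model the user equipment uses, so this is an illustrative assumption.

```python
def altitude_m(pressure_hpa: float, sea_level_hpa: float = 1013.25) -> float:
    # Standard barometric formula (ISA model): altitude from measured pressure
    # relative to an assumed sea-level reference pressure.
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))
```

In practice the sea-level reference pressure varies with weather, so positioning aids usually calibrate it from a nearby station.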
The magnetic sensor 1174 comprises a Hall sensor. The user equipment may detect the opening and closing of the holster 200 using the magnetic sensor 1174. In some embodiments, when the user equipment is a flip phone, it may detect the opening and closing of the flip cover based on the magnetic sensor 1174. Features such as automatic unlocking upon flipping open can then be set according to the detected open or closed state of the holster or flip cover.
The acceleration sensor 1175 can detect the magnitude of acceleration of the user equipment in various directions (typically along three axes). The magnitude and direction of gravity can be detected when the user equipment is stationary. It can also be used to identify the posture of the user equipment, for applications such as landscape/portrait switching and pedometers.
The distance sensor 1176 is used to measure distance. The user equipment may measure distance by infrared or laser. In some embodiments, in a shooting scenario, the user equipment may range using the distance sensor 1176 to achieve fast focusing.
The proximity light sensor 1177 may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The user equipment emits infrared light outward through the light emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the user equipment; when insufficient reflected light is detected, the user equipment may determine that there is no object nearby. The user equipment may use the proximity light sensor 1177 to detect that the user is holding it close to the ear during a call, so as to automatically turn off the screen to save power. The proximity light sensor 1177 may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
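The "sufficient reflected light" decision above amounts to a threshold test combined with the call state. A minimal sketch, with an assumed threshold and function names that are not from the patent:

```python
REFLECTED_LIGHT_THRESHOLD = 10.0  # assumed detector reading, illustrative only

def object_nearby(reflected: float) -> bool:
    # "Sufficient reflected light detected" -> an object is near the device.
    return reflected >= REFLECTED_LIGHT_THRESHOLD

def should_turn_off_screen(reflected: float, in_call: bool) -> bool:
    # Screen is turned off to save power only when the user is in a call
    # and holding the device close to the ear.
    return in_call and object_nearby(reflected)
```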
The ambient light sensor 1181 is used to sense ambient light brightness. The user device may adaptively adjust the brightness of the display screen 102 based on the perceived ambient light level. The ambient light sensor 1181 may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 1181 may also cooperate with the proximity light sensor 1177 to detect whether the user device is in a pocket to prevent inadvertent contact.
Fingerprint sensor 1178 is used to capture a fingerprint. The user equipment can utilize the collected fingerprint characteristics to realize fingerprint unlocking, access to an application lock, fingerprint photographing, fingerprint incoming call answering and the like.
The temperature sensor 1179 is used to detect temperature. In some embodiments, the user equipment implements a temperature processing strategy using the temperature detected by the temperature sensor 1179. For example, when the temperature reported by the temperature sensor 1179 exceeds a threshold, the user equipment reduces the performance of a processor located near the temperature sensor 1179 in order to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the user equipment heats the battery 109 to avoid an abnormal shutdown caused by low temperature. In still other embodiments, when the temperature is below a further threshold, the user equipment boosts the output voltage of the battery 109 to avoid an abnormal shutdown caused by low temperature.
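The three-threshold strategy above can be sketched as a simple policy function. The threshold values and action names are illustrative assumptions; the patent gives no concrete numbers.

```python
def thermal_action(temp_c: float,
                   high: float = 45.0, low: float = 0.0,
                   very_low: float = -10.0) -> str:
    # Thresholds are illustrative placeholders, not values from the patent.
    if temp_c > high:
        return "throttle_cpu"            # reduce nearby processor performance
    if temp_c < very_low:
        return "boost_battery_voltage"   # avoid low-temperature shutdown
    if temp_c < low:
        return "heat_battery"
    return "normal"
```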
Touch sensor 1180, also referred to as a "touch panel". The touch sensor 1180 may be disposed on the display screen 102, and the touch sensor 1180 and the display screen 102 form a touch screen, which is also called a "touch screen". The touch sensor 1180 is used to detect a touch operation applied thereto or nearby. The touch sensor 1180 may pass the detected touch operation to an application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display screen 102. In other embodiments, the touch sensor 1180 may be disposed on the surface of the user device body 100, and is located at a position different from the position of the display screen 102.
The bone conduction sensor 1182 may acquire vibration signals. In some embodiments, the bone conduction sensor 1182 may acquire the vibration signal of bone set in motion by the human vocal part. The bone conduction sensor 1182 may also contact the human pulse to receive a blood-pressure pulsation signal. In some embodiments, the bone conduction sensor 1182 may also be disposed in an earphone, integrated into a bone conduction headset. The audio module 112 may parse out a voice signal based on the vibration signal acquired by the bone conduction sensor 1182 to implement a voice function. The application processor may parse out heart rate information based on the blood-pressure pulsation signal acquired by the bone conduction sensor 1182 to implement a heart rate detection function.
The keys 118 include a power-on key (also referred to as an on-off key), a volume key, and the like. The keys 118 may be mechanical keys or touch keys. The user device may receive key inputs, generating key signal inputs relating to user settings and function controls of the user device.
The motor 119 may generate a vibration alert. The motor 119 may be used for an incoming-call vibration alert as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 119 may also produce different vibration feedback effects for touch operations applied to different areas of the display screen 102. Different application scenarios (such as time reminders, receiving messages, alarm clocks, games, etc.) may also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 120 may be an indicator light, and may be used to indicate a charging status, a change in power, or a message, a missed call, a notification, etc.
The SIM card interface 122 is used to connect a SIM card. The SIM card can be attached to or detached from the user equipment by being inserted into or pulled out of the SIM card interface 122. The user equipment can support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 122 may support a Nano SIM card, a Micro SIM card, a standard SIM card, etc. Multiple cards can be inserted into the same SIM card interface 122 at the same time; the types of the cards may be the same or different. The SIM card interface 122 may also be compatible with different types of SIM cards, as well as with external memory cards. The user equipment implements functions such as calling and data communication through interaction between the SIM card and the network. In some embodiments, the user equipment employs an eSIM, i.e., an embedded SIM card. The eSIM card may be embedded in the user equipment main body 100 and cannot be separated from the user equipment main body 100.
Specifically, the input device may be configured to receive an operation of triggering a shooting interface by a user, and the camera 121 obtains image data through shooting and transmits the image data to the processor 103. After the processor 103 receives the image data, further processing may be performed on the image data, including but not limited to: and identifying patterns, faces, characters and the like in the image, and determining the areas of the identified patterns, faces and characters.
The software system of the user device 10 may employ a hierarchical architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In this embodiment, a software structure of the user equipment 10 is exemplarily described by taking an Android system with a layered architecture as an example.
Fig. 3 is a block diagram of a software structure of a user equipment according to an embodiment of the present application. As can be seen from fig. 3:
the layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 3, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 3, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The telephony manager is used to provide the communication functions of the user device 10, such as management of call status (including connected, hung up, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables an application to display notification information in the status bar. It can be used to convey notification-type messages, which can disappear automatically after a short stay without user interaction; for example, the notification manager is used to notify of download completion, message alerts, and so on. The notification manager may also present notifications in the form of a chart or scroll-bar text in the status bar at the top of the system, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone sounds, the electronic device vibrates, or an indicator light flashes.
The system library may include a plurality of functional modules, for example: a surface manager, media libraries, a three-dimensional graphics processing library (e.g., OpenGL ES), and a 2D graphics engine (e.g., SGL). The surface manager is used to manage the display subsystem and provides fusion of 2D and 3D layers for multiple applications. The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, and may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG. The three-dimensional graphics processing library is used to implement three-dimensional graphics drawing, image rendering, composition, layer processing, and the like. The 2D graphics engine is a drawing engine for 2D drawing. The kernel layer is the layer between hardware and software. The kernel layer comprises at least a display driver, a camera driver, an audio driver, and a sensor driver.
Based on the structure of the user equipment 10, the embodiment of the application also provides a certificate scanning method for the mobile terminal. The credential scanning method may be implemented in the user device 10 provided in the above embodiment. Optionally, reference may be made to fig. 4 for a specific implementation process of the certificate scanning method, and fig. 4 is a schematic flowchart of an implementation manner of the certificate scanning method provided in an embodiment of the present application. As can be seen in fig. 4, the document scanning method can be implemented as follows:
S101, identifying a pattern that conforms to a preset image identifier from the captured image.
The preset image identifier refers to preset identification information such as patterns, figures, and characters, which is recorded in the internal memory 105 of the user equipment main body by being entered in advance. After being recorded, the preset image identifier can also be uploaded to a cloud server through a wireless network.
In practice, such patterns are often specific, fixed designs, so the user equipment main body can further determine the type of the card object according to the recognized pattern. For example, if the recognized pattern is the UnionPay logo, the card object is judged to be a bank card; for another example, if the recognized background pattern of the card object is the national emblem against a blue background, the card object is judged to be an identity card.
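The pattern-to-type determination above reduces to a lookup from a recognized identifier to a card type. A minimal sketch, in which the pattern identifiers and type names are hypothetical labels, not values defined by the patent:

```python
# Hypothetical mapping from a recognized fixed pattern to the card-object type.
CARD_TYPE_BY_PATTERN = {
    "unionpay_logo": "bank_card",
    "national_emblem_blue_background": "identity_card",
}

def classify_card(pattern_id: str) -> str:
    """Judge the card-object type from the recognized pattern identifier."""
    return CARD_TYPE_BY_PATTERN.get(pattern_id, "unknown")
```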
In a specific application, the scheme of this embodiment is executed in an app that requires real-name authentication (or scanning/uploading of an identity card). Specifically, the user enters a real-name authentication page, taps to enter the page for scanning and uploading a card object, and places the card object on a plane so that the mobile phone is parallel to the plane. The card object only needs to appear in the view frame of the photographing/uploading page; no alignment is needed, and scanning starts upon tapping to focus.
S102, acquiring the position and the angle of the card object according to the identified pattern.
And S103, displaying a view frame in a shooting interface according to the position and the angle of the card object.
The display position of the viewing frame is matched with the position of the card object, and the angle of the viewing frame is matched with the angle of the card object.
In this embodiment, an image identifier fixed on a card object (for example, the UnionPay logo on a certificate) is locked through image recognition. The image identifier is scanned, the edge boundary points of its position are found, and the inclination angle of the image identifier and the size of its rectangular outer frame are determined. Given that the position and size of some image identifiers (for example, the UnionPay logo) on a card object, and their proportional relationship to the card object, are fixed, the outer contour of the certificate is calculated from the position, size, and proportional relationship of the image identifier, and the certificate is separated from the scanned image.
The generated view frame actively matches the photographed card object, so the user is spared the step of aligning the card with a rectangular frame and can finish scanning, separating, and uploading images of card objects such as bank cards simply by following the normal photographing flow. Compared with the existing scanning mode, this greatly saves time and improves the success rate of scanning/uploading certificates. In practice, the original scanning mode takes about 15-20 seconds and suffers from problems such as failure to align and blurring from hand shake during alignment, so its success rate is low. The present embodiment can be completed in only 2-3 seconds and can greatly increase the success rate of scanning/uploading.
In the present embodiment, step S102: the position and the angle of the card object are obtained according to the recognized pattern, which can be specifically realized as follows:
determining the contour template of the card object according to the type of the card object; and obtaining the position and angle of the card object by comparing the position and angle of the pattern in the shooting interface against the position and angle of the preset image identifier within the card object.
The contour template includes the default shape of the card object and the position and angle of the preset image identifier within the card object. For example, as shown in fig. 5a, when recognition on the mobile phone detects that the bank card carries a pattern identical to the preset image identifier (the UnionPay logo is recognized), the position and orientation of the image identifier are recognized.
Further, after recognizing the pattern corresponding to the preset image identifier, the method further includes:
obtaining the inclination angle of the pattern according to its position and orientation in the shooting interface. Specifically, as shown in fig. 5b, the inclination angle α of the image identifier (pattern), i.e., the angle between the pattern and the horizontal direction of the picture, can be calculated from its position and orientation. After the boundary points of the pattern are determined, a rectangular frame covering the pattern is obtained using the boundary points and the inclination angle of the pattern. For example, the upper, lower, left, and right boundary points of the image identifier (pattern), i.e., the 4 points A, B, C, D, are found and determined, as shown in fig. 5c. Then the rectangular frame M of the image identifier (pattern) can be determined from the 4 boundary points and the inclination angle α, as shown in fig. 5d.
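The geometry of this step can be sketched as follows: given an inclination angle α and the boundary points, rotating the points by -α makes the identifier axis-aligned, and the axis-aligned bounding box then gives the size of the rectangular frame M. This is a hedged sketch under the assumption that boundary points are (x, y) pixel coordinates, not the patent's actual implementation.

```python
import math

def tilt_angle_deg(p_left, p_right):
    # Inclination of the identifier: angle of its left-to-right axis
    # with respect to the horizontal direction of the picture.
    dx, dy = p_right[0] - p_left[0], p_right[1] - p_left[1]
    return math.degrees(math.atan2(dy, dx))

def covering_rect(points, alpha_deg):
    # Rotate the boundary points by -alpha so the identifier is axis-aligned,
    # then take the axis-aligned bounding box: its size is that of frame M.
    a = math.radians(-alpha_deg)
    rot = [(x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a)) for x, y in points]
    xs, ys = [p[0] for p in rot], [p[1] for p in rot]
    return max(xs) - min(xs), max(ys) - min(ys)   # (length l, width d)
```

Libraries such as OpenCV provide an equivalent rotated-bounding-rectangle primitive, but the two helper functions above keep the sketch self-contained.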
Specifically, the method for displaying the finder frame in the shooting interface according to the position and the angle of the card object includes:
determining the contour of the card object according to the contour template and the rectangular frame covering the pattern; generating a view frame according to the current contour of the card object in the shooting interface; and covering that contour in the shooting interface with the generated view frame. For example, as shown in fig. 5e, let M have length l and width d, and let the outer rectangular contour of the certificate be Q, with length L and width D. Then, as shown in fig. 6a, the position and size proportional relationships of the preset image identifier (UnionPay logo) on the bank card (certificate) are L1/L2 = k1, L3/L2 = k2, D1/D2 = k3, and D3/D2 = k4. Taking the lower side and the left side of the rectangular frame as the X axis and Y axis, a plane coordinate system is established; the center point of the rectangular frame M of the image identifier (UnionPay logo) is O1(x1, y1), and the center point of the rectangular frame of the bank card (certificate) is O2(x2, y2).
Thus, the O1 coordinates x1 = L2/2 and y1 = D2/2 can be obtained; the rectangular frame Q of the outer contour of the bank card (certificate) has length L = k1×L2 and width D = k3×D2.
Further, the coordinates of the center point O2 of the rectangular frame of the bank card (certificate) can be calculated as:
x2 = x1 - L1/2 - L2/2 - L3/2 = L2/2 - k1×L2/2 - L2/2 - k2×L2/2 = -(k1+k2)×L2/2;
y2 = y1 - D1/2 - D2/2 - D3/2 = D2/2 - k3×D2/2 - D2/2 - k4×D2/2 = -(k3+k4)×D2/2;
the proportionality coefficients k1, k2, k3 and k4 are known and can be obtained according to the public design standards of bank cards, identity cards and the like.
The length l and width d of the rectangular frame M are substituted into the above formulas (taking L2 = l, D2 = d) to calculate the length L = k1×l and width D = k3×d of the rectangular frame Q of the certificate outer contour, and further the specific values of the center point O2(x2, y2) of the rectangular frame Q: x2 = -(k1+k2)×l/2, y2 = -(k3+k4)×d/2.
The length, width, and position of the rectangular frame Q of the outer contour of the bank card (certificate) are thereby determined, as shown in fig. 6b. The rectangular frame Q is separated from the picture as the view frame; taking the center point O2 of Q as the center, it is rotated counterclockwise by α to the horizontal position, as shown in fig. 6c, and the image is saved and uploaded.
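The derivation above can be sketched numerically. This is a hedged sketch of the stated proportional relations, under the convention that the coordinate system is anchored at the lower-left corner of frame M (so L2 = l, D2 = d); function names are illustrative, not from the patent.

```python
def outer_contour(l, d, k1, k2, k3, k4):
    # Certificate outer-contour frame Q derived from identifier frame M
    # (length l, width d), following the stated proportional relations.
    L, D = k1 * l, k3 * d              # size of frame Q
    x2 = -(k1 + k2) * l / 2            # center point O2, in the coordinate
    y2 = -(k3 + k4) * d / 2            # system anchored at frame M
    return L, D, (x2, y2)

# e.g. an identifier frame of 4 x 2 with ratios k1=2, k2=1, k3=3, k4=1
L, D, center = outer_contour(4.0, 2.0, 2.0, 1.0, 3.0, 1.0)
```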
Specifically, for example, the scanning process of a real-name certificate is as follows, as shown in fig. 7:
this function is turned on in apps that have a need for real-name authentication (or scanning/uploading of identity cards).
1. Entering a real-name authentication page, clicking to enter a scanned and uploaded certificate page, and placing the certificate on a plane to enable the mobile phone to be parallel to the plane;
2. the certificate is made to appear in a view-finding frame of a photographing/uploading page, alignment is not needed, and scanning is started by clicking focusing;
3. when the character recognition of the mobile phone detects that the bank card (certificate) has a pattern which is the same as the preset image identifier, the position and the orientation of the image identifier are recognized, as shown in fig. 5 a;
4. calculating the inclination angle alpha (i.e. the included angle between the image identifier and the horizontal position of the picture) of the image identifier according to the position orientation of the image identifier as shown in fig. 5 b;
5. finding and determining 4 boundary points of the image identification, i.e. A, B, C, D, such as fig. 5 c;
6. determining the rectangular frame M of the image identifier according to the 4 boundary points and the inclination angle α, as shown in FIG. 5d;
7. according to the known position and size proportional relationships of the image identifier on the bank card (certificate): L1/L2 = k1, L3/L2 = k2, D1/D2 = k3, D3/D2 = k4; as shown in fig. 6a, taking the lower side and the left side of the rectangular frame as the X axis and Y axis, a plane coordinate system is established, the center point of the rectangular frame M of the image identifier (UnionPay logo) is set as O1(x1, y1), and the center point of the rectangular frame of the bank card (certificate) as O2(x2, y2), as shown in fig. 5e;
the O1 coordinates x1 = L2/2 and y1 = D2/2 can be derived;
the rectangular frame Q of the outer contour of the bank card (certificate) has length L = k1×L2 and width D = k3×D2;
the coordinates of the center point O2 of the rectangular frame of the bank card (certificate) can be calculated as:
x2 = x1 - L1/2 - L2/2 - L3/2 = L2/2 - k1×L2/2 - L2/2 - k2×L2/2 = -(k1+k2)×L2/2;
y2 = y1 - D1/2 - D2/2 - D3/2 = D2/2 - k3×D2/2 - D2/2 - k4×D2/2 = -(k3+k4)×D2/2.
8. the length l and width d of the rectangular frame M are substituted into the formulas (taking L2 = l, D2 = d) to calculate the length L = k1×l and width D = k3×d of the rectangular frame Q of the certificate outer contour, and further the specific values of the center point O2(x2, y2) of the rectangular frame Q; the length, width, and position of the rectangular frame Q of the outer contour of the bank card (certificate) are determined according to this calculation;
9. the rectangular frame Q of the outer contour of the bank card (certificate) is separated from the photo; taking the center point O2 of Q as the center, it is rotated counterclockwise by α to the horizontal position, and the image is saved and uploaded, as shown in fig. 6c.
In summary, the image identifier fixed on the bank card is recognized using the preset fixed image identifier, and the current inclination angle α of the identifier is calculated. Then, through image recognition and the known inclination angle α, the four farthest edge contour points A, B, C, D around the pattern identifier are found, locking the region where the image identifier is located, i.e., the rectangular frame M. From the known position and size proportional relationships of the image identifier (such as the UnionPay logo or the national emblem) on the bank card (certificate), the size and position of the rectangular frame Q of the outer contour can be obtained, and the separated photo is then saved and uploaded.
In specific applications, there may be multiple embodiments. For example, the method can also be used to scan the national-emblem side of a certificate: the emblem pattern is identified, its rectangular contour and inclination angle are locked through image recognition (by its red color), and the position and size of the current certificate outer contour are then calculated from the fixed position/proportion relationship between the emblem contour and the certificate outer contour. The certificate image is then separated from the background, rotated to the horizontal position, and saved/uploaded.
Optionally, after the shooting interface is displayed, the user equipment main body may first display a default view frame at an initial position and initial angle in the shooting interface. The default view frame can take the conventional form, such as being centered in the shooting interface and parallel to its edges, i.e., an ordinary shooting frame. The angle and position of the default view frame are then refreshed according to the current contour of the card object in the shooting interface, to obtain a view frame covering that contour.
Refreshing the angle and position of the default view frame includes: obtaining a position difference value and an angle difference value, and playing a transition animation according to the position difference value and the angle difference value during the refresh.
The position difference value is the difference between the initial position and the position of the view frame covering the current contour of the card object in the shooting interface, and the angle difference value is the difference between the initial angle and the angle of that view frame. The transition animation starts from the angle and position of the default view frame and ends at the position and angle of the view frame covering the current contour of the card object in the shooting interface.
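The transition from the default view frame to the matched view frame can be sketched as linear interpolation of position and angle over a fixed number of animation steps; the step count and function names are illustrative assumptions, not details given by the patent.

```python
def lerp(a: float, b: float, t: float) -> float:
    # Linear interpolation between a and b for t in [0, 1].
    return a + (b - a) * t

def transition_frames(start_pos, end_pos, start_angle, end_angle, steps=5):
    # Interpolate the default view frame toward the frame covering the card:
    # each step yields an intermediate (x, y, angle) of the transition animation.
    frames = []
    for i in range(1, steps + 1):
        t = i / steps
        frames.append((lerp(start_pos[0], end_pos[0], t),
                       lerp(start_pos[1], end_pos[1], t),
                       lerp(start_angle, end_angle, t)))
    return frames
```

In a real implementation an easing curve would usually replace the linear schedule, but the endpoint behavior is the same: the last frame lands exactly on the matched view frame.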
An embodiment of the present invention further provides a certificate scanning apparatus for a mobile terminal, which may be implemented in a software environment as shown in fig. 3 through program codes, and as shown in fig. 8, the certificate scanning apparatus includes:
the image preprocessing module is used for identifying a pattern which accords with a preset image identifier from the shot image;
the image processing module is used for acquiring the position and the angle of the card object according to the identified pattern;
and the viewing frame generating module is used for displaying a viewing frame in a shooting interface according to the position and the angle of the card object, wherein the display position of the viewing frame is matched with the position of the card object, and the angle of the viewing frame is matched with the angle of the card object.
The image processing module is specifically configured to determine a contour template of the card object according to the type of the card object, where the contour template includes a default shape of the card object and the position and angle of the preset image identifier in the card object; and to obtain the position and angle of the card object by comparing the position and angle of the pattern in the shooting interface with the position and angle of the preset image identifier in the card object.
The image processing module is further used for obtaining the inclination angle of the pattern according to the position and the orientation of the pattern in the shooting interface after the pattern conforming to the preset image identification is identified; and determining the boundary points of the pattern, and acquiring a rectangular frame covering the pattern by using the boundary points of the pattern and the inclination angle of the pattern.
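The step of obtaining a rectangular frame covering the pattern from its boundary points and inclination angle can be sketched as follows. This is a plain-geometry illustration of the idea (rotate the points into the pattern's own axes, take the axis-aligned bounding box, map its centre back), not the patent's exact procedure:

```python
import math

def covering_rect(points, tilt_deg):
    """Compute a rectangle at the given tilt angle that covers all
    boundary points of the recognised pattern."""
    a = math.radians(tilt_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    # Rotate by -tilt so the pattern edges become axis-aligned.
    rot = [(x * cos_a + y * sin_a, -x * sin_a + y * cos_a) for x, y in points]
    xs = [p[0] for p in rot]
    ys = [p[1] for p in rot]
    length, width = max(xs) - min(xs), max(ys) - min(ys)
    # Centre in the rotated frame, mapped back to image coordinates.
    ux, uy = (max(xs) + min(xs)) / 2, (max(ys) + min(ys)) / 2
    cx = ux * cos_a - uy * sin_a
    cy = ux * sin_a + uy * cos_a
    return (cx, cy), length, width, tilt_deg
```

In an OpenCV-based implementation, `cv2.minAreaRect` performs an equivalent computation (finding the minimum-area rotated rectangle) without requiring the tilt angle as an input.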
The viewing frame generation module is specifically configured to determine the outline of the card object according to the outline template and the rectangular frame covering the pattern;
generate a viewing frame according to the current outline of the card object in the shooting interface;
and cover the current outline of the card object in the shooting interface with the generated viewing frame.
The viewing frame generation module is further configured to display a default viewing frame at an initial angle and initial position in the shooting interface after the shooting interface is displayed, and to refresh the angle and position of the default viewing frame according to the current contour of the card object in the shooting interface, obtaining a viewing frame that covers that contour.
The viewfinder generation module is specifically configured to obtain a position difference value and an angle difference value, where the position difference value is the difference between the initial position and the position of the viewing frame covering the current contour of the card object in the shooting interface, and the angle difference value is the difference between the initial angle and the angle of that viewing frame; and to play a transition animation according to the position difference value and the angle difference value.
In a specific implementation, an embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a program, the program including instructions that, when executed, may perform some or all of the steps of the certificate scanning method provided in the present application. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
Those skilled in the art will clearly understand that the techniques in the embodiments of the present application may be implemented by software plus a necessary general hardware platform. Based on this understanding, the technical solutions in the embodiments of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium such as a ROM/RAM, magnetic disk, or optical disk, including several instructions for causing a computer device (which may be a personal computer, a server, or a wireless communication device) to execute the method described in the embodiments, or in some parts of the embodiments, of the present application.
The parts of this specification are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses mainly on its differences from the others. In addition, unless stated to the contrary, the ordinal numbers "first" and "second" in the embodiments of the present application are used to distinguish a plurality of objects, not to limit their order.
While the preferred embodiments of the present application have been described, additional variations and modifications to those embodiments may occur to those skilled in the art once they learn of the basic inventive concept. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all alterations and modifications falling within the scope of the application.
The above-described embodiments of the present application do not limit the scope of the present application.

Claims (2)

1. A certificate scanning method for a mobile terminal, comprising:
identifying a pattern which accords with a preset image identifier from the shot image;
acquiring the position and the angle of the certificate according to the recognized pattern;
displaying a view-finding frame in a shooting interface according to the position and the angle of the certificate, wherein the display position of the view-finding frame is matched with the position of the certificate, and the angle of the view-finding frame is matched with the angle of the certificate;
further comprising: displaying a default viewing frame at an initial angle at an initial position in the shooting interface after the shooting interface is displayed; refreshing the angle and the position of the default viewing frame according to the current profile of the certificate in the shooting interface to obtain a viewing frame covering the current profile of the certificate in the shooting interface;
the refreshing the angle and the position of the default viewfinder comprises: obtaining a position difference value and an angle difference value, wherein the position difference value is the difference between the initial position and the position of a viewing frame covering the current profile of the certificate in the shooting interface, and the angle difference value is the difference between the initial angle and the angle of the viewing frame covering the current profile of the certificate in the shooting interface; and playing a transition animation according to the position difference value and the angle difference value;
determining a rectangular frame M of the image identifier in the certificate through 4 boundary points and a character inclination angle α, wherein the length of M is L2 and the width of M is D2; the rectangular frame of the outer contour of the certificate is Q, the length of Q is L1 and the width of Q is D1; according to the position and size proportional relations L1/L2 = k1, L3/L2 = k2, D1/D2 = k3 and D3/D2 = k4, taking the lower side and the left side of the image identifier as coordinate axes X and Y, establishing a plane coordinate system, setting the central point of the rectangular frame M as O1(x1, y1) and the central point of the rectangular frame of the outer contour of the certificate as O2(x2, y2), where the coordinates of O1 are x1 = L2/2 and y1 = D2/2; the length of the certificate outer contour rectangular frame is L = k1 × L2 and its width is D = k3 × D2, wherein L1 to L3 represent the segmentation of the length of the certificate, L1 = (L2 + L3 − x2) × 2, and D1 to D3 represent the segmentation parameters of the width of the certificate, with D1 = D2 + D3;
after the specific numerical value of the central point O2 of the outer contour rectangular frame Q is calculated, separating the outer contour rectangular frame Q of the certificate from the picture as a viewing frame, and rotating it counterclockwise by α about the central point O2 of Q to the horizontal position;
the acquiring the position and angle of the certificate according to the recognized pattern includes:
determining an outline template of the certificate according to the type of the certificate, wherein the outline template comprises a default shape of the certificate and a position and an angle of the preset image identifier in the certificate;
obtaining the position and the angle of the certificate by comparing the position and the angle of the pattern in the shooting interface with the position and the angle of the preset image identifier in the certificate;
after identifying the pattern conforming to the preset image identification, the method further comprises the following steps:
obtaining the inclination angle of the pattern according to the position and the orientation of the pattern in the shooting interface;
determining boundary points of the pattern, and acquiring a rectangular frame covering the pattern by using the boundary points of the pattern and the inclination angle of the pattern;
the displaying a viewing frame in a shooting interface according to the position and the angle of the certificate includes:
determining the outline of the certificate according to the outline template and the rectangular frame covering the pattern;
generating a viewing frame according to the current profile of the certificate in the shooting interface;
and covering the current outline of the certificate in the shooting interface with the generated viewing frame.
2. A credential scanning device for a mobile terminal, comprising:
the image preprocessing module is used for identifying a pattern which accords with a preset image identifier from the shot image;
the image processing module is used for acquiring the position and the angle of the certificate according to the identified pattern;
the viewing frame generating module is used for displaying a viewing frame in a shooting interface according to the position and the angle of the certificate, wherein the display position of the viewing frame is matched with the position of the certificate, and the angle of the viewing frame is matched with the angle of the certificate;
the viewing frame generation module is further configured to refresh an angle and a position of a default viewing frame according to the current profile of the certificate in the shooting interface to obtain a viewing frame covering the current profile of the certificate in the shooting interface; and to display, after the shooting interface is displayed, a default viewing frame at an initial angle at an initial position in the shooting interface;
the viewfinder generation module is specifically configured to obtain a position difference value and an angle difference value, where the position difference value is the difference between the initial position and the position of a viewing frame covering the current profile of the certificate in the shooting interface, and the angle difference value is the difference between the initial angle and the angle of the viewing frame covering the current profile of the certificate in the shooting interface; and to play a transition animation according to the position difference value and the angle difference value;
determining a rectangular frame M of the image identifier in the certificate through 4 boundary points and a character inclination angle α, wherein the length of M is L2 and the width of M is D2; the rectangular frame of the outer contour of the certificate is Q, the length of Q is L1 and the width of Q is D1; according to the position and size proportional relations L1/L2 = k1, L3/L2 = k2, D1/D2 = k3 and D3/D2 = k4, taking the lower side and the left side of the image identifier as coordinate axes X and Y, establishing a plane coordinate system, setting the central point of the rectangular frame M as O1(x1, y1) and the central point of the rectangular frame of the outer contour of the certificate as O2(x2, y2), where the coordinates of O1 are x1 = L2/2 and y1 = D2/2; the length of the certificate outer contour rectangular frame is L = k1 × L2 and its width is D = k3 × D2, wherein L1 to L3 represent the segmentation of the length of the certificate, L1 = (L2 + L3 − x2) × 2, and D1 to D3 represent the segmentation parameters of the width of the certificate, with D1 = D2 + D3;
after the specific numerical value of the central point O2 of the outer contour rectangular frame Q is calculated, separating the outer contour rectangular frame Q of the certificate from the picture as a viewing frame, and rotating it counterclockwise by α about the central point O2 of Q to the horizontal position;
the image processing module is specifically configured to determine an outline template of the certificate according to the type of the certificate, where the outline template includes a default shape of the certificate and the position and angle of the preset image identifier in the certificate; and to obtain the position and angle of the certificate by comparing the position and angle of the pattern in the shooting interface with the position and angle of the preset image identifier in the certificate;
the image processing module is further used for obtaining the inclination angle of the pattern according to the position and the orientation of the pattern in the shooting interface after the pattern conforming to the preset image identification is identified; determining boundary points of the pattern, and acquiring a rectangular frame covering the pattern by using the boundary points of the pattern and the inclination angle of the pattern;
the viewing frame generation module is specifically configured to determine the outline of the certificate according to the outline template and the rectangular frame covering the pattern;
generate a viewing frame according to the current profile of the certificate in the shooting interface;
and cover the current outline of the certificate in the shooting interface with the generated viewing frame.
CN201910461705.6A 2019-05-30 2019-05-30 Certificate scanning method and device for mobile terminal Active CN110138999B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910461705.6A CN110138999B (en) 2019-05-30 2019-05-30 Certificate scanning method and device for mobile terminal


Publications (2)

Publication Number Publication Date
CN110138999A CN110138999A (en) 2019-08-16
CN110138999B true CN110138999B (en) 2022-01-07

Family

ID=67582910

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910461705.6A Active CN110138999B (en) 2019-05-30 2019-05-30 Certificate scanning method and device for mobile terminal

Country Status (1)

Country Link
CN (1) CN110138999B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112405518B (en) * 2019-08-23 2022-08-23 深圳拓邦股份有限公司 Robot control method, robot and automatic backtracking system of robot
CN110647881B (en) * 2019-09-19 2023-09-05 腾讯科技(深圳)有限公司 Method, device, equipment and storage medium for determining card type corresponding to image
CN113411477B (en) * 2021-06-10 2023-03-10 支付宝(杭州)信息技术有限公司 Image acquisition method, device and equipment
CN117994993A (en) * 2024-04-02 2024-05-07 中国电建集团昆明勘测设计研究院有限公司 Road intersection traffic light control method, system, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101625760A (en) * 2009-07-28 2010-01-13 谭洪舟 Method for correcting certificate image inclination
CN103473541A (en) * 2013-08-21 2013-12-25 方正国际软件有限公司 Certificate perspective correction method and system
JP2015191531A (en) * 2014-03-28 2015-11-02 株式会社トッパンTdkレーベル Determination method of spatial position of two-dimensional code, and device therefor
CN105825243A (en) * 2015-01-07 2016-08-03 阿里巴巴集团控股有限公司 Method and device for certificate image detection
CN108764344A (en) * 2018-05-29 2018-11-06 北京物灵智能科技有限公司 A kind of method, apparatus and storage device based on limb recognition card
CN109034165A (en) * 2018-07-06 2018-12-18 北京中安未来科技有限公司 A kind of cutting method of certificate image, device, system and storage medium


Also Published As

Publication number Publication date
CN110138999A (en) 2019-08-16

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 210042 No.1, Suning Avenue, Xuanwu District, Nanjing City, Jiangsu Province
Patentee after: Shanghai Star Map Financial Services Group Co.,Ltd.
Address before: 210042 No.1, Suning Avenue, Xuanwu District, Nanjing City, Jiangsu Province
Patentee before: Suning Financial Services (Shanghai) Co.,Ltd.