CN113641431A - Method and terminal equipment for enhancing display of two-dimensional code

Info

Publication number
CN113641431A
Authority
CN
China
Prior art keywords
two-dimensional code
screen image
image
detection
user operation
Prior art date
Legal status
Pending
Application number
CN202110889849.9A
Other languages
Chinese (zh)
Inventor
步晨
古全永
张恩迪
Current Assignee
Hisense Mobile Communications Technology Co Ltd
Original Assignee
Hisense Mobile Communications Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Mobile Communications Technology Co Ltd
Priority to CN202110889849.9A
Publication of CN113641431A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04847: Interaction techniques to control parameter settings, e.g. interaction with sliders or dials

Abstract

The application relates to the technical field of terminals, and discloses a method and a terminal device for enhanced display of a two-dimensional code, which solve the prior-art problem that a user often has to manually adjust the display parameters of the terminal device in order to enhance the display of a two-dimensional code. Two-dimensional code detection on the screen interface can be triggered by a user operation event; the detection automatically identifies whether a two-dimensional code is present in the screen interface, and when one is present, the display parameters of the terminal device can be adjusted to enhance the display of the two-dimensional code. Because this scheme can be implemented in the system of the terminal device, no individual application needs to develop its own scheme for automatically enhancing two-dimensional codes.

Description

Method and terminal equipment for enhancing display of two-dimensional code
Technical Field
The application relates to the technical field of terminals, in particular to a method for enhancing display of a two-dimensional code and a terminal device.
Background
Display parameters of an ink-screen (E-ink) display, such as contrast, saturation and brightness, need to be adjusted according to the actual situation. Practical tests show that, when the light is dim or the scanning device is of mediocre quality, a two-dimensional code displayed on a page is not easily recognized by an external device such as a subway two-dimensional code scanner. A mechanism for enhanced display of pages containing a two-dimensional code is therefore needed.
In the prior art, some applications can enhance the display only of a fixed, built-in interface that shows a two-dimensional code, while applications without an enhanced display function cannot automatically enhance a two-dimensional code shown in their pages. For applications that cannot enhance the two-dimensional code automatically, the user often has to adjust the display parameters of the terminal device manually so that the two-dimensional code recognition device can read the code, and this manual adjustment is cumbersome.
Disclosure of Invention
The application aims to provide a method and a terminal device for enhanced display of a two-dimensional code, which solve the prior-art problem that a user often needs to adjust the display parameters of the terminal device manually to enhance the display of a two-dimensional code.
In a first aspect, the present application provides a method for enhanced display of a two-dimensional code, the method comprising:
receiving a first user operation event;
triggering the operation of two-dimension code detection on the screen image based on the first user operation event;
and if the two-dimensional code is detected from the screen image, performing enhanced display processing on the screen image.
In some embodiments, the triggering the operation of performing two-dimensional code detection on the screen image based on the first user operation event includes:
setting a trigger waiting duration based on the first user operation event;
and when the trigger waiting duration has elapsed, triggering the operation of performing two-dimensional code detection on the screen image.
In some embodiments, before triggering the operation of performing two-dimensional code detection on the screen image, the method further includes:
determining that no second user operation event has been received before the trigger waiting duration elapses.
In some embodiments, before triggering the operation of performing two-dimensional code detection on the screen image, the method further includes:
and if a second user operation event is received before the trigger waiting duration elapses, abandoning the operation of performing two-dimensional code detection on the screen image that was triggered by the first user operation event.
In some embodiments, the performing two-dimensional code detection on the screen image includes:
executing the following operations for up to N consecutive periods, until the two-dimensional code is detected or the N periods are exhausted, where N is a positive integer:
acquiring the most recently synthesized page image as the screen image;
detecting whether the screen image contains the two-dimensional code;
if the two-dimensional code is detected in any of the N consecutive periods, determining that the two-dimensional code is detected from the screen image;
and if the two-dimensional code is not detected in any of the N periods, determining that the two-dimensional code is not detected from the screen image.
In some embodiments, before triggering the two-dimensional code detection on the screen image, the method further includes: storing the currently displayed image into a cache as the screen image;
the performing two-dimensional code detection on the screen image specifically includes:
detecting whether the screen image in the cache contains the two-dimensional code.
In some embodiments, before triggering the two-dimensional code detection on the screen image, the method further includes: storing the currently displayed image into a cache as the screen image;
the performing two-dimensional code detection on the screen image specifically includes:
if no image to be displayed is currently being synthesized or received, detecting whether the screen image in the cache contains the two-dimensional code;
if an image to be displayed is currently being synthesized or received, executing the following operations for up to N consecutive periods, until the two-dimensional code is detected or the N periods are exhausted, where N is a positive integer:
acquiring the most recently synthesized page image as the screen image;
detecting whether the screen image contains the two-dimensional code;
if the two-dimensional code is detected in any of the N consecutive periods, determining that the two-dimensional code is detected from the screen image;
and if the two-dimensional code is not detected in any of the N periods, determining that the two-dimensional code is not detected from the screen image.
In some embodiments, the performing the enhanced display processing on the screen image includes:
increasing the contrast of the screen image, and/or increasing the brightness of the terminal device.
In some embodiments, the performing the enhanced display processing on the screen image includes:
displaying a user operation interface on the interface of the screen image;
and in response to a user operation on the user operation interface confirming enhanced display, increasing the contrast of the terminal device and/or increasing the brightness of the terminal device.
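For orientation, the overall flow of the first aspect can be summarized in the following minimal sketch. The class name, the default waiting duration and the helper methods are illustrative assumptions only; the platform-specific capture, recognition and display code is described later in the detailed description.

import android.graphics.Bitmap;
import android.os.Handler;
import android.os.Looper;

// Minimal sketch of the claimed flow (all names and values are assumptions):
// receive a user operation event, trigger two-dimensional code detection on the
// screen image after a trigger waiting duration, and enhance the display on a hit.
public class QrEnhanceController {
    private final Handler handler = new Handler(Looper.getMainLooper());
    private final Runnable detectTask = this::detectAndEnhance;
    private long triggerWaitMs = 500;   // trigger waiting duration (assumed default)

    // Step 1: a first user operation event is received.
    public void onUserOperationEvent() {
        // Step 2: (re)schedule detection; a further event cancels the pending trigger.
        handler.removeCallbacks(detectTask);
        handler.postDelayed(detectTask, triggerWaitMs);
    }

    private void detectAndEnhance() {
        Bitmap screenImage = captureScreenImage();     // latest displayed/composited frame
        if (screenImage != null && containsQrCode(screenImage)) {
            // Step 3: enhanced display processing (raise contrast and/or brightness).
            applyEnhancedDisplay();
        }
    }

    // Placeholders for platform- and algorithm-specific functionality (assumptions).
    private Bitmap captureScreenImage() { return null; }
    private boolean containsQrCode(Bitmap image) { return false; }
    private void applyEnhancedDisplay() { }
}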
In a second aspect, the present application provides a terminal device, comprising:
a display, a processor, and a memory;
the display is used for displaying the two-dimensional code;
the memory is configured to store instructions executable by the processor;
the processor is configured to execute the instructions to implement a method of enhanced display of a two-dimensional code as described in any of the above first aspects.
In a third aspect, the present application provides a computer-readable storage medium storing instructions which, when executed by a terminal device, enable the terminal device to perform the method for enhanced display of a two-dimensional code described in any one of the first aspect.
In the embodiments of the application, two-dimensional code detection on the screen interface automatically identifies whether a two-dimensional code is present, and the display of a detected two-dimensional code image is automatically enhanced, which improves the probability that the two-dimensional code image is recognized by an external device when the light is dim or the scanning device is of mediocre quality.
In addition, a trigger waiting duration is set in the process of automatically detecting a two-dimensional code in the screen image, so that when the user operates frequently and quickly, two-dimensional code detection is not triggered repeatedly, which effectively reduces the system load.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the present application, and a person skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
fig. 2 is a block diagram of a software structure of a terminal device according to an embodiment of the present disclosure;
fig. 3 is a schematic view of an application scenario provided in an embodiment of the present application;
fig. 4 is a schematic flowchart of a method for enhancing display of a two-dimensional code according to an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of a first implementation manner of triggering two-dimensional code detection on a screen image according to an embodiment of the present application;
fig. 6 is a schematic flowchart of a second implementation manner of triggering two-dimensional code detection on a screen image according to an embodiment of the present application;
fig. 7 is a schematic flowchart of synchronous frame detection provided in an embodiment of the present application;
fig. 8 is a schematic flowchart of asynchronous frame detection according to an embodiment of the present application;
fig. 9 is a schematic flowchart of hybrid frame detection and the enhanced display processing performed on a screen image according to an embodiment of the present application;
fig. 10 is a user operation interface diagram provided in an embodiment of the present application;
fig. 11 is a block diagram of software provided by an embodiment of the present application;
fig. 12 is a schematic diagram of an action receiver according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. The embodiments described are some, but not all embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Also, in the description of the embodiments of the present application, "/" means "or"; for example, A/B may mean A or B. "And/or" in the text merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of the present application, "a plurality of" means two or more.
The terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the application, "a plurality of" means two or more unless otherwise indicated.
In the related art, some applications can enhance the display only of a fixed, built-in interface that shows a two-dimensional code, while applications without an enhanced display function cannot automatically enhance a two-dimensional code shown in their pages. For such applications, the user often has to adjust the display parameters of the terminal device manually so that the two-dimensional code recognition device can read the code, and this manual adjustment is cumbersome. The application therefore provides a method for enhanced display of a two-dimensional code.
The inventive concept of the present application can be summarized as follows: in the embodiments of the application, two-dimensional code detection on the screen interface can be triggered by a user operation event; the detection automatically identifies whether a two-dimensional code is present in the screen interface, and when one is present, the display parameters of the terminal device can be adjusted to enhance the display of the two-dimensional code. Because this scheme can be implemented in the system of the terminal device, no individual application needs to develop its own scheme for automatically enhancing two-dimensional codes.
After the inventive concept of the present application is introduced, the terminal device provided in the present application will be described below. Fig. 1 shows a schematic structural diagram of a terminal device 100. It should be understood that the terminal device 100 shown in fig. 1 is only an example, and the terminal device 100 may have more or less components than those shown in fig. 1, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
A block diagram of a hardware configuration of a terminal device 100 according to an exemplary embodiment is exemplarily shown in fig. 1. As shown in fig. 1, the terminal device 100 includes: a Radio Frequency (RF) circuit 110, a memory 120, a display unit 130, a camera 140, a sensor 150, an audio circuit 160, a Wireless Fidelity (Wi-Fi) module 170, a processor 180, a bluetooth module 181, and a power supply 190.
The RF circuit 110 may be used for receiving and transmitting signals during information transmission and reception or during a call, and may receive downlink data of a base station and then send the downlink data to the processor 180 for processing; the uplink data may be transmitted to the base station. Typically, the RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
The memory 120 may be used to store software programs and data. The processor 180 performs various functions of the terminal device 100 and data processing by executing software programs or data stored in the memory 120. The memory 120 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. The memory 120 stores an operating system that enables the terminal device 100 to operate. The memory 120 may store an operating system and various application programs, and may also store program code for performing the methods described in the embodiments of the present application.
The display unit 130 may be used to receive input numeric or character information and generate signal input related to user settings and function control of the terminal device 100, and particularly, the display unit 130 may include a touch screen 131 disposed on the front surface of the terminal device 100 and may collect touch operations of a user thereon or nearby, such as clicking a button, dragging a scroll box, and the like.
The display unit 130 may also be used to display a Graphical User Interface (GUI) of information input by or provided to the user and various menus of the terminal apparatus 100. Specifically, the display unit 130 may include a display screen 132 disposed on the front surface of the terminal device 100. The display screen 132 may be configured in the form of a liquid crystal display, a light emitting diode, or the like. The display unit 130 may be used to display an interface containing a two-dimensional code in the present application, such as an interface of a browser application.
The touch screen 131 may cover the display screen 132, or the touch screen 131 and the display screen 132 may be integrated to implement the input and output functions of the terminal device 100, and after the integration, the touch screen may be referred to as a touch display screen for short. In the present application, the display unit 130 may display the application programs and the corresponding operation steps.
The camera 140 may be used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing elements convert the light signals into electrical signals which are then passed to the processor 180 for conversion into digital image signals.
The terminal device 100 may further comprise at least one sensor 150, such as an acceleration sensor 151, a distance sensor 152, a fingerprint sensor 153, a temperature sensor 154. The terminal device 100 may also be configured with other sensors such as a gyroscope, barometer, hygrometer, thermometer, infrared sensor, light sensor, motion sensor, and the like.
The audio circuitry 160, speaker 161, microphone 162 may provide an audio interface between the user and the terminal device 100. The audio circuit 160 may transmit the electrical signal converted from the received audio data to the speaker 161, and convert the electrical signal into a sound signal for output by the speaker 161. The terminal device 100 may also be provided with a volume button for adjusting the volume of the sound signal. On the other hand, the microphone 162 converts the collected sound signal into an electrical signal, converts the electrical signal into audio data after being received by the audio circuit 160, and outputs the audio data to the RF circuit 110 to be transmitted to, for example, another terminal device, or outputs the audio data to the memory 120 for further processing. In this application, the microphone 162 may capture the voice of the user.
Wi-Fi belongs to a short-distance wireless transmission technology, and the terminal device 100 can help a user to send and receive e-mails, browse webpages, access streaming media and the like through the Wi-Fi module 170, and provides wireless broadband internet access for the user.
The processor 180 is a control center of the terminal device 100, connects various parts of the entire terminal device using various interfaces and lines, and performs various functions of the terminal device 100 and processes data by running or executing software programs stored in the memory 120 and calling data stored in the memory 120. In some embodiments, processor 180 may include one or more processing units; the processor 180 may also integrate an application processor, which mainly handles operating systems, user interfaces, applications, etc., and a baseband processor, which mainly handles wireless communications. It will be appreciated that the baseband processor described above may not be integrated into the processor 180. In the present application, the processor 180 may run an operating system, an application program, a user interface display, a touch response, and the method for enhancing the display of the two-dimensional code according to the embodiment of the present application. Further, the processor 180 is coupled with the display unit 130.
And the bluetooth module 181 is configured to perform information interaction with other bluetooth devices having a bluetooth module through a bluetooth protocol. For example, the terminal device 100 may establish a bluetooth connection with a wearable electronic device (e.g., a smart watch) having a bluetooth module via the bluetooth module 181, so as to perform data interaction.
The terminal device 100 also includes a power supply 190 (such as a battery) for powering the various components. The power supply may be logically connected to the processor 180 through a power management system to manage charging, discharging, power consumption, etc. through the power management system. The terminal device 100 may further be configured with a power button for powering on and off the terminal device, and locking the screen.
Fig. 2 is a block diagram of a software configuration of the terminal device 100 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system may be divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer, from top to bottom, respectively.
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, dialed and answered calls, browsing history and bookmarks, phone books, short messages, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying a picture.
The phone manager is used to provide the communication function of the terminal device 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources, such as localized strings, icons, pictures, layout files, video files, etc., to the application.
The notification manager allows the application to display notification information (e.g., message digest of short message, message content) in the status bar, can be used to convey notification-type messages, and can automatically disappear after a short dwell without user interaction. Such as a notification manager used to inform download completion, message alerts, etc. The notification manager may also be a notification that appears in the form of a chart or scroll bar text at the top status bar of the system, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is given, the terminal device vibrates, an indicator light flickers, and the like.
The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part comprises the functions that the Java language needs to call, and the other part is the Android core library.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine performs functions such as object life-cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports a variety of commonly used audio, video format playback and recording, and still image files, among others. The media library may support a variety of audio-video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver and a sensor driver.
The workflow of the terminal device 100 software and hardware is exemplified below in connection with user input events.
When the touch screen 131 receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into an original input event (including touch coordinates, a time stamp of the touch operation, and other information). The raw input events are stored at the kernel layer. The application framework layer acquires an original input event from the kernel layer, identifies the input event, and then triggers two-dimensional code detection on the screen image so as to determine whether to adjust the display parameters of the terminal equipment.
The terminal device 100 in the embodiments of the present application may be an electronic device including, but not limited to, a mobile terminal, a desktop computer, a mobile computer, a tablet computer, a household vital-sign data acquisition device (such as a blood pressure monitor), a television, and the like.
Application scenarios to which the technical solutions of the embodiments of the present application can be applied are briefly described below. It should be noted that these scenarios are only used to describe the embodiments of the present application and are not limiting; in a specific implementation, the technical solutions provided by the embodiments can be applied flexibly according to actual needs.
Fig. 3 is a schematic diagram illustrating an application scenario provided by an embodiment of the present application. The application scenario diagram includes a network 301, a terminal device 302, and an external device 303. Wherein:
the terminal device 302 may obtain a page to be displayed based on a user operation, and any displayed page may be treated as a screen image in the embodiments of the application. For example, a page acquisition request may be sent to the network 301, and the network 301 returns a page containing a two-dimensional code to the terminal device 302 for display.
The terminal device 302 is configured to perform a two-dimensional code detection operation and a two-dimensional code enhancement display operation on the screen image based on the trigger of the user operation event.
The external device 303 is configured to recognize a two-dimensional code image provided by the terminal device 302, and includes but is not limited to electronic devices such as a subway two-dimensional code scanner, a bus two-dimensional code scanner, and a medical insurance two-dimensional code scanner.
Of course, the method provided in the embodiment of the present application is not limited to the application scenario shown in fig. 3, and may also be used in other possible application scenarios, and the embodiment of the present application is not limited. The functions that can be implemented by each device in the application scenario shown in fig. 3 will be described in the following method embodiments, and will not be described in detail herein.
In order to facilitate understanding of the method for enhancing display of a two-dimensional code provided in the embodiments of the present application, the following description is further provided with reference to the accompanying drawings.
As shown in fig. 4, a schematic flow chart of a method for enhancing display of a two-dimensional code provided in an embodiment of the present application includes the following steps:
step 401: receiving a first user operation event;
in some embodiments, user operation events include, but are not limited to, the following: any press, slide or lift action by the user; a change of the screen display mode, such as switching between a dark mode and a light mode (the specific modes can be determined by the developer and are not limited here); and a user click on a navigation key (back, home, recent tasks).
Step 402: triggering the operation of two-dimension code detection on the screen image based on the first user operation event;
in some embodiments, after the user stops operating the screen, display of the two-dimensional code may lag somewhat because of the application design, the network environment, system performance, or the like. To acquire the screen image accurately, detection can therefore be delayed: after a user operation event is received, a trigger waiting duration T2 is set, and when T2 has elapsed, the operation of performing two-dimensional code detection on the screen image is triggered.
The trigger waiting duration may be a fixed value, or it may be adjusted dynamically according to the situation, for example based on the ratio of the number of detections that found a two-dimensional code within a period of time to the total number of user operation events in that period. The closer the ratio is to 1, the shorter the trigger waiting duration; conversely, the closer the ratio is to 0, the longer the trigger waiting duration.
The smaller the trigger waiting duration, the shorter the wait and the more frequently detection is triggered.
The trigger waiting duration can be set in the main thread. Alternatively, when a reported user operation event is received, a child thread can be created and the child thread sets the trigger waiting duration; using a child thread improves utilization of a multi-core CPU (central processing unit).
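As one way of realizing the dynamic adjustment described above, the trigger waiting duration can be interpolated from the hit ratio. The bounds and the linear mapping in this sketch are assumptions; the embodiment only states that a ratio near 1 shortens the wait and a ratio near 0 lengthens it.

// Sketch of dynamically adjusting the trigger waiting duration T2 from the ratio of
// detections that found a two-dimensional code to the total number of user operation
// events in a recent window. MIN/MAX bounds and the linear mapping are assumptions.
public final class TriggerWaitPolicy {
    private static final long MIN_WAIT_MS = 200;    // assumed lower bound
    private static final long MAX_WAIT_MS = 2000;   // assumed upper bound

    public static long computeWaitMs(int qrHits, int totalUserEvents) {
        if (totalUserEvents <= 0) {
            return MAX_WAIT_MS;                      // no history yet: wait the longest
        }
        double ratio = Math.min(1.0, (double) qrHits / totalUserEvents);
        // ratio close to 1 -> MIN_WAIT_MS; ratio close to 0 -> MAX_WAIT_MS
        return Math.round(MAX_WAIT_MS - ratio * (MAX_WAIT_MS - MIN_WAIT_MS));
    }

    private TriggerWaitPolicy() { }
}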
After the trigger waiting duration T2 is set, the specific way in which two-dimensional code detection is triggered can be configured as needed. The embodiments of the present application describe the following two implementations.
Fig. 5 shows a first implementation of triggering two-dimensional code detection on a screen image (a thread-based sketch follows the description of these steps):
step 501: a child thread is created that is used to delay triggering of two-dimensional code detection.
Step 502: the child thread enters a wait state (hereinafter, briefly described as state one).
Step 503: and after receiving the user operation event, the main thread triggers the sub-thread to enter an active state (state two).
Specifically, after the main thread receives a user operation event, it records the system time of this report, sets the two-dimensional code detection signal to true, and sends a notify signal to the child thread. On receiving the signal the child thread enters state two, and the main thread informs the child thread that the trigger waiting duration is T2.
Step 504: the child thread obtains the trigger waiting duration T2 from the main thread, sets a timer for T2, and enters a wait state (state three).
User operations are somewhat random, and sometimes the user operates frequently within a short period, so that user operation events are generated frequently. A second user operation event may be received while the first is still being handled, and two-dimensional code detection could then be triggered repeatedly, increasing the system load. Therefore, in the embodiments of the application, some user operation events can be filtered out to avoid frequent detection. In practice, as shown in fig. 5, in step 505 the main thread receives another user operation event.
Step 506: the main thread records the current system time.
Step 507: the main thread judges whether the interval between the current system time and the time recorded for the previous user operation event is smaller than a threshold. If the interval between the two user operation events is greater than the threshold, no new user operation event was received before the trigger waiting duration elapsed; the waiting duration therefore finishes timing first, and once it has elapsed the child thread enters an active state (state four) in step 508. Then, in step 509, the child thread triggers two-dimensional code detection. The trigger can be implemented as a modification of the mStatus (global variable) semaphore by the child thread: the mStatus semaphore is scanned in real time, and whenever it changes, two-dimensional code detection needs to be performed on the screen image.
When the child thread is created, a Flag (a binary flag bit that cycles to trigger two-dimensional code recognition) is set, with an initial value of 0x100000. When the trigger waiting duration has elapsed, the value of Flag is toggled and combined with Mode (the current display mode of the screen) by a logical operation, and the result is assigned to the mStatus semaphore so that it changes. For example, if the combined value is 0x100203 this time, it will be 0x203 the next time, 0x100203 the time after that, and so on in a cycle; every change of the Flag value changes the mStatus semaphore and thereby triggers the operation of performing two-dimensional code detection on the screen image.
In other words, if the interval between the two user operation events is greater than or equal to the threshold, the two-dimensional code detection triggered by the previous user operation event is completed first; for the newly received user operation event, step 502 is executed again, and the child thread is triggered to start timing the trigger waiting duration required by the new event.
If the interval between the two user operation events is smaller than the threshold, the child thread is still in state three; the timing of the trigger waiting duration is interrupted, and the child thread switches from state three back to state one.
Alternatively, in another embodiment, if the interval between two user operation events is smaller than the threshold, the child thread may restart timing the trigger waiting duration for the new user operation event. In that case, when user operation events occur frequently, the timing is restarted again and again, the child thread stays in state three, and the number of two-dimensional code detection operations is reduced.
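A minimal, thread-based sketch of this first implementation follows. The class and field names, the reuse of the waiting duration T2 as the interval threshold, and the treatment of the narrow race around the moment the timer fires are all assumptions; production code would live inside the system server.

// Sketch of the first triggering implementation: the main thread records the event
// time and wakes a child thread; the child thread times the trigger waiting duration
// T2 and, if no newer event interrupted it, toggles a flag bit into the mStatus
// semaphore, whose change is what triggers detection. Names and defaults are
// assumptions; narrow races around the moment the timer fires are ignored here.
public class DelayedQrTrigger {
    private static final long FLAG = 0x100000L;   // cycle trigger flag bit (from the example above)
    private final Object lock = new Object();
    private final Thread child;
    private volatile long mStatus;                // scanned elsewhere; a change triggers detection
    private long lastEventTimeMs;
    private final long triggerWaitMs;             // T2, also used as the interval threshold here
    private long flagState;

    public DelayedQrTrigger(final long displayMode, long triggerWaitMs) {
        this.triggerWaitMs = triggerWaitMs;
        this.child = new Thread(() -> {
            while (true) {
                try {
                    synchronized (lock) {
                        lock.wait();                      // state one: wait to be woken
                    }
                    Thread.sleep(this.triggerWaitMs);     // state three: time T2
                    flagState ^= FLAG;                    // state four: toggle the flag bit
                    mStatus = flagState | displayMode;    // changing mStatus fires detection
                } catch (InterruptedException e) {
                    // a newer event arrived within the threshold: drop this trigger
                    // and fall back to state one to wait for the next wake-up
                }
            }
        }, "qr-delay-trigger");
        this.child.setDaemon(true);
        this.child.start();
    }

    // Called on the main thread for every reported user operation event.
    public void onUserOperationEvent(long nowMs) {
        synchronized (lock) {
            if (nowMs - lastEventTimeMs < triggerWaitMs) {
                child.interrupt();     // events too close together: cancel the pending timing
            }
            lastEventTimeMs = nowMs;
            lock.notify();             // wake the child thread (state two)
        }
    }
}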
Fig. 6 shows a second implementation of triggering two-dimensional code detection on a screen image (a sketch follows the description of these steps):
step 601: a child thread is created that is used to delay triggering of two-dimensional code detection.
Step 602: the child thread enters the wait state (state one).
Step 603: and after receiving the user operation event, the main thread triggers the sub-thread to enter an active state (state two).
Specifically, after the main thread receives a user operation event, it records the system time of this report, sets the two-dimensional code detection signal to true, and sends a notify signal to the child thread. On receiving the signal the child thread enters state two, and the main thread informs the child thread that the trigger waiting duration is T2.
Step 604: the child thread obtains the trigger waiting duration T2 from the main thread, sets a timer for T2, and enters a wait state (state three).
User operations are somewhat random, and sometimes the user operates frequently within a short period, so that user operation events are generated frequently. A second user operation event may be received while the first is still being handled, and two-dimensional code detection could then be triggered repeatedly, increasing the system load. Therefore, in the embodiments of the application, some user operation events can be filtered out to avoid frequent detection. In practice, as shown in fig. 6, in step 605 the main thread receives another user operation event.
Step 606: the main thread records the current system time.
Step 607: and judging whether the time interval between the current system time and the last recorded time is smaller than a threshold value.
If the interval between the current system time and the previously recorded time is smaller than the threshold, a second user operation event was received before the trigger waiting duration elapsed. In step 608 the two-dimensional code detection signal is set to false and a notify signal is sent to the child thread to wake it; in step 610 the child thread enters state four. Then, in step 611, the child thread reads that the two-dimensional code detection signal is false and returns to state one to wait to be woken by the next user operation event.
If the interval between the current system time and the previously recorded time is greater than or equal to the threshold, in step 609 the main thread sets the two-dimensional code detection signal to true and sends a notify signal to the child thread to wake it; the child thread enters state four. In step 611 the child thread reads that the two-dimensional code detection signal is true, and in step 612 it triggers two-dimensional code detection. As before, the trigger can be implemented as a modification of the mStatus (global variable) semaphore by the child thread: the mStatus semaphore is scanned in real time, and whenever it changes, two-dimensional code detection needs to be performed on the screen image.
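The second implementation can be sketched with an explicit boolean detection signal that the woken child thread reads before deciding whether to fire or to return to waiting. As before, the names, the timed wait and the behaviour on the very first event are assumptions.

// Sketch of the second triggering implementation: every user operation event wakes
// the child thread, but the main thread first sets the two-dimensional code detection
// signal to true (interval >= threshold) or false (interval < threshold); the child
// thread fires detection only when the signal reads true. Names are assumptions.
public class SignalledQrTrigger {
    private final Object lock = new Object();
    private boolean detectSignal;          // the two-dimensional code detection signal
    private long lastEventTimeMs;
    private final long thresholdMs;        // interval threshold between two events
    private final long triggerWaitMs;      // trigger waiting duration T2
    private final Runnable fireDetection;  // e.g. a change of the mStatus semaphore

    public SignalledQrTrigger(long thresholdMs, long triggerWaitMs, Runnable fireDetection) {
        this.thresholdMs = thresholdMs;
        this.triggerWaitMs = triggerWaitMs;
        this.fireDetection = fireDetection;
        Thread child = new Thread(() -> {
            try {
                while (true) {
                    boolean fire;
                    synchronized (lock) {
                        lock.wait();               // state one: wait for a user operation event
                        lock.wait(triggerWaitMs);  // state three: time T2, or be woken early
                        fire = detectSignal;       // state four: read the detection signal
                    }
                    if (fire) {
                        fireDetection.run();       // trigger two-dimensional code detection
                    }
                    // if the signal was false, simply return to state one
                }
            } catch (InterruptedException e) {
                // thread shut down
            }
        }, "qr-signalled-trigger");
        child.setDaemon(true);
        child.start();
    }

    // Called on the main thread for every reported user operation event.
    public void onUserOperationEvent(long nowMs) {
        synchronized (lock) {
            // true when the interval since the previous event reaches the threshold
            detectSignal = (nowMs - lastEventTimeMs) >= thresholdMs;
            lastEventTimeMs = nowMs;
            lock.notify();                         // wake the child thread
        }
    }
}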
In summary, in the embodiments of the application, before the operation of performing two-dimensional code detection on the screen image is triggered, it can be determined whether a second user operation event has been received before the trigger waiting duration elapses. If no second user operation event has been received, two-dimensional code detection on the screen image is triggered based on the first user operation event; if a second user operation event has been received, the detection triggered by the first user operation event is abandoned. In short, if the interval between two adjacent operation events is short, the user is operating frequently and the page content shown on the screen may also be changing frequently, so two-dimensional code detection can be abandoned for the time being.
Based on the foregoing description, the embodiments of the application can delay the operation of two-dimensional code detection on the screen image by setting the trigger waiting duration. On the one hand this ensures that a suitable screen image is acquired for detection, and on the other hand it relieves the system burden caused by frequently triggered two-dimensional code detection.
In some embodiments, after the operation of performing two-dimensional code detection on the screen image is triggered, a suitable screen image needs to be acquired for detection. The embodiments of the present application provide several optional detection modes, including synchronous frame detection, asynchronous frame detection and hybrid frame detection, which are explained below.
1. Synchronous frame detection
The basic idea of synchronous frame detection is to detect the currently displayed image in real time; this mode is particularly suitable when the page content changes continuously. To ensure detection accuracy, synchronous frame detection in the embodiments of the application uses N detection periods, where N is a positive integer. In each detection period, if a screen image is currently being synthesized, the detection waits for the synthesis to complete and uses the synthesized image as the screen image; if no screen image is being synthesized, the image displayed in the current period is used as the screen image. If the two-dimensional code is detected in any of the N consecutive periods, it is determined that the two-dimensional code is detected from the image; if it is not detected in any of the N consecutive periods, it is determined that the two-dimensional code is not detected from the screen image.
In one possible embodiment, synchronous frame detection is implemented as shown in fig. 7 and includes the following steps:
in step 701, the detection period is counted.
In step 702, it is determined whether the count of the current detection period is less than or equal to N; if so, step 703 is performed, and if not, step 705 is performed.
In step 703, the newly synthesized page image is acquired as a screen image.
As shown in fig. 7, the method can be implemented as step 7031, in which it is detected whether there is an image being synthesized currently, if so, step 7032 is executed, and if not, step 7033 is executed.
In step 7032, after the synthesis of the image being synthesized is completed, the synthesized image is acquired as the screen image.
In step 7033, the currently displayed image is acquired as a screen image.
In step 704, it is detected whether the two-dimensional code is included in the screen image. If the two-dimensional code is detected from the screen image, step 705 is performed. If the two-dimensional code is not detected from the screen image, the process returns to step 701 to update the count to start the detection of the next detection period.
In step 705, the operation ends.
To further understand the sync frame detection, an example will be given below.
Suppose the detection signal is reported after a delay of n seconds, and synchronous frame detection is performed at second n+0, second n+1, second n+2 and second n+4 respectively. If the initial frame is frame 0, the correspondence between detection time points and synchronization frames is shown in Table 1:
Table 1 (correspondence between detection time points and synchronization frames)
The detection time points are chosen with both real-time performance and power consumption in mind; the figures above are only examples, and the detection interval and the number of detections can be changed according to the real-time requirements of actual use.
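A sketch of the synchronous frame detection loop follows. The helper methods stand in for the composition and recognition machinery of the system and are assumptions, as are the period length and the use of Thread.sleep for pacing.

import android.graphics.Bitmap;

// Sketch of synchronous frame detection: for up to N consecutive periods, take the
// frame being composited (waiting for it to finish) or else the currently displayed
// frame as the screen image, and scan it for a two-dimensional code. The helper
// methods are placeholders/assumptions for platform- and algorithm-specific code.
public class SyncFrameDetector {
    private final int maxPeriods;    // N
    private final long periodMs;     // length of one detection period

    public SyncFrameDetector(int maxPeriods, long periodMs) {
        this.maxPeriods = maxPeriods;
        this.periodMs = periodMs;
    }

    // Returns true as soon as any of the N periods finds a two-dimensional code.
    public boolean detect() throws InterruptedException {
        for (int period = 1; period <= maxPeriods; period++) {
            Bitmap screenImage = isCompositingInProgress()
                    ? waitForPendingComposition()   // use the newly composited frame
                    : captureCurrentFrame();        // use the currently displayed frame
            if (screenImage != null && decodeQr(screenImage)) {
                return true;                        // detected in this period
            }
            Thread.sleep(periodMs);                 // move on to the next period
        }
        return false;                               // not detected within N periods
    }

    // Placeholders (assumptions) for system- and algorithm-specific functionality.
    protected boolean isCompositingInProgress() { return false; }
    protected Bitmap waitForPendingComposition() { return null; }
    protected Bitmap captureCurrentFrame() { return null; }
    protected boolean decodeQr(Bitmap image) { return false; }
}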
2. Asynchronous frame detection
The basic idea of asynchronous frame detection is to detect a cached image after a change of the mStatus semaphore is observed; this mode is particularly suitable when the page content is static. To ensure detection accuracy, in the embodiments of the application the currently displayed image is stored into a cache as the screen image before the operation of performing two-dimensional code detection is triggered. After two-dimensional code detection on the screen image is triggered in asynchronous frame detection, it is checked whether a screen image is currently being synthesized. If not, it is detected whether the screen image in the cache contains the two-dimensional code; if so, the synthesized image is placed into the cache and it is then detected whether the screen image in the cache contains the two-dimensional code.
In a possible embodiment, asynchronous frame detection is implemented as shown in fig. 8 and includes the following steps (a code sketch follows the steps):
in step 801, the currently displayed image is acquired as a screen image.
In step 802, it is determined whether there is an image in the buffer. If yes, go to step 803, otherwise go to step 804.
In step 803, the image in the cache is cleared.
In step 804, the screen image is stored in a buffer.
In step 805, the operation of two-dimensional code detection on the screen image is triggered, and it is detected whether a screen image is currently being synthesized. If yes, go to step 806; otherwise go to step 807.
In step 806, the synthesized image is placed in a cache.
In step 807, it is checked whether the screen image in the buffer contains the two-dimensional code.
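The asynchronous variant can be sketched in the same style, working on the cached image; again the helper methods are placeholders.

import android.graphics.Bitmap;

// Sketch of asynchronous frame detection: the currently displayed frame is cached
// before the trigger fires; when it fires, the cached image (refreshed from a frame
// that is being composited, if any) is scanned. Helper methods are assumptions.
public class AsyncFrameDetector {
    private Bitmap cachedScreenImage;

    // Called before the detection trigger: cache the currently displayed frame,
    // replacing whatever was cached before (steps 801-804 above).
    public void cacheCurrentFrame() {
        cachedScreenImage = captureCurrentFrame();
    }

    // Called when the trigger fires, e.g. when the mStatus semaphore changes
    // (steps 805-807 above).
    public boolean detect() {
        if (isCompositingInProgress()) {
            cachedScreenImage = waitForPendingComposition();  // refresh the cache first
        }
        return cachedScreenImage != null && decodeQr(cachedScreenImage);
    }

    // Placeholders (assumptions) for system- and algorithm-specific functionality.
    protected Bitmap captureCurrentFrame() { return null; }
    protected boolean isCompositingInProgress() { return false; }
    protected Bitmap waitForPendingComposition() { return null; }
    protected boolean decodeQr(Bitmap image) { return false; }
}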
3. Hybrid frame detection
As the name implies, hybrid frame detection can use either synchronous or asynchronous frame detection. In implementation, the detection mode is chosen according to the state of the page image: dynamically changing page content suits synchronous frame detection, and essentially static page content suits asynchronous frame detection.
In practice, whether the page content is dynamically changed or unchanged may be determined based on whether an image to be displayed is currently being composited or being received.
To ensure detection accuracy, in the embodiments of the application the currently displayed image is stored into a cache as the screen image before two-dimensional code detection is triggered. When asynchronous frame detection is used, it is detected whether the screen image in the cache contains the two-dimensional code. When synchronous frame detection is used, N detection periods are provided, where N is a positive integer, and in each period the most recently synthesized page image is used as the screen image for detection. If the two-dimensional code is detected in any of the N consecutive periods, it is determined that the two-dimensional code is detected from the image; if it is not detected in any of the N consecutive periods, it is determined that the two-dimensional code is not detected from the screen image.
In one possible embodiment, an embodiment of hybrid frame detection is shown in fig. 9, and includes the following steps:
in step 904, the current frame image is acquired as a screen image.
In step 905, the screen image is stored in the buffer.
In step 906: the child thread is queried for a detection notification.
In step 907, it is judged whether a notification to perform two-dimensional code detection on the screen image has been received. If so, go to step 908; otherwise return to step 904.
Step 908: judging whether an image to be displayed is currently being synthesized or received; if yes, executing step 909; if not, executing step 910.
Step 909: synchronization frame detection is performed (the specific operation is shown above for synchronization frame detection).
Step 910: asynchronous frame detection is performed (the specific operation is shown as the asynchronous frame detection described above).
Based on the foregoing description, synchronous frame detection is the most common method: it suits dynamic images, detects the currently displayed image in real time, and offers high real-time performance. Asynchronous frame detection applies to static images and detects the cached image. Hybrid frame detection handles both moving and static images. Any of the three detection methods can determine whether the current screen image contains a two-dimensional code image.
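Reusing the two detector sketches above, hybrid frame detection reduces to a small dispatch on whether an image to be displayed is currently being synthesized or received; the check itself is a placeholder.

// Sketch of hybrid frame detection: dynamic page content (a frame is being composited
// or received) is handled by the N-period synchronous loop, static page content by
// the cached asynchronous check. The compositing check is a placeholder/assumption.
public class HybridFrameDetector {
    private final SyncFrameDetector syncDetector;
    private final AsyncFrameDetector asyncDetector;

    public HybridFrameDetector(SyncFrameDetector syncDetector, AsyncFrameDetector asyncDetector) {
        this.syncDetector = syncDetector;
        this.asyncDetector = asyncDetector;
    }

    public boolean detect() throws InterruptedException {
        if (isCompositingOrReceivingFrame()) {
            return syncDetector.detect();    // dynamic page content (step 909)
        }
        return asyncDetector.detect();       // static page content, use the cache (step 910)
    }

    protected boolean isCompositingOrReceivingFrame() { return false; }  // placeholder
}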
In summary, after the screen image has been detected, step 403 is performed: if the two-dimensional code is detected from the screen image, enhanced display processing is performed on the screen image.
In some embodiments, to improve the probability that an external device can recognize the two-dimensional code image displayed on the page when the light is dim or the scanning device is of mediocre quality, the embodiments of the application perform enhanced display processing on the screen image by increasing the contrast of the screen image and/or increasing the brightness of the terminal device. As shown in fig. 9, the specific steps are as follows (a code sketch follows these steps):
step 911: and judging whether the two-dimensional code image is detected. If the two-dimensional code image is detected, executing step 912; if the two-dimensional code image is not detected, step 914 is executed.
Step 912: the original contrast and background intensity are saved.
Step 913: the new contrast and backlight are set.
In the embodiments of the present application, the new contrast and backlight level can be set to their maximum values. Alternatively, the adjustment can be made in other ways that also suit the embodiments, such as increasing the contrast and the backlight by 50%.
Step 914: and restoring the original contrast and the original backlight.
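A sketch of the save/raise/restore sequence follows. Writing the system brightness through Settings.System requires the WRITE_SETTINGS permission; the contrast adjustment is left as a placeholder because it typically goes through a vendor-specific display interface rather than a public Android API, which is an assumption here.

import android.content.ContentResolver;
import android.provider.Settings;

// Sketch of the enhanced display step: save the current backlight level, raise it
// (to the maximum here), and restore it once the two-dimensional code is gone.
// The contrast calls are placeholders for a vendor-specific display interface.
public class DisplayEnhancer {
    private static final int MAX_BRIGHTNESS = 255;
    private int savedBrightness = -1;

    // Steps 912-913: save the original values, then set the new contrast and backlight.
    public void enhance(ContentResolver resolver) {
        savedBrightness = Settings.System.getInt(
                resolver, Settings.System.SCREEN_BRIGHTNESS, MAX_BRIGHTNESS);
        Settings.System.putInt(
                resolver, Settings.System.SCREEN_BRIGHTNESS, MAX_BRIGHTNESS);
        setContrast(100);                            // vendor-specific, placeholder
    }

    // Step 914: restore the original contrast and backlight.
    public void restore(ContentResolver resolver) {
        if (savedBrightness >= 0) {
            Settings.System.putInt(
                    resolver, Settings.System.SCREEN_BRIGHTNESS, savedBrightness);
            savedBrightness = -1;
        }
        restoreContrast();                           // vendor-specific, placeholder
    }

    private void setContrast(int level) { /* vendor display interface (assumption) */ }
    private void restoreContrast() { /* vendor display interface (assumption) */ }
}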
In some embodiments, the enhanced display processing can also be driven by the user: a user operation interface is displayed on top of the screen image, and in response to a user operation confirming enhanced display on that interface, the contrast of the screen image and/or the brightness of the terminal device is increased. Fig. 10 shows such a user operation interface: the user can click the bulb-shaped button in the upper right corner to adjust the image contrast and the backlight of the terminal device. With this method the user does not need to open any other interface and only has to click the button shown on the current interface, which simplifies operation.
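The confirmation control of fig. 10 can then be wired to the enhancer sketched above with a plain click listener; the overlay button itself is hypothetical and its creation is assumed to happen elsewhere.

import android.view.View;
import android.widget.ImageButton;

// Hooks a hypothetical bulb-shaped overlay button to the display enhancer sketched above.
public final class EnhanceButtonBinder {
    public static void bind(ImageButton bulbButton, DisplayEnhancer enhancer) {
        bulbButton.setOnClickListener((View v) ->
                // the user confirmed enhancement on the operation interface
                enhancer.enhance(v.getContext().getContentResolver()));
    }

    private EnhanceButtonBinder() { }
}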
Based on the foregoing description, the display of a page containing a two-dimensional code can be enhanced automatically, and the user can also adjust the display of the two-dimensional code interface manually. This improves the probability that the two-dimensional code image displayed on the page is recognized by an external device when the light is dim or the scanning device is of mediocre quality.
Fig. 11 is a block diagram of software provided in an embodiment of the present application. The method of the embodiment of the application is realized in a system server process, and the main sub-modules comprise an operation detection module, a core service module, an algorithm identification module and an interface processing module.
The operation detection module comprises an action receiver. Fig. 12 is a schematic diagram of the action receiver according to an embodiment of the present application:
the action receiver is created along with the generation of the user operation event, and receives the user operation through the following three ways:
Mode 1), the user operates the screen: the A1 method is used to listen for any press, slide and lift action by the user and pass these actions to EpdManagerService (the core service).
Mode 2), the user operates a navigation key: the A2 method is used to listen for user clicks on the navigation keys (back, home, recent tasks) and pass them to EpdManagerService.
Mode 3), the user switches the display mode: the A3 method is used to listen for screen display mode changes and pass this event to EpdManagerService.
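The three reporting paths can be summarized by the sketch below. The application names the underlying hooks only as methods A1, A2 and A3, so the callback names here, the string tags passed to the core service, and the EpdManagerService interface shape are all assumptions.

// Hypothetical action receiver forwarding the three kinds of user events to the core service.
public final class ActionReceiver {

    // Placeholder for the core service described above.
    public interface EpdManagerService {
        void onUserEvent(String source);
    }

    private final EpdManagerService core;

    public ActionReceiver(EpdManagerService core) {
        this.core = core;
    }

    // Mode 1 (method A1): any press, slide or lift on the screen.
    public void onTouchAction(int action) {
        core.onUserEvent("touch:" + action);
    }

    // Mode 2 (method A2): taps on the navigation keys (back, home, recent tasks).
    public void onNavigationKey(String key) {
        core.onUserEvent("nav:" + key);
    }

    // Mode 3 (method A3): a change of the screen display mode.
    public void onDisplayModeChanged(int newMode) {
        core.onUserEvent("displayMode:" + newMode);
    }
}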
The principle of action receiving and reporting is that, after the Android native function has executed, the user event is passed to the core service EpdManagerService; this process runs on the main thread. Once the main thread has finished reporting the user event and has notified the sub-thread, it enters an idle state (releases the lock) and can begin processing the next user event report.
The action receiver reports the user operation event to the operation processor in the core service module. The operation processor receives the user event reported by the action receiver, opens a sub-thread, sets the wake-up time of the delay trigger, and sends a notify signal to the delay trigger.
The delay trigger receives the semaphore passed by the operation processor, determines the thread wake-up time according to the trigger waiting duration, and then notifies the signal processor in the algorithm identification module to perform two-dimensional code detection.
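A plain wait/notify sketch of the operation-processor-to-delay-trigger handoff is shown below. The class names, the thread name and the way the wake-up time is represented are illustrative rather than taken from the application; the essential behaviour is that every new user event pushes the wake-up time back, so detection only fires after the trigger waiting duration has elapsed with no further events, which also matches the abandonment behaviour described above.

// Hypothetical delay trigger: the main thread calls onUserEvent(), the sub-thread
// waits out the trigger waiting duration and then asks the signal processor to detect.
public final class DelayTrigger implements Runnable {

    public interface SignalProcessor {
        void onDetectRequested();
    }

    private final Object lock = new Object();
    private final long waitMillis;           // trigger waiting duration
    private final SignalProcessor processor;
    private long wakeUpAt;                   // absolute time at which detection may fire (0 = none pending)

    public DelayTrigger(long waitMillis, SignalProcessor processor) {
        this.waitMillis = waitMillis;
        this.processor = processor;
        new Thread(this, "qr-delay-trigger").start();   // the sub-thread opened by the operation processor
    }

    // Called on the main thread for every reported user event: push the wake-up time back and notify.
    public void onUserEvent() {
        synchronized (lock) {
            wakeUpAt = System.currentTimeMillis() + waitMillis;
            lock.notify();
        }
    }

    @Override
    public void run() {
        while (true) {
            synchronized (lock) {
                try {
                    long now = System.currentTimeMillis();
                    if (wakeUpAt == 0 || now < wakeUpAt) {
                        // Sleep until the wake-up time, or indefinitely if no event is pending,
                        // then re-check: a newer event may have moved the wake-up time.
                        lock.wait(wakeUpAt == 0 ? 0 : wakeUpAt - now);
                        continue;
                    }
                    wakeUpAt = 0;   // the waiting duration elapsed with no newer event
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
            processor.onDetectRequested();   // tell the signal processor to start detection
        }
    }
}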
The signal processor recognizes the detection-trigger signal sent by the delay trigger, determines whether it is a synchronous or an asynchronous signal, and passes it to the image recognizer to detect the two-dimensional code image.
The image recognizer uses a publicly available, mature algorithm to scan for and recognize the two-dimensional code image, and feeds the recognition result back to the signal processor. Meanwhile, if the two-dimensional code image is detected, a signal is sent to the display enhancer of the interface processing module.
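The application only requires "a publicly available, mature algorithm"; the sketch below uses ZXing's QRCodeReader as one example of such a library, converting the screen Bitmap into the luminance source ZXing expects. It is an illustration, not necessarily the algorithm used by the embodiment, and in practice the decode call would run on the sub-thread so the main thread is not blocked.

import android.graphics.Bitmap;

import com.google.zxing.BinaryBitmap;
import com.google.zxing.RGBLuminanceSource;
import com.google.zxing.Result;
import com.google.zxing.common.HybridBinarizer;
import com.google.zxing.qrcode.QRCodeReader;

// Example image recognizer built on ZXing (shown as one possible public library).
public final class ImageRecognizer {

    private final QRCodeReader reader = new QRCodeReader();

    // Returns the decoded text if the frame contains a QR code, or null otherwise.
    public String recognize(Bitmap frame) {
        int width = frame.getWidth();
        int height = frame.getHeight();
        int[] pixels = new int[width * height];
        frame.getPixels(pixels, 0, width, 0, 0, width, height);

        BinaryBitmap bitmap = new BinaryBitmap(
                new HybridBinarizer(new RGBLuminanceSource(width, height, pixels)));
        try {
            Result result = reader.decode(bitmap);
            return result.getText();          // fed back to the signal processor / display enhancer
        } catch (Exception e) {               // NotFoundException etc.: no two-dimensional code in this frame
            return null;
        } finally {
            reader.reset();
        }
    }
}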
The display enhancer adjusts the display parameters of the terminal device according to the recognition result of the image recognizer.
The embodiments provided in the present application are only a few examples of the general concept of the present application and do not limit its scope. Any other embodiment derived from the scheme of the present application by a person skilled in the art without inventive effort falls within the scope of protection of the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. An enhanced display method of a two-dimensional code, the method comprising:
receiving a first user operation event;
triggering the operation of two-dimensional code detection on the screen image based on the first user operation event;
and if the two-dimensional code is detected from the screen image, performing enhanced display processing on the screen image.
2. The method of claim 1, wherein triggering the operation of two-dimensional code detection on the screen image based on the first user operation event comprises:
setting a trigger waiting duration based on the first user operation event;
and when the trigger waiting duration has elapsed, triggering the operation of two-dimensional code detection on the screen image.
3. The method according to claim 2, wherein before triggering the operation of two-dimensional code detection on the screen image, the method further comprises:
determining that no second user operation event has been received before the trigger waiting duration elapses.
4. The method of claim 3, further comprising:
and if a second user operation event is received before the trigger waiting duration elapses, abandoning the operation of two-dimensional code detection on the screen image triggered by the first user operation event.
5. The method of claim 2, wherein the two-dimensional code detection on the screen image comprises:
performing the following operations for N consecutive periods until the two-dimensional code is detected or N periods have elapsed, wherein N is a positive integer:
acquiring a latest synthesized page image as the screen image;
detecting whether the screen image contains the two-dimensional code;
if the two-dimensional code is detected in any one of the N consecutive periods, determining that the two-dimensional code is detected in the screen image;
and if the two-dimensional code is not detected in any of the N periods, determining that the two-dimensional code is not detected in the screen image.
6. The method of claim 2, wherein before triggering the two-dimensional code detection on the screen image, the method further comprises: storing the currently displayed image as the screen image into a cache;
and wherein the two-dimensional code detection on the screen image specifically comprises:
detecting whether the screen image in the cache contains the two-dimensional code.
7. The method of claim 2, wherein before triggering the two-dimensional code detection on the screen image, the method further comprises: storing the currently displayed image as the screen image into a cache;
and wherein the two-dimensional code detection on the screen image specifically comprises:
if no image to be displayed is currently being synthesized or received, detecting whether the screen image in the cache contains the two-dimensional code;
if an image to be displayed is currently being synthesized or received, continuing to perform the following operations for N periods until the two-dimensional code is detected or N periods have elapsed, wherein N is a positive integer:
acquiring a latest synthesized page image as the screen image;
detecting whether the screen image contains the two-dimensional code;
if the two-dimensional code is detected in any one of the N consecutive periods, determining that the two-dimensional code is detected in the screen image;
and if the two-dimensional code is not detected in any of the N periods, determining that the two-dimensional code is not detected in the screen image.
8. The method according to any one of claims 1 to 7, wherein performing the enhanced display processing on the screen image comprises:
increasing the contrast of the screen image and/or increasing the brightness of the terminal device.
9. The method according to any one of claims 1 to 7, wherein performing the enhanced display processing on the screen image comprises:
displaying a user operation interface on the interface of the screen image;
and, based on a user operation on the user operation interface confirming enhanced display, increasing the contrast of the screen image and/or increasing the brightness of the terminal device.
10. A terminal device, comprising:
a display, a processor, and a memory;
the display is configured to display the two-dimensional code;
the memory is configured to store instructions executable by the processor;
and the processor is configured to execute the instructions to implement the method for enhanced display of a two-dimensional code according to any one of claims 1 to 9.
CN202110889849.9A 2021-08-04 2021-08-04 Method and terminal equipment for enhancing display of two-dimensional code Pending CN113641431A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110889849.9A CN113641431A (en) 2021-08-04 2021-08-04 Method and terminal equipment for enhancing display of two-dimensional code


Publications (1)

Publication Number Publication Date
CN113641431A true CN113641431A (en) 2021-11-12

Family

ID=78419568


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116069396A (en) * 2023-03-01 2023-05-05 浪潮电子信息产业股份有限公司 Detection method, device, equipment and medium for out-of-order execution of multi-core CPU

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5691527A (en) * 1994-12-26 1997-11-25 Nippondenso Co., Ltd. Two dimensional code reading apparatus
CN103150584A (en) * 2013-01-30 2013-06-12 广东电网公司电力调度控制中心 Communication resource motion processing method and system
US20160140430A1 (en) * 2014-11-13 2016-05-19 Casio Computer Co., Ltd. Electronic device, method of displaying two-dimensional code, and recording medium with program recorded thereon
CN205644176U (en) * 2016-05-18 2016-10-12 广州正峰电子科技有限公司 Intelligent instrument and charging monitored control system
CN107844730A (en) * 2017-11-07 2018-03-27 维沃移动通信有限公司 A kind of graphic code scan method and mobile terminal
CN109067995A (en) * 2018-08-28 2018-12-21 努比亚技术有限公司 A kind of brightness adjusting method, mobile terminal and computer readable storage medium
CN110837328A (en) * 2019-10-28 2020-02-25 维沃移动通信有限公司 Display method and electronic equipment
US10665204B1 (en) * 2019-10-08 2020-05-26 Capital One Services, Llc Automatically adjusting screen brightness based on screen content
JP2020149542A (en) * 2019-03-15 2020-09-17 オムロン株式会社 Information code reading device, automatic ticket examination machine, information code reading method and information code reading program
CN112036201A (en) * 2020-08-06 2020-12-04 浙江大华技术股份有限公司 Image processing method, device, equipment and medium
CN112163436A (en) * 2020-09-04 2021-01-01 北京三快在线科技有限公司 Information identification system, method and device
CN112581899A (en) * 2020-12-28 2021-03-30 维沃移动通信有限公司 Control method and electronic device
CN112651258A (en) * 2020-12-25 2021-04-13 南京航空航天大学 Data transmission method and system based on dynamic two-dimensional code
CN112653784A (en) * 2021-01-09 2021-04-13 烟台亿浩智能科技有限公司 Method and system for non-contact type identification input and use of mobile phone number


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination