CN112818719A - Method and device for identifying two-dimensional code - Google Patents

Method and device for identifying two-dimensional code

Info

Publication number
CN112818719A
Authority
CN
China
Prior art keywords: user, track, dimensional code, video stream, video frame
Prior art date
Legal status
Granted
Application number
CN202011618821.3A
Other languages
Chinese (zh)
Other versions
CN112818719B (en)
Inventor
黄永生
Current Assignee
Shanghai Zhangmen Science and Technology Co Ltd
Original Assignee
Shanghai Zhangmen Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Zhangmen Science and Technology Co Ltd
Priority to CN202011618821.3A
Publication of CN112818719A
Priority to PCT/CN2021/125287 (WO2022142620A1)
Application granted
Publication of CN112818719B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06K: GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00: Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10: Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14: Methods or arrangements for sensing record carriers by electromagnetic radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404: Methods for optical code recognition
    • G06K7/1408: Methods for optical code recognition, the method being specifically adapted for the type of code
    • G06K7/1417: 2D bar codes
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/14: Systems for two-way working

Abstract

The application aims to provide a method and a device for identifying a two-dimensional code. The method comprises: during a video call between a first user and a second user, in response to a track drawing operation performed by the first user on the video stream of the second user, obtaining the track drawn by the first user, determining an intercepting area according to the track, and performing a two-dimensional code recognition operation on the intercepting area in the video stream; and if the recognition succeeds, processing the two-dimensional code information obtained by the recognition. With the method and the device, a two-dimensional code can be identified simply, conveniently and accurately during a video call, which provides great convenience for the users participating in the call. Because the two-dimensional code recognition operation is performed only on the video frame image area corresponding to the intercepting area in the video stream, rather than on the entire display area of the video stream, the recognition speed of the two-dimensional code is increased and its recognition accuracy and efficiency are improved.

Description

Method and device for identifying two-dimensional code
Technical Field
The present application relates to the field of communications, and in particular, to a technique for recognizing a two-dimensional code.
Background
With the development of the times, two-dimensional codes have been widely applied in different scenarios across various industries and now touch almost every aspect of daily life. By scanning a two-dimensional code, a user can obtain the corresponding content, for example to make a mobile payment or to identify information, which greatly improves the convenience of people's daily lives.
Disclosure of Invention
An object of the present application is to provide a method and apparatus for recognizing a two-dimensional code.
According to an aspect of the present application, there is provided a method of identifying a two-dimensional code applied to a first user equipment, the method including:
in the process of video call between a first user and a second user, responding to the track drawing operation of the first user for the video stream of the second user, obtaining the track drawn by the first user, determining an intercepting area according to the track, and executing two-dimensional code identification operation on the intercepting area in the video stream;
and if the identification is successful, processing the two-dimensional code information obtained by the identification.
According to another aspect of the present application, there is provided a method of identifying a two-dimensional code applied to a second user equipment, the method including:
in the process of video call between a first user and a second user, responding to the track drawing operation of the second user for the video stream of the second user, obtaining the track drawn by the second user, determining an intercepting area according to the track, and executing two-dimensional code identification operation on the intercepting area in the video stream;
and if the identification is successful, sending the two-dimension code information obtained by the identification to first user equipment corresponding to the first user so that the first user equipment can process the two-dimension code information.
According to an aspect of the present application, there is provided a first user equipment for recognizing a two-dimensional code, the first user equipment including:
a module 11, configured to, during a video call between a first user and a second user, in response to a track drawing operation performed by the first user on the video stream of the second user, obtain the track drawn by the first user, determine a capture area according to the track, and perform a two-dimensional code recognition operation on the capture area in the video stream;
and a module 12, configured to process the two-dimensional code information obtained by the recognition if the recognition succeeds.
According to another aspect of the present application, there is provided a second user equipment for recognizing a two-dimensional code, the second user equipment including:
a module 21, configured to, during a video call between a first user and a second user, in response to a track drawing operation performed by the second user on the video stream of the second user, obtain the track drawn by the second user, determine an intercepting area according to the track, and perform a two-dimensional code recognition operation on the intercepting area in the video stream;
and a module 22, configured to, if the recognition succeeds, send the two-dimensional code information obtained by the recognition to the first user equipment corresponding to the first user, so that the first user equipment processes the two-dimensional code information.
According to an aspect of the present application, there is provided an apparatus for recognizing a two-dimensional code, wherein the apparatus includes:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
in the process of video call between a first user and a second user, responding to the track drawing operation of the first user for the video stream of the second user, obtaining the track drawn by the first user, determining an intercepting area according to the track, and executing two-dimensional code identification operation on the intercepting area in the video stream;
and if the identification is successful, processing the two-dimensional code information obtained by the identification.
According to another aspect of the present application, there is provided an apparatus for recognizing a two-dimensional code, wherein the apparatus includes:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
in the process of video call between a first user and a second user, responding to the track drawing operation of the second user for the video stream of the second user, obtaining the track drawn by the second user, determining an intercepting area according to the track, and executing two-dimensional code identification operation on the intercepting area in the video stream;
and if the identification is successful, sending the two-dimension code information obtained by the identification to first user equipment corresponding to the first user so that the first user equipment can process the two-dimension code information.
According to one aspect of the application, there is provided a computer-readable medium storing instructions that, when executed, cause a system to:
in the process of video call between a first user and a second user, responding to the track drawing operation of the first user for the video stream of the second user, obtaining the track drawn by the first user, determining an intercepting area according to the track, and executing two-dimensional code identification operation on the intercepting area in the video stream;
and if the identification is successful, processing the two-dimensional code information obtained by the identification.
According to another aspect of the application, there is provided a computer-readable medium storing instructions that, when executed, cause a system to:
in the process of video call between a first user and a second user, responding to the track drawing operation of the second user for the video stream of the second user, obtaining the track drawn by the second user, determining an intercepting area according to the track, and executing two-dimensional code identification operation on the intercepting area in the video stream;
and if the identification is successful, sending the two-dimension code information obtained by the identification to first user equipment corresponding to the first user so that the first user equipment can process the two-dimension code information.
According to an aspect of the application, there is provided a computer program product comprising a computer program which, when executed by a processor, performs the method of:
in the process of video call between a first user and a second user, responding to the track drawing operation of the first user for the video stream of the second user, obtaining the track drawn by the first user, determining an intercepting area according to the track, and executing two-dimensional code identification operation on the intercepting area in the video stream;
and if the identification is successful, processing the two-dimensional code information obtained by the identification.
According to another aspect of the application, there is provided a computer program product comprising a computer program which, when executed by a processor, performs the method of:
in the process of video call between a first user and a second user, responding to the track drawing operation of the second user for the video stream of the second user, obtaining the track drawn by the second user, determining an intercepting area according to the track, and executing two-dimensional code identification operation on the intercepting area in the video stream;
and if the identification is successful, sending the two-dimension code information obtained by the identification to first user equipment corresponding to the first user so that the first user equipment can process the two-dimension code information.
Compared with the prior art, during a video call between a first user and a second user, the present application responds to a track drawing operation performed by the first user on the video stream of the second user by obtaining the track drawn by the first user, determining an intercepting area according to the track, and performing a two-dimensional code recognition operation on the intercepting area in the video stream. As a result, the second user only needs to aim his or her camera at the two-dimensional code to be shown to the first user, without quitting the video call and without providing the two-dimensional code to the first user through a photograph or a screenshot, and the first user only needs to perform a track drawing operation on the screen around the two-dimensional code displayed by the second user for the first user equipment to recognize the code quickly and conveniently. The two-dimensional code can therefore be recognized simply, conveniently and accurately during the video call, which provides great convenience for the users participating in the call. In addition, the two-dimensional code recognition operation is performed only on the video frame image area corresponding to the intercepting area in the video stream, rather than on the entire display area of the video stream, which increases the recognition speed of the two-dimensional code and improves its recognition accuracy and efficiency.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
fig. 1 illustrates a flowchart of a method for identifying a two-dimensional code applied to a first user equipment according to an embodiment of the present application;
fig. 2 shows a flowchart of a method for identifying a two-dimensional code applied to a second user equipment according to an embodiment of the present application;
fig. 3 is a diagram illustrating a first user equipment structure for recognizing a two-dimensional code according to an embodiment of the present application;
fig. 4 is a diagram illustrating a second user equipment structure for recognizing a two-dimensional code according to an embodiment of the present application;
FIG. 5 illustrates an exemplary system that can be used to implement the various embodiments described in this application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (e.g., Central Processing Units (CPUs)), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory. Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PCM), programmable random access memory (PRAM), static random-access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The devices referred to in this application include, but are not limited to, user equipment, network devices, or devices formed by integrating user equipment and network devices through a network. The user equipment includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user (for example, through a touch panel), such as a smartphone or a tablet computer, and the mobile electronic product may run any operating system, such as the Android operating system or the iOS operating system. The network device includes an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a programmable logic device (PLD), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud of multiple servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, a kind of distributed computing in which one virtual supercomputer consists of a collection of loosely coupled computers. The network includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, a wireless ad hoc network, and the like. Preferably, the device may also be a program running on the user equipment, the network device, or a device formed by integrating the user equipment with the network device, the touch terminal, or the network device with the touch terminal through a network.
Of course, those skilled in the art will appreciate that the foregoing is by way of example only, and that other existing or future devices, which may be suitable for use in the present application, are also encompassed within the scope of the present application and are hereby incorporated by reference.
In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Fig. 1 is a flowchart illustrating a method for identifying a two-dimensional code applied to a first user equipment according to an embodiment of the present application, the method including steps S11 and S12. In step S11, in the process of a video call between a first user and a second user, in response to a track drawing operation of the first user on a video stream of the second user, a first user device obtains a track drawn by the first user, determines an intercepting area according to the track, and performs a two-dimensional code recognition operation on the intercepting area in the video stream; in step S12, if the identification is successful, the first user equipment processes the two-dimensional code information obtained by the identification.
In step S11, during a video call between a first user and a second user, in response to a track drawing operation performed by the first user on the video stream of the second user, the first user equipment obtains the track drawn by the first user, determines an intercepting area according to the track, and performs a two-dimensional code recognition operation on the intercepting area in the video stream. In some embodiments, during the video call, the second user aims the camera of the second user equipment at the two-dimensional code to be shown to the first user, and the code is sent to the first user equipment as part of the video stream, so that it is displayed on the video picture of the second user presented on the first user equipment; the second user does not need to quit the video call at all, nor to provide the two-dimensional code to the first user through a photograph or a screenshot. Preferably, the second user switches the currently used front camera of the second user equipment to the rear camera and aims the rear camera at the two-dimensional code to be shown to the first user. In some embodiments, when the first user sees the two-dimensional code displayed on the video picture of the second user, the first user may perform a track drawing operation on the video picture with a finger (for example, the first user presses a finger at a certain position on the screen of the first user equipment and moves it while keeping it pressed). The first user equipment then obtains the track drawn by the first user and, in response to a track drawing end event corresponding to the track drawing operation (for example, the first user lifting the finger from the screen of the first user equipment), determines the corresponding intercepting area according to the track currently drawn by the first user. In some embodiments, the track drawn by the first user may or may not be displayed on the screen of the first user equipment. In some embodiments, if the track currently drawn by the first user is closed, the area enclosed by the closed track is determined as the intercepting area. In some embodiments, if the track currently drawn by the first user is not closed, the drawing start point (for example, where the finger was pressed) and the drawing end point (for example, where the finger was lifted) may be connected by a virtual straight line to obtain a virtual closed track corresponding to the drawn track, and the area enclosed by the virtual closed track is determined as the intercepting area. In some embodiments, if the track currently drawn by the first user is not closed, a virtual tangent extension line may instead be drawn at the drawing start point and at the drawing end point, a virtual closed track corresponding to the drawn track is obtained from the two virtual tangent extension lines and the screen boundary of the first user equipment, and the area enclosed by the virtual closed track is determined as the intercepting area.
In some embodiments, a two-dimensional code recognition operation is performed on a video frame image area corresponding to the cut-out area in the video picture of the second user, and two-dimensional code information contained in a two-dimensional code displayed on the video frame image area is obtained through recognition. In some embodiments, the two-dimensional code recognition operation is performed on the video frame image area corresponding to the cut-out area in the video picture of the second user, instead of performing the two-dimensional code recognition operation on all display areas in the video picture of the second user, so that the recognition speed of the two-dimensional code can be increased, and the recognition accuracy and the recognition efficiency of the two-dimensional code can be improved.
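As a concrete illustration of this step, the following is a minimal sketch, assuming the decoded video frame is available as an OpenCV/NumPy image and the intercepting area has already been reduced to an axis-aligned bounding rectangle in frame-pixel coordinates (see the region sketch after the closure discussion below); the function and parameter names are illustrative and are not taken from the patent.

```python
import cv2
import numpy as np
from typing import Optional, Tuple

def recognize_qr_in_region(frame: np.ndarray,
                           region: Tuple[int, int, int, int]) -> Optional[str]:
    """Run QR recognition only on the cropped intercepting area of one frame.

    frame  -- decoded BGR video frame (H x W x 3) from the second user's stream
    region -- (x, y, w, h) bounding rectangle of the intercepting area, in frame pixels
    """
    x, y, w, h = region
    # Clamp the rectangle to the frame so a track drawn near the edge stays valid.
    x0, y0 = max(x, 0), max(y, 0)
    x1, y1 = min(x + w, frame.shape[1]), min(y + h, frame.shape[0])
    if x1 <= x0 or y1 <= y0:
        return None

    cropped = frame[y0:y1, x0:x1]
    data, _points, _ = cv2.QRCodeDetector().detectAndDecode(cropped)
    return data if data else None
```

Restricting decoding to the cropped region is what the paragraph above credits for the gain in speed and accuracy; the full-frame fallback described further below can simply call the same function with the whole frame as the region.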
In step S12, if the recognition succeeds, the first user equipment processes the two-dimensional code information obtained by the recognition. In some embodiments, if the recognition succeeds, the two-dimensional code information obtained may be processed directly, or it may be processed according to the user authorization information or user identification information (e.g., a token, a UUID (Universally Unique Identifier), etc.) of the first user, or according to the personal real identity information of the first user bound to the video call application.
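The patent leaves the concrete processing of the recognized information open. The following sketch only illustrates one possible first step, classifying the decoded payload so the application can decide how to handle it; the categories and the "WIFI:" prefix convention are assumptions, not part of the disclosure.

```python
from urllib.parse import urlparse

def classify_qr_payload(payload: str) -> str:
    """Classify the decoded two-dimensional code content so the caller can decide
    how to process it (open a page, start a payment flow, show plain text, ...).
    The categories and prefixes below are illustrative assumptions."""
    parsed = urlparse(payload)
    if parsed.scheme in ("http", "https"):
        # Web-style codes: typically opened in an in-app browser, optionally
        # attaching the first user's authorization or identification info.
        return "web_link"
    if payload.startswith("WIFI:"):
        # Common (non-patent) convention for Wi-Fi configuration codes.
        return "wifi_config"
    # Anything else is handed to the business layer unchanged.
    return "opaque_text"
```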
With the method, during a video call between a first user and a second user, in response to a track drawing operation performed by the first user on the video stream of the second user, the track drawn by the first user is obtained, an intercepting area is determined according to the track, and a two-dimensional code recognition operation is performed on the intercepting area in the video stream. The second user therefore only needs to aim his or her camera at the two-dimensional code to be shown to the first user, without quitting the video call and without providing the code through a photograph or a screenshot, and the first user only needs to perform a track drawing operation on the screen around the two-dimensional code displayed by the second user for the first user equipment to recognize the code quickly and conveniently. Recognition of the two-dimensional code during the video call is thus extremely simple, convenient and accurate, which provides great convenience for the users participating in the call. Moreover, the two-dimensional code recognition operation is performed only on the video frame image area corresponding to the intercepting area in the video stream, rather than on the entire display area of the video stream, which increases the recognition speed of the two-dimensional code and improves its recognition accuracy and efficiency.
In some embodiments, the step S11 includes: during a video call between a first user and a second user, in response to a track drawing start triggering operation performed by the first user on the video stream of the second user, the first user equipment pauses playback of the video stream of the second user; in response to the track drawing operation performed by the first user on the video stream of the second user, it obtains the track drawn by the first user and determines a capture area according to the track; and it performs a two-dimensional code recognition operation on the capture area in the current video frame corresponding to the video stream and resumes playback of the video stream of the second user. In some embodiments, the track drawing start triggering operation may be the first user pressing a finger at a certain position on the screen of the first user equipment, or the first user, after pressing, moving the finger while keeping it pressed by a distance greater than or equal to a predetermined distance threshold (e.g., 10 pixels, 1 centimeter, etc.), or the first user clicking a certain button (e.g., a "start drawing track" button) on the current page. In some embodiments, after the capture area is determined according to the track currently drawn by the first user, a two-dimensional code recognition operation is performed on the image area corresponding to the capture area in the current video frame image, and the two-dimensional code information contained in the two-dimensional code displayed in that image area is recognized; here, the current video frame image corresponds to the video picture of the second user whose playback is currently paused because the video stream of the second user was paused previously. In some embodiments, playback of the second user's video stream resumes after successful recognition. In some embodiments, playback may also resume directly after the recognition fails; alternatively, playback is not resumed directly after a failure, the first user may perform the track drawing operation on the current video picture again, the first user equipment re-determines the capture area and retries the two-dimensional code recognition operation on the corresponding image area of the current video frame, and playback of the video stream of the second user resumes once the number of recognition failures reaches a predetermined threshold. In some embodiments, playback is not resumed directly after a failure; instead a "resume play" button may be placed on the current page, and playback of the second user's video stream resumes after the user clicks this button.
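A rough sketch of this pause / recognize / resume flow, assuming a player object exposing pause(), resume() and current_frame() (placeholder names for whatever the video-call SDK actually provides), and reusing recognize_qr_in_region from the earlier sketch and a capture_region_from_track helper sketched after the closure discussion below; the retry limit of 3 is illustrative, the patent only speaks of a predetermined threshold.

```python
MAX_RETRIES = 3  # illustrative; the patent only speaks of a "predetermined number threshold"

def recognize_on_paused_stream(player, drawn_track, retries_so_far: int = 0):
    """Freeze the remote stream, recognize inside the drawn region, then resume."""
    player.pause()                    # stop rendering new frames of the second user
    frame = player.current_frame()    # the frame the first user drew on
    region = capture_region_from_track(
        drawn_track, (frame.shape[1], frame.shape[0]))  # see the region sketch below
    data = recognize_qr_in_region(frame, region)        # see the earlier sketch
    if data is not None:
        player.resume()               # recognition succeeded: restore playback
        return data
    if retries_so_far + 1 >= MAX_RETRIES:
        player.resume()               # retry budget spent: restore playback anyway
        return None
    # Otherwise keep the frame frozen so the first user can redraw and retry.
    return None
```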
In some embodiments, the step S11 includes: during a video call between a first user and a second user, in response to a track drawing start triggering operation performed by the first user on the video stream of the second user, the first user equipment obtains a first current video frame image corresponding to the video stream and presents the first current video frame image over the video stream; in response to the track drawing operation performed by the first user on the first current video frame image, it obtains the track drawn by the first user and determines a capture area according to the track; and it performs a two-dimensional code recognition operation on the capture area in the first current video frame image and cancels the presentation of the first current video frame image. In some embodiments, in response to the track drawing start triggering operation performed by the first user on the video stream of the second user, the current video frame image corresponding to the video stream is obtained and displayed over the video stream in an overlapping manner; after the capture area is determined according to the track currently drawn by the first user, a two-dimensional code recognition operation is performed on the image area of the current video frame image corresponding to the capture area, and the two-dimensional code information contained in the two-dimensional code displayed in that image area is recognized. In some embodiments, the current video frame image is hidden after successful recognition. In some embodiments, the current video frame image may also be hidden directly after the recognition fails; alternatively, it is not hidden directly after a failure, the first user may perform the track drawing operation on the current video frame again, the first user equipment re-determines the capture area and retries the two-dimensional code recognition operation on the corresponding image area of the current video frame, and the current video frame image is hidden once the number of recognition failures reaches a predetermined threshold. In some embodiments, the current video frame image is not hidden directly after the recognition fails; instead a predetermined button is placed on the current page, and the current video frame image is hidden after the user clicks this button.
In some embodiments, the step S11 includes: during a video call between a first user and a second user, in response to a track drawing operation performed by the first user on the video stream of the second user, the first user equipment obtains the track drawn by the first user, determines an intercepting area according to the track, and performs a two-dimensional code recognition operation on the intercepting area in the current video frame corresponding to the video stream. In some embodiments, in response to a track drawing end event corresponding to the track drawing operation (for example, the first user lifting a finger from the screen of the first user equipment), the corresponding capture area is determined according to the track currently drawn by the first user, and a two-dimensional code recognition operation is performed on the image area corresponding to the capture area in the current video frame image of the video stream of the second user, so as to recognize the two-dimensional code information contained in the two-dimensional code displayed in that image area.
In some embodiments, the obtaining the track drawn by the first user and determining the intercepting area according to the track includes: obtaining the track drawn by the first user, detecting, in response to a track drawing end event corresponding to the track drawing operation, whether the track drawn by the first user is closed, and if so, determining the area enclosed by the drawn track as the capture area. In some embodiments, the track drawing end event corresponding to the track drawing operation may be the first user's finger lifting from the screen of the first user equipment, or the first user's finger moving out of the display area of the video picture of the second user, or the first user's finger pressing at a certain position on the screen of the first user equipment and staying there longer than a predetermined time threshold. In some embodiments, whether the track currently drawn by the first user is closed may be determined by detecting whether the track intersects itself; if it does, the track is considered closed, and the area it encloses is determined as the intercepting area.
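One way to implement the self-intersection test mentioned above is a brute-force check of all non-adjacent segment pairs of the sampled track. The sketch below assumes the track is a list of (x, y) points and ignores degenerate collinear cases, which is usually acceptable for finger-drawn tracks sampled at modest rates; the function names are illustrative.

```python
def _segments_cross(p1, p2, q1, q2) -> bool:
    """Proper intersection test for segments p1-p2 and q1-q2 via orientation signs."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def track_is_closed(points) -> bool:
    """Treat the drawn track as closed if any two non-adjacent segments cross."""
    segments = list(zip(points, points[1:]))
    for i in range(len(segments)):
        for j in range(i + 2, len(segments)):  # skip neighbours that share an endpoint
            if _segments_cross(*segments[i], *segments[j]):
                return True
    return False
```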
In some embodiments, the method further comprises: if the track drawn by the first user is not closed, the first user equipment determines a virtual closed track corresponding to the track according to the drawing start point and the drawing end point corresponding to the track, and determines the area enclosed by the virtual closed track as the capture area. In some embodiments, if the track drawn by the first user is not closed, the drawing start point and the drawing end point may be connected by a virtual straight line to obtain the corresponding virtual closed track, and the area enclosed by the virtual closed track is determined as the capture area. In some embodiments, if the track drawn by the first user is not closed, a virtual tangent extension line may instead be drawn at the drawing start point and at the drawing end point, a corresponding virtual closed track is obtained from the two virtual tangent extension lines and the screen boundary of the first user equipment or the boundary of the video picture of the second user, and the area enclosed by the virtual closed track is determined as the capture area.
In some embodiments, the determining the virtual closed area corresponding to the track according to the drawing start point and the drawing end point corresponding to the track includes: connecting the drawing start point and the drawing end point through a virtual straight line to obtain the virtual closed area corresponding to the track. In some embodiments, the virtual straight line may or may not be displayed on the screen of the first user equipment.
In some embodiments, the determining a virtual closed track corresponding to the track according to the drawing start point and the drawing end point corresponding to the track includes: drawing a virtual tangent extension line at the drawing start point and at the drawing end point respectively, and obtaining the virtual closed area corresponding to the track from the two virtual tangent extension lines and the boundary of the video stream. In some embodiments, the virtual tangent extension lines may or may not be displayed on the screen of the first user equipment. In some embodiments, the boundary of the video stream may be the boundary of the screen of the first user equipment or, alternatively, the boundary of the video picture of the second user.
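A minimal sketch of turning the (possibly unclosed) track into a capture rectangle, assuming the sampled points have already been mapped from screen coordinates into frame-pixel coordinates; the names are illustrative. It implements only the straight-line closure described above: joining the start and end points with a straight segment does not change the bounding box of the points, so the bounding box is used directly. The tangent-extension variant is not shown.

```python
import numpy as np
from typing import Tuple

def capture_region_from_track(points,
                              frame_size: Tuple[int, int]) -> Tuple[int, int, int, int]:
    """Turn a drawn track into an axis-aligned capture rectangle (x, y, w, h).

    points     -- sampled track points, already in frame-pixel coordinates
    frame_size -- (width, height) of the second user's video frame
    """
    pts = np.asarray(points, dtype=int)
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    w, h = frame_size
    # Clamp to the frame so a track that leaves the picture still yields a valid region.
    x0, y0 = max(int(x0), 0), max(int(y0), 0)
    x1, y1 = min(int(x1), w), min(int(y1), h)
    return (x0, y0, x1 - x0, y1 - y0)
```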
In some embodiments, the obtaining the track drawn by the first user and determining the intercepting area according to the track includes: obtaining the track drawn by the first user, and determining the area enclosed by the track drawn by the first user as the intercepting area in response to a track closing event corresponding to the track drawing operation. In some embodiments, a track closing event corresponding to the track drawing operation performed by the first user on the video picture of the second user may be used directly as the track drawing end event, and when the track currently drawn by the first user closes, the area enclosed by it is determined as the capture area.
In some embodiments, the method further comprises: if the two-dimensional code information is not identified in the intercepting region in the current video frame, the first user equipment performs a two-dimensional code identification operation on the intercepting region in a target video frame that precedes the current video frame in the video stream. In some embodiments, if the two-dimensional code is not identified in the image area corresponding to the cut-out area in the current video frame image corresponding to the video stream of the second user, the two-dimensional code identification operation is performed on the image area corresponding to the cut-out area in a target video frame image that precedes the current video frame image in the video stream of the second user. In some embodiments, the target video frame image may be the video frame image of the second user's video stream corresponding to the start time point of the first user's track drawing operation. In some embodiments, the target video frame image may also be the video frame image immediately preceding the current video frame image in the video stream of the second user.
In some embodiments, if the two-dimensional code information is not identified in the capture area of the current video frame, performing a two-dimensional code identification operation on the capture area of a target video frame that precedes the current video frame in the video stream includes: if the two-dimensional code information is not identified in the intercepting region in the current video frame, acquiring the previous video frame corresponding to the current video frame and performing the two-dimensional code identification operation on the intercepting region in that previous video frame, and repeating this operation until the two-dimensional code information is identified from the intercepting region in the target video frame. In some embodiments, the target video frame image may be the video frame image immediately preceding the current video frame image in the video stream of the second user, and the two-dimensional code recognition operation is performed on the image area of the target video frame image corresponding to the cut-out area; if the two-dimensional code is not recognized in that video frame image, the target video frame image is set to the video frame image immediately preceding it in the video stream of the second user and the two-dimensional code recognition operation is performed on the corresponding image area again, and so on, until the two-dimensional code information is recognized from the image area corresponding to the cut-out area in a target video frame.
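A sketch of this backward walk over earlier frames, assuming the client keeps a small buffer of recently decoded frames (the patent does not say how earlier frames are made available) and reusing recognize_qr_in_region from the earlier sketch.

```python
from collections import deque
from typing import Optional

def recognize_with_frame_history(frames: deque, region) -> Optional[str]:
    """Walk backwards through recently received frames until a code is found.

    `frames` is assumed to be a ring buffer of decoded frames, most recent last.
    """
    for frame in reversed(frames):                    # current frame first, then older ones
        data = recognize_qr_in_region(frame, region)  # see the earlier sketch
        if data is not None:
            return data
    return None
```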
In some embodiments, if the two-dimensional code information is not identified in the capture area of the current video frame, performing a two-dimensional code identification operation on the capture area of a target video frame that precedes the current video frame in the video stream includes: acquiring a starting time point of the track drawing operation; and acquiring a target video frame corresponding to the starting time point from the video stream, and executing two-dimensional code identification operation on the intercepted area in the target video frame. In some embodiments, in response to a track-drawing operation performed by a first user on a video screen of a second user, a starting time point of the track-drawing operation is recorded, and may be recorded in a memory, or may be recorded locally on a first user device. In some embodiments, a starting time point of the track drawing operation is read, a video frame image corresponding to the starting time point of the video stream of the second user is determined as a target video frame image, and a two-dimensional code recognition operation is performed on an image area corresponding to the intercepting area in the target video frame image.
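A sketch of selecting the target frame by the recorded start time of the track drawing operation, assuming the frame buffer stores (timestamp, frame) pairs; how timestamps are assigned to frames is an implementation detail not specified by the patent, and the names are illustrative.

```python
def frame_at_track_start(timestamped_frames, track_start_ts):
    """Pick the buffered frame closest to the moment the first user began drawing.

    timestamped_frames -- assumed list of (timestamp, frame) pairs
    track_start_ts     -- recorded start time point of the track drawing operation
    """
    _ts, frame = min(timestamped_frames, key=lambda tf: abs(tf[0] - track_start_ts))
    return frame
```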
In some embodiments, the method further comprises: if the two-dimensional code information is not identified in the intercepting area in the current video frame, the first user equipment performs the two-dimensional code identification operation on the entire display area of the current video frame. In some embodiments, if the two-dimensional code is not identified in the image area corresponding to the cut-out area in the current video frame image corresponding to the video stream of the second user, the two-dimensional code identification operation is performed on the entire display area of the current video frame image.
In some embodiments, the method further comprises: if the identification is successful, the first user equipment generates identification success prompt information and sends the identification success prompt information to second user equipment corresponding to the second user so as to present the identification success prompt information on the second user equipment. In some embodiments, if the two-dimensional code is successfully identified, an identification success prompt message is generated and sent to the second user equipment for presentation so as to prompt the second user not to continue to aim the camera of the second user equipment at the two-dimensional code which needs to be presented to the first user, where the identification success prompt message may be directly sent to the second user equipment, or may also be sent to the second user equipment via a server. In some embodiments, the recognition success prompt information may be presented on the second user device in a visual form (e.g., text, icon, text + icon, etc.) or may also be presented on the second user device in a voice-played form.
In some embodiments, the method further comprises: the first user equipment sends the track drawn by the first user to the second user equipment corresponding to the second user in real time, so as to present the track drawn by the first user on the second user equipment in real time. In some embodiments, while the first user performs the track drawing operation on the video picture of the second user, the first user equipment sends the track drawn by the first user to the second user equipment in real time, where it is displayed.
Fig. 2 shows a flowchart of a method for identifying a two-dimensional code applied to a second user equipment according to an embodiment of the present application, and the method includes steps S21 and S22. In step S21, in the process of a video call between a first user and a second user, in response to a track drawing operation of the second user on a video stream of the second user, a second user device obtains a track drawn by the second user, determines an intercepting area according to the track, and performs a two-dimensional code recognition operation on the intercepting area in the video stream; in step S22, if the second user equipment is successfully identified, the two-dimensional code information obtained through identification is sent to the first user equipment corresponding to the first user, so that the first user equipment processes the two-dimensional code information.
In step S21, during a video call between a first user and a second user, in response to a track drawing operation performed by the second user on the video stream of the second user, the second user equipment obtains the track drawn by the second user, determines an intercepting area according to the track, and performs a two-dimensional code recognition operation on the intercepting area in the video stream. In some embodiments, during the video call, the second user aims the camera of the second user equipment at the two-dimensional code, so that the two-dimensional code is displayed on the video picture of the second user presented on the second user equipment, without the second user exiting the video call or acquiring the two-dimensional code by taking a picture or a screenshot. The related operations are otherwise the same as or similar to those in the previous embodiments and are not described again here.
In step S22, if the second user equipment is successfully identified, the two-dimensional code information obtained through identification is sent to the first user equipment corresponding to the first user, so that the first user equipment processes the two-dimensional code information. The related operations are the same as or similar to those in the previous embodiments, and are not described again here.
Fig. 3 is a diagram illustrating the structure of a first user equipment for identifying a two-dimensional code according to an embodiment of the present application; the first user equipment includes a module 11 and a module 12. The module 11 is configured to, during a video call between a first user and a second user, obtain the track drawn by the first user in response to a track drawing operation performed by the first user on the video stream of the second user, determine an interception area according to the track, and perform a two-dimensional code recognition operation on the interception area in the video stream; and the module 12 is configured to process the two-dimensional code information obtained by the recognition if the recognition succeeds.
The module 11 is configured to, during a video call between a first user and a second user, obtain the track drawn by the first user in response to a track drawing operation performed by the first user on the video stream of the second user, determine an interception area according to the track, and perform a two-dimensional code recognition operation on the interception area in the video stream. In some embodiments, during the video call, the second user aims the camera of the second user equipment at the two-dimensional code to be shown to the first user and sends it to the first user equipment as part of the video stream, so that the two-dimensional code is displayed on the video picture of the second user presented on the first user equipment; the second user does not need to quit the video call at all, nor to provide the two-dimensional code to the first user through a photograph or a screenshot. In some embodiments, when the first user sees the two-dimensional code displayed on the video picture of the second user, the first user may perform a track drawing operation on the video picture with a finger (for example, pressing the finger at a certain position on the screen of the first user equipment and moving it while keeping it pressed); the first user equipment then obtains the track drawn by the first user and, in response to a track drawing end event corresponding to the track drawing operation (for example, the finger lifting from the screen of the first user equipment), determines the corresponding interception area according to the track currently drawn by the first user. In some embodiments, the track drawn by the first user may or may not be displayed on the screen of the first user equipment. In some embodiments, if the track currently drawn by the first user is closed, the area enclosed by the closed track is determined as the interception area. In some embodiments, if the track currently drawn by the first user is not closed, the drawing start point (for example, where the finger was pressed) and the drawing end point (for example, where the finger was lifted) may be connected by a virtual straight line to obtain a virtual closed track corresponding to the drawn track, and the area enclosed by the virtual closed track is determined as the interception area. In some embodiments, if the track currently drawn by the first user is not closed, a virtual tangent extension line may instead be drawn at the drawing start point and at the drawing end point, a virtual closed track corresponding to the drawn track is obtained from the two virtual tangent extension lines and the screen boundary of the first user equipment, and the area enclosed by the virtual closed track is determined as the interception area. In some embodiments, a two-dimensional code recognition operation is performed on the video frame image area corresponding to the interception area in the video picture of the second user, and the two-dimensional code information contained in the two-dimensional code displayed in that area is recognized.
In some embodiments, the two-dimensional code recognition operation is performed on the video frame image area corresponding to the cut-out area in the video picture of the second user, instead of performing the two-dimensional code recognition operation on all display areas in the video picture of the second user, so that the recognition speed of the two-dimensional code can be increased, and the recognition accuracy and the recognition efficiency of the two-dimensional code can be improved.
The module 12 is configured to process the two-dimensional code information obtained by the recognition if the recognition succeeds. In some embodiments, if the recognition succeeds, the two-dimensional code information obtained may be processed directly, or it may be processed according to the user authorization information or user identification information (e.g., a token, a UUID (Universally Unique Identifier), etc.) of the first user, or according to the personal real identity information of the first user bound to the video call application.
In some embodiments, the module 11 is configured to: during a video call between a first user and a second user, in response to a track drawing start triggering operation performed by the first user on the video stream of the second user, pause playback of the video stream of the second user; in response to the track drawing operation performed by the first user on the video stream of the second user, obtain the track drawn by the first user and determine a capture area according to the track; and perform a two-dimensional code recognition operation on the capture area in the current video frame corresponding to the video stream and resume playback of the video stream of the second user. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the module 11 is configured to: in the process of a video call between a first user and a second user, responding to the track drawing starting triggering operation of the first user aiming at the video stream of the second user, acquiring a first current video frame image corresponding to the video stream, and displaying the first current video frame image on the video stream; responding to the track drawing operation of the first user for the first current video frame image, obtaining a track drawn by the first user, and determining a capture area according to the track; and executing two-dimensional code identification operation on the intercepted area in the first current video frame image, and canceling the presentation of the first current video frame image. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the module 11 is configured to: in the process of video call between a first user and a second user, responding to the track drawing operation of the first user for the video stream of the second user, obtaining the track drawn by the first user, determining an intercepting area according to the track, and executing two-dimensional code identification operation on the intercepting area in the current video frame corresponding to the video stream. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the obtaining the track drawn by the first user and determining the intercepting area according to the track includes: obtaining the track drawn by the first user, detecting, in response to a track drawing end event corresponding to the track drawing operation, whether the track drawn by the first user is closed, and if so, determining the area enclosed by the drawn track as the capture area. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the apparatus is further configured to: if the track drawn by the first user is not closed, determine a virtual closed track corresponding to the track according to the drawing start point and the drawing end point corresponding to the track, and determine the area enclosed by the virtual closed track as the capture area. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the determining the virtual closed area corresponding to the track according to the drawing start point and the drawing end point corresponding to the track includes: connecting the drawing start point and the drawing end point through a virtual straight line to obtain the virtual closed area corresponding to the track. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the determining a virtual closed track corresponding to the track according to the drawing start point and the drawing end point corresponding to the track includes: drawing a virtual tangent extension line at the drawing start point and at the drawing end point respectively, and obtaining the virtual closed area corresponding to the track from the two virtual tangent extension lines and the boundary of the video stream. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the obtaining the track drawn by the first user and determining the intercepting area according to the track includes: obtaining the track drawn by the first user, and determining the area enclosed by the track drawn by the first user as the intercepting area in response to a track closing event corresponding to the track drawing operation. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the apparatus is further configured to: and if the two-dimensional code information is not identified in the intercepting region in the current video frame, executing two-dimensional code identification operation on the intercepting region in the target video frame before the current video frame in the video stream. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, if the two-dimensional code information is not identified in the capture area of the current video frame, performing a two-dimensional code identification operation on the capture area of a target video frame that precedes the current video frame in the video stream includes: if the two-dimensional code information is not identified in the intercepting region in the current video frame, acquiring a previous video frame corresponding to the current video frame, and executing two-dimensional code identification operation on the intercepting region in the previous video frame, and repeating this operation until the two-dimensional code information is identified from the intercepting region in the target video frame. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, if the two-dimensional code information is not identified in the capture area of the current video frame, performing a two-dimensional code identification operation on the capture area of a target video frame that precedes the current video frame in the video stream includes: acquiring a starting time point of the track drawing operation; and acquiring a target video frame corresponding to the starting time point from the video stream, and executing two-dimensional code identification operation on the intercepted area in the target video frame. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the apparatus is further configured to: and if the two-dimensional code information is not identified in the intercepting area in the current video frame, executing two-dimensional code identification operation on all display areas of the current video frame. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
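The fallback chain of the last few embodiments can be sketched as follows, reusing decode_intercepting_area from the earlier sketch. The frame buffer, its timestamps, and the idea of keeping recent decoded frames in memory are assumptions for illustration; the application only specifies which regions and frames are tried, not how they are buffered.

```python
# Sketch of the fallback chain: the intercepting area of the current frame
# first, then the same area in earlier buffered frames (back to the frame on
# screen when the first user started drawing), and finally the full display
# area of the current frame.
import cv2

detector = cv2.QRCodeDetector()

def decode_with_fallback(frames, timestamps, mask, draw_start_ts):
    """`frames` and `timestamps` are most-recent-last buffers kept by the caller."""
    current = frames[-1]
    data = decode_intercepting_area(current, mask)  # helper from the earlier sketch
    if data:
        return data
    # Walk back through earlier frames until the target frame, i.e. the one
    # displayed when the track drawing operation started.
    for frame, ts in zip(reversed(frames[:-1]), reversed(timestamps[:-1])):
        data = decode_intercepting_area(frame, mask)
        if data or ts <= draw_start_ts:
            break
    if data:
        return data
    # Last resort: run identification on all display areas of the current frame.
    full, _points, _straight = detector.detectAndDecode(current)
    return full or None
```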
In some embodiments, the apparatus is further configured to: and if the identification is successful, generating identification success prompt information, and sending the identification success prompt information to second user equipment corresponding to the second user so as to present the identification success prompt information on the second user equipment. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
In some embodiments, the apparatus is further configured to: and sending the track drawn by the first user to second user equipment corresponding to the second user in real time so as to present the track drawn by the first user on the second user equipment in real time. Here, the related operations are the same as or similar to those of the embodiment shown in fig. 1, and therefore are not described again, and are included herein by reference.
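As a purely hypothetical sketch of how the drawn track might be forwarded to the second user equipment in real time: the `send_to_peer` callable and the JSON message shape are inventions of this sketch, since the application does not specify any wire format.

```python
# Sketch: push each track point to the peer as it is drawn, so the second
# user equipment can render the first user's track in real time.
import json
import time

def forward_track_point(send_to_peer, x, y, drawing_finished=False):
    message = {
        "type": "track_point",
        "x": x,
        "y": y,
        "ts": time.time(),       # lets the peer replay the points in order
        "finished": drawing_finished,
    }
    send_to_peer(json.dumps(message))  # transport is assumed, e.g. a data channel
```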
Fig. 4 shows a structural diagram of second user equipment for identifying a two-dimensional code according to an embodiment of the present application, where the second user equipment includes a first module 21 and a second module 22. The first module 21 is configured to, in a video call between a first user and a second user, respond to a track drawing operation of the second user on a video stream of the second user, obtain a track drawn by the second user, determine an intercepting region according to the track, and perform a two-dimensional code recognition operation on the intercepting region in the video stream; and the second module 22 is configured to send the two-dimensional code information obtained through identification to the first user equipment corresponding to the first user if the two-dimensional code information is successfully identified, so that the first user equipment processes the two-dimensional code information.
The first module 21 is configured to, in a video call between a first user and a second user, obtain a track drawn by the second user in response to a track drawing operation of the second user on a video stream of the second user, determine an intercepting area according to the track, and perform a two-dimensional code recognition operation on the intercepting area in the video stream. In some embodiments, in the process of the video call between the first user and the second user, the second user aims the camera of the second user equipment at a two-dimensional code, so that the two-dimensional code is displayed in the video picture of the second user equipment presented on that equipment, without the second user exiting the video call or acquiring the two-dimensional code by taking a photograph or a screenshot. The related operations are the same as or similar to those in the previous embodiments, and are not described again here.
The second module 22 is configured to send the two-dimensional code information obtained through identification to the first user equipment corresponding to the first user if the two-dimensional code information is successfully identified, so that the first user equipment processes the two-dimensional code information. The related operations are the same as or similar to those in the previous embodiments, and are not described again here.
FIG. 5 illustrates an exemplary system that can be used to implement the various embodiments described in this application.
In some embodiments, as shown in FIG. 5, the system 300 can be implemented as any of the devices in the various embodiments described. In some embodiments, system 300 may include one or more computer-readable media (e.g., system memory or NVM/storage 320) having instructions and one or more processors (e.g., processor(s) 305) coupled with the one or more computer-readable media and configured to execute the instructions to implement modules to perform the actions described herein.
For one embodiment, system control module 310 may include any suitable interface controllers to provide any suitable interface to at least one of processor(s) 305 and/or any suitable device or component in communication with system control module 310.
The system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315. Memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
System memory 315 may be used, for example, to load and store data and/or instructions for system 300. For one embodiment, system memory 315 may include any suitable volatile memory, such as suitable DRAM. In some embodiments, the system memory 315 may include a double data rate type four synchronous dynamic random access memory (DDR4 SDRAM).
For one embodiment, system control module 310 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage 320 and communication interface(s) 325.
For example, NVM/storage 320 may be used to store data and/or instructions. NVM/storage 320 may include any suitable non-volatile memory (e.g., flash memory) and/or may include any suitable non-volatile storage device(s) (e.g., one or more Hard Disk Drives (HDDs), one or more Compact Disc (CD) drives, and/or one or more Digital Versatile Disc (DVD) drives).
NVM/storage 320 may include storage resources that are physically part of the device on which system 300 is installed or may be accessed by the device and not necessarily part of the device. For example, NVM/storage 320 may be accessible over a network via communication interface(s) 325.
Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device. System 300 may wirelessly communicate with one or more components of a wireless network according to any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) (e.g., memory controller module 330) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be packaged together with logic for one or more controller(s) of the system control module 310 to form a System In Package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310. For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic for one or more controller(s) of the system control module 310 to form a system on a chip (SoC).
In various embodiments, system 300 may be, but is not limited to being: a server, a workstation, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet, a netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or different architectures. For example, in some embodiments, system 300 includes one or more cameras, a keyboard, a Liquid Crystal Display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an Application Specific Integrated Circuit (ASIC), and speakers.
The present application also provides a computer readable storage medium having stored thereon computer code which, when executed, performs the method of any one of the foregoing embodiments.
The present application also provides a computer program product which, when executed by a computer device, performs the method of any one of the foregoing embodiments.
The present application further provides a computer device, comprising:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of the foregoing embodiments.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, some of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application through the operation of the computer. Those skilled in the art will appreciate that the form in which the computer program instructions reside on a computer-readable medium includes, but is not limited to, source files, executable files, installation package files, and the like, and that the manner in which the computer program instructions are executed by a computer includes, but is not limited to: the computer directly executes the instruction, or the computer compiles the instruction and then executes the corresponding compiled program, or the computer reads and executes the instruction, or the computer reads and installs the instruction and then executes the corresponding installed program. Computer-readable media herein can be any available computer-readable storage media or communication media that can be accessed by a computer.
Communication media include media by which communication signals, including, for example, computer readable instructions, data structures, program modules, or other data, are transmitted from one system to another. Communication media may include conductive transmission media such as cables and wires (e.g., fiber optics, coaxial, etc.) and wireless (non-conductive transmission) media capable of propagating energy waves such as acoustic, electromagnetic, RF, microwave, and infrared. Computer readable instructions, data structures, program modules, or other data may be embodied in a modulated data signal, for example, in a wireless medium such as a carrier wave or a similar mechanism such as those embodied as part of spread spectrum techniques. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The modulation may be an analog, digital, or hybrid modulation technique.
By way of example, and not limitation, computer-readable storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer-readable storage media include, but are not limited to, volatile memory such as random access memory (RAM, DRAM, SRAM); and non-volatile memory such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); and magnetic and optical storage devices (hard disk, tape, CD, DVD); or other now known media or later developed that can store computer-readable information/data for use by a computer system.
An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (19)

1. A method for identifying a two-dimensional code is applied to first user equipment, wherein the method comprises the following steps:
in the process of video call between a first user and a second user, responding to the track drawing operation of the first user for the video stream of the second user, obtaining the track drawn by the first user, determining an intercepting area according to the track, and executing two-dimensional code identification operation on the intercepting area in the video stream;
and if the identification is successful, processing the two-dimensional code information obtained by the identification.
2. The method of claim 1, wherein the obtaining a track drawn by a first user in response to a track drawing operation of the first user on a video stream of a second user during a video call between the first user and the second user, determining a capture area according to the track, and performing a two-dimensional code recognition operation on the capture area in the video stream comprises:
in the process of video call between a first user and a second user, in response to the track drawing starting triggering operation of the first user for the video stream of the second user, pausing the playing of the video stream of the second user;
responding to the track drawing operation of the first user for the video stream of the second user, obtaining a track drawn by the first user, and determining a capture area according to the track;
and executing two-dimensional code identification operation on the intercepted area in the current video frame corresponding to the video stream, and restoring to play the video stream of the second user.
3. The method of claim 1, wherein the obtaining a track drawn by a first user in response to a track drawing operation of the first user on a video stream of a second user during a video call between the first user and the second user, determining a capture area according to the track, and performing a two-dimensional code recognition operation on the capture area in the video stream comprises:
in the process of a video call between a first user and a second user, responding to the track drawing starting triggering operation of the first user aiming at the video stream of the second user, acquiring a first current video frame image corresponding to the video stream, and displaying the first current video frame image on the video stream;
responding to the track drawing operation of the first user for the first current video frame image, obtaining a track drawn by the first user, and determining a capture area according to the track;
and executing two-dimensional code identification operation on the intercepted area in the first current video frame image, and canceling the presentation of the first current video frame image.
4. The method of claim 1, wherein the obtaining a track drawn by a first user in response to a track drawing operation of the first user on a video stream of a second user during a video call between the first user and the second user, determining a capture area according to the track, and performing a two-dimensional code recognition operation on the capture area in the video stream comprises:
in the process of video call between a first user and a second user, responding to the track drawing operation of the first user for the video stream of the second user, obtaining the track drawn by the first user, determining an intercepting area according to the track, and executing two-dimensional code identification operation on the intercepting area in the current video frame corresponding to the video stream.
5. The method of any of claims 1 to 4, wherein the obtaining the track drawn by the first user and determining an intercepting area according to the track comprises:
and obtaining the track drawn by the first user, responding to a track drawing end event corresponding to the track drawing operation, detecting whether the track drawn by the first user is closed, and if so, determining an area enclosed by the drawn track as a capture area.
6. The method of claim 5, wherein the method further comprises:
if the drawn track of the first user is not closed, determining a virtual closed track corresponding to the track according to a drawing starting point and a drawing ending point corresponding to the track, and determining an area enclosed by the virtual closed track as an intercepted area.
7. The method according to claim 6, wherein the determining a virtual closed track corresponding to the track according to a drawing starting point and a drawing ending point corresponding to the track comprises:
and connecting the drawing starting point and the drawing ending point through a virtual straight line to obtain the virtual closed track corresponding to the track.
8. The method according to claim 6, wherein the determining a virtual closed track corresponding to the track according to a drawing starting point and a drawing ending point corresponding to the track comprises:
and respectively drawing a virtual tangent extension line at the drawing starting point and the drawing ending point, and obtaining the virtual closed track corresponding to the track according to the two drawn virtual tangent extension lines and the boundary of the video stream.
9. The method of any of claims 1 to 4, wherein the obtaining the track drawn by the first user and determining an intercepting area according to the track comprises:
and obtaining the track drawn by the first user, and determining an area surrounded by the track drawn by the first user as an intercepting area in response to a track closing event corresponding to the track drawing operation.
10. The method of claim 4, wherein the method further comprises:
and if the two-dimensional code information is not identified in the intercepting region in the current video frame, executing two-dimensional code identification operation on the intercepting region in the target video frame before the current video frame in the video stream.
11. The method of claim 10, wherein the executing the two-dimensional code identification operation on the intercepting region in the target video frame before the current video frame in the video stream if the two-dimensional code information is not identified in the intercepting region in the current video frame comprises:
if the two-dimensional code information is not identified in the intercepting region in the current video frame, acquiring a previous video frame corresponding to the current video frame, and executing two-dimensional code identification operation on the intercepting region in the previous video frame, and repeating this operation until the two-dimensional code information is identified from the intercepting region in the target video frame.
12. The method of claim 10, wherein the executing the two-dimensional code identification operation on the intercepting region in the target video frame before the current video frame in the video stream if the two-dimensional code information is not identified in the intercepting region in the current video frame comprises:
acquiring a starting time point of the track drawing operation;
and acquiring a target video frame corresponding to the starting time point from the video stream, and executing two-dimensional code identification operation on the intercepted area in the target video frame.
13. The method of claim 4, wherein the method further comprises:
and if the two-dimensional code information is not identified in the intercepting area in the current video frame, executing two-dimensional code identification operation on all display areas of the current video frame.
14. The method of claim 1, wherein the method further comprises:
and if the identification is successful, generating identification success prompt information, and sending the identification success prompt information to second user equipment corresponding to the second user so as to present the identification success prompt information on the second user equipment.
15. The method of claim 1, wherein the method further comprises:
and sending the track drawn by the first user to second user equipment corresponding to the second user in real time so as to present the track drawn by the first user on the second user equipment in real time.
16. A method for identifying a two-dimensional code is applied to a second user equipment, wherein the method comprises the following steps:
in the process of video call between a first user and a second user, responding to the track drawing operation of the second user for the video stream of the second user, obtaining the track drawn by the second user, determining an intercepting area according to the track, and executing two-dimensional code identification operation on the intercepting area in the video stream;
and if the identification is successful, sending the two-dimensional code information obtained by the identification to first user equipment corresponding to the first user so that the first user equipment can process the two-dimensional code information.
17. An apparatus for recognizing a two-dimensional code, wherein the apparatus comprises:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the method of any of claims 1 to 16.
18. A computer-readable medium storing instructions that, when executed by a computer, cause the computer to perform operations of any of the methods of claims 1-16.
19. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method according to any one of claims 1 to 16 when executed by a processor.
CN202011618821.3A 2020-12-30 2020-12-30 Method and equipment for identifying two-dimensional code Active CN112818719B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011618821.3A CN112818719B (en) 2020-12-30 2020-12-30 Method and equipment for identifying two-dimensional code
PCT/CN2021/125287 WO2022142620A1 (en) 2020-12-30 2021-10-21 Method and device for recognizing qr code

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011618821.3A CN112818719B (en) 2020-12-30 2020-12-30 Method and equipment for identifying two-dimensional code

Publications (2)

Publication Number Publication Date
CN112818719A (en) 2021-05-18
CN112818719B (en) 2023-06-23

Family

ID=75855836

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011618821.3A Active CN112818719B (en) 2020-12-30 2020-12-30 Method and equipment for identifying two-dimensional code

Country Status (2)

Country Link
CN (1) CN112818719B (en)
WO (1) WO2022142620A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113592468A (en) * 2021-07-12 2021-11-02 见面(天津)网络科技有限公司 Online payment method and device based on two-dimensional code
WO2022142620A1 (en) * 2020-12-30 2022-07-07 上海掌门科技有限公司 Method and device for recognizing qr code

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104090761A (en) * 2014-07-10 2014-10-08 福州瑞芯微电子有限公司 Screenshot application device and method
CN104573608A (en) * 2015-01-23 2015-04-29 苏州海博智能系统有限公司 Coded message scanning method and device
CN109286848A (en) * 2018-10-08 2019-01-29 腾讯科技(深圳)有限公司 A kind of exchange method, device and the storage medium of terminal video information
CN110659533A (en) * 2019-08-26 2020-01-07 福建天晴数码有限公司 Method for identifying two-dimensional code in video and computer readable storage medium
CN111935439A (en) * 2020-08-12 2020-11-13 维沃移动通信有限公司 Identification method and device and electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4293111B2 (en) * 2004-10-27 2009-07-08 株式会社デンソー Camera driving device, camera driving program, geometric shape code decoding device, and geometric shape code decoding program
CN101510269B (en) * 2009-02-18 2011-02-02 华为终端有限公司 Method and device for acquiring two-dimensional code in video
CN109636512A (en) * 2018-11-29 2019-04-16 苏宁易购集团股份有限公司 A kind of method and apparatus for realizing shopping process by video
CN111770380A (en) * 2020-01-16 2020-10-13 北京沃东天骏信息技术有限公司 Video processing method and device
CN112818719B (en) * 2020-12-30 2023-06-23 上海掌门科技有限公司 Method and equipment for identifying two-dimensional code

Also Published As

Publication number Publication date
CN112818719B (en) 2023-06-23
WO2022142620A1 (en) 2022-07-07

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant