WO2022142620A1 - A method and device for identifying a two-dimensional code (一种识别二维码的方法与设备)

Info

Publication number
WO2022142620A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
trajectory
two-dimensional code
area
video stream
Prior art date
Application number
PCT/CN2021/125287
Other languages
English (en)
French (fr)
Inventor
黄永生
Original Assignee
上海掌门科技有限公司
Priority date
Filing date
Publication date
Application filed by 上海掌门科技有限公司
Publication of WO2022142620A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06K - GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00 - Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10 - Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/14 - Methods or arrangements for sensing record carriers by electromagnetic radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K 7/1404 - Methods for optical code recognition
    • G06K 7/1408 - Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K 7/1417 - 2D bar codes
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/14 - Systems for two-way working

Definitions

  • the present application relates to the field of communications, and in particular, to a technology for identifying two-dimensional codes.
  • Two-dimensional codes have been widely used in all walks of life and touch almost every aspect of daily living. A user can scan a two-dimensional code to obtain the corresponding two-dimensional code content, which has greatly improved the convenience of people's daily life.
  • An object of the present application is to provide a method and device for identifying a two-dimensional code.
  • a method for identifying a two-dimensional code applied to a first user equipment comprising:
  • the trajectory drawn by the first user is obtained, an interception area is determined according to the trajectory, and a two-dimensional code identification operation is performed on the interception area in the video stream;
  • if the identification is successful, the two-dimensional code information obtained by the identification is processed.
  • a method for identifying a two-dimensional code applied to a second user equipment comprising:
  • the trajectory drawn by the second user is obtained, an interception area is determined according to the trajectory, and a two-dimensional code identification operation is performed on the interception area in the video stream;
  • the identified two-dimensional code information is sent to the first user equipment corresponding to the first user, so that the first user equipment can process the two-dimensional code information.
  • a first user equipment for identifying a two-dimensional code, comprising:
  • a one-one module, configured to, during a video call between the first user and the second user, in response to the first user's trajectory drawing operation on the video stream of the second user, obtain the trajectory drawn by the first user, determine an interception area according to the trajectory, and perform a two-dimensional code identification operation on the interception area in the video stream; and a one-two module, configured to process the two-dimensional code information obtained from the recognition if the recognition is successful.
  • a second user equipment for identifying a two-dimensional code comprising:
  • the two-one module 21 is configured to, during the video call between the first user and the second user, in response to the second user's trajectory drawing operation on the video stream of the second user, obtain the trajectory drawn by the second user, determine an interception area according to the trajectory, and perform a two-dimensional code identification operation on the interception area in the video stream;
  • the two-two module 22 is configured to, if the identification is successful, send the identified two-dimensional code information to the first user equipment corresponding to the first user, so that the first user equipment can process the two-dimensional code information.
  • a device for recognizing a two-dimensional code wherein the device includes:
  • a processor, and a memory arranged to store computer-executable instructions that, when executed, cause the processor to:
  • the trajectory drawn by the first user is obtained, an interception area is determined according to the trajectory, and a two-dimensional code identification operation is performed on the interception area in the video stream;
  • the two-dimensional code information obtained by the identification is processed.
  • a device for identifying a two-dimensional code wherein the device includes:
  • a processor, and a memory arranged to store computer-executable instructions that, when executed, cause the processor to:
  • the trajectory drawn by the second user is obtained, an interception area is determined according to the trajectory, and a two-dimensional code identification operation is performed on the interception area in the video stream;
  • the identified two-dimensional code information is sent to the first user equipment corresponding to the first user, so that the first user equipment can process the two-dimensional code information.
  • a computer-readable medium storing instructions that, when executed, cause a system to operate as follows:
  • the trajectory drawn by the first user is obtained, an interception area is determined according to the trajectory, and a two-dimensional code identification operation is performed on the interception area in the video stream;
  • the two-dimensional code information obtained by the identification is processed.
  • a computer-readable medium storing instructions that, when executed, cause a system to:
  • the trajectory drawn by the second user is obtained, an interception area is determined according to the trajectory, and a two-dimensional code identification operation is performed on the interception area in the video stream;
  • the identified two-dimensional code information is sent to the first user equipment corresponding to the first user, so that the first user equipment can process the two-dimensional code information.
  • a computer program product comprising a computer program, when the computer program is executed by a processor, the following method is performed:
  • the trajectory drawn by the first user is obtained, an interception area is determined according to the trajectory, and a two-dimensional code identification operation is performed on the interception area in the video stream;
  • the two-dimensional code information obtained by the identification is processed.
  • a computer program product comprising a computer program, when the computer program is executed by a processor, the following method is performed:
  • the trajectory drawn by the second user is obtained, an interception area is determined according to the trajectory, and a two-dimensional code identification operation is performed on the interception area in the video stream;
  • the identified two-dimensional code information is sent to the first user equipment corresponding to the first user, so that the first user equipment can process the two-dimensional code information.
  • During a video call between the first user and the second user, in response to the first user's trajectory drawing operation on the video stream of the second user, the present application obtains the trajectory drawn by the first user, determines an interception area according to the trajectory, and performs a two-dimensional code recognition operation on the interception area in the video stream. The second user therefore only needs to point his camera at the two-dimensional code that needs to be shown to the first user, and the first user only needs to perform a trajectory drawing operation on the screen, after which the first user equipment can quickly and conveniently identify the two-dimensional code. This makes identifying a two-dimensional code during a video call extremely simple and accurate and provides great convenience for the users participating in the video call. In addition, the two-dimensional code recognition operation is performed only on the video frame image area corresponding to the interception area in the video stream, rather than on the entire display area of the video stream, which speeds up two-dimensional code recognition and improves its recognition accuracy and efficiency.
  • FIG. 1 shows a flowchart of a method for identifying a two-dimensional code applied to a first user equipment according to an embodiment of the present application
  • FIG. 2 shows a flowchart of a method for identifying a two-dimensional code applied to a second user equipment according to an embodiment of the present application
  • FIG. 3 shows a structural diagram of a first user equipment for identifying a two-dimensional code according to an embodiment of the present application
  • FIG. 4 shows a structural diagram of a second user equipment for identifying a two-dimensional code according to an embodiment of the present application
  • FIG. 5 illustrates an exemplary system that may be used to implement various embodiments described in this application.
  • the terminal, the device serving the network, and the trusted party all include one or more processors (for example, a central processing unit (CPU)), an input/output interface, a network interface, and Memory.
  • Memory may include non-persistent memory, random access memory (RAM) and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory.
  • Memory is an example of a computer-readable medium.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology.
  • Information may be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PCM), programmable random access memory (PRAM), static random-access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
  • the equipment referred to in this application includes, but is not limited to, user equipment, network equipment, or equipment formed by integrating user equipment and network equipment through a network.
  • The user equipment includes, but is not limited to, any mobile electronic product that can perform human-computer interaction with the user (for example, human-computer interaction through a touchpad), such as a smart phone or a tablet computer, and the mobile electronic product can use any operating system, such as the Android operating system, the iOS operating system, etc.
  • The network device includes an electronic device that can automatically perform numerical calculation and information processing according to preset or stored instructions, and its hardware includes, but is not limited to, microprocessors, application-specific integrated circuits (ASICs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), embedded devices, etc.
  • The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud formed by multiple servers; here, the cloud is formed by a large number of computers or network servers based on cloud computing, where cloud computing is a kind of distributed computing, namely a virtual supercomputer composed of a group of loosely coupled computers.
  • the network includes but is not limited to the Internet, a wide area network, a metropolitan area network, a local area network, a VPN network, a wireless ad hoc network (Ad Hoc network), and the like.
  • The device may also be a program running on the user equipment, the network equipment, or a device formed by integrating the user equipment and the network equipment, the touch terminal, or the network equipment and the touch terminal through a network.
  • FIG. 1 shows a flowchart of a method for identifying a two-dimensional code applied to a first user equipment according to an embodiment of the present application, and the method includes step S11 and step S12.
  • In step S11, during a video call between the first user and the second user, in response to the first user's trajectory drawing operation on the video stream of the second user, the first user equipment obtains the trajectory drawn by the first user, determines an interception area according to the trajectory, and performs a two-dimensional code identification operation on the interception area in the video stream; in step S12, if the identification is successful, the first user equipment processes the identified two-dimensional code information.
  • In step S11, during a video call between the first user and the second user, in response to the first user's trajectory drawing operation on the video stream of the second user, the first user equipment obtains the trajectory drawn by the first user, determines an interception area according to the trajectory, and performs a two-dimensional code identification operation on the interception area in the video stream.
  • For example, the second user aims the camera of the second user equipment at the two-dimensional code that needs to be displayed to the first user, and the two-dimensional code is sent to the first user equipment through the video stream, so that the two-dimensional code is displayed on the video picture of the second user presented on the first user equipment, without the second user needing to exit the video call and without the second user needing to provide the first user with the two-dimensional code by taking pictures or screenshots.
  • In some embodiments, the second user needs to switch the front camera currently used by the second user equipment to the rear camera and aim the rear camera at the two-dimensional code that needs to be displayed to the first user.
  • When the first user sees that the two-dimensional code is displayed on the video picture of the second user, the first user can use a finger to perform a trajectory drawing operation on the video picture (for example, the first user's finger presses at a certain position on the screen of the first user equipment and is then moved while kept pressed); at this time, the first user equipment obtains the trajectory drawn by the first user and, in response to the trajectory drawing end event corresponding to the trajectory drawing operation (for example, the first user's finger is lifted from the screen of the first user equipment), determines a corresponding interception area according to the trajectory currently drawn by the first user.
  • the trajectory drawn by the first user may be displayed on the screen of the first user equipment, or the trajectory drawn by the first user may not be displayed on the screen of the first user equipment.
  • If the trajectory drawn by the first user is closed, the area enclosed by the closed trajectory is determined as the interception area.
  • If the trajectory drawn by the first user is not closed, a virtual straight line may be used to connect the drawing start point (e.g., where the finger was pressed) and the drawing end point (e.g., where the finger was lifted) to obtain the virtual closed trajectory corresponding to the drawn trajectory, and the area enclosed by the virtual closed trajectory is determined as the interception area.
  • Alternatively, a virtual tangent extension line may be drawn at the drawing start point and at the drawing end point, respectively; the virtual closed trajectory corresponding to the drawn trajectory is obtained according to the two drawn virtual tangent extension lines and the boundary of the video stream on the first user equipment, and the area enclosed by the virtual closed trajectory is determined as the interception area.
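  • As an illustration only, the following Python sketch shows one possible way to turn a drawn trajectory into a rectangular interception area: an open trajectory is virtually closed with a straight segment back to its start point, and the bounding rectangle of the closed trajectory is used as the interception area. It assumes OpenCV and NumPy are available and that the trajectory points are already in video-frame coordinates; the function and parameter names are illustrative, not taken from the patent.

```python
import numpy as np
import cv2  # OpenCV is assumed to be available on the device


def interception_area(points, frame_shape, close_distance=20):
    """Turn a finger-drawn trajectory (a list of (x, y) points) into an
    (x, y, w, h) interception rectangle inside the video frame."""
    pts = np.asarray(points, dtype=np.int32)
    start, end = pts[0], pts[-1]
    if np.linalg.norm(start - end) > close_distance:
        # The trajectory is treated as open: append the start point to form
        # the "virtual closed trajectory" (straight-line closure).
        pts = np.vstack([pts, start[None, :]])
    x, y, w, h = cv2.boundingRect(pts)
    # Clamp the rectangle to the frame so the later crop is always valid.
    frame_h, frame_w = frame_shape[:2]
    x, y = max(0, x), max(0, y)
    w, h = min(w, frame_w - x), min(h, frame_h - y)
    return x, y, w, h
```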
  • A two-dimensional code recognition operation is then performed on the video frame image area corresponding to the interception area in the video picture of the second user, and the two-dimensional code information contained in the two-dimensional code displayed in that image area is recognized. Because the recognition operation is performed only on the image area corresponding to the interception area, instead of on all display areas of the second user's video picture, the recognition of the two-dimensional code is faster, and its recognition accuracy and efficiency are improved.
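  • The sketch below, under the same assumptions as above, shows the corresponding recognition step: only the cropped interception area is handed to a decoder, here OpenCV's QRCodeDetector as a stand-in for whatever decoder the device actually uses.

```python
import cv2


def decode_qr_in_area(frame, area):
    """Run two-dimensional code recognition only on the interception area.

    `frame` is a BGR video frame (NumPy array) and `area` is the (x, y, w, h)
    rectangle produced by interception_area() above. Returns the decoded
    string, or None if no code is recognized in the area."""
    x, y, w, h = area
    roi = frame[y:y + h, x:x + w]  # crop: only this region is scanned
    if roi.size == 0:
        return None
    data, corners, _ = cv2.QRCodeDetector().detectAndDecode(roi)
    return data or None
```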
  • In step S12, if the identification is successful, the first user equipment processes the identified two-dimensional code information.
  • For example, the identified two-dimensional code information can be processed directly; or it can be processed according to the user authorization information or user identification information of the first user (for example, a token, a UUID (Universally Unique Identifier), etc.); or it can be processed according to the personal real-identity information of the first user bound to the video call application.
  • In this way, during a video call between the first user and the second user, the present application can obtain the trajectory drawn by the first user in response to the first user's trajectory drawing operation on the video stream of the second user, determine the interception area according to the trajectory, and perform a two-dimensional code recognition operation on the interception area in the video stream. The second user only needs to point his camera at the two-dimensional code that needs to be displayed to the first user, without exiting the video and without providing the first user with the two-dimensional code by taking pictures or screenshots, and the first user only needs to perform a trajectory drawing operation on the screen for the two-dimensional code displayed by the second user, after which the first user equipment can quickly and easily recognize the two-dimensional code. This makes identifying a two-dimensional code during a video call extremely simple and accurate and provides great convenience for the users participating in the video call. Moreover, the two-dimensional code recognition operation is performed only on the video frame image area corresponding to the interception area in the video stream, rather than on the entire display area of the video stream, which speeds up recognition and improves the recognition accuracy and efficiency of the two-dimensional code.
  • In some embodiments, step S11 includes: during the video call between the first user and the second user, in response to a trajectory drawing start triggering operation performed by the first user on the video stream of the second user, the first user equipment pauses playback of the video stream of the second user; in response to a trajectory drawing operation of the first user for the video stream of the second user, the first user equipment obtains the trajectory drawn by the first user and determines the interception area according to the trajectory; the first user equipment then performs a two-dimensional code identification operation on the interception area in the current video frame corresponding to the video stream, and resumes playing the video stream of the second user.
  • For example, the triggering operation to start trajectory drawing may be that the first user's finger presses a certain position on the screen of the first user equipment; or that, after pressing, the finger is moved while kept pressed and the moved distance is greater than or equal to a predetermined distance threshold (e.g., 10 pixels, 1 centimeter, etc.); or the first user may click a specific button on the current page (e.g., a "start drawing track" button).
  • A two-dimensional code recognition operation is performed on the image area corresponding to the interception area in the current video frame image, and the two-dimensional code information contained in the two-dimensional code displayed in that image area is recognized; because the video stream of the second user was previously paused, the current video frame image corresponds to the current video picture of the second user whose playback is paused. In some embodiments, after the identification is successful, playback of the video stream of the second user is resumed.
  • In some embodiments, after the identification fails, the video stream of the second user may also be resumed directly. Alternatively, after the identification fails, the video stream of the second user is not resumed directly; the first user can re-execute the trajectory drawing operation on the current video picture, and the first user equipment, after re-determining the interception area, retries the two-dimensional code recognition operation on the image area corresponding to the interception area in the current video frame. If the number of recognition failures reaches a predetermined threshold, the video stream of the second user is resumed.
  • In some embodiments, after the identification fails, the video stream of the second user is not resumed directly; instead, a "resume play" button is placed on the current page, and after the user clicks this button, the video stream of the second user is resumed.
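  • A minimal sketch of this pause-and-retry flow is given below. The `player`, `get_current_frame` and `wait_for_trajectory` interfaces are assumptions standing in for the video-call application's real playback and touch-input APIs, and the retry budget is illustrative; the sketch reuses the helpers defined earlier.

```python
MAX_FAILURES = 3  # hypothetical retry budget before playback is resumed


def recognize_on_paused_stream(player, get_current_frame, wait_for_trajectory):
    """Pause playback, let the first user draw (and redraw) a trajectory,
    try to decode the interception area in the frozen frame, then resume."""
    player.pause()  # paused on the trajectory-drawing start trigger
    failures = 0
    data = None
    while data is None and failures < MAX_FAILURES:
        points = wait_for_trajectory()            # blocks until drawing ends
        frame = get_current_frame()               # the frozen video frame
        area = interception_area(points, frame.shape)
        data = decode_qr_in_area(frame, area)     # helpers sketched earlier
        if data is None:
            failures += 1
    player.resume()  # resume after success or after too many failures
    return data
```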
  • In some embodiments, step S11 includes: during the video call between the first user and the second user, in response to a trajectory drawing start triggering operation performed by the first user on the video stream of the second user, the first user equipment acquires a first current video frame image corresponding to the video stream and presents the first current video frame image on the video stream; in response to a trajectory drawing operation of the first user for the first current video frame image, the first user equipment obtains the trajectory drawn by the first user and determines the interception area according to the trajectory; the first user equipment then performs a two-dimensional code recognition operation on the interception area in the first current video frame image and cancels presentation of the first current video frame image.
  • For example, a current video frame image corresponding to the video stream is obtained and superimposed on the video stream; after the interception area is determined according to the trajectory currently drawn by the first user, a two-dimensional code recognition operation is performed on the image area of the current video frame image corresponding to the interception area, and the two-dimensional code information contained in the two-dimensional code displayed in that image area is recognized.
  • After the identification is successful, the current video frame image is hidden.
  • In some embodiments, after the recognition fails, the current video frame image may also be hidden directly. Alternatively, after the recognition fails, the current video frame image is not hidden directly; the first user can re-execute the trajectory drawing operation on the current video picture, and the first user equipment, after re-determining the interception area, retries the two-dimensional code recognition operation on the image area corresponding to the interception area in the current video frame image; if the number of recognition failures reaches a predetermined threshold, the current video frame image is hidden. In some embodiments, after the recognition fails, the current video frame image is not hidden directly; instead, a predetermined button is placed on the current page, and after the user clicks the button, the current video frame image is hidden.
  • In some embodiments, step S11 includes: during a video call between the first user and the second user, in response to a trajectory drawing operation by the first user for the video stream of the second user, the first user equipment obtains the trajectory drawn by the first user, determines the interception area according to the trajectory, and performs a two-dimensional code identification operation on the interception area in the current video frame corresponding to the video stream.
  • For example, a corresponding interception area is determined according to the trajectory currently drawn by the first user, a two-dimensional code recognition operation is performed on the image area corresponding to the interception area in the current video frame image corresponding to the video stream of the second user, and the two-dimensional code information contained in the two-dimensional code displayed in that image area is recognized.
  • In some embodiments, obtaining the trajectory drawn by the first user and determining the interception area according to the trajectory includes: obtaining the trajectory drawn by the first user, and, in response to a trajectory drawing end event corresponding to the trajectory drawing operation, detecting whether the trajectory drawn by the first user is closed, and, if so, determining the area enclosed by the drawn trajectory as the interception area.
  • For example, the trajectory drawing end event corresponding to the trajectory drawing operation may be that the first user's finger is lifted from the screen of the first user equipment, or that the first user's finger moves out of the video picture of the second user, or that the time for which the first user's finger presses and stays at a certain position on the screen of the first user equipment exceeds a predetermined duration threshold.
  • Whether the trajectory currently drawn by the first user is closed may be determined by detecting whether the trajectory intersects itself; if the trajectory is closed, the area enclosed by the closed trajectory is determined as the interception area.
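  • One way to perform such a self-intersection check is sketched below in plain Python, using a standard orientation test between every pair of non-adjacent stroke segments; the names and the brute-force O(n²) approach are illustrative, not prescribed by the patent.

```python
def _segments_intersect(p1, p2, p3, p4):
    """Proper-intersection test between segments p1-p2 and p3-p4."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0


def trajectory_is_closed(points):
    """Treat the drawn trajectory as closed if any two non-adjacent
    segments of the stroke cross each other."""
    n = len(points) - 1  # number of segments in the stroke
    for i in range(n):
        for j in range(i + 2, n):
            if _segments_intersect(points[i], points[i + 1],
                                   points[j], points[j + 1]):
                return True
    return False
```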
  • In some embodiments, the method further includes: if the trajectory drawn by the first user is not closed, the first user equipment determines, according to the drawing start point and the drawing end point corresponding to the trajectory, a virtual closed trajectory corresponding to the trajectory, and determines the area enclosed by the virtual closed trajectory as the interception area.
  • For example, the drawing start point and the drawing end point can be connected by a virtual straight line to obtain the corresponding virtual closed trajectory, and the area enclosed by the virtual closed trajectory is determined as the interception area.
  • Alternatively, a virtual tangent extension line may be drawn at the drawing start point and at the drawing end point, respectively; the corresponding virtual closed trajectory is obtained according to the two drawn virtual tangent extension lines and the boundary of the video stream, and the area enclosed by the virtual closed trajectory is determined as the interception area.
  • In some embodiments, determining the virtual closed trajectory corresponding to the trajectory according to the drawing start point and the drawing end point corresponding to the trajectory includes: connecting the drawing start point and the drawing end point with a virtual straight line to obtain the virtual closed trajectory corresponding to the trajectory.
  • the virtual straight line may be displayed on the screen of the first user equipment, or the virtual straight line may not be displayed on the screen of the first user equipment.
  • In some embodiments, determining the virtual closed trajectory corresponding to the trajectory according to the drawing start point and the drawing end point corresponding to the trajectory includes: drawing a virtual tangent extension line at the drawing start point and at the drawing end point, respectively, and obtaining the virtual closed trajectory corresponding to the trajectory according to the two drawn virtual tangent extension lines and the boundary of the video stream.
  • the virtual tangent extension line may be displayed on the screen of the first user equipment, or the virtual tangent extension line may not be displayed on the screen of the first user equipment.
  • the video stream boundary may be the boundary of the screen of the first user equipment, or may also be the boundary of the video picture of the second user.
  • In some embodiments, obtaining the trajectory drawn by the first user and determining the interception area according to the trajectory includes: obtaining the trajectory drawn by the first user, and, in response to a trajectory closing event corresponding to the trajectory drawing operation, determining the area enclosed by the trajectory drawn by the first user as the interception area.
  • For example, the trajectory closing event corresponding to the trajectory drawing operation performed by the first user on the video picture of the second user may be directly used as the trajectory drawing end event, and the area enclosed by the trajectory currently drawn by the first user is determined as the interception area.
  • In some embodiments, the method further includes: if the first user equipment does not recognize two-dimensional code information in the interception area in the current video frame, performing the two-dimensional code identification operation on the interception area in a target video frame that precedes the current video frame in the video stream. In some embodiments, if no two-dimensional code is recognized in the image area corresponding to the interception area in the current video frame image corresponding to the video stream of the second user, the two-dimensional code recognition operation is performed on the image area corresponding to the interception area in a target video frame image that precedes the current video frame image in the video stream of the second user.
  • the target video frame image may be a video frame image corresponding to the video stream of the second user at the start time point of the trajectory drawing operation of the first user. In some embodiments, the target video frame image may also be the previous video frame image of the current video frame image in the video stream of the second user.
  • In some embodiments, performing the two-dimensional code identification operation on the interception area in the target video frame includes: if no two-dimensional code information is identified in the interception area in the current video frame, acquiring the video frame preceding the current video frame and performing the two-dimensional code identification operation on the interception area in that preceding video frame, and so on, until the two-dimensional code information is identified from the interception area in a target video frame.
  • For example, the target video frame image may be the video frame image immediately preceding the current video frame image in the video stream of the second user, and the two-dimensional code recognition operation is performed on the image area corresponding to the interception area in that target video frame image. If the two-dimensional code is still not recognized in that preceding video frame image, the target video frame image is set to the video frame image preceding it in the video stream of the second user, and the two-dimensional code identification operation is performed on the image area corresponding to the interception area in the new target video frame image, and so on, until the two-dimensional code information is identified from the image area corresponding to the interception area in a target video frame.
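  • This frame-by-frame fallback could look roughly like the sketch below, which assumes the application keeps a short rolling buffer of recently displayed frames (the buffer and its size are assumptions, not part of the patent) and reuses the decoding helper sketched earlier.

```python
from collections import deque

# Hypothetical rolling buffer of recently displayed video frames, newest last;
# how frames are actually buffered depends on the video-call pipeline.
recent_frames = deque(maxlen=90)  # roughly 3 seconds at 30 fps


def decode_with_frame_fallback(area):
    """Try the interception area in the newest frame first; on failure, step
    back through earlier buffered frames until a code is recognized."""
    for frame in reversed(recent_frames):
        data = decode_qr_in_area(frame, area)  # helper sketched earlier
        if data:
            return data
    return None
```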
  • In some embodiments, performing the two-dimensional code identification operation on the interception area in the target video frame includes: obtaining the start time point of the trajectory drawing operation; obtaining, from the video stream, the target video frame corresponding to the start time point; and performing the two-dimensional code identification operation on the interception area in that target video frame.
  • For example, the start time point of the trajectory drawing operation may be recorded in memory, or it may be recorded locally on the first user equipment. The recorded start time point is then read, the video frame image of the second user's video stream corresponding to that start time point is determined as the target video frame image, and the two-dimensional code recognition operation is performed on the image area corresponding to the interception area in the target video frame image.
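  • A sketch of how such a timestamped lookup might work is shown below; the `(timestamp, frame)` buffer and the use of a monotonic clock are assumptions made for illustration.

```python
import time

# Hypothetical buffer of (timestamp, frame) pairs filled as frames arrive,
# e.g. timed_frames.append((time.monotonic(), frame)).
timed_frames = []


def frame_at(start_time):
    """Return the buffered frame whose timestamp is closest to the moment
    the trajectory drawing operation started (the 'target video frame')."""
    if not timed_frames:
        return None
    return min(timed_frames, key=lambda tf: abs(tf[0] - start_time))[1]


# Usage sketch: record start_time = time.monotonic() when drawing starts,
# then decode the matching frame once the interception area is known:
#   target = frame_at(start_time)
#   data = decode_qr_in_area(target, area) if target is not None else None
```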
  • In some embodiments, the method further includes: if the first user equipment does not recognize two-dimensional code information in the interception area in the current video frame, performing the two-dimensional code identification operation on all display areas of the current video frame. In some embodiments, if no two-dimensional code is recognized in the image area corresponding to the interception area in the current video frame image corresponding to the second user's video stream, the two-dimensional code identification operation is performed on all display areas of the current video frame image.
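  • A whole-frame fallback of this kind is a short extension of the earlier decoding sketch (illustrative only):

```python
def decode_with_fullframe_fallback(frame, area):
    """Scan only the interception area first; if nothing is recognized there,
    fall back to scanning the entire display area of the frame."""
    data = decode_qr_in_area(frame, area)  # helper sketched earlier
    if data:
        return data
    frame_h, frame_w = frame.shape[:2]
    return decode_qr_in_area(frame, (0, 0, frame_w, frame_h))
```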
  • In some embodiments, the method further includes: if the identification is successful, the first user equipment generates identification success prompt information and sends the identification success prompt information to the second user equipment corresponding to the second user, so that the identification success prompt information is presented on the second user equipment.
  • For example, after the first user equipment successfully recognizes the two-dimensional code, a recognition success prompt message is generated, sent to the second user equipment, and presented there, so as to prompt the second user that there is no need to keep pointing the camera of the second user equipment at the two-dimensional code that needs to be displayed to the first user.
  • the identification success prompt information may be sent directly to the second user equipment, or may also be sent to the second user equipment via a server.
  • The recognition success prompt information may be presented on the second user equipment in a visual form (for example, text, an icon, or text plus an icon), or it may be presented on the second user equipment in the form of voice playback.
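  • As a purely illustrative sketch, the prompt could be carried as a small message sent directly or via a server to the second user equipment; the field names below are hypothetical and not defined by the patent.

```python
import json


def build_success_prompt(first_user_id):
    """Build a hypothetical recognition-success prompt payload to send to
    the second user equipment once decoding succeeds."""
    return json.dumps({
        "type": "qr_recognition_result",
        "status": "success",
        "from": first_user_id,
        "text": "QR code recognized - you can move the camera away now",
    })
```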
  • In some embodiments, the method further includes: the first user equipment sends the trajectory drawn by the first user to the second user equipment corresponding to the second user in real time, so that the trajectory drawn by the first user can be presented on the second user equipment in real time. For example, while the first user is drawing, the first user equipment continuously sends the trajectory drawn so far to the second user equipment, and the second user equipment presents it in real time.
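  • One way to realize this real-time sharing is sketched below, assuming the video-call application already maintains some signalling or data channel object exposing a `send()` method (an assumption, not something the patent specifies); points are sent in normalized coordinates so both screens agree on position.

```python
import json


def send_trajectory_point(channel, point):
    """Push one newly drawn trajectory point to the second user equipment.

    `point` is an (x, y) tuple normalized to [0, 1] video coordinates."""
    channel.send(json.dumps({
        "type": "trajectory_point",
        "x": point[0],
        "y": point[1],
    }))
```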
  • Fig. 2 shows a flowchart of a method for identifying a two-dimensional code applied to a second user equipment according to an embodiment of the present application, the method includes step S21 and step S22.
  • In step S21, during the video call between the first user and the second user, in response to the second user's trajectory drawing operation on the video stream of the second user, the second user equipment obtains the trajectory drawn by the second user, determines the interception area according to the trajectory, and performs a two-dimensional code identification operation on the interception area in the video stream; in step S22, if the identification is successful, the second user equipment sends the identified two-dimensional code information to the first user equipment corresponding to the first user, so that the first user equipment processes the two-dimensional code information.
  • In step S21, during the video call between the first user and the second user, in response to the second user's trajectory drawing operation on the video stream of the second user, the second user equipment obtains the trajectory drawn by the second user, determines the interception area according to the trajectory, and performs a two-dimensional code identification operation on the interception area in the video stream.
  • For example, the second user points the camera of the second user equipment at the two-dimensional code, so that the two-dimensional code is displayed on the video picture of the second user presented on the second user equipment, and there is no need for the second user to exit the video call or to obtain the two-dimensional code by taking pictures or screenshots.
  • In some embodiments, the second user needs to switch the currently used front camera to the rear camera and aim the rear camera at the two-dimensional code.
  • In some embodiments, the second user needs to switch the video picture of the first user currently displayed on the second user equipment to the video picture of the second user. Relevant operations are the same as or similar to those in the foregoing embodiments, and are not repeated here.
  • In step S22, if the identification is successful, the second user equipment sends the identified two-dimensional code information to the first user equipment corresponding to the first user, so that the first user equipment can process the two-dimensional code information. Relevant operations are the same as or similar to those in the foregoing embodiments, and are not repeated here.
  • FIG. 3 shows a structural diagram of a first user equipment for recognizing a two-dimensional code according to an embodiment of the present application.
  • the equipment includes a one-one module 11 and a one-two module 12 .
  • The one-one module 11 is configured to, during the video call between the first user and the second user, in response to the first user's trajectory drawing operation on the video stream of the second user, obtain the trajectory drawn by the first user, determine the interception area according to the trajectory, and perform a two-dimensional code identification operation on the interception area in the video stream; the one-two module 12 is used to process the identified two-dimensional code information if the identification is successful.
  • The one-one module 11 is configured to, during the video call between the first user and the second user, in response to the first user's trajectory drawing operation on the video stream of the second user, obtain the trajectory drawn by the first user, determine an interception area according to the trajectory, and perform a two-dimensional code identification operation on the interception area in the video stream.
  • For example, the second user points the camera of the second user equipment at the two-dimensional code that needs to be displayed to the first user, and the two-dimensional code is sent to the first user equipment through the video stream, so that the two-dimensional code is displayed on the video picture of the second user presented on the first user equipment, without the second user exiting the video call and without the second user providing the first user with the two-dimensional code by taking pictures or screenshots.
  • When the first user sees that the two-dimensional code is displayed on the video picture of the second user, the first user can use a finger to perform a trajectory drawing operation on the video picture (for example, the first user's finger presses at a certain position on the screen of the first user equipment and is then moved while kept pressed); at this time, the first user equipment obtains the trajectory drawn by the first user and, in response to the trajectory drawing end event corresponding to the trajectory drawing operation (for example, the first user's finger is lifted from the screen of the first user equipment), determines a corresponding interception area according to the trajectory currently drawn by the first user.
  • the trajectory drawn by the first user may be displayed on the screen of the first user equipment, or the trajectory drawn by the first user may not be displayed on the screen of the first user equipment.
  • If the trajectory drawn by the first user is closed, the area enclosed by the closed trajectory is determined as the interception area.
  • If the trajectory drawn by the first user is not closed, a virtual straight line may be used to connect the drawing start point (e.g., where the finger was pressed) and the drawing end point (e.g., where the finger was lifted) to obtain the virtual closed trajectory corresponding to the drawn trajectory, and the area enclosed by the virtual closed trajectory is determined as the interception area.
  • Alternatively, a virtual tangent extension line may be drawn at the drawing start point and at the drawing end point, respectively; the virtual closed trajectory corresponding to the drawn trajectory is obtained according to the two drawn virtual tangent extension lines and the boundary of the video stream on the first user equipment, and the area enclosed by the virtual closed trajectory is determined as the interception area.
  • A two-dimensional code recognition operation is then performed on the video frame image area corresponding to the interception area in the video picture of the second user, and the two-dimensional code information contained in the two-dimensional code displayed in that image area is recognized. Because the recognition operation is performed only on the image area corresponding to the interception area, instead of on all display areas of the second user's video picture, the recognition of the two-dimensional code is faster, and its recognition accuracy and efficiency are improved.
  • The one-two module 12 is used to process the two-dimensional code information obtained by the identification if the identification is successful.
  • For example, the identified two-dimensional code information can be processed directly; or it can be processed according to the user authorization information or user identification information of the first user (for example, a token, a UUID (Universally Unique Identifier), etc.); or it can be processed according to the personal real-identity information of the first user bound to the video call application.
  • In some embodiments, the one-one module 11 is configured to: during the video call between the first user and the second user, in response to a trajectory drawing start triggering operation performed by the first user on the video stream of the second user, pause playback of the video stream of the second user; in response to the trajectory drawing operation of the first user for the video stream of the second user, obtain the trajectory drawn by the first user and determine the interception area according to the trajectory; and perform a two-dimensional code identification operation on the interception area in the current video frame corresponding to the video stream, and resume playing the video stream of the second user.
  • the related operations are the same as or similar to the embodiment shown in FIG. 1 , so they are not repeated here, but are incorporated herein by reference.
  • In some embodiments, the one-one module 11 is configured to: during the video call between the first user and the second user, in response to a trajectory drawing start triggering operation performed by the first user on the video stream of the second user, acquire the first current video frame image corresponding to the video stream and present the first current video frame image on the video stream; in response to the trajectory drawing operation of the first user for the first current video frame image, obtain the trajectory drawn by the first user and determine the interception area according to the trajectory; and perform a two-dimensional code recognition operation on the interception area in the first current video frame image, and cancel presentation of the first current video frame image.
  • the related operations are the same as or similar to the embodiment shown in FIG. 1 , so they are not repeated here, but are incorporated herein by reference.
  • In some embodiments, the one-one module 11 is configured to: during a video call between the first user and the second user, in response to a trajectory drawing operation of the first user for the video stream of the second user, obtain the trajectory drawn by the first user, determine the interception area according to the trajectory, and perform a two-dimensional code identification operation on the interception area in the current video frame corresponding to the video stream.
  • the related operations are the same as or similar to the embodiment shown in FIG. 1 , so they are not repeated here, but are incorporated herein by reference.
  • In some embodiments, obtaining the trajectory drawn by the first user and determining the interception area according to the trajectory includes: obtaining the trajectory drawn by the first user, and, in response to a trajectory drawing end event corresponding to the trajectory drawing operation, detecting whether the trajectory drawn by the first user is closed, and, if so, determining the area enclosed by the drawn trajectory as the interception area.
  • the related operations are the same as or similar to the embodiment shown in FIG. 1 , so they are not repeated here, but are incorporated herein by reference.
  • In some embodiments, the device is further configured to: if the trajectory drawn by the first user is not closed, determine a virtual closed trajectory corresponding to the trajectory according to the drawing start point and the drawing end point corresponding to the trajectory, and determine the area enclosed by the virtual closed trajectory as the interception area.
  • the related operations are the same as or similar to the embodiment shown in FIG. 1 , so they are not repeated here, but are incorporated herein by reference.
  • In some embodiments, determining the virtual closed trajectory corresponding to the trajectory according to the drawing start point and the drawing end point corresponding to the trajectory includes: connecting the drawing start point and the drawing end point with a virtual straight line to obtain the virtual closed trajectory corresponding to the trajectory.
  • the related operations are the same as or similar to the embodiment shown in FIG. 1 , so they are not repeated here, but are incorporated herein by reference.
  • In some embodiments, determining the virtual closed trajectory corresponding to the trajectory according to the drawing start point and the drawing end point corresponding to the trajectory includes: drawing a virtual tangent extension line at the drawing start point and at the drawing end point, respectively, and obtaining the virtual closed trajectory corresponding to the trajectory according to the two drawn virtual tangent extension lines and the boundary of the video stream.
  • In some embodiments, obtaining the trajectory drawn by the first user and determining the interception area according to the trajectory includes: obtaining the trajectory drawn by the first user, and, in response to a trajectory closing event corresponding to the trajectory drawing operation, determining the area enclosed by the trajectory drawn by the first user as the interception area.
  • the related operations are the same as or similar to the embodiment shown in FIG. 1 , so they are not repeated here, but are incorporated herein by reference.
  • In some embodiments, the device is further configured to: if the two-dimensional code information is not identified in the interception area in the current video frame, perform the two-dimensional code identification operation on the interception area in a target video frame that precedes the current video frame in the video stream.
  • the related operations are the same as or similar to the embodiment shown in FIG. 1 , and thus are not described again, but are incorporated herein by reference.
  • In some embodiments, performing the two-dimensional code identification operation on the interception area in the target video frame includes: if no two-dimensional code information is identified in the interception area in the current video frame, acquiring the video frame preceding the current video frame and performing the two-dimensional code identification operation on the interception area in that preceding video frame, and so on, until the two-dimensional code information is identified from the interception area in a target video frame.
  • the related operations are the same as or similar to the embodiment shown in FIG. 1 , so they are not repeated here, but are incorporated herein by reference.
  • In some embodiments, performing the two-dimensional code identification operation on the interception area in the target video frame includes: obtaining the start time point of the trajectory drawing operation; obtaining, from the video stream, the target video frame corresponding to the start time point; and performing the two-dimensional code identification operation on the interception area in that target video frame.
  • the related operations are the same as or similar to the embodiment shown in FIG. 1 , so they are not repeated here, but are incorporated herein by reference.
  • In some embodiments, the device is further configured to: if no two-dimensional code information is identified in the interception area in the current video frame, perform the two-dimensional code identification operation on all display areas of the current video frame.
  • the related operations are the same as or similar to the embodiment shown in FIG. 1 , so they are not repeated here, but are incorporated herein by reference.
  • In some embodiments, the device is further configured to: if the identification is successful, generate identification success prompt information and send the identification success prompt information to the second user equipment corresponding to the second user, so that the identification success prompt information is presented on the second user equipment.
  • In some embodiments, the device is further configured to: send the trajectory drawn by the first user to the second user equipment corresponding to the second user in real time, so as to present, on the second user equipment in real time, the trajectory drawn by the first user.
  • the related operations are the same as or similar to the embodiment shown in FIG. 1 , so they are not repeated here, but are incorporated herein by reference.
  • FIG. 4 shows a structural diagram of a second user equipment for recognizing a two-dimensional code according to an embodiment of the present application.
  • the equipment includes a two-one module 21 and a two-two module 22 .
  • The two-one module 21 is configured to, during the video call between the first user and the second user, in response to the second user's trajectory drawing operation on the video stream of the second user, obtain the trajectory drawn by the second user, determine the interception area according to the trajectory, and perform a two-dimensional code identification operation on the interception area in the video stream; the two-two module 22 is used to, if the identification is successful, send the identified two-dimensional code information to the first user equipment corresponding to the first user, so that the first user equipment can process the two-dimensional code information.
  • The two-one module 21 is configured to, during the video call between the first user and the second user, in response to the second user's trajectory drawing operation on the video stream of the second user, obtain the trajectory drawn by the second user, determine an interception area according to the trajectory, and perform a two-dimensional code identification operation on the interception area in the video stream.
  • For example, the second user points the camera of the second user equipment at the two-dimensional code, so that the two-dimensional code is displayed on the video picture of the second user presented on the second user equipment, and there is no need for the second user to exit the video call or to obtain the two-dimensional code by taking pictures or screenshots.
  • In some embodiments, the second user needs to switch the currently used front camera to the rear camera and aim the rear camera at the two-dimensional code.
  • In some embodiments, the second user needs to switch the video picture of the first user currently displayed on the second user equipment to the video picture of the second user. Relevant operations are the same as or similar to those in the foregoing embodiments, and are not repeated here.
  • The two-two module 22 is configured to send the identified two-dimensional code information to the first user equipment corresponding to the first user if the identification is successful, so that the first user equipment can process the two-dimensional code information. Relevant operations are the same as or similar to those in the foregoing embodiments, and are not repeated here.
  • FIG. 5 illustrates an exemplary system that may be used to implement various embodiments described in this application.
  • system 300 can function as any of the devices in each of the described embodiments.
  • For one embodiment, system 300 may include one or more computer-readable media (e.g., system memory or NVM/storage device 320) having instructions, and one or more processors (e.g., processor(s) 305) coupled to the one or more computer-readable media and configured to execute the instructions to implement modules that perform the actions described in this application.
  • For one embodiment, system control module 310 may include any suitable interface controller to provide any appropriate interface to at least one of the processor(s) 305 and/or to any suitable device or component in communication with system control module 310.
  • the system control module 310 may include a memory controller module 330 to provide an interface to the system memory 315 .
  • the memory controller module 330 may be a hardware module, a software module, and/or a firmware module.
  • System memory 315 may be used, for example, to load and store data and/or instructions for system 300 .
  • system memory 315 may include any suitable volatile memory, such as suitable DRAM.
  • system memory 315 may include double data rate type quad synchronous dynamic random access memory (DDR4 SDRAM).
  • system control module 310 may include one or more input/output (I/O) controllers to provide interfaces to NVM/storage device 320 and communication interface(s) 325 .
  • NVM/storage device 320 may be used to store data and/or instructions.
  • NVM/storage device 320 may include any suitable non-volatile memory (eg, flash memory) and/or may include any suitable non-volatile storage device(s) (eg, one or more hard drives ( HDD), one or more compact disc (CD) drives and/or one or more digital versatile disc (DVD) drives).
  • NVM/storage device 320 may include storage resources that are physically part of the device on which system 300 is installed, or it may be accessed by the device without necessarily being part of the device.
  • the NVM/storage device 320 is accessible via the communication interface(s) 325 over a network.
  • Communication interface(s) 325 may provide an interface for system 300 to communicate over one or more networks and/or with any other suitable device.
  • System 300 may wirelessly communicate with one or more components of a wireless network in accordance with any of one or more wireless network standards and/or protocols.
  • At least one of the processor(s) 305 may be packaged with the logic of one or more controllers of the system control module 310 (eg, memory controller module 330). For one embodiment, at least one of the processor(s) 305 may be packaged with logic of one or more controllers of the system control module 310 to form a system-in-package (SiP). For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with the logic of one or more controllers of the system control module 310 . For one embodiment, at least one of the processor(s) 305 may be integrated on the same die with logic of one or more controllers of the system control module 310 to form a system on a chip (SoC).
  • In various embodiments, system 300 may be, but is not limited to, a server, workstation, desktop computing device, or mobile computing device (e.g., laptop computing device, handheld computing device, tablet computer, netbook, etc.). In various embodiments, system 300 may have more or fewer components and/or different architectures. For example, in some embodiments, system 300 includes one or more cameras, keyboards, liquid crystal display (LCD) screens (including touchscreen displays), non-volatile memory ports, multiple antennas, graphics chips, application specific integrated circuits (ASIC) and speakers.
  • The present application also provides a computer-readable storage medium storing computer code which, when executed, causes the method described in any preceding item to be performed.
  • The present application also provides a computer program product which, when executed by a computer device, causes the method according to any one of the preceding items to be performed.
  • The present application also provides a computer device, the computer device comprising:
  • one or more processors;
  • a memory for storing one or more computer programs;
  • wherein, when the one or more computer programs are executed by the one or more processors, the one or more processors are caused to implement the method of any preceding item.
  • It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, using an application specific integrated circuit (ASIC), a general-purpose computer, or any other similar hardware device.
  • the software program of the present application may be executed by a processor to implement the steps or functions described above.
  • the software programs of the present application (including associated data structures) may be stored on a computer-readable recording medium, such as RAM memory, magnetic or optical drives or floppy disks, and the like.
  • some steps or functions of the present application may be implemented in hardware, for example, as a circuit that cooperates with a processor to perform various steps or functions.
  • In addition, a part of the present application may be applied as a computer program product, for example computer program instructions which, when executed by a computer, can invoke or provide the methods and/or technical solutions according to the present application through the operation of the computer.
  • Those skilled in the art should understand that the forms in which computer program instructions exist in a computer-readable medium include, but are not limited to, source files, executable files, installation package files, etc.
  • Correspondingly, the ways in which computer program instructions are executed by a computer include, but are not limited to: the computer directly executes the instructions; or the computer compiles the instructions and then executes the corresponding compiled program; or the computer reads and executes the instructions; or the computer reads and installs the instructions and then executes the corresponding installed program.
  • the computer-readable medium can be any available computer-readable storage medium or communication medium that can be accessed by a computer.
  • Communication media includes media by which communication signals containing, for example, computer readable instructions, data structures, program modules or other data are transmitted from one system to another.
  • Communication media may include guided transmission media, such as cables and wires (e.g., fiber optic, coaxial, and the like), and wireless (unguided transmission) media capable of propagating energy waves, such as acoustic, electromagnetic, RF, microwave, and infrared.
  • Computer readable instructions, data structures, program modules or other data may be embodied, for example, as a modulated data signal in a wireless medium such as a carrier wave, or as a similar mechanism such as one embodied as part of spread spectrum technology.
  • The term "modulated data signal" refers to a signal that has one or more of its characteristics altered or set in such a manner as to encode information in the signal. The modulation may be an analog, digital or hybrid modulation technique.
  • By way of example and not limitation, computer-readable storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules or other data.
  • For example, computer-readable storage media include, but are not limited to, volatile memory such as random access memory (RAM, DRAM, SRAM); non-volatile memory such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), and magnetic and ferromagnetic/ferroelectric memory (MRAM, FeRAM); magnetic and optical storage devices (hard disks, tapes, CDs, DVDs); and other media, now known or later developed, capable of storing computer-readable information/data for use by a computer system.
  • Here, an embodiment according to the present application includes an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the apparatus is triggered to operate based on the aforementioned methods and/or technical solutions according to the various embodiments of the present application.

Abstract

本申请的目的是提供一种识别二维码的方法与设备,该方法包括:在第一用户与第二用户的视频通话过程中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作;若识别成功,对识别得到的二维码信息进行处理。本申请可以使得视频通话过程中针对二维码的识别极为简便且准确,能够为参与视频通话的用户提供极大的便利,并且仅对视频流中的该截取区域对应的视频帧图像区域执行二维码识别操作,而不是对视频流的全部显示区域执行二维码识别操作,可以加快二维码的识别速度,提高二维码的识别精度和识别效率。

Description

一种识别二维码的方法与设备
本申请是以CN申请号为202011618821.3,申请日为2020.12.30的申请为基础,并主张其优先权,该CN申请的公开内容在此作为整体引入本申请中
技术领域
本申请涉及通信领域,尤其涉及一种用于识别二维码的技术。
背景技术
随着时代的发展,二维码已被广泛的应用于各行各业的不同场景,几乎涉及到生活的方方面面,用户可以通过扫描二维码,得到相应的二维码内容,例如,通过二维码进行移动支付、信息识别等等,极大地提升了人们日常生活的便利性。
发明内容
本申请的一个目的是提供一种识别二维码的方法与设备。
根据本申请的一个方面,提供了一种应用于第一用户设备的识别二维码的方法,该方法包括:
在第一用户与第二用户的视频通话过程中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作;
若识别成功,对识别得到的二维码信息进行处理。
根据本申请的另一个方面,提供了一种应用于第二用户设备的识别二维码的方法,该方法包括:
在第一用户与第二用户的视频通话过程中,响应于所述第二用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第二用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作;
若识别成功,将识别得到的二维码信息发送至所述第一用户对应的第一用户设备,以使所述第一用户设备对所述二维码信息进行处理。
根据本申请的一个方面,提供了一种识别二维码的第一用户设备,该设备包括:
一一模块,用于在第一用户与第二用户的视频通话过程中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作;
一二模块,用于若识别成功,对识别得到的二维码信息进行处理。
根据本申请的另一个方面,提供了一种识别二维码的第二用户设备,该设备包括:
二一模块21,用于在第一用户与第二用户的视频通话过程中,响应于所述第二用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第二用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作;
二二模块22,用于若识别成功,将识别得到的二维码信息发送至所述第一用户对应的第一用户设备,以使所述第一用户设备对所述二维码信息进行处理。
根据本申请的一个方面,提供了一种识别二维码的设备,其中,该设备包括:
处理器;以及
被安排成存储计算机可执行指令的存储器,所述可执行指令在被执行时使所述处理器执行如下操作:
在第一用户与第二用户的视频通话过程中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作;
若识别成功,对识别得到的二维码信息进行处理。
根据本申请的另一个方面,提供了一种识别二维码的设备,其中,该设备包括:
处理器;以及
被安排成存储计算机可执行指令的存储器,所述可执行指令在被执行时使所述处理器执行如下操作:
在第一用户与第二用户的视频通话过程中,响应于所述第二用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第二用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作;
若识别成功,将识别得到的二维码信息发送至所述第一用户对应的第一用户设备,以使所述第一用户设备对所述二维码信息进行处理。
根据本申请的一个方面,提供了一种存储指令的计算机可读介质,所述指令在被执 行时使得系统进行如下操作:
在第一用户与第二用户的视频通话过程中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作;
若识别成功,对识别得到的二维码信息进行处理。
根据本申请的另一个方面,提供了一种存储指令的计算机可读介质,所述指令在被执行时使得系统进行如下操作:
在第一用户与第二用户的视频通话过程中,响应于所述第二用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第二用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作;
若识别成功,将识别得到的二维码信息发送至所述第一用户对应的第一用户设备,以使所述第一用户设备对所述二维码信息进行处理。
根据本申请的一个方面,提供了一种计算机程序产品,包括计算机程序,当所述计算机程序被处理器执行时,执行如下方法:
在第一用户与第二用户的视频通话过程中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作;
若识别成功,对识别得到的二维码信息进行处理。
根据本申请的另一个方面,提供了一种计算机程序产品,包括计算机程序,当所述计算机程序被处理器执行时,执行如下方法:
在第一用户与第二用户的视频通话过程中,响应于所述第二用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第二用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作;
若识别成功,将识别得到的二维码信息发送至所述第一用户对应的第一用户设备,以使所述第一用户设备对所述二维码信息进行处理。
与现有技术相比,本申请能够在第一用户与第二用户的视频通话过程中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作,从而使得第二用户仅需将其摄像头对准需要展示给第一用户的二维码,而完全无需 退出视频,也无需通过拍照或截图将二维码提供给第一用户,而第一用户仅需在屏幕上针对第二用户所展示的二维码执行轨迹绘制操作,便能使得第一用户设备快速且方便地识别出该二维码,这使得视频通话过程中针对二维码的识别极为简便且准确,能够为参与视频通话的用户提供极大的便利,并且仅对视频流中的该截取区域对应的视频帧图像区域执行二维码识别操作,而不是对视频流的全部显示区域执行二维码识别操作,可以加快二维码的识别速度,提高二维码的识别精度和识别效率。
附图说明
通过阅读参照以下附图所作的对非限制性实施例所作的详细描述,本申请的其它特征、目的和优点将会变得更明显:
图1示出根据本申请一个实施例的一种应用于第一用户设备的识别二维码的方法流程图;
图2示出根据本申请一个实施例的一种应用于第二用户设备的识别二维码的方法流程图;
图3示出根据本申请一个实施例的一种识别二维码的第一用户设备结构图;
图4示出根据本申请一个实施例的一种识别二维码的第二用户设备结构图;
图5示出可被用于实施本申请中所述的各个实施例的示例性系统。
附图中相同或相似的附图标记代表相同或相似的部件。
具体实施方式
下面结合附图对本申请作进一步详细描述。
在本申请一个典型的配置中,终端、服务网络的设备和可信方均包括一个或多个处理器(例如,中央处理器(Central Processing Unit,CPU))、输入/输出接口、网络接口和内存。
内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(Random Access Memory,RAM)和/或非易失性内存等形式,如只读存储器(Read Only Memory,ROM)或闪存(Flash Memory)。内存是计算机可读介质的示例。
计算机可读介质包括永久性和非永久性、可移动和非可移动媒体可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或 其他数据。计算机的存储介质的例子包括,但不限于相变内存(Phase-Change Memory,PCM)、可编程随机存取存储器(Programmable Random Access Memory,PRAM)、静态随机存取存储器(Static Random-Access Memory,SRAM)、动态随机存取存储器(Dynamic Random Access Memory,DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(Electrically-Erasable Programmable Read-Only Memory,EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(Compact Disc Read-Only Memory,CD-ROM)、数字多功能光盘(Digital Versatile Disc,DVD)或其他光学存储、磁盒式磁带,磁带磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。
本申请所指设备包括但不限于用户设备、网络设备、或用户设备与网络设备通过网络相集成所构成的设备。所述用户设备包括但不限于任何一种可与用户进行人机交互(例如通过触摸板进行人机交互)的移动电子产品,例如智能手机、平板电脑等,所述移动电子产品可以采用任意操作系统,如Android操作系统、iOS操作系统等。其中,所述网络设备包括一种能够按照事先设定或存储的指令,自动进行数值计算和信息处理的电子设备,其硬件包括但不限于微处理器、专用集成电路(Application Specific Integrated Circuit,ASIC)、可编程逻辑器件(Programmable Logic Device,PLD)、现场可编程门阵列(Field Programmable Gate Array,FPGA)、数字信号处理器(Digital Signal Processor,DSP)、嵌入式设备等。所述网络设备包括但不限于计算机、网络主机、单个网络服务器、多个网络服务器集或多个服务器构成的云;在此,云由基于云计算(Cloud Computing)的大量计算机或网络服务器构成,其中,云计算是分布式计算的一种,由一群松散耦合的计算机集组成的一个虚拟超级计算机。所述网络包括但不限于互联网、广域网、城域网、局域网、VPN网络、无线自组织网络(Ad Hoc网络)等。优选地,所述设备还可以是运行于所述用户设备、网络设备、或用户设备与网络设备、网络设备、触摸终端或网络设备与触摸终端通过网络相集成所构成的设备上的程序。
当然,本领域技术人员应能理解上述设备仅为举例,其他现有的或今后可能出现的设备如可适用于本申请,也应包含在本申请保护范围以内,并在此以引用方式包含于此。
在本申请的描述中,“多个”的含义是两个或者更多,除非另有明确具体的限 定。
图1示出根据本申请一个实施例的一种应用于第一用户设备的识别二维码的方法流程图,该方法包括步骤S11和步骤S12。在步骤S11中,第一用户设备在第一用户与第二用户的视频通话过程中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作;在步骤S12中,若识别成功,第一用户设备对识别得到的二维码信息进行处理。
在步骤S11中,第一用户设备在第一用户与第二用户的视频通话过程中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作。在一些实施例中,在第一用户与第二用户的视频通话过程中,第二用户将其所使用的第二用户设备的摄像头对准需要展示给第一用户的二维码,将该二维码通过视频流的方式发送给第一用户设备,从而在第一用户设备上呈现的第二用户的视频画面上展示该二维码,而完全无需第二用户退出视频通话,也无需第二用户通过拍照或截图的方式将需要展示给第一用户的二维码提供给第一用户,优选地,第二用户需要将第二用户设备当前使用的前置摄像头切换为后置摄像头,将后置摄像头对准需要展示给第一用户的二维码。在一些实施例中,当第一用户看到第二用户的视频画面上展示了二维码时,第一用户可以通过其手指在该视频画面上执行轨迹绘制操作(例如,第一用户的手指在第一用户设备屏幕上的某个位置处按下,在保持手指按下的状态下移动手指),此时第一用户设备会获得第一用户绘制的轨迹,响应于该轨迹绘制操作对应的轨迹绘制结束事件(例如,第一用户的手指从第一用户设备屏幕上抬起),根据第一用户当前绘制的轨迹,确定对应的截取区域。在一些实施例中,可以将第一用户绘制的轨迹显示在第一用户设备屏幕上,或者,也可以不在第一用户设备屏幕上显示第一用户绘制的轨迹。在一些实施例中,若第一用户当前绘制的轨迹闭合,将该闭合轨迹所围成的区域确定为截取区域。在一些实施例中,若第一用户当前绘制的轨迹不闭合,则可以通过一条虚拟直线将绘制起始点(例如,手指按下位置处)及绘制终止点(例如,手指抬起位置处)连接起来,得到该绘制轨迹对应的虚拟闭合轨迹,并将该虚拟闭合轨迹所围成的区域确定为截取区域。在一些实施例中,若第一用户当前绘制的轨迹不闭合,还可以在绘制起始 点及绘制终止点分别绘制一条虚拟切线延长线,根据所绘制的两条虚拟切换延长线及第一用户设备的屏幕边界,得到该绘制轨迹对应的虚拟闭合轨迹,并将该虚拟闭合轨迹所围成的区域确定为截取区域。在一些实施例中,对第二用户的视频画面中的该截取区域对应的视频帧图像区域执行二维码识别操作,识别得到该视频帧图像区域上所展示的二维码中包含的二维码信息。在一些实施例中,对第二用户的视频画面中的该截取区域对应的视频帧图像区域执行二维码识别操作,而不是对第二用户的视频画面中的全部显示区域执行二维码识别操作,可以加快二维码的识别速度,提高二维码的识别精度和识别效率。
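The region-interception step described above can be pictured with a short illustrative sketch. This is only an assumption-level outline, not the patented implementation: it assumes OpenCV and numpy are available, that frame is the current video frame of the second user as a BGR array, and that trajectory is the list of points drawn by the first user, already mapped from screen coordinates into frame coordinates. Only the intercepted area, rather than the whole display area, is handed to the QR decoder.

    # Illustrative sketch only (assumed libraries: OpenCV, numpy); not the patent's code.
    import cv2
    import numpy as np

    def decode_qr_in_trajectory(frame, trajectory):
        """frame: BGR video frame; trajectory: list of (x, y) points in frame coordinates."""
        pts = np.asarray(trajectory, dtype=np.int32)
        if pts.size == 0:
            return None
        # The trajectory is treated as a closed polygon; its bounding rectangle is the
        # intercepted area, so only this sub-image is passed to the QR decoder.
        x, y, w, h = cv2.boundingRect(pts)
        roi = frame[max(y, 0):y + h, max(x, 0):x + w]
        if roi.size == 0:
            return None
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
        data, _, _ = cv2.QRCodeDetector().detectAndDecode(gray)
        return data or None   # decoded text if recognition succeeds, otherwise None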
在步骤S12中,若识别成功,第一用户设备对识别得到的二维码信息进行处理。在一些实施例中,若识别成功,可以直接对识别得到的二维码信息进行处理,或者,还可以根据第一用户的用户授权信息或用户标识信息(例如,token、uuid(Universally Unique Identifier,通用唯一识别码)等),对识别得到的二维码信息进行处理,或者,还可以根据该视频通话应用已绑定的第一用户的个人真实身份信息,对识别得到的二维码信息进行处理。
本申请能够在第一用户与第二用户的视频通话过程中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作,从而使得第二用户仅需将其摄像头对准需要展示给第一用户的二维码,而完全无需退出视频,也无需通过拍照或截图将二维码提供给第一用户,而第一用户仅需在屏幕上针对第二用户所展示的二维码执行轨迹绘制操作,便能使得第一用户设备快速且方便地识别出该二维码,这使得视频通话过程中针对二维码的识别极为简便且准确,能够为参与视频通话的用户提供极大的便利,并且仅对视频流中的该截取区域对应的视频帧图像区域执行二维码识别操作,而不是对视频流的全部显示区域执行二维码识别操作,可以加快二维码的识别速度,提高二维码的识别精度和识别效率。
在一些实施例中,所述步骤S11包括:第一用户设备在第一用户与第二用户的视频通话过程中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制开始触发操作,暂停播放所述第二用户的视频流;响应于所述第一用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第一用户绘制的轨迹,并根据所述轨迹确定截取区域;对所述视频流对应的当前视频帧中的所述截取区域执行二维码识别操 作,并恢复播放所述第二用户的视频流。在一些实施例中,轨迹绘制开始触发操作可以是第一用户的手指在第一用户设备屏幕上的某个位置处按下,或者,还可以是在按下后保持手指按下的状态下移动手指的距离大于或等于预定的距离阈值(例如,10像素、1厘米等),或者,还可以是第一用户点击当前页面上的某个特定按钮(例如,“开始绘制轨迹”按钮)。在一些实施例中,在根据第一用户当前绘制的轨迹确定完截取区域后,对当前视频帧图像中的该截取区域对应的图像区域执行二维码识别操作,识别得到该图像区域上所展示的二维码中包含的二维码信息,其中,由于之前暂停播放了第二用户的视频流,该当前视频帧图像对应当前已暂停播放的第二用户的当前视频画面。在一些实施例中,在识别成功后,会恢复播放第二用户的视频流。在一些实施例中,在识别失败后,也会直接恢复播放第二用户的视频流,或者,在识别失败后,不会直接恢复播放第二用户的视频流,第一用户可以重新在当前视频画面上执行轨迹绘制操作,第一用户设备会在重新确定截取区域后重新尝试对当前视频帧中的该截取区域对应的图像区域执行二维码识别操作,若识别失败次数达到预定的次数阈值,会恢复播放第二用户的视频流。在一些实施例中,在识别失败后,不会直接恢复播放第二用户的视频流,但是会在当前页面上放置一个“恢复播放”按钮,用户点击该按钮后,会恢复播放第二用户的视频流。
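As a rough illustration of the pause, recognize, resume flow in this embodiment, the following sketch uses a hypothetical player object (pause, resume and current_frame are assumed names standing in for the real video-call SDK) together with the region decoder sketched earlier; it is not the patent's implementation.

    # Hedged sketch of the "pause playback, recognize, resume playback" flow described above.
    class TrajectoryScanController:
        def __init__(self, player, recognize_region):
            self.player = player                      # assumed to expose pause()/resume()/current_frame()
            self.recognize_region = recognize_region  # e.g. decode_qr_in_trajectory above
            self.frozen_frame = None

        def on_draw_start(self):
            self.player.pause()                       # freeze the second user's video stream
            self.frozen_frame = self.player.current_frame()

        def on_draw_end(self, trajectory):
            result = self.recognize_region(self.frozen_frame, trajectory)
            self.player.resume()                      # resume playback after the recognition attempt
            return result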
在一些实施例中,所述步骤S11包括:第一用户设备在第一用户与第二用户的视频通话过程中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制开始触发操作,获取所述视频流对应的第一当前视频帧图像,并将所述第一当前视频帧图像呈现在所述视频流之上;响应于所述第一用户针对所述第一当前视频帧图像的轨迹绘制操作,获得所述第一用户绘制的轨迹,并根据所述轨迹确定截取区域;对所述第一当前视频帧图像中的所述截取区域执行二维码识别操作,并取消呈现所述第一当前视频帧图像。在一些实施例中,响应于第一用户针对第二用户的视频流的轨迹绘制开始触发操作,获得该视频流对应的当前视频帧图像,并将该当前视频帧图像叠加呈现在视频流之上,在根据第一用户当前绘制的轨迹确定完截取区域后,对该当前视频帧图像的该截取区域对应的图像区域执行二维码识别操作,识别得到该图像区域上所展示的二维码中包含的二维码信息。在一些实施例中,在识别成功后,会隐藏该当前视频帧图像。在一些实施例中,在识别失败后,也会直接隐藏该当前视频帧图像,或者,在识别失败后,不会直接隐藏该当前视频帧图像,第一用 户可以重新在当前视频画面上执行轨迹绘制操作,第一用户设备会在重新确定截取区域后重新尝试对当前视频帧中的该截取区域对应的图像区域执行二维码识别操作,若识别失败次数达到预定的次数阈值,会隐藏该当前视频帧图像。在一些实施例中,在识别失败后,不会直接隐藏该当前视频帧图像,但是会在当前页面上放置一个预定的按钮,用户点击该按钮后,会隐藏该当前视频帧图像。
在一些实施例中,所述步骤S11包括:第一用户设备在第一用户与第二用户的视频通话过程中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流对应的当前视频帧中的所述截取区域执行二维码识别操作。在一些实施例中,响应于该轨迹绘制操作对应的轨迹绘制结束事件(例如,第一用户的手指从第一用户设备屏幕上抬起),根据第一用户当前绘制的轨迹,确定对应的截取区域,并对第二用户的视频流对应的当前视频帧图像中的该截取区域对应的图像区域执行二维码识别操作,识别得到该图像区域上所展示的二维码中包含的二维码信息。
在一些实施例中,所述获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,包括:获得所述第一用户绘制的轨迹,响应于所述轨迹绘制操作对应的轨迹绘制结束事件,检测所述第一用户已绘制的轨迹是否闭合,若是,将所述已绘制的轨迹围成的区域确定为截取区域。在一些实施例中,轨迹绘制操作对应的轨迹绘制结束事件可以是第一用户的手指从第一用户设备屏幕上抬起,或者,还可以是第一用户的手指移出了第二用户的视频画面的显示区域,或者,还可以是第一用户的手指在第一用户设备屏幕上的某个位置处按下停留的时间超过了预定的时长阈值。在一些实施例中,检测第一用户当前已绘制的轨迹是否闭合可以通过检测第一用户当前已绘制的轨迹是否相交来确定是否闭合,若相交,则确定第一用户当前已绘制的轨迹闭合,并将该闭合所围成的区域确定为截取区域。
在一些实施例中，所述方法还包括：若所述第一用户已绘制的轨迹不闭合，第一用户设备根据所述轨迹对应的绘制起始点及绘制终止点，确定所述轨迹对应的虚拟闭合轨迹，将所述虚拟闭合轨迹围成的区域确定为截取区域。在一些实施例中，若第一用户已绘制的轨迹不闭合，可以通过一条虚拟直线将绘制起始点及绘制终止点连接起来，得到对应的虚拟闭合区域，并将该虚拟闭合轨迹所围成的区域确定为截取区域。在一些实施例中，若第一用户已绘制的轨迹不闭合，还可以在绘制起始点及绘制终止点分别绘制一条虚拟切线延长线，根据所绘制的两条虚拟切线延长线及第一用户设备的屏幕边界或第二用户的视频画面边界，得到对应的虚拟闭合区域，并将该虚拟闭合轨迹所围成的区域确定为截取区域。
在一些实施例中,所述根据所述轨迹对应的绘制起始点及绘制终止点,确定所述轨迹对应的虚拟闭合区域,包括:通过一条虚拟直线将所述绘制起始点及所述绘制终止点连接起来,得到所述轨迹对应的虚拟闭合区域。在一些实施例中,该虚拟直线可以显示在第一用户设备屏幕上,或者,也可以不在第一用户设备屏幕上显示该虚拟直线。
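The closure logic of the preceding paragraphs can be sketched as follows. This is an assumption-level outline rather than the patent's exact algorithm: a trajectory is treated as closed when two non-adjacent segments cross, and an open trajectory is closed by a virtual straight line from the end point back to the start point.

    # Sketch only: detect whether the drawn trajectory is closed; if not, close it virtually.
    def _cross(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def _segments_cross(p1, p2, p3, p4):
        d1, d2 = _cross(p3, p4, p1), _cross(p3, p4, p2)
        d3, d4 = _cross(p1, p2, p3), _cross(p1, p2, p4)
        return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

    def close_trajectory(points):
        """Return a closed polygon derived from the drawn trajectory (list of (x, y) points)."""
        n = len(points)
        for i in range(n - 1):
            for j in range(i + 2, n - 1):             # only test non-adjacent segments
                if _segments_cross(points[i], points[i + 1], points[j], points[j + 1]):
                    return points                     # the trajectory already intersects itself
        return points + [points[0]]                   # virtual straight line from end point to start point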
在一些实施例中，所述根据所述轨迹对应的绘制起始点及绘制终止点，确定所述轨迹对应的虚拟闭合轨迹，包括：在所述绘制起始点及所述绘制终止点分别绘制一条虚拟切线延长线，根据所绘制的两条虚拟切线延长线及所述视频流的边界，得到所述轨迹对应的虚拟闭合区域。在一些实施例中，该虚拟切线延长线可以显示在第一用户设备屏幕上，或者，也可以不在第一用户设备屏幕上显示该虚拟切线延长线。在一些实施例中，视频流边界可以是第一用户设备屏幕的边界，或者，还可以是第二用户的视频画面的边界。
在一些实施例中,所述获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,包括:获得所述第一用户绘制的轨迹,响应于所述轨迹绘制操作对应的轨迹闭合事件,将所述第一用户已绘制的轨迹围成的区域确定为截取区域。在一些实施例中,可以直接将第一用户在第二用户的视频画面上执行的轨迹绘制操作对应的轨迹闭合事件作为轨迹绘制结束事件,在第一用户当前已绘制的轨迹达到闭合的时候,将第一用户当前已绘制的轨迹所围成的区域确定为截取区域。
在一些实施例中,所述方法还包括:第一用户设备若在所述当前视频帧中的所述截取区域中未识别到二维码信息,对所述视频流中在所述当前视频帧之前的目标视频帧中的所述截取区域执行二维码识别操作。在一些实施例中,若在第二用户的视频流对应的当前视频帧图像中的该截取区域对应的图像区域中未识别到二维码,对第二用户的视频流中在该当前视频帧图像之前的目标视频帧图像中的该截取区域对应的图像区域执行二维码识别操作。在一些实施例中,目标视频帧图像可以是第二用户的视频流在第一用户的轨迹绘制操作的起始时间点对应的视频帧图像。在一些实施例中,目标视频帧图像还可以是第二用户的视频流中该当前视频帧图像的 前一视频帧图像。
在一些实施例中,所述若在所述当前视频帧中的所述截取区域中未识别到二维码信息,对所述视频流中在所述当前视频帧之前的目标视频帧中的所述截取区域执行二维码识别操作,包括:若在所述当前视频帧中的所述截取区域中未识别到二维码信息,获取所述当前视频帧对应的前一视频帧,对所述前一视频帧中的所述截取区域执行二维码识别操作,以此往复,直至从目标视频帧中的所述截取区域识别到二维码信息。在一些实施例中,目标视频帧图像可以是第二用户的视频流中该当前视频帧图像的前一视频帧图像,对该目标视频帧图像中的该截取区域对应的图像区域执行二维码识别操作,若在该前一视频帧图像中同样也未识别到二维码,将目标视频帧图像确定为第二用户的视频流中该前一视频帧图像的前一视频帧图像,对该目标视频帧图像中的该截取区域对应的图像区域执行二维码识别操作,以此往复,直至从目标视频帧中的截取区域对应的图像区域中识别到二维码信息。
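A minimal sketch of this frame-by-frame fallback follows, under the assumption that the client keeps a small buffer of recently displayed frames; the buffer and its size are hypothetical and not taken from the patent.

    # Walk backwards through buffered frames until the intercepted area yields a QR code.
    def search_backwards(frame_buffer, trajectory, recognize_region, max_frames=30):
        """frame_buffer: recent frames, oldest first; returns the decoded text or None."""
        for frame in reversed(frame_buffer[-max_frames:]):
            result = recognize_region(frame, trajectory)
            if result:
                return result
        return None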
在一些实施例中,所述若在所述当前视频帧中的所述截取区域中未识别到二维码信息,对所述视频流中在所述当前视频帧之前的目标视频帧中的所述截取区域执行二维码识别操作,包括:获取所述轨迹绘制操作的起始时间点;从所述视频流中获取所述起始时间点对应的目标视频帧,对所述目标视频帧中的所述截取区域执行二维码识别操作。在一些实施例中,响应于第一用户在第二用户的视频画面上执行的轨迹绘制操作,记录该轨迹绘制操作的起始时间点,可以记录在内存中,或者,还可以记录在第一用户设备本地。在一些实施例中,读取该轨迹绘制操作的起始时间点,将第二用户的视频流在该起始时间点对应的视频帧图像确定为目标视频帧图像,对该目标视频帧图像中的该截取区域对应的图像区域执行二维码识别操作。
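The timestamp variant can be sketched in the same hedged spirit, assuming the client buffers (timestamp, frame) pairs and records the start time of the drawing operation; both names are illustrative assumptions.

    # Pick the buffered frame closest to the moment the trajectory drawing started.
    def frame_at(timed_frames, start_time):
        """timed_frames: list of (timestamp, frame) pairs; start_time: drawing start time."""
        return min(timed_frames, key=lambda item: abs(item[0] - start_time))[1]

    # Usage sketch: decode the intercepted area of the frame shown when drawing began.
    # target_frame = frame_at(buffered_frames, draw_start_time)
    # qr_text = decode_qr_in_trajectory(target_frame, trajectory)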
在一些实施例中,所述方法还包括:第一用户设备若在所述当前视频帧中的所述截取区域中未识别到二维码信息,对所述当前视频帧的全部显示区域执行二维码识别操作。在一些实施例中,若在第二用户的视频流对应的当前视频帧图像中的该截取区域对应的图像区域中未识别到二维码,则对该当前视频帧图像中的全部显示区域执行二维码识别操作。
在一些实施例中,所述方法还包括:若识别成功,第一用户设备生成识别成功提示信息,并将所述识别成功提示信息发送给所述第二用户对应的第二用户设备,以在所述第二用户设备上呈现所述识别成功提示信息。在一些实施例中,若识别二 维码成功,会生成识别成功提示信息发送给第二用户设备并进行呈现,以提示第二用户不用再继续将第二用户设备的摄像头对准该需要展示给第一用户的二维码,识别成功提示信息可以是直接发送给第二用户设备,或者,还可以是经由服务器发送给第二用户设备。在一些实施例中,识别成功提示信息可以通过可视化的形式(例如,文本、图标、文本+图标等)呈现在第二用户设备上,或者,还可以通过语音播放的形式呈现在第二用户设备上。
在一些实施例中,所述方法还包括:第一用户设备将所述第一用户绘制的轨迹实时发送给所述第二用户对应的第二用户设备,以在所述第二用户设备上实时呈现所述第一用户绘制的轨迹。在一些实施例中,第一用户在第二用户的视频画面上执行轨迹绘制操作的时候,第一用户设备会将第一用户绘制的轨迹实时发送给第二用户设备并进行呈现。
图2示出根据本申请一个实施例的一种应用于第二用户设备的识别二维码的方法流程图,该方法包括步骤S21和步骤S22。在步骤S21中,第二用户设备在第一用户与第二用户的视频通话过程中,响应于所述第二用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第二用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作;在步骤S22中,第二用户设备若识别成功,将识别得到的二维码信息发送至所述第一用户对应的第一用户设备,以使所述第一用户设备对所述二维码信息进行处理。
在步骤S21中,第二用户设备在第一用户与第二用户的视频通话过程中,响应于所述第二用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第二用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作。在一些实施例中,在第一用户与第二用户的视频通话过程中,第二用户将其所使用的第二用户设备的摄像头对准二维码,从而在第二用户设备上呈现的第二用户的视频画面上展示该二维码,而完全无需第二用户退出视频通话,也无需第二用户通过拍照或截图的方式获取二维码,优选地,第二用户需要将第二用户设备当前使用的前置摄像头切换为后置摄像头,将后置摄像头对准二维码,优选地,第二用户需要将第二用户设备上当前呈现的第一用户的视频画面切换为第二用户的视频画面。相关操作与前述实施例中的相关操作相同或者相似,在此不再赘述。
在步骤S22中,第二用户设备若识别成功,将识别得到的二维码信息发送至所 述第一用户对应的第一用户设备,以使所述第一用户设备对所述二维码信息进行处理。相关操作与前述实施例中的相关操作相同或者相似,在此不再赘述。
图3示出根据本申请一个实施例的一种识别二维码的第一用户设备结构图,该设备包括一一模块11和一二模块12。一一模块11,用于在第一用户与第二用户的视频通话过程中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作;一二模块12,用于若识别成功,对识别得到的二维码信息进行处理。
一一模块11,用于在第一用户与第二用户的视频通话过程中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作。在一些实施例中,在第一用户与第二用户的视频通话过程中,第二用户将其所使用的第二用户设备的摄像头对准需要展示给第一用户的二维码,通过视频流的方式发送给第一用户设备,从而在第一用户设备上呈现的第二用户的视频画面上展示该二维码,而完全无需第二用户退出视频通话,也无需第二用户通过拍照或截图的方式将需要展示给第一用户的二维码提供给第一用户。在一些实施例中,当第一用户看到第二用户的视频画面上展示了二维码时,第一用户可以通过其手指在该视频画面上执行轨迹绘制操作(例如,第一用户的手指在第一用户设备屏幕上的某个位置处按下,在保持手指按下的状态下移动手指),此时第一用户设备会获得第一用户绘制的轨迹,响应于该轨迹绘制操作对应的轨迹绘制结束事件(例如,第一用户的手指从第一用户设备屏幕上抬起),根据第一用户当前绘制的轨迹,确定对应的截取区域。在一些实施例中,可以将第一用户绘制的轨迹显示在第一用户设备屏幕上,或者,也可以不在第一用户设备屏幕上显示第一用户绘制的轨迹。在一些实施例中,若第一用户当前绘制的轨迹闭合,将该闭合轨迹所围成的区域确定为截取区域。在一些实施例中,若第一用户当前绘制的轨迹不闭合,则可以通过一条虚拟直线将绘制起始点(例如,手指按下位置处)及绘制终止点(例如,手指抬起位置处)连接起来,得到该绘制轨迹对应的虚拟闭合轨迹,并将该虚拟闭合轨迹所围成的区域确定为截取区域。在一些实施例中,若第一用户当前绘制的轨迹不闭合,还可以在绘制起始点及绘制终止点分别绘制一条虚拟切线延长线,根据所绘制的两条虚拟 切换延长线及第一用户设备的屏幕边界,得到该绘制轨迹对应的虚拟闭合轨迹,并将该虚拟闭合轨迹所围成的区域确定为截取区域。在一些实施例中,对第二用户的视频画面中的该截取区域对应的视频帧图像区域执行二维码识别操作,识别得到该视频帧图像区域上所展示的二维码中包含的二维码信息。在一些实施例中,对第二用户的视频画面中的该截取区域对应的视频帧图像区域执行二维码识别操作,而不是对第二用户的视频画面中的全部显示区域执行二维码识别操作,可以加快二维码的识别速度,提高二维码的识别精度和识别效率。
一二模块12,用于若识别成功,对识别得到的二维码信息进行处理。在一些实施例中,若识别成功,可以直接对识别得到的二维码信息进行处理,或者,还可以根据第一用户的用户授权信息或用户标识信息(例如,token、uuid(Universally Unique Identifier,通用唯一识别码)等),对识别得到的二维码信息进行处理,或者,还可以根据该视频通话应用已绑定的第一用户的个人真实身份信息,对识别得到的二维码信息进行处理。
在一些实施例中,所述一一模块11用于:在第一用户与第二用户的视频通话过程中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制开始触发操作,暂停播放所述第二用户的视频流;响应于所述第一用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第一用户绘制的轨迹,并根据所述轨迹确定截取区域;对所述视频流对应的当前视频帧中的所述截取区域执行二维码识别操作,并恢复播放所述第二用户的视频流。在此,相关操作与图1所示实施例相同或相近,故不再赘述,在此以引用方式包含于此。
在一些实施例中,所述一一模块11用于:在第一用户与第二用户的视频通话过程中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制开始触发操作,获取所述视频流对应的第一当前视频帧图像,并将所述第一当前视频帧图像呈现在所述视频流之上;响应于所述第一用户针对所述第一当前视频帧图像的轨迹绘制操作,获得所述第一用户绘制的轨迹,并根据所述轨迹确定截取区域;对所述第一当前视频帧图像中的所述截取区域执行二维码识别操作,并取消呈现所述第一当前视频帧图像。在此,相关操作与图1所示实施例相同或相近,故不再赘述,在此以引用方式包含于此。
在一些实施例中,所述一一模块11用于:在第一用户与第二用户的视频通话 过程中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流对应的当前视频帧中的所述截取区域执行二维码识别操作。在此,相关操作与图1所示实施例相同或相近,故不再赘述,在此以引用方式包含于此。
在一些实施例中,所述获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,包括:获得所述第一用户绘制的轨迹,响应于所述轨迹绘制操作对应的轨迹绘制结束事件,检测所述第一用户已绘制的轨迹是否闭合,若是,将所述已绘制的轨迹围成的区域确定为截取区域。在此,相关操作与图1所示实施例相同或相近,故不再赘述,在此以引用方式包含于此。
在一些实施例中,所述设备还用于:若所述第一用户已绘制的轨迹不闭合,根据所述轨迹对应的绘制起始点及绘制终止点,确定所述轨迹对应的虚拟闭合轨迹,将所述虚拟闭合轨迹围成的区域确定为截取区域。在此,相关操作与图1所示实施例相同或相近,故不再赘述,在此以引用方式包含于此。
在一些实施例中,所述根据所述轨迹对应的绘制起始点及绘制终止点,确定所述轨迹对应的虚拟闭合区域,包括:通过一条虚拟直线将所述绘制起始点及所述绘制终止点连接起来,得到所述轨迹对应的虚拟闭合区域。在此,相关操作与图1所示实施例相同或相近,故不再赘述,在此以引用方式包含于此。
在一些实施例中，所述根据所述轨迹对应的绘制起始点及绘制终止点，确定所述轨迹对应的虚拟闭合轨迹，包括：在所述绘制起始点及所述绘制终止点分别绘制一条虚拟切线延长线，根据所绘制的两条虚拟切线延长线及所述视频流的边界，得到所述轨迹对应的虚拟闭合区域。在此，相关操作与图1所示实施例相同或相近，故不再赘述，在此以引用方式包含于此。
在一些实施例中,所述获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,包括:获得所述第一用户绘制的轨迹,响应于所述轨迹绘制操作对应的轨迹闭合事件,将所述第一用户已绘制的轨迹围成的区域确定为截取区域。在此,相关操作与图1所示实施例相同或相近,故不再赘述,在此以引用方式包含于此。
在一些实施例中,所述设备还用于:若在所述当前视频帧中的所述截取区域中未识别到二维码信息,对所述视频流中在所述当前视频帧之前的目标视频帧中的所述截取区域执行二维码识别操作。在此,相关操作与图1所示实施例相同或相近, 故不再赘述,在此以引用方式包含于此。
在一些实施例中,所述若在所述当前视频帧中的所述截取区域中未识别到二维码信息,对所述视频流中在所述当前视频帧之前的目标视频帧中的所述截取区域执行二维码识别操作,包括:若在所述当前视频帧中的所述截取区域中未识别到二维码信息,获取所述当前视频帧对应的前一视频帧,对所述前一视频帧中的所述截取区域执行二维码识别操作,以此往复,直至从目标视频帧中的所述截取区域识别到二维码信息。在此,相关操作与图1所示实施例相同或相近,故不再赘述,在此以引用方式包含于此。
在一些实施例中,所述若在所述当前视频帧中的所述截取区域中未识别到二维码信息,对所述视频流中在所述当前视频帧之前的目标视频帧中的所述截取区域执行二维码识别操作,包括:获取所述轨迹绘制操作的起始时间点;从所述视频流中获取所述起始时间点对应的目标视频帧,对所述目标视频帧中的所述截取区域执行二维码识别操作。在此,相关操作与图1所示实施例相同或相近,故不再赘述,在此以引用方式包含于此。
在一些实施例中,所述设备还用于:若在所述当前视频帧中的所述截取区域中未识别到二维码信息,对所述当前视频帧的全部显示区域执行二维码识别操作。在此,相关操作与图1所示实施例相同或相近,故不再赘述,在此以引用方式包含于此。
在一些实施例中,所述设备还用于:若识别成功,生成识别成功提示信息,并将所述识别成功提示信息发送给所述第二用户对应的第二用户设备,以在所述第二用户设备上呈现所述识别成功提示信息。在此,相关操作与图1所示实施例相同或相近,故不再赘述,在此以引用方式包含于此。
在一些实施例中,所述设备还用于:将所述第一用户绘制的轨迹实时发送给所述第二用户对应的第二用户设备,以在所述第二用户设备上实时呈现所述第一用户绘制的轨迹。在此,相关操作与图1所示实施例相同或相近,故不再赘述,在此以引用方式包含于此。
图4示出根据本申请一个实施例的一种识别二维码的第二用户设备结构图,该设备包括二一模块21和二二模块22。二一模块21,用于在第一用户与第二用户的视频通话过程中,响应于所述第二用户针对所述第二用户的视频流的轨迹绘制操 作,获得所述第二用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作;二二模块22,用于若识别成功,将识别得到的二维码信息发送至所述第一用户对应的第一用户设备,以使所述第一用户设备对所述二维码信息进行处理。
二一模块21,用于在第一用户与第二用户的视频通话过程中,响应于所述第二用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第二用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作。在一些实施例中,在第一用户与第二用户的视频通话过程中,第二用户将其所使用的第二用户设备的摄像头对准二维码,从而在第二用户设备上呈现的第二用户的视频画面上展示该二维码,而完全无需第二用户退出视频通话,也无需第二用户通过拍照或截图的方式获取二维码,优选地,第二用户需要将第二用户设备当前使用的前置摄像头切换为后置摄像头,将后置摄像头对准二维码,优选地,第二用户需要将第二用户设备上当前呈现的第一用户的视频画面切换为第二用户的视频画面。相关操作与前述实施例中的相关操作相同或者相似,在此不再赘述。
二二模块22,用于若识别成功,将识别得到的二维码信息发送至所述第一用户对应的第一用户设备,以使所述第一用户设备对所述二维码信息进行处理。相关操作与前述实施例中的相关操作相同或者相似,在此不再赘述。
图5示出了可被用于实施本申请中所述的各个实施例的示例性系统。
如图5所示在一些实施例中,系统300能够作为各所述实施例中的任意一个设备。在一些实施例中,系统300可包括具有指令的一个或多个计算机可读介质(例如,系统存储器或NVM/存储设备320)以及与该一个或多个计算机可读介质耦合并被配置为执行指令以实现模块从而执行本申请中所述的动作的一个或多个处理器(例如,(一个或多个)处理器305)。
对于一个实施例,系统控制模块310可包括任意适当的接口控制器,以向(一个或多个)处理器305中的至少一个和/或与系统控制模块310通信的任意适当的设备或组件提供任意适当的接口。
系统控制模块310可包括存储器控制器模块330,以向系统存储器315提供接口。存储器控制器模块330可以是硬件模块、软件模块和/或固件模块。
系统存储器315可被用于例如为系统300加载和存储数据和/或指令。对于一个 实施例,系统存储器315可包括任意适当的易失性存储器,例如,适当的DRAM。在一些实施例中,系统存储器315可包括双倍数据速率类型四同步动态随机存取存储器(DDR4SDRAM)。
对于一个实施例,系统控制模块310可包括一个或多个输入/输出(I/O)控制器,以向NVM/存储设备320及(一个或多个)通信接口325提供接口。
例如,NVM/存储设备320可被用于存储数据和/或指令。NVM/存储设备320可包括任意适当的非易失性存储器(例如,闪存)和/或可包括任意适当的(一个或多个)非易失性存储设备(例如,一个或多个硬盘驱动器(HDD)、一个或多个光盘(CD)驱动器和/或一个或多个数字通用光盘(DVD)驱动器)。
NVM/存储设备320可包括在物理上作为系统300被安装在其上的设备的一部分的存储资源,或者其可被该设备访问而不必作为该设备的一部分。例如,NVM/存储设备320可通过网络经由(一个或多个)通信接口325进行访问。
(一个或多个)通信接口325可为系统300提供接口以通过一个或多个网络和/或与任意其他适当的设备通信。系统300可根据一个或多个无线网络标准和/或协议中的任意标准和/或协议来与无线网络的一个或多个组件进行无线通信。
对于一个实施例，(一个或多个)处理器305中的至少一个可与系统控制模块310的一个或多个控制器(例如，存储器控制器模块330)的逻辑封装在一起。对于一个实施例，(一个或多个)处理器305中的至少一个可与系统控制模块310的一个或多个控制器的逻辑封装在一起以形成系统级封装(SiP)。对于一个实施例，(一个或多个)处理器305中的至少一个可与系统控制模块310的一个或多个控制器的逻辑集成在同一模具上。对于一个实施例，(一个或多个)处理器305中的至少一个可与系统控制模块310的一个或多个控制器的逻辑集成在同一模具上以形成片上系统(SoC)。
在各个实施例中，系统300可以但不限于是：服务器、工作站、台式计算设备或移动计算设备(例如，膝上型计算设备、手持计算设备、平板电脑、上网本等)。在各个实施例中，系统300可具有更多或更少的组件和/或不同的架构。例如，在一些实施例中，系统300包括一个或多个摄像机、键盘、液晶显示器(LCD)屏幕(包括触屏显示器)、非易失性存储器端口、多个天线、图形芯片、专用集成电路(ASIC)和扬声器。
本申请还提供了一种计算机可读存储介质,所述计算机可读存储介质存储有计 算机代码,当所述计算机代码被执行时,如前任一项所述的方法被执行。
本申请还提供了一种计算机程序产品,当所述计算机程序产品被计算机设备执行时,如前任一项所述的方法被执行。
本申请还提供了一种计算机设备,所述计算机设备包括:
一个或多个处理器;
存储器,用于存储一个或多个计算机程序;
当所述一个或多个计算机程序被所述一个或多个处理器执行时,使得所述一个或多个处理器实现如前任一项所述的方法。
需要注意的是,本申请可在软件和/或软件与硬件的组合体中被实施,例如,可采用专用集成电路(ASIC)、通用目的计算机或任何其他类似硬件设备来实现。在一个实施例中,本申请的软件程序可以通过处理器执行以实现上文所述步骤或功能。同样地,本申请的软件程序(包括相关的数据结构)可以被存储到计算机可读记录介质中,例如,RAM存储器,磁或光驱动器或软磁盘及类似设备。另外,本申请的一些步骤或功能可采用硬件来实现,例如,作为与处理器配合从而执行各个步骤或功能的电路。
另外,本申请的一部分可被应用为计算机程序产品,例如计算机程序指令,当其被计算机执行时,通过该计算机的操作,可以调用或提供根据本申请的方法和/或技术方案。本领域技术人员应能理解,计算机程序指令在计算机可读介质中的存在形式包括但不限于源文件、可执行文件、安装包文件等,相应地,计算机程序指令被计算机执行的方式包括但不限于:该计算机直接执行该指令,或者该计算机编译该指令后再执行对应的编译后程序,或者该计算机读取并执行该指令,或者该计算机读取并安装该指令后再执行对应的安装后程序。在此,计算机可读介质可以是可供计算机访问的任意可用的计算机可读存储介质或通信介质。
通信介质包括藉此包含例如计算机可读指令、数据结构、程序模块或其他数据的通信信号被从一个系统传送到另一系统的介质。通信介质可包括有导的传输介质(诸如电缆和线(例如,光纤、同轴等))和能传播能量波的无线(未有导的传输)介质,诸如声音、电磁、RF、微波和红外。计算机可读指令、数据结构、程序模块或其他数据可被体现为例如无线介质(诸如载波或诸如被体现为扩展频谱技术的一部分的类似机制)中的已调制数据信号。术语“已调制数据信号”指的是其一个或多个特征 以在信号中编码信息的方式被更改或设定的信号。调制可以是模拟的、数字的或混合调制技术。
作为示例而非限制,计算机可读存储介质可包括以用于存储诸如计算机可读指令、数据结构、程序模块或其它数据的信息的任何方法或技术实现的易失性和非易失性、可移动和不可移动的介质。例如,计算机可读存储介质包括,但不限于,易失性存储器,诸如随机存储器(RAM,DRAM,SRAM);以及非易失性存储器,诸如闪存、各种只读存储器(ROM,PROM,EPROM,EEPROM)、磁性和铁磁/铁电存储器(MRAM,FeRAM);以及磁性和光学存储设备(硬盘、磁带、CD、DVD);或其它现在已知的介质或今后开发的能够存储供计算机系统使用的计算机可读信息/数据。
在此,根据本申请的一个实施例包括一个装置,该装置包括用于存储计算机程序指令的存储器和用于执行程序指令的处理器,其中,当该计算机程序指令被该处理器执行时,触发该装置运行基于前述根据本申请的多个实施例的方法和/或技术方案。
对于本领域技术人员而言,显然本申请不限于上述示范性实施例的细节,而且在不背离本申请的精神或基本特征的情况下,能够以其他的具体形式实现本申请。因此,无论从哪一点来看,均应将实施例看作是示范性的,而且是非限制性的,本申请的范围由所附权利要求而不是上述说明限定,因此旨在将落在权利要求的等同要件的含义和范围内的所有变化涵括在本申请内。不应将权利要求中的任何附图标记视为限制所涉及的权利要求。此外,显然“包括”一词不排除其他单元或步骤,单数不排除复数。装置权利要求中陈述的多个单元或装置也可以由一个单元或装置通过软件或者硬件来实现。第一,第二等词语用来表示名称,而并不表示任何特定的顺序。

Claims (19)

  1. 一种识别二维码的方法,应用于第一用户设备,其中,所述方法包括:
    在第一用户与第二用户的视频通话过程中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作;
    若识别成功,对识别得到的二维码信息进行处理。
  2. 根据权利要求1所述的方法,其中,所述在第一用户与第二用户的视频通话过程中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作,包括:
    在第一用户与第二用户的视频通话过程中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制开始触发操作,暂停播放所述第二用户的视频流;
    响应于所述第一用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第一用户绘制的轨迹,并根据所述轨迹确定截取区域;
    对所述视频流对应的当前视频帧中的所述截取区域执行二维码识别操作,并恢复播放所述第二用户的视频流。
  3. 根据权利要求1所述的方法,其中,所述在第一用户与第二用户的视频通话过程中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作,包括:
    在第一用户与第二用户的视频通话过程中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制开始触发操作,获取所述视频流对应的第一当前视频帧图像,并将所述第一当前视频帧图像呈现在所述视频流之上;
    响应于所述第一用户针对所述第一当前视频帧图像的轨迹绘制操作,获得所述第一用户绘制的轨迹,并根据所述轨迹确定截取区域;
    对所述第一当前视频帧图像中的所述截取区域执行二维码识别操作,并取消呈现所述第一当前视频帧图像。
  4. 根据权利要求1所述的方法,其中,所述在第一用户与第二用户的视频通话过程 中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作,包括:
    在第一用户与第二用户的视频通话过程中,响应于所述第一用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流对应的当前视频帧中的所述截取区域执行二维码识别操作。
  5. 根据权利要求1至4中任一项所述的方法,其中,所述获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,包括:
    获得所述第一用户绘制的轨迹,响应于所述轨迹绘制操作对应的轨迹绘制结束事件,检测所述第一用户已绘制的轨迹是否闭合,若是,将所述已绘制的轨迹围成的区域确定为截取区域。
  6. 根据权利要求5所述的方法,其中,所述方法还包括:
    若所述第一用户已绘制的轨迹不闭合,根据所述轨迹对应的绘制起始点及绘制终止点,确定所述轨迹对应的虚拟闭合轨迹,将所述虚拟闭合轨迹围成的区域确定为截取区域。
  7. 根据权利要求6所述的方法,其中,所述根据所述轨迹对应的绘制起始点及绘制终止点,确定所述轨迹对应的虚拟闭合区域,包括:
    通过一条虚拟直线将所述绘制起始点及所述绘制终止点连接起来,得到所述轨迹对应的虚拟闭合区域。
  8. 根据权利要求6所述的方法,其中,所述根据所述轨迹对应的绘制起始点及绘制终止点,确定所述轨迹对应的虚拟闭合轨迹,包括:
    在所述绘制起始点及所述绘制终止点分别绘制一条虚拟切线延长线，根据所绘制的两条虚拟切线延长线及所述视频流的边界，得到所述轨迹对应的虚拟闭合区域。
  9. 根据权利要求1至4中任一项所述的方法,其中,所述获得所述第一用户绘制的轨迹,根据所述轨迹确定截取区域,包括:
    获得所述第一用户绘制的轨迹,响应于所述轨迹绘制操作对应的轨迹闭合事件,将所述第一用户已绘制的轨迹围成的区域确定为截取区域。
  10. 根据权利要求4所述的方法,其中,所述方法还包括:
    若在所述当前视频帧中的所述截取区域中未识别到二维码信息,对所述视频流中在 所述当前视频帧之前的目标视频帧中的所述截取区域执行二维码识别操作。
  11. 根据权利要求10所述的方法,其中,所述若在所述当前视频帧中的所述截取区域中未识别到二维码信息,对所述视频流中在所述当前视频帧之前的目标视频帧中的所述截取区域执行二维码识别操作,包括:
    若在所述当前视频帧中的所述截取区域中未识别到二维码信息,获取所述当前视频帧对应的前一视频帧,对所述前一视频帧中的所述截取区域执行二维码识别操作,以此往复,直至从目标视频帧中的所述截取区域识别到二维码信息。
  12. 根据权利要求10所述的方法,其中,所述若在所述当前视频帧中的所述截取区域中未识别到二维码信息,对所述视频流中在所述当前视频帧之前的目标视频帧中的所述截取区域执行二维码识别操作,包括:
    获取所述轨迹绘制操作的起始时间点;
    从所述视频流中获取所述起始时间点对应的目标视频帧,对所述目标视频帧中的所述截取区域执行二维码识别操作。
  13. 根据权利要求4所述的方法,其中,所述方法还包括:
    若在所述当前视频帧中的所述截取区域中未识别到二维码信息,对所述当前视频帧的全部显示区域执行二维码识别操作。
  14. 根据权利要求1所述的方法,其中,所述方法还包括:
    若识别成功,生成识别成功提示信息,并将所述识别成功提示信息发送给所述第二用户对应的第二用户设备,以在所述第二用户设备上呈现所述识别成功提示信息。
  15. 根据权利要求1所述的方法,其中,所述方法还包括:
    将所述第一用户绘制的轨迹实时发送给所述第二用户对应的第二用户设备,以在所述第二用户设备上实时呈现所述第一用户绘制的轨迹。
  16. 一种识别二维码的方法,应用于第二用户设备,其中,所述方法包括:
    在第一用户与第二用户的视频通话过程中,响应于所述第二用户针对所述第二用户的视频流的轨迹绘制操作,获得所述第二用户绘制的轨迹,根据所述轨迹确定截取区域,并对所述视频流中的所述截取区域执行二维码识别操作;
    若识别成功,将识别得到的二维码信息发送至所述第一用户对应的第一用户设备,以使所述第一用户设备对所述二维码信息进行处理。
  17. 一种识别二维码的设备,其中,所述设备包括:
    处理器;以及
    被安排成存储计算机可执行指令的存储器,所述可执行指令在被执行时使所述处理器执行如权利要求1至16中任一项所述的方法。
  18. 一种存储指令的计算机可读介质,所述指令在被计算机执行时使得所述计算机进行如权利要求1至16中任一项所述方法的操作。
  19. 一种计算机程序产品,包括计算机程序,其特征在于,该计算机程序被处理器执行时实现如权利要求1至16中任一项所述方法的步骤。
PCT/CN2021/125287 2020-12-30 2021-10-21 一种识别二维码的方法与设备 WO2022142620A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011618821.3A CN112818719B (zh) 2020-12-30 2020-12-30 一种识别二维码的方法与设备
CN202011618821.3 2020-12-30

Publications (1)

Publication Number Publication Date
WO2022142620A1 true WO2022142620A1 (zh) 2022-07-07

Family

ID=75855836

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/125287 WO2022142620A1 (zh) 2020-12-30 2021-10-21 一种识别二维码的方法与设备

Country Status (2)

Country Link
CN (1) CN112818719B (zh)
WO (1) WO2022142620A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112818719B (zh) * 2020-12-30 2023-06-23 上海掌门科技有限公司 一种识别二维码的方法与设备
CN113592468B (zh) * 2021-07-12 2022-11-01 见面(天津)网络科技有限公司 基于二维码的在线支付方法以及装置

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060086796A1 (en) * 2004-10-27 2006-04-27 Denso Corporation Image signal output device, coded image signal generation method, image signal output program, camera operation system, camera operation program, matrix code decoding device, and matrix code decoding program
CN101510269A (zh) * 2009-02-18 2009-08-19 深圳华为通信技术有限公司 获取视频中的二维码的方法和装置
CN109636512A (zh) * 2018-11-29 2019-04-16 苏宁易购集团股份有限公司 一种通过视频实现购物过程的方法及设备
CN111770380A (zh) * 2020-01-16 2020-10-13 北京沃东天骏信息技术有限公司 一种视频处理方法和装置
CN111935439A (zh) * 2020-08-12 2020-11-13 维沃移动通信有限公司 一种识别方法、装置及电子设备
CN112818719A (zh) * 2020-12-30 2021-05-18 上海掌门科技有限公司 一种识别二维码的方法与设备

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104090761B (zh) * 2014-07-10 2017-09-29 福州瑞芯微电子股份有限公司 一种截图应用装置和方法
CN104573608B (zh) * 2015-01-23 2018-06-19 苏州海博智能系统有限公司 一种编码信息扫描方法及装置
CN109286848B (zh) * 2018-10-08 2020-08-04 腾讯科技(深圳)有限公司 一种终端视频信息的交互方法、装置及存储介质
CN110659533A (zh) * 2019-08-26 2020-01-07 福建天晴数码有限公司 视频内二维码的识别方法及计算机可读存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060086796A1 (en) * 2004-10-27 2006-04-27 Denso Corporation Image signal output device, coded image signal generation method, image signal output program, camera operation system, camera operation program, matrix code decoding device, and matrix code decoding program
CN101510269A (zh) * 2009-02-18 2009-08-19 深圳华为通信技术有限公司 获取视频中的二维码的方法和装置
CN109636512A (zh) * 2018-11-29 2019-04-16 苏宁易购集团股份有限公司 一种通过视频实现购物过程的方法及设备
CN111770380A (zh) * 2020-01-16 2020-10-13 北京沃东天骏信息技术有限公司 一种视频处理方法和装置
CN111935439A (zh) * 2020-08-12 2020-11-13 维沃移动通信有限公司 一种识别方法、装置及电子设备
CN112818719A (zh) * 2020-12-30 2021-05-18 上海掌门科技有限公司 一种识别二维码的方法与设备

Also Published As

Publication number Publication date
CN112818719B (zh) 2023-06-23
CN112818719A (zh) 2021-05-18

Similar Documents

Publication Publication Date Title
WO2022142620A1 (zh) 一种识别二维码的方法与设备
WO2021013125A1 (zh) 一种发送会话消息的方法与设备
WO2015081841A1 (en) Devices and methods for test scenario reproduction
CN112822431B (zh) 一种私密音视频通话的方法与设备
CN108984234B (zh) 一种移动终端与摄像装置的调用提示方法
CN110290557B (zh) 一种加载应用内页面标签的方法与设备
CN110321189B (zh) 一种在宿主程序中呈现寄宿程序的方法与设备
CN110336733B (zh) 一种呈现表情包的方法与设备
CN111162990B (zh) 一种呈现消息通知的方法与设备
WO2022142504A1 (zh) 一种会议群组合并的方法与设备
CN110780955A (zh) 一种用于处理表情消息的方法与设备
CN112261236B (zh) 一种在多人语音中用于静音处理的方法与设备
WO2022142617A1 (zh) 一种会议群组拆分的方法与设备
WO2021036561A1 (zh) 一种在视频通话过程中传递信息的方法与设备
CN113157162B (zh) 一种用于撤回会话消息的方法、设备、介质及程序产品
WO2017129068A1 (zh) 事件执行方法和装置及系统
CN111680249B (zh) 一种推送呈现信息的方法与设备
CN103942313B (zh) 网站页面的展示方法、装置及终端
CN110460642B (zh) 一种管理阅读模式的方法与设备
CN110336913B (zh) 一种在电话呼叫过程中呈现呼叫视频的方法、设备与计算机可读介质
US10996919B2 (en) Providing historical captured audio data to applications
CN114153535A (zh) 用于在开屏页跳转页面的方法、设备、介质及程序产品
CN110321205B (zh) 一种在宿主程序中管理寄宿程序的方法与设备
CN112688856A (zh) 一种解除好友关系的方法与设备
CN110958315A (zh) 一种呈现消息通知的方法与设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21913365

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21913365

Country of ref document: EP

Kind code of ref document: A1