WO2023045619A1 - A data processing method, apparatus, device, and readable storage medium - Google Patents

A data processing method, apparatus, device, and readable storage medium

Info

Publication number
WO2023045619A1
WO2023045619A1 · PCT/CN2022/112398 · CN2022112398W
Authority
WO
WIPO (PCT)
Prior art keywords
image data
image
buffer
data
area
Prior art date
Application number
PCT/CN2022/112398
Other languages
English (en)
French (fr)
Inventor
刘海洋
许敏华
高威
曹瑞鹏
Original Assignee
腾讯科技(深圳)有限公司
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority to EP22871677.5A (published as EP4282499A1)
Priority to JP2023555773A (published as JP2024518227A)
Publication of WO2023045619A1
Priority to US18/196,364 (published as US20230281861A1)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F13/31Communication aspects specific to video games, e.g. between several handheld game devices at close range
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/213Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F13/35Details of game servers
    • A63F13/355Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/52Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/65Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
    • A63F13/655Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/457Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by analysing connectivity, e.g. edge linking, connected component analysis or slices
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/94Hardware or software architectures specially adapted for image or video understanding
    • G06V10/95Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/50Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
    • A63F2300/53Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing
    • A63F2300/534Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing for network load management, e.g. bandwidth optimization, latency reduction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • The present application relates to the field of computer technology, and in particular, to a data processing method, apparatus, device, and readable storage medium.
  • In some scenarios, a corresponding virtual object (for example, a virtual animation object) can be displayed in a cloud game. The terminal collects the user's picture through its camera, and the user's portrait is recognized and extracted directly on the terminal to obtain and display the corresponding virtual object.
  • Because the computing power of the terminal is limited, image recognition on the terminal is likely to be inefficient, which in turn introduces a large delay when the terminal sends the portrait recognition result to the cloud. As a result, the game displays the virtual object with a time lag, so the virtual behavior of the displayed virtual object does not match the user's current behavior state.
  • Embodiments of the present application provide a data processing method, apparatus, device, and readable storage medium, which can reduce image transmission delay and improve image recognition efficiency.
  • In one aspect, an embodiment of the present application provides a data processing method, executed by a computer device, including:
  • acquiring first image data sent by a first client and storing the first image data in a receiving queue, the first image data being image data containing an object that the first client obtains while running a cloud application;
  • performing image recognition processing on the first image data in the receiving queue, and, during that processing, continuously storing second image data sent by the first client into the receiving queue to obtain an updated receiving queue;
  • when the first object area where the object is located in the first image data is extracted through the image recognition processing, sending the first object image data contained in the first object area to a target cloud application server, and synchronously performing image recognition processing on the second image data with the latest receiving timestamp in the updated receiving queue; the target cloud application server is configured to render the first object image data to obtain rendered data and send the rendered data to the first client.
  • In one aspect, an embodiment of the present application provides a data processing apparatus, deployed on a computer device, including:
  • a data acquisition module, configured to acquire first image data sent by a first client and store the first image data in a receiving queue, the first image data being image data containing an object that the first client obtains while running a cloud application;
  • an image recognition module, configured to perform image recognition processing on the first image data in the receiving queue;
  • a queue update module, configured to store second image data continuously obtained from the first client into the receiving queue during the image recognition processing of the first image data, to obtain an updated receiving queue;
  • an area sending module, configured to, when the first object area where the object is located in the first image data is extracted through the image recognition processing, send the first object image data contained in the first object area to a target cloud application server;
  • the target cloud application server is configured to render the first object image data to obtain rendered data and send the rendered data to the first client;
  • the area sending module is further configured to synchronously perform image recognition processing on the second image data with the latest receiving timestamp in the updated receiving queue.
  • In another aspect, an embodiment of the present application provides another data processing method, executed by a computer device, including:
  • receiving first object image data sent by a business server, and storing the first object image data in a first buffer whose working state is the storage state; the first object image data is the image data contained in a first object area, and the first object area is the area where an object is located in first image data, obtained after the business server performs image recognition processing on the first image data; the first image data is sent to the business server by a first client and is image data containing the object that the first client obtains while running a cloud application;
  • adjusting the working state of the first buffer to the reading state and the working state of a second buffer to the storage state, reading the first object image data from the first buffer whose working state is the reading state, and rendering the first object image data;
  • during the rendering, receiving second object image data sent by the business server and storing it in the second buffer whose working state is the storage state; the second object image data is the image data contained in a second object area, which the business server obtains by performing image recognition processing on second image data after extracting the first object area; the second object area is the area where the object is located in the second image data; the second image data is the image data with the latest receiving timestamp obtained from the updated receiving queue when the business server extracts the first object area, and the second image data in the updated receiving queue is continuously obtained from the first client while the business server performs image recognition processing on the first image data;
  • when the rendered data corresponding to the first object image data is obtained, adjusting the working state of the first buffer back to the storage state and the working state of the second buffer to the reading state, reading the second object image data from the second buffer whose working state is the reading state, and rendering the second object image data.
  • In another aspect, an embodiment of the present application provides another data processing apparatus, deployed on a computer device, including:
  • an area storage module, configured to receive first object image data sent by a business server and store it in a first buffer, in a buffer set, whose working state is the storage state; the first object image data is the image data contained in a first object area, the first object area being the area where an object is located in first image data, obtained after the business server performs image recognition processing on the first image data; the first image data is sent to the business server by a first client and is image data containing the object that the first client obtains while running a cloud application;
  • an area rendering module, configured to adjust the working state of the first buffer to the reading state and the working state of a second buffer to the storage state, read the first object image data from the first buffer whose working state is the reading state, and render the first object image data;
  • an area receiving module, configured to, during the rendering of the first object image data, receive second object image data sent by the business server and store it in the second buffer whose working state is the storage state; the second object image data is the image data contained in a second object area, which the business server obtains by performing image recognition processing on second image data after extracting the first object area; the second object area is the area where the object is located in the second image data; the second image data is the image data with the latest receiving timestamp obtained from the updated receiving queue when the business server extracts the first object area, and the second image data in the updated receiving queue is continuously obtained from the first client while the business server performs image recognition processing on the first image data;
  • a state adjustment module, configured to, when the rendered data corresponding to the first object image data is obtained, adjust the working state of the first buffer to the storage state and the working state of the second buffer to the reading state, read the second object image data from the second buffer whose working state is the reading state, and render the second object image data. (A minimal sketch of this two-buffer state swap follows.)
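  • For illustration only, the two-buffer swap described above can be sketched in Python as follows; DoubleBuffer, store, and swap_and_read are hypothetical names, and this is a sketch of the described mechanism under those assumptions, not the actual implementation:

      import threading

      class DoubleBuffer:
          # One buffer is in the 'storage' state and receives object image
          # data from the business server, while the other is in the
          # 'reading' state and is consumed by the renderer; the two states
          # swap each time rendering of a frame finishes.
          def __init__(self):
              self.buffers = [None, None]
              self.store_idx = 0            # index of the buffer in the storage state
              self.lock = threading.Lock()

          def store(self, object_image_data):
              # Called when object image data arrives from the business server.
              with self.lock:
                  self.buffers[self.store_idx] = object_image_data

          def swap_and_read(self):
              # Called when rendering of the previous frame finishes: the
              # storage buffer becomes the reading buffer (and vice versa),
              # and its contents are returned for rendering.
              with self.lock:
                  read_idx = self.store_idx
                  self.store_idx = 1 - self.store_idx
                  return self.buffers[read_idx]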
  • An embodiment of the present application provides a computer device, including: a processor and a memory;
  • the memory stores a computer program, and when the computer program is executed by the processor, the processor executes the method in the embodiment of the present application.
  • In one aspect, embodiments of the present application provide a computer-readable storage medium. The computer-readable storage medium stores a computer program that includes program instructions; when the program instructions are executed by a processor, the method in the embodiments of the present application is executed.
  • One aspect of the present application provides a computer program product or computer program, where the computer program product or computer program includes computer instructions stored in a computer-readable storage medium.
  • The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device executes the method provided in one aspect of the embodiments of the present application.
  • In the embodiments of the present application, when a client (such as the first client) obtains first image data containing an object, it can send the first image data to a related computer device (such as the business server). Image recognition processing does not need to be performed locally on the client; instead, the first image data is processed by a business server with much higher computing power, which improves the efficiency and clarity of image recognition. Meanwhile, the business server stores the received first image data in a receiving queue and, while performing image recognition processing on the first image data, continuously and synchronously obtains second image data from the first client and stores it in the receiving queue, thereby updating the receiving queue.
  • That is, while the business server in this application performs image recognition processing on the first image data, it does not suspend reception of the second image data; through the receiving queue, image processing and image reception proceed synchronously, which reduces image transmission delay.
  • Further, after extracting the first object area, the service server can send the first object image data contained in the first object area to the target cloud application server, which renders it and sends the resulting rendered data to the first client for display in the cloud application.
  • After extracting the first object area, the service server acquires the second image data with the latest receiving timestamp in the receiving queue and continues processing with that data. In other words, the next frame to process is always the one with the latest receiving timestamp, rather than recognizing the queued image data one by one in receiving-timestamp order; this improves the recognition efficiency of image data. Because image recognition is always performed on the image data with the latest receiving timestamp, the displayed result also matches the current behavior of the object.
  • In summary, the present application can improve image recognition efficiency, reduce image transmission delay, and ensure that the virtual behavior of the virtual object displayed by the cloud application matches the current behavior state of the object.
  • FIG. 1 is a network architecture diagram provided by an embodiment of the present application;
  • FIG. 2a is a schematic diagram of a scene provided by an embodiment of the present application;
  • FIG. 2b is a schematic diagram of a scene provided by an embodiment of the present application;
  • FIG. 3 is a schematic flowchart of a data processing method provided by an embodiment of the present application;
  • FIG. 4 is a schematic diagram of frame-skipping processing provided by an embodiment of the present application;
  • FIG. 5 is a schematic flowchart of a data processing method provided by an embodiment of the present application;
  • FIG. 6 is a schematic diagram of a scene for part fusion provided by an embodiment of the present application;
  • FIG. 7 is a schematic flowchart of sending first object image data to a target cloud application server according to an embodiment of the present application;
  • FIG. 8 is a schematic flowchart of a data processing method provided by an embodiment of the present application;
  • FIG. 9 is a schematic diagram of a state change of a double buffer provided by an embodiment of the present application;
  • FIG. 10 is a system architecture diagram provided by an embodiment of the present application;
  • FIG. 11 is a schematic flowchart of a system provided by an embodiment of the present application;
  • FIG. 12 is an interaction flowchart provided by an embodiment of the present application;
  • FIG. 13 is a schematic structural diagram of a data processing apparatus provided by an embodiment of the present application;
  • FIG. 14 is a schematic structural diagram of another data processing apparatus provided by an embodiment of the present application;
  • FIG. 15 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • FIG. 1 is a network architecture diagram provided by an embodiment of the present application.
  • the network architecture may include a service server 1000, a terminal device cluster, and a cloud application server cluster 10000.
  • the terminal device cluster may include one or more terminal devices, and the number of terminal devices will not be limited here.
  • The plurality of terminal devices may include a terminal device 100a, a terminal device 100b, a terminal device 100c, ..., and a terminal device 100n. As shown in FIG. 1, each of the terminal devices 100a through 100n can be connected to the service server 1000 through a network, so that each terminal device can exchange data with the service server 1000 through that network connection.
  • the cloud application server cluster 10000 may include one or more cloud application servers, and the number of cloud application servers will not be limited here.
  • The plurality of cloud application servers may include a cloud application server 10001, a cloud application server 10002, ..., and a cloud application server 1000n. As shown in FIG. 1, each of the cloud application servers 10001 through 1000n can be connected to the service server 1000 through a network, so that each cloud application server can exchange data with the service server 1000 through that network connection.
  • Among them, each cloud application server can serve a cloud application, and one terminal device can correspond to one cloud application server (multiple terminal devices can also correspond to the same cloud application server). When a terminal device runs a cloud application, its corresponding cloud application server provides the corresponding functional services (such as computing services) for it.
  • For example, when the cloud application is a cloud game application, the cloud application server may be a cloud game server; when the terminal device runs the cloud game application, its corresponding cloud game server provides the corresponding functional services for it.
  • It can be understood that each terminal device shown in FIG. 1 can be installed with a cloud application, and when the cloud application runs in each terminal device, it can exchange data with the service server 1000 shown in FIG. 1, so that the service server 1000 can receive service data from each terminal device.
  • the cloud application may include an application having a function of displaying data information such as text, image, audio and video.
  • the cloud application may be an entertainment application (for example, a game application), and the entertainment application may be used for game entertainment by the user.
  • Taking a cloud game application as an example, the service server 1000 in this application can obtain service data generated by these cloud applications; the service data may be image data containing an object (which may be called first image data) that a terminal device collects while running the cloud application.
  • After acquiring the first image data, the service server 1000 can store it in the receiving queue, then take it from the receiving queue and perform image recognition processing on it. It should be understood that, after acquiring the first image data and sending it to the service server 1000, the terminal device can continue to collect image data containing the object (which may be called second image data), and while performing image recognition processing on the first image data, the service server 1000 can continuously obtain this second image data from the terminal device. Like the first image data, the second image data is stored in the receiving queue, yielding an updated receiving queue containing one or more pieces of second image data.
  • Optionally, the service server 1000 may skip storing the first image data in the receiving queue and perform image recognition processing on it directly; during that processing, it continuously obtains the subsequently collected image data from the terminal device (that is, the second image data after the first image data, the third image data after the second, and so on) and stores that image data in the receiving queue.
  • After the service server 1000 extracts, through image recognition processing, the area where the object is located in the first image data (which may be called the first object area), it can obtain the image data contained in the first object area (which may be called the first object image data). The service server 1000 can send the first object image data to the cloud application server corresponding to the terminal device; the cloud application server can read and render the first object image data and, after rendering is completed, send the rendered data to the terminal device, which can display and output the rendered data in the cloud application.
  • Further, once the service server 1000 has extracted the first object area where the object is located in the first image data, it can move on to the remaining image data: from the updated receiving queue containing the second image data, it obtains the second image data with the latest receiving timestamp (that is, the most recently received image data, which may be called the target image data) and performs image recognition processing on that target image data.
  • Similarly, while processing the target image data, the service server 1000 can continuously acquire further image data containing the object (which may be called third image data) from the terminal device and store the third image data in the updated receiving queue to obtain a new receiving queue. After extracting the second object area where the object is located in the target image data, the service server 1000 can obtain the image data contained in the second object area (which may be called the second object image data) and send it to the cloud application server corresponding to the terminal device; at the same time, the service server 1000 acquires from the new receiving queue the third image data with the latest receiving timestamp (which may be called the new target image data) and performs image recognition processing on it, and so on.
  • In other words, while performing image recognition processing on one piece of image data, the service server 1000 in this application can continuously receive the remaining image data, so that recognition and reception proceed synchronously and reception does not have to wait for recognition to finish. This reduces the reception delay of image data.
  • It can be seen that each time recognition of one frame finishes, the business server performs frame-skipping processing: it obtains the image data with the latest receiving timestamp and performs image recognition processing on it, instead of taking the next image data after the currently processed one (the image data with the closest receiving timestamp). Frame skipping reduces the queuing delay of image data. Since the image data with the latest receiving timestamp captures the user's current behavior, after that image data is recognized and displayed, the rendered data shown in the cloud application stays synchronized and matched with the user's current behavior. (A minimal sketch of reception running concurrently with recognition appears below.)
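  • The following Python sketch illustrates reception proceeding in parallel with recognition through a shared receiving queue; recv_packet and recognize_and_forward are hypothetical stand-ins for the transport and the decode/recognition/forwarding steps, so this shows only the described queuing pattern, not the actual server code:

      import threading
      import time
      from collections import deque

      receive_queue = deque()          # shared (receive_timestamp, image_data) pairs
      queue_lock = threading.Lock()

      def receiver_loop(recv_packet):
          # Reception is never suspended: while the recognizer works on one
          # frame, this thread keeps appending newly arrived frames together
          # with their receiving timestamps.
          while True:
              data = recv_packet()     # hypothetical blocking network call
              with queue_lock:
                  receive_queue.append((time.monotonic(), data))

      def recognizer_loop(recognize_and_forward):
          # Runs in parallel with receiver_loop; always takes the newest
          # frame (frame skipping) so the recognized frame tracks the
          # user's current behavior.
          while True:
              with queue_lock:
                  item = receive_queue[-1] if receive_queue else None
                  receive_queue.clear()
              if item is None:
                  time.sleep(0.001)    # nothing queued yet
                  continue
              recognize_and_forward(item)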
  • It can be understood that one terminal device may be selected from the multiple terminal devices to exchange data with the service server 1000. The terminal device may include, but is not limited to, smart terminals carrying multimedia data processing functions (e.g., video data playback and music data playback), such as smart phones, tablet computers, notebook computers, desktop computers, smart TVs, smart speakers, smart watches, and smart vehicle terminals.
  • For ease of understanding, the above-mentioned cloud application may be integrated in the terminal device 100a shown in FIG. 1.
  • the method provided in the embodiment of the present application can be executed by a computer device, and the computer device includes but is not limited to a user terminal or a service server.
  • The business server can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server that provides basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms.
  • The terminal device and the service server may be connected directly or indirectly through wired or wireless communication, which is not limited in this application.
  • Optionally, the above-mentioned computer device may be a node in a distributed system, where the distributed system may be a blockchain system formed by multiple nodes connected through network communication. Nodes can form a peer-to-peer (P2P) network, and the P2P protocol is an application-layer protocol running on top of the Transmission Control Protocol (TCP). Any form of computer device, such as a business server, a terminal device, or other electronic equipment, can become a node in the blockchain system by joining the peer-to-peer network.
  • A blockchain is a new application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. It organizes and encrypts data into a ledger so that the data cannot be tampered with or forged, while still allowing the data to be verified, stored, and updated.
  • When the computer device is a blockchain node, the data in this application (such as the first image data, the first object area, and the second object image data) has authenticity and security, which makes the results obtained after processing based on these data more reliable.
  • FIG. 2a is a schematic diagram of a scenario provided by an embodiment of the present application.
  • The terminal device 100a shown in FIG. 2a may be the terminal device 100a in the terminal device cluster in the embodiment corresponding to FIG. 1; the service server 1000 shown in FIG. 2a may be the service server 1000 in the embodiment corresponding to FIG. 1; and the cloud application server 10001 shown in FIG. 2a may be the cloud application server 10001 in the embodiment corresponding to FIG. 1.
  • The terminal device 100a may contain a game application. The terminal device 100a may capture a picture containing object a through the camera component 200a (which may be referred to as the original image frame 20a), and the terminal device may perform encoding processing (such as H264 encoding processing) on the original image frame to obtain image data. The terminal device 100a can then send the image data to the service server 1000.
  • After receiving the image data, the service server 1000 can store it in the receiving queue, then take it from the receiving queue and decode it to obtain the original image frame 20a. It should be noted that for subsequently received image data, the service server 1000 stores it in the receiving queue; for the first image data it receives, the service server 1000 can either store it in the receiving queue according to the storage rules and then fetch it from the queue, or choose not to store it and decode it directly to obtain the original image frame 20a.
  • Further, the service server 1000 can perform image recognition processing on the original image frame 20a and, through that processing, determine the object edge curve P1 corresponding to object a in the original image frame 20a. It should be understood that user a will keep producing actions (such as raising the hands, shaking the head, or squatting), so after collecting the original image frame 20a, the terminal device 100a can continue to collect original image frames containing object a through the camera component 200a; each time it successfully captures such a frame, the terminal device 100a encodes it into image data and sends it to the service server 1000. In other words, while performing image recognition processing, the service server 1000 can continuously acquire different image data from the terminal device 100a and temporarily store that image data in the receiving queue.
  • Further, the service server 1000 can extract, in the original image frame 20a, the entire area covered by the object edge curve P1 (which may be called the object area P2) and obtain all the image content contained in that object area (which may be called the object image data). The service server 1000 can determine the cloud application server corresponding to the terminal device 100a (for example, the cloud application server 10001) and send the object image data contained in the object area P2 to the cloud application server 10001. After acquiring the object image data, the cloud application server 10001 can perform rendering processing on it to obtain the rendering data P3 and send the rendering data P3 to its corresponding terminal device 100a.
  • FIG. 2b is a schematic diagram of a scenario provided by an embodiment of the present application.
  • the terminal device 100a may display the rendering data P3 in the game application.
  • As shown in FIG. 2b, the virtual environment corresponding to the game (which can be understood as the game scene) includes a virtual background (a virtual house), a dancing virtual object 2000a, and a dancing virtual object 2000b, and the rendering data P3 can be displayed in this game scene.
  • the service server 1000 may further process the image data in the receiving queue.
  • It should be noted that after extracting the object area P2, the service server 1000 can perform frame-skipping processing: it obtains the image data with the latest receiving timestamp in the receiving queue and performs decoding and image recognition processing on it. The image data with the latest receiving timestamp can be understood as the last image data sent by the terminal device 100a at the current moment, corresponding to the latest real-time behavior of object a; therefore, after the corresponding object area is extracted and rendered for output, the rendered data presented is consistent with the object's actual action behavior.
  • In a game scenario, object a may be a game player, and the corresponding portrait rendering data (such as the rendering data P3) is displayed in the game application; that is, the player's portrait is projected into the game scene, which lets the player "place themselves" in the scene and improves the player's sense of immersion.
  • In summary, the embodiments of the present application realize the synchronization of image recognition and reception through the receiving queue, which reduces the receiving delay of image data; in addition, frame-skipping processing speeds up the recognition of image data, further reduces the delay, and improves how closely the player portrait displayed in the game matches the player.
  • FIG. 3 is a schematic flowchart of a data processing method provided in an embodiment of the present application.
  • The method may be executed by a computer device, which may be a terminal device (for example, any terminal device in the terminal device cluster shown in FIG. 1, such as the terminal device 100a) or a service server (such as the service server 1000 shown in FIG. 1), or may include both a terminal device and a service server that execute the method jointly.
  • For ease of understanding, this embodiment describes the method as executed by the above-mentioned service server.
  • the data processing method may at least include the following S101-S103:
  • S101 Acquire first image data sent by a first client, and store the first image data in a receiving queue; the first image data is image data including an object acquired by the first client when running a cloud application.
  • the first client may be understood as a terminal device, and an application may be deployed in the first client, and the application may be a cloud application (such as a game application) or the like.
  • Taking a cloud application as an example: when the user uses the first client, the user can start the cloud application in the first client, for example by clicking the cloud application and then clicking the start control to run it.
  • the first client may refer to any client.
  • After the cloud application runs, the first client can capture a picture containing the user (the user may be called the object) through the camera component, and the picture containing the user may be referred to as an original image frame.
  • the first client can perform coding processing on the original image frame, thereby obtaining a coded image file, which can be referred to as image data here.
  • the first client can send the image data to the service server (the service server can refer to a server with image decoding function and image recognition function, which can be used to obtain the encoded file sent by the first client, and perform decoding and image recognition processing).
  • As an encoding format, the H264 encoding method has a high compression ratio: after the same image is encoded with H264, it occupies less bandwidth in transmission, so H264 is widely used in mobile video applications. Therefore, in this application, to reduce the transmission bandwidth between the first client and the service server, H264 may be the preferred encoding method for the original image frame.
  • Of course, the encoding method used by the first client for the original image frame can also be any method other than H264, such as the H262, H263, or H265 encoding method; this application does not limit it. (A minimal client-side encoding sketch follows.)
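  • As a rough sketch only (the application's actual encoder settings are not specified in this text), client-side H264 encoding of one captured RGB frame could look like the following, assuming the PyAV library is available; the libx264 parameters here are illustrative assumptions:

      from fractions import Fraction

      import av                      # PyAV bindings for FFmpeg
      import numpy as np

      def make_h264_encoder(width, height, fps=30):
          # Illustrative libx264 configuration; the real client may differ.
          ctx = av.CodecContext.create("libx264", "w")
          ctx.width = width
          ctx.height = height
          ctx.pix_fmt = "yuv420p"
          ctx.time_base = Fraction(1, fps)
          return ctx

      def encode_frame(ctx, rgb_frame: np.ndarray):
          # Wrap the captured RGB frame, convert it to the encoder's pixel
          # format, and return the encoded packets to send to the server.
          frame = av.VideoFrame.from_ndarray(rgb_frame, format="rgb24")
          frame = frame.reformat(format="yuv420p")
          return ctx.encode(frame)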
  • After acquiring the image data sent by the first client, the service server may store the image data in the receiving queue. Taking the first image data as an example: after the service server acquires the first image data, it can store the first image data in the receiving queue.
  • The specific method can be: the service server receives the first image data sent by the first client (the first image data is the data obtained after the first client encodes the original image frame); the service server then obtains the receiving timestamp of the first image data and stores the first image data in the receiving queue in association with the receiving timestamp. That is to say, when storing each piece of image data, the service server also stores its receiving time.
  • For example, if the service server receives image data A at 19:09:09 on September 5, 2021 (this receiving time can serve as the receiving timestamp), the service server can store image data A in the receiving queue in association with the receiving time 19:09:09, September 5, 2021. It should be understood that when no image data is stored in the receiving queue, the receiving queue may be empty.
  • the first image data may be acquired from the receiving queue and subjected to image recognition processing.
  • Since the first image data is actually an image encoding file, it can first be decoded and restored to obtain the original image frame, and image recognition processing is then performed on the original image frame.
  • The specific method can be as follows: decode the first image data to obtain decoded image data with an original image format; then perform format conversion on the decoded image data to obtain an original image frame with a standard image format; then perform image recognition processing on the original image frame in the standard image format.
  • The standard image format may refer to the image format specified for unified image recognition processing. For example, if images for image recognition processing are required to be in the RGB (Red Green Blue) color format, then the RGB format may be referred to as the standard image format.
  • After the first image data is decoded, decoded image data with an original image format is obtained. If the original image format is the standard image format, the decoded image data can be determined as the original image frame with the standard image format; if the original image format differs from the standard image format, it can be converted into the standard image format to obtain the original image frame with the standard image format. For example, when the original image format is the YUV format, the YUV format can be converted into the RGB format, thereby obtaining the original image frame in RGB format. (A minimal conversion sketch follows.)
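  • A minimal sketch of this normalization step, assuming OpenCV and an I420-planar YUV layout (the format names handled here are illustrative assumptions):

      import cv2
      import numpy as np

      STANDARD_FORMAT = "rgb"   # assumed standard image format for recognition

      def to_standard_format(decoded: np.ndarray, original_format: str) -> np.ndarray:
          # If the decoded image already has the standard format, use it as
          # the original image frame directly; otherwise convert it first.
          if original_format == STANDARD_FORMAT:
              return decoded
          if original_format == "yuv_i420":
              # 'decoded' is the planar I420 buffer with shape (h * 3 // 2, w).
              return cv2.cvtColor(decoded, cv2.COLOR_YUV2RGB_I420)
          raise ValueError("unhandled original image format: " + original_format)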
  • After the service server decodes the first image data into the original image frame, it can perform image recognition processing on the original image frame with the standard image format and determine the area where the object is located in the original image frame (which may be called the first object area). For a specific implementation of determining the first object area through image recognition processing, reference may be made to the description in the embodiment corresponding to FIG. 5.
  • It should be understood that while the service server processes the first image data, the first client can continue to capture pictures containing the object (new original image frames), and the first client can encode each original image frame to obtain an image encoding file (which may be called second image data).
  • the first client can continuously send each second image data to the service server.
  • That is, while performing image recognition processing on the first image data, the service server does not suspend reception of image data: it continuously obtains the second image data from the first client and temporarily stores the second image data in the receiving queue, thereby updating the receiving queue.
  • The first object area where the object is located in the original image frame can be determined through the above-mentioned image recognition processing; after the first object area is determined, the image content contained in the first object area (which may be called the first object image data) can be obtained in the original image frame.
  • the service server may extract the first object area and the first object image data included in the first object area.
  • Subsequently, the service server can obtain the target cloud application server corresponding to the first client and send the first object image data to the target cloud application server.
  • the target cloud application server may refer to the cloud application server corresponding to the first client.
  • When the first client runs the cloud application, the cloud application server provides computing services for the first client, such as Central Processing Unit (CPU) computing services and Graphics Processing Unit (GPU) computing services.
  • After receiving the first object image data, the target cloud application server can render it to obtain the rendering data corresponding to the first object image data, and the target cloud application server can send that rendering data to the first client, which can display it in the cloud application.
  • It should be understood that after sending the first object image data, the service server can continue to perform decoding and image recognition processing on the remaining image data: it acquires the second image data with the latest receiving timestamp in the updated receiving queue and performs decoding and image recognition processing on that second image data.
  • FIG. 4 is a schematic diagram of frame skipping processing provided by an embodiment of the present application.
  • As shown in FIG. 4, the receiving queue 40a may contain image data 1 through image data 9, sorted from the earliest receiving timestamp to the latest and represented in FIG. 4 by the labels 1 through 9; image data 1 has the earliest receiving timestamp and image data 9 the latest.
  • Image data 1 through image data 5 are image data that have already been processed; image data 6 is the image data currently being processed by the business server; and image data 7, image data 8, and image data 9 are image data received by the service server while processing image data 6, currently queued for processing.
  • When the service server finishes extracting the object area where the object is located in image data 6, the service server can take image data 9 from the end of the receiving queue 40a (that is, obtain the image data with the latest receiving timestamp), skip image data 7 and image data 8, and decode and process image data 9. This is frame-skipping processing.
  • For example, suppose the business server needs 30 ms to decode and perform image recognition processing on one frame, while the interval between receiving two consecutive pieces of image data is 10 ms (that is, after one piece of image data is received, the next arrives 10 ms later and the one after that at 20 ms). While the business server processes image data 6, it keeps receiving image data 7 (stored at the end of the receiving queue, after image data 6), image data 8 (stored after image data 7), and image data 9 (stored after image data 8).
  • When processing of image data 6 is completed, the receiving queue is as shown by the receiving queue 40a, and the service server can directly obtain the latest image data (image data 9) from the end of the queue, skipping image data 7 and image data 8 (although image data 7 and image data 8 have not been processed, they have been skipped and will not be processed again, so they can be regarded as processed image data).
  • Similarly, while processing image data 9, the business server continuously receives image data 10 and image data 11 (represented by labels 10 and 11 in FIG. 4), obtaining the receiving queue 40b; when the object area of the object in image data 9 has been extracted, the image data at the end of the receiving queue 40b (image data 11, with the latest receiving timestamp) is obtained, image data 10 is skipped, and decoding and image recognition processing are performed on image data 11.
  • Likewise, while processing image data 11, the service server continues to receive further image data, obtaining the receiving queue 40c; when image data 11 has been processed, the image data at the end of the receiving queue 40c is obtained again, and so on repeatedly. This process is not repeated here.
  • It should be noted that for the image data in the receiving queue, the earlier image data (that is, the processed image data) can be cleared, thereby freeing storage space in the receiving queue. For example, if the processed image data in the receiving queue 40a includes image data 1 through image data 5, then image data 1 through image data 5 can be deleted, so the receiving queue 40a contains only image data 6 through image data 9.
  • Optionally, the image data with the latest receiving timestamp can be obtained from the receiving queue first, and after it is obtained, the image data arranged before it (that is, the image data whose receiving timestamp is earlier, which may be called historical image data) is deleted. For example, if the acquired image data 9 is the image data to be processed, the historical image data before image data 9 (image data 1 through image data 8) can then be deleted.
  • In other words, either order works: the processed image data (including the first image data) can be cleared first and the second image data with the latest receiving timestamp obtained afterwards; or the second image data with the latest receiving timestamp can be obtained first and the historical image data deleted afterwards (that is, first obtain the second image data with the latest receiving timestamp in the updated receiving queue, then perform image recognition processing on the second image data while synchronously deleting the historical image data in the updated receiving queue, where the historical image data is the image data in the updated receiving queue whose receiving timestamp is earlier than that of the second image data). (A minimal frame-selection sketch follows.)
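  • A minimal sketch of this frame-selection rule (take the entry with the latest receiving timestamp, delete the historical entries), using a Python deque:

      from collections import deque

      def next_frame_to_process(receive_queue: deque):
          # Returns the (receive_timestamp, image_data) entry with the latest
          # receiving timestamp and deletes all historical entries, so the
          # skipped frames are treated as processed. Returns None when empty.
          if not receive_queue:
              return None
          latest = receive_queue[-1]
          receive_queue.clear()      # historical image data is deleted
          return latest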
  • It should be noted that when the service server receives the encoded stream (that is, image data) sent by the first client, if the duration of decoding and image recognition processing (hereinafter, image processing) exceeds the interval between two received pieces of image data (for example, image processing takes 30 ms while the receiving interval between two pieces of image data is 10 ms), then without a receiving queue the first client would always be waiting for the service server to finish image processing on the current image data, which greatly increases the transmission delay of image data and severely affects transmission efficiency. Therefore, this application stores the image data in the receiving queue, so that the service server can continuously receive the image data sent by the first client during image processing.
  • At the same time, if the service server performed image processing on the image data strictly in order, the image data being recognized would seriously lag behind and mismatch the latest state of the object. Through frame-skipping processing, the business server performs image processing on the latest image data every time, which reduces the delay of image recognition; and because the business server has high computing power, the efficiency of image recognition is also improved. (The lag arithmetic is sketched below.)
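  • Using the figures from the example above (30 ms of image processing per frame, a new frame arriving every 10 ms), a back-of-the-envelope comparison of strict in-order processing versus frame skipping:

      PROCESS_MS = 30   # time to decode + recognize one frame (from the example)
      ARRIVAL_MS = 10   # interval between received frames (from the example)

      def sequential_lag_ms(frames_processed: int) -> int:
          # Strict in-order processing: each processed frame adds
          # PROCESS_MS - ARRIVAL_MS = 20 ms to the backlog.
          return frames_processed * (PROCESS_MS - ARRIVAL_MS)

      def frame_skip_lag_ms() -> int:
          # Frame skipping always takes the newest frame, so the recognized
          # frame is at most about one processing interval old.
          return PROCESS_MS

      print(sequential_lag_ms(100))  # 2000 -> two full seconds behind
      print(frame_skip_lag_ms())     # 30   -> bounded regardless of duration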
  • In the embodiments of the present application, when a client (such as the first client) obtains first image data containing an object, it can send the first image data to a related computer device (such as the business server). Image recognition processing does not need to be performed locally on the first client; the first image data is processed by a business server with high computing power, which improves the efficiency and clarity of image recognition. Meanwhile, the business server stores the received first image data in the receiving queue and, during the image recognition processing of the first image data, continuously and synchronously obtains the second image data from the first client and stores it in the receiving queue to obtain the updated receiving queue.
  • That is, while the business server in this application performs image recognition processing on the first image data, it does not suspend reception of the second image data; through the receiving queue, image processing and image reception proceed synchronously, reducing image transmission delay.
  • Further, after extracting the first object area, the service server can send the first object image data contained in the first object area to the target cloud application server, which renders it and sends the rendered data to the first client for display in the cloud application.
  • After extracting the first object area, the service server acquires the second image data with the latest receiving timestamp in the receiving queue and continues processing that data; that is, the next frame processed is always the image data with the latest receiving timestamp, rather than recognizing the image data one by one in receiving-timestamp order, which improves the recognition efficiency of image data. Because image recognition is performed on the image data with the latest receiving timestamp, the displayed result also matches the current behavior of the object.
  • In summary, the present application can improve image recognition efficiency, reduce image transmission delay, and ensure that the virtual behavior of the virtual object displayed by the cloud application matches the current behavior state of the object.
  • FIG. 5 is a schematic flowchart of a data processing method provided by an embodiment of the present application. This process may correspond to the process, in the embodiment corresponding to FIG. 3, of performing image recognition processing on the original image frame to determine the first object area. As shown in FIG. 5, the process may include at least the following S501-S503:
• The object edge key points here may refer to the key points of the object's contour; since the original image frame contains the key parts of the object, the object edge key points may refer to the key points of the contours of those key parts. If the key part is the head, the object edge key points may refer to the key points of the head contour; if the key part is the neck, the object edge key points may refer to the key points of the neck contour.
• The object edge key points can be identified by an artificial intelligence algorithm, by a dedicated graphics processing unit (Graphics Processing Unit, GPU), or in other ways; this application does not limit the identification method.
  • the object edge curve corresponding to the object (which can be understood as the object outline) can be obtained.
  • the curve P1 shown in FIG. 2a can be regarded as the object profile of the object a.
  • the area covered by the object edge curve can be determined in the original image frame, and this area can be used as the first object area where the object is located in the original image frame.
• For example, the area covered by the object edge curve P2 is the area P2 (this area P2 is the area where the object a is located); the region P2 can then be determined as the region where the object is located in the original image frame (herein referred to as the first object region).
• Optionally, the area covered by the above-mentioned object edge curve can be called the initial object area. After the initial object area is determined, it need not immediately be taken as the final first object area; instead, the first object area is determined from the initial object area. The specific method can be: obtain the object key parts presented by the initial object area; then obtain the object recognition configuration information for the object and the object recognition configuration part it indicates, and match the object recognition configuration part with the object key parts. If the object recognition configuration part matches the object key parts, the step of determining the first object area according to the initial object area can be performed; if it does not match, it can be determined that the first object area cannot be extracted through image recognition processing.
• That is, the object key parts presented in the initial object area can be obtained (the object key parts may refer to body parts of the object, such as the head, neck, arms, abdomen, legs, and feet); then the object recognition configuration parts that the original image frame needs to contain can be obtained (that is, the recognition rule, which specifies the parts of the object that the original image frame collected by the terminal device needs to include).
• Taking the object recognition configuration part being the leg as an example, assume the original image frame collected by the terminal device needs to contain the user's leg. If, after the received image data is decoded and image recognition processing is performed, the key parts presented in the extracted initial object area are the head and neck, it can be determined that the object key parts (head and neck) do not match the object recognition configuration part (leg); the key parts presented in the original image frame do not meet the requirements, so it can be directly determined that the first object region cannot be extracted through image recognition processing (the parts do not meet the requirements and extraction fails). If instead the key part presented in the extracted initial object area is a leg, the key parts presented in the original image frame meet the requirements, and the initial object area can be determined as the first object area, as sketched below.
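A minimal sketch of this matching step, assuming parts are represented as plain string labels; `configured_parts` and `detected_parts` are hypothetical names, not terms from the patent:

```python
def parts_match(configured_parts: set, detected_parts: set) -> bool:
    """True only if every object recognition configuration part was detected."""
    return configured_parts.issubset(detected_parts)

# Mirrors the example above: the rule requires a leg, but the initial
# object area only presents a head and a neck, so extraction fails.
assert not parts_match({"leg"}, {"head", "neck"})
assert parts_match({"leg"}, {"leg", "hip"})
```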
• After that, the service server can obtain the next image data after the current image data in the updated receiving queue, and then perform image processing on that next image data.
• For example, if the current image data is the first image data, the next image data after the first image data in the receiving queue can be acquired (that is, among the image data whose receiving time stamps are later than that of the first image data, the image data with the earliest receiving time stamp), and the service server can then perform image processing on it.
• The specific method can be as follows: the second image data with the earliest receiving time stamp in the updated receiving queue can be determined as the image data to be recognized; subsequently, image recognition processing can be performed on the image data to be recognized, and when the to-be-processed object area in the image data to be recognized is extracted through the image recognition processing, the to-be-processed object area can be sent to the target cloud application server.
• Because the service server processes the current image data (such as the first image data) in a sufficiently short time and with sufficient efficiency, after determining that the first object region cannot be extracted through image recognition processing, the next image data after the current image data (the image data with the earliest receiving time stamp) can be acquired and processed.
• The purpose is as follows: when the user performs an action, the first client obtains the image frame, encodes it, and sends it to the service server; if the service server quickly recognizes that the contained object key parts do not meet the specification and the object area and the object image data it contains cannot be extracted, the cloud application server cannot receive extracted object image data and therefore cannot render and display it.
• In this case, the business server can perform image processing on the next image data to extract the object area of that next image data, and then send the contained object image data to the cloud application server for rendering and output. In this way, the jumpiness of the user portrait displayed in the cloud application can be reduced and its coherence increased.
• Whether to acquire the second image data with the latest receiving time stamp or the image data with the earliest receiving time stamp can be set according to actual conditions and practical experience; this application does not limit it.
• Optionally, when it is determined that the object recognition configuration part matches the object key parts, the initial object area can be directly determined as the first object area.
• Optionally, the specific method for determining the first object area can also be: obtain the object key parts presented by the initial object area; if the object key parts have part integrity, the initial object area can be determined as the first object area; if the object key parts do not have part integrity, N (N is a positive integer) sample image frames in the sample database can be obtained, the sample image frame to be processed corresponding to the object can be acquired from the N sample image frames, and the first object area can be determined according to the sample image frame to be processed and the initial object area.
• The specific method for determining the first object area according to the sample image frame to be processed and the initial object area can be: obtain the overall part information in the sample image frame to be processed; then, according to the object key parts, determine the to-be-fused part area in the overall part information; fuse the to-be-fused part area with the initial object area, thereby obtaining the first object area.
• That is to say, this application can collect the complete portrait sample data of each user in advance (complete portrait sample data from head to feet); one user can correspond to one sample image frame, and one sample image frame can present one user's complete overall portrait data. When the initial object area has been extracted and it has been determined that the object recognition configuration part matches the object key parts, it can then be determined whether the object key parts have part integrity. If they do, the initial object area can be directly determined as the first object area; if they do not, the sample image frame to be processed corresponding to the object can be obtained from the sample database, the overall part information can be obtained from it, and the initial object area can be completed according to the overall part information to obtain a complete first object area that includes the complete parts.
  • FIG. 6 is a schematic diagram of a scene for performing part fusion provided by an embodiment of the present application.
• Assume the initial object area is initial object area 600a, and the object key parts presented in initial object area 600a include the head, neck, arms, chest and abdomen (that is, the user's upper body parts); assume the object recognition configuration part is also the user's upper body, that is, the first client needs to collect the user's upper body. It can be seen that the initial object area meets the requirements, and it can then be further determined whether the object key parts have part integrity.
• Assume part integrity refers to the integrity of the user's overall portrait (that is, it needs to include both upper body parts and lower body parts, from head to foot).
• Because the object key parts presented in initial object area 600a do not have part integrity, the service server 1000 can obtain the sample image frame to be processed corresponding to the object from the sample database (assumed to be sample image frame 600b).
  • the overall part information presented in the sample image frame to be processed contains complete information from the head to the feet of the object.
• According to the object key parts (the upper body parts), the lower body part in the sample image frame to be processed is determined as the to-be-fused part area (that is, area 600c), and the to-be-fused part area 600c can be extracted. Further, the to-be-fused part area 600c can be fused (for example, spliced) with the initial object area 600a, so as to obtain the first object area 600d including both the upper body parts and the lower body parts. It should be understood that by collecting the user's overall part information in advance (from head to feet), the first client can obtain a usable user picture each time without strictly requiring the user to stand at a fixed position where the complete body can be captured.
• The user can move flexibly; the first client only needs to obtain partial part information, and supplementary splicing is performed on it so that a complete part can still be obtained (a rough sketch of this splicing follows below). In this way, the user's sense of experience and immersion can be increased.
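A rough sketch of the splicing of FIG. 6, assuming the regions are NumPy image arrays of equal width and that the row where the sample frame's lower body begins is known; `split_row` is a hypothetical parameter, not something the patent specifies:

```python
import numpy as np

def fuse_parts(initial_area: np.ndarray,
               sample_frame: np.ndarray,
               split_row: int) -> np.ndarray:
    """Splice the lower-body rows of a pre-collected sample frame under the
    captured upper-body area (roughly: 600a fused with 600c gives 600d)."""
    lower_body = sample_frame[split_row:]            # to-be-fused part area (600c)
    assert initial_area.shape[1:] == lower_body.shape[1:], "widths must match"
    return np.vstack([initial_area, lower_body])     # first object area (600d)
```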
• If instead part integrity refers only to the integrity of the user's upper body parts, then the object key parts presented in initial object area 600a already have part integrity, and the initial object area 600a may be directly determined as the first object area.
• The specific method for obtaining the sample image frame to be processed corresponding to the object from the sample database can be face matching; or, when collecting a user's sample image frame, the user identification (such as a user name or user number) can be used to identify the corresponding sample image frame, so that each sample image frame carries a user ID (which can be called a sample ID). When the first client sends image data to the service server, it can carry the user ID of the user included in the image data, and the service server can then match the carried user ID against the sample IDs of the sample image frames to find the corresponding sample image frame to be processed (a toy sketch of this lookup follows below).
  • the specific implementation manner of obtaining the sample image frame to be processed corresponding to the object in the sample database is of course not limited to the above-described manner, and the present application does not limit the specific implementation manner.
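A toy sketch of the sample-ID lookup described above, assuming each sample image frame was stored under the user ID that serves as its sample ID; `sample_db` and its contents are hypothetical:

```python
# sample ID (user ID) -> complete-portrait sample image frame
sample_db = {"user_42": "sample_frame_user_42.png"}

def find_sample(carried_user_id: str):
    """Match the user ID carried with the image data against the sample IDs;
    face matching could serve as a fallback when no usable ID is carried."""
    return sample_db.get(carried_user_id)
```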
• When the client (such as the first client) obtains the first image data containing the object, it can send the first image data to the relevant computer equipment (such as the service server); image recognition processing does not need to be performed locally on the client, and the first image data can be processed by a business server with higher computing power, which can improve the efficiency and accuracy of image recognition.
• At the same time, the business server can store the received first image data in the receiving queue, continuously obtain the second image data synchronously from the first client while performing image recognition processing on the first image data, and store the second image data in the receiving queue to obtain an updated receiving queue.
• That is, while the business server performs image recognition processing on the first image data, it does not suspend the reception of the second image data; through the receiving queue, image processing and image reception proceed synchronously, which reduces the image transmission delay.
• When the first object area is extracted, the service server may send the first object image data contained in the first object area to the target cloud application server, which renders it and sends the rendering data obtained by rendering to the first client for display in the cloud application.
• Synchronously, the service server may acquire the second image data with the latest receiving time stamp in the updated receiving queue and continue processing it.
• That is to say, the next step is to obtain the image data with the latest receiving time stamp from the receiving queue for processing, instead of recognizing the image data one by one in the time order of their receiving time stamps; this improves the recognition efficiency of the image data.
• Because image recognition is performed on the image data with the latest receiving time stamp, the recognized data, once rendered and displayed, also matches the current behavior of the object.
• In all, the present application can improve image recognition efficiency, reduce image transmission delay, and ensure that the virtual behavior of the virtual object displayed by the cloud application matches the current behavior state of the object.
• Before this, the cloud application server corresponding to each client may send a registration request to the service server; the registration request is used to request device registration with the service server. After registration, the service server can add the device identifier corresponding to the cloud application server to the stored device identifier set, thereby recording that the cloud application server is a registered cloud application server. A registered cloud application server can be considered a legal cloud application server, and the business server can exchange data with it.
• When the first client sends image data (such as the first image data) to the service server, it can carry the device identifier of its bound cloud application server (which can be called the to-be-confirmed device identifier). The service server uses this device identifier to confirm whether the bound cloud application server has been registered (whether it is legal); when the bound cloud application server is determined to be registered, it is determined as the target cloud application server, and the first object image data is sent to it. That is to say, after the first object area is determined as above, before sending the first object image data to the target cloud application server, the service server can first determine whether the cloud application server corresponding to the first client has been registered, and only when it is registered is the first object image data sent to the corresponding target cloud application server.
  • FIG. 7 is a schematic flowchart of sending the first object image data to a target cloud application server according to an embodiment of the present application.
• This process is illustrated by taking as an example first image data carrying the to-be-confirmed device identifier (the to-be-confirmed device identifier is the device identifier of the bound cloud application server, and the bound cloud application server has a binding relationship with the first client). As shown in FIG. 7 , the process may include at least the following S701-S704:
  • the stored device identifier set includes M stored device identifiers, one stored device identifier corresponds to one registered cloud application server, and M is a positive integer.
• As described above, the cloud application server corresponding to each client can send a registration request to the service server to register the device; after registration, the service server adds the device identifier corresponding to the cloud application server to the stored device identifier set, recording it as a registered cloud application server.
• The specific registration method can be: when a user uses a client (the second client) to open the cloud application, the second client can respond to the application opening operation, generate an application opening notification, and send it to its corresponding cloud application server (which may be referred to as the to-be-registered cloud application server); the to-be-registered cloud application server can then send a registration request to the service server based on the application opening notification. The service server receives the registration request sent by the to-be-registered cloud application server and detects the device index information of the to-be-registered cloud application server according to the registration request. When the device index information meets the processing quality condition, the service server obtains the to-be-stored device identifier of the to-be-registered cloud application server and stores it in the stored device identifier set; the to-be-registered cloud application server is thereby converted into a registered cloud application server, and the to-be-stored device identifier into a stored device identifier.
• The device index information may include network quality parameters, the device version, function module quality indexes, storage space indexes, and the like. The detection of the device index information may be to detect whether a single index is qualified (for example, whether the network quality parameter is qualified; if it is, the device index information of the to-be-registered cloud application server can be considered to meet the processing quality condition); the detection may also cover two or more indexes, in which case the device index information is confirmed to satisfy the processing quality condition only when all of them are qualified.
  • the following will take the device index information including network quality parameters and device version as an example to describe the specific method of detecting the device index information of the cloud application server to be registered.
• The specific method can be: according to the registration request, obtain the network quality parameter and device version of the to-be-registered cloud application server; if the network quality parameter reaches the parameter threshold and the device version matches the quality standard version (which can be understood as a qualified quality version), it can be determined that the device index information meets the processing quality condition; if the network quality parameter does not reach the parameter threshold, or the device version does not match the quality standard version, it can be determined that the device index information does not meet the processing quality condition. A small sketch of this check follows below.
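The two-indicator check might look like the sketch below; the threshold value, the version strings, and all names are invented for illustration:

```python
PARAM_THRESHOLD = 0.8                    # hypothetical network-quality threshold
QUALITY_VERSIONS = {"2.1.0", "2.2.0"}    # hypothetical quality standard versions
stored_device_ids = set()                # the stored device identifier set

def try_register(device_id: str, network_quality: float, device_version: str) -> bool:
    """Register the to-be-registered cloud application server only when its
    device index information satisfies the processing quality condition."""
    if network_quality >= PARAM_THRESHOLD and device_version in QUALITY_VERSIONS:
        stored_device_ids.add(device_id)  # becomes a registered server
        return True
    return False                          # processing quality condition not met
```

At send time, matching the to-be-confirmed device identifier then amounts to a set-membership test (`device_id in stored_device_ids`).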
• Because the stored device identifier set holds the stored device identifiers corresponding to different registered cloud application servers, after obtaining the to-be-confirmed device identifier sent by the first client, the stored device identifier set can be obtained and the to-be-confirmed device identifier matched against it.
• If a match exists, the subsequent S703 may be performed; if not, the subsequent S704 may be performed.
• If a match exists, the bound cloud application server indicated by the to-be-confirmed device identifier belongs to the registered cloud application servers; the bound cloud application server indicated by the to-be-confirmed device identifier is determined as the target cloud application server, and the first object image data is sent to the target cloud application server.
• That is, if there is a stored device identifier matching the to-be-confirmed device identifier among the M stored device identifiers, it can be determined that the bound cloud application server indicated by the to-be-confirmed device identifier belongs to the registered cloud application servers; the bound cloud application server can then be determined as the target cloud application server, and the first object image data can be sent to it.
• If there is no matching stored device identifier, the bound cloud application server indicated by the to-be-confirmed device identifier is an unregistered cloud application server; since the bound cloud application server is not registered, the service server cannot send the first object image data to it.
• In this case, the service server can generate device abnormality prompt information (which may refer to server-unregistered prompt information) and return it to the first client; based on the device abnormality prompt information, the first client can send a registration notification to its corresponding bound cloud application server, and the bound cloud application server can apply to the service server for registration based on the registration notification.
• It can be seen that this application determines the correspondence between the client and the cloud application server through the pre-stored device identifier set: the client sends the image data together with the device identifier of its corresponding cloud game server, and whether that cloud application server has been registered can also be verified, so that the user picture collected by the client is sent to the correct, registered cloud application server, improving the correctness of the user picture displayed by the cloud application.
• With a single receive buffer, the processing can include the following three steps: the target cloud application server first writes the portrait data into the buffer, then reads and renders the portrait data, and only after the rendering is completed continues to receive portrait data and write it into the buffer.
• Because the amount of received data is large, the target cloud application server consumes a lot of time on buffer allocation and data copies, seriously delaying the subsequent reception of portrait data and resulting in a large time delay. It can be seen that although, as described above, the frame-skipping processing of the service server can reduce the image receiving delay on the service server side, a delay problem still exists on the cloud application server side.
• To reduce this delay, the present application provides a data processing method that allocates double buffers on the cloud application server side.
  • FIG. 8 is a schematic flowchart of a data processing method provided by an embodiment of the present application.
  • the process is executed by a computer device, which may be a target cloud application server such as a cloud game server.
  • This process may correspond to the data processing process of the target cloud application server after receiving the object image data.
  • the process may include at least the following S801-S804:
• After the service server extracts the first object area and obtains the first object image data, it can send the first object image data to the target cloud application server; the target cloud application server can store the first object image data in the first buffer, whose working state is the storage state, in the buffer set.
• Here, the target cloud application server can pre-allocate two receiving buffers (Buffer) of the same size and set the working state of one of them to the storage state; that buffer is in effect a storage buffer, and the target cloud application server stores received data into it. At the same time, the working state of the other buffer can be set to the reading state; that buffer is in effect a read buffer, and the target cloud application server reads from it when reading and rendering data.
• The specific method of allocating double buffers to generate a buffer set can be: pre-allocate the first buffer and the second buffer; then set the initial pointer identifier of the first buffer to the storage pointer identifier and the initial pointer identifier of the second buffer to the read pointer identifier. It should be understood that the working state of the first buffer carrying the storage pointer identifier is the storage state, and the working state of the second buffer carrying the read pointer identifier is the reading state. Subsequently, a buffer set can be generated from the first buffer, whose working state is the storage state, and the second buffer, whose working state is the reading state. At this time, when the first object image data is received, because the working state of the first buffer is the storage state, the first object image data is stored in the first buffer. A compact sketch of such a double buffer follows below.
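A compact sketch of the mechanism, with two Python lists standing in for the pre-allocated receive buffers and two indices standing in for the storage/read pointer identifiers; this is an illustration only, not the patent's implementation:

```python
class DoubleBuffer:
    """Two pre-allocated buffers whose storage/read roles are exchanged by
    swapping pointer identifiers instead of copying any data."""

    def __init__(self):
        self._buffers = [[], []]  # first buffer, second buffer
        self._store = 0           # index carrying the storage pointer identifier
        self._read = 1            # index carrying the read pointer identifier

    def store(self, object_image_data):
        """Receive: append into whichever buffer is in the storage state."""
        self._buffers[self._store].append(object_image_data)

    def read_all(self):
        """Read: take everything out of the buffer in the reading state."""
        data, self._buffers[self._read] = self._buffers[self._read], []
        return data

    def swap(self):
        """Switch the pointer identifiers; no allocation and no copy."""
        self._store, self._read = self._read, self._store
```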
• That is, the initial working state of the first buffer is the storage state, and the initial working state of the second buffer is the reading state.
• While receiving and storing the first object image data, the target cloud application server can synchronously read and render the data already stored in the second buffer (which may be referred to as stored image data). When the second buffer does not contain unprocessed object image data, then after the first object image data is stored in the first buffer, the storage pointer identifier of the first buffer is switched to the read pointer identifier, and the read pointer identifier of the second buffer is switched to the storage pointer identifier; the working states of the first buffer and the second buffer are thereby exchanged. In other words, the current working state of the first buffer becomes the reading state, and the current working state of the second buffer becomes the storage state.
• Afterwards, the first object image data can be read from the first buffer and rendered, while the second object image data continues to be received and stored in the second buffer.
  • the target cloud application server can realize the synchronization of reading and receiving, and can receive the rest of the data without waiting for the completion of rendering, which can greatly reduce the receiving delay.
• For example, assume again that the initial working state of the first buffer is the storage state and the initial working state of the second buffer is the reading state. While the target cloud application server receives and stores the first object image data, it can synchronously read and render the data already stored in the second buffer (the stored image data). If image data is stored in the second buffer but all of it has been read and rendered, the processed image data in the second buffer can be cleared; it can then be determined that the second buffer contains no unprocessed image data, and by switching the pointer identifiers, the working state of the first buffer can be adjusted to the reading state and the working state of the second buffer to the storage state. The first object image data is then read from the first buffer, now in the reading state, and rendered.
• The second object image data is the image data included in the second object area. The second object area is obtained by the business server by performing image recognition processing on the second image data after the first object area has been extracted, and is the area where the object is located in the second image data. The second image data is the image data with the latest receiving time stamp obtained from the updated receiving queue when the service server has extracted the first object area; the second image data in the updated receiving queue is obtained continuously from the first client while the service server performs image recognition processing on the first image data.
• In other words, the second object area may refer to the area extracted by the above-mentioned service server after performing image recognition processing on the second image data, and the second object image data may refer to the image data contained in the second object area of the second image data. The specific extraction method may be the same as the method for extracting the first object area, and will not be repeated here.
• During the rendering of the first object image data, the second object image data sent by the service server can be received; that is, while reading data, the target cloud application server can synchronously receive data and store it in the second buffer, which is currently in the storage state, thus reducing the delay.
  • the working state of the first buffer can be adjusted to the storage state, and the working state of the second buffer can be adjusted to the reading state.
• Afterwards, the second object image data can be read from the second buffer, now in the reading state, and rendered; the remaining object image data can also be received synchronously and stored in the first buffer, now in the storage state.
  • the specific implementation manner of adjusting the working state of the first buffer to the storage state and adjusting the working state of the second buffer to the reading state may also be a switching manner of pointer identification.
• The specific method can be: when the first rendering data corresponding to the first object area is obtained, the read pointer identifier corresponding to the first buffer and used to represent the reading state can be obtained and switched to the storage pointer identifier (the working state of the first buffer carrying the storage pointer identifier is the storage state); the storage pointer identifier of the second buffer is switched to the read pointer identifier (the working state of the second buffer carrying the read pointer identifier is the reading state). A usage sketch continuing the double-buffer example above follows.
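Continuing the `DoubleBuffer` sketch above, the switching around render completion might be used like this; `render` is a placeholder for the actual rendering step:

```python
render = print  # placeholder for the actual rendering step

buffers = DoubleBuffer()
buffers.store("first object image data")    # first buffer is in the storage state

buffers.swap()                              # first buffer -> reading state
for item in buffers.read_all():
    render(item)
buffers.store("second object image data")   # received synchronously while rendering

buffers.swap()                              # roles exchanged once rendering is done
```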
  • FIG. 9 is a schematic diagram of a state change of a double buffer provided by an embodiment of the present application.
• Assume the working state of the buffer 900a is the reading state; the object image data stored in the buffer 900a may include object image data 1 to object image data 10 , denoted by numbers 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 in sequence in FIG. 9 .
• Object image data 1 to object image data 7 are data that have been read, and object image data 8 to object image data 10 are data to be read.
  • the working state of the buffer 900b is a storage state.
• The target cloud application server can continuously receive object image data and store it in the buffer 900b; the received data in buffer 900b includes object image data 11 to object image data 14 (there are still 6 remaining spaces in buffer 900b for receiving object image data).
• When the data in the buffer 900a has all been read (that is, object image data 8 to object image data 10 have been read), the buffer 900a can be emptied; at this time, the received data in the buffer 900b includes object image data 11 to object image data 20 (indicated by reference numerals 11 , 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 in order in FIG. 9 ).
• Then, the working state of the buffer 900a can be switched to the storage state and the working state of the buffer 900b to the reading state. The target cloud application server can thus read data from the buffer 900b (for example, reading sequentially from object image data 11); at the same time, it can synchronously receive object image data and store it into the buffer 900a. For example, after new object image data 1 to new object image data 3 are received, they can be stored in the buffer 900a.
• It should be noted that the amounts of data in buffer 900a and buffer 900b are examples for ease of understanding and have no practical reference significance.
• Through the frame skipping of the business server, the image receiving delay on the business server side can be reduced and the image recognition efficiency improved; through the double-buffer allocation of the target cloud application server, no data copying is required, only the working states of the two buffers need to be switched (such as by pointer switching), and buffers do not need to be allocated every time. In addition, receiving data and processing data can proceed at the same time without waiting for each other, which reduces the delay. That is to say, on the basis of the delay reduction on the business server side, the delay can be further reduced by setting up the double buffer.
  • FIG. 10 is a system architecture diagram provided by an embodiment of the present application.
  • the system architecture diagram shown in FIG. 10 takes a cloud application as an example, and the cloud application server corresponding to the cloud application may be a cloud game server.
• The system architecture can include a client cluster (which may include client 1, client 2, ..., client n), a business server (which may include a streaming sub-server and an image recognition sub-server: the streaming sub-server can be used to receive the encoded image file uploaded by a client and decode it, and the image recognition sub-server can perform image recognition processing on the decoded image data produced by the streaming sub-server), and a cloud game server.
• Client cluster: when each client runs a cloud application (such as a cloud game application), it can display the screen of the cloud application (such as the screen of the cloud game application).
• When the cloud game application is running, the user picture can be collected through the camera and encoded, and the obtained image data is uploaded to the streaming sub-server in the service server (the streaming sub-server can be any server with a data receiving function and a decoding function; it is mainly used to receive the image encoding file uploaded by the client and decode it).
• Streaming sub-server: it can receive the image data uploaded by the client and perform decoding processing to obtain a decoded image in the original image format (such as the YUV format), which can be sent to the image recognition sub-server.
• Image recognition sub-server: it can convert the decoded image from the YUV format to the RGB format, then identify and extract the user portrait data or the human body key points in the image, and send the user portrait or human body key points to the cloud game server; a conversion sketch follows below.
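As a sketch of this format conversion step, a YUV-to-RGB transform over a NumPy array is shown below; the patent does not specify the exact coefficients or sampling, so YUV444 input and BT.601 full-range coefficients are assumptions:

```python
import numpy as np

def yuv_to_rgb(yuv: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 YUV444 frame to RGB (BT.601, full range)."""
    y, u, v = (yuv[..., i].astype(np.float32) for i in range(3))
    u -= 128.0
    v -= 128.0
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 255.0).astype(np.uint8)
```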
  • the streaming sub-server and the image recognition sub-server can jointly form a service server, so that the service server can have image decoding and image recognition functions.
• The streaming sub-server and the image recognition sub-server can also be used as independent servers, each performing its corresponding task (that is, the streaming sub-server receives and decodes the encoded image file, and the image recognition sub-server performs image recognition processing on the decoded data). It should be understood that, to reduce the data receiving delay, both the streaming sub-server and the image recognition sub-server can perform frame skipping processing.
• Cloud game server: the cloud game server can refer to the cloud application server corresponding to the client, that is, the server that provides the corresponding computing services for the cloud game.
  • the cloud game server can receive user portrait data or key points of the human body, and can render and display the user portrait data.
• Alternatively, the cloud game server can use the human body key points to manipulate the virtual cartoon figure in the cloud game application to realize animation (that is, instead of projecting the user's portrait into the cloud game application, the virtual cartoon figure is operated to synchronize with the user's real action state).
  • FIG. 11 is a schematic flowchart of a system provided by an embodiment of the present application. This process may correspond to the system architecture shown in FIG. 10 . As shown in Figure 11, the process may include S31-S36:
  • S31 The client collects camera images.
  • the client encodes the collected image.
  • the streaming sub-server decodes the encoded data.
  • the image recognition sub-server converts the decoded image from YUV format to RGB format.
• The user portrait or human body key points are identified and sent to the cloud game server.
• It should be noted that when the client is collecting an object, if another object is collected at the same time (which can be understood as another user entering the frame), the client can generate object selection prompt information, and the user can choose which one is to be the final collection object. Alternatively, the client can determine this automatically based on the clarity and the area occupied by each object. For example, if the client captures object 1 and object 2 at the same time, but object 2 is far from the lens and its captured picture is unclear while object 1 is relatively close to the lens and its captured picture is clear, the client can automatically take object 1 as the final collection object.
  • FIG. 12 is an interaction flowchart provided by an embodiment of the present application.
  • the interaction process may be an interaction process between the client, the streaming sub-server, the image recognition sub-server, and the cloud application server (taking the cloud game server as an example).
  • the interaction process may at least include the following S41-S54:
  • the client can establish a connection with the streaming sub-server (for example, establish a Websocket persistent connection).
• The cloud game server (which can be integrated with the cloud game software development kit (SDK)) can establish a connection with the image recognition sub-server (for example, establish a Transmission Control Protocol (TCP) connection); a minimal connection sketch follows below.
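For illustration only, the TCP side of this setup could be opened as below; the host name, port, and payload are invented, and the real handshake is whatever the cloud game SDK defines:

```python
import socket

# hypothetical address of the image recognition sub-server
sock = socket.create_connection(("image-recognition.example.com", 9000))
sock.sendall(b"hello")  # placeholder payload; the SDK defines the real protocol
```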
  • the cloud game server sends a collection notification to its corresponding client.
  • the acquisition notification may be a notification message for the start of image acquisition.
  • the client sends a streaming message to the streaming sub-server.
• Specifically, the client can turn on the camera based on the collection notification and notify the streaming sub-server of the device ID of the cloud game server, the ID of the collected user, and the width and height of the image captured by the camera, so that the streaming sub-server can prepare to receive data. That is, the streaming message may include the device ID of the cloud game server, the ID of the collected user, and the width and height of the camera image; a possible shape for this message is sketched below.
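A possible shape for the streaming message, serialized as JSON; every field name here is a hypothetical stand-in for whatever encoding the system actually uses:

```python
import json

push_stream_message = json.dumps({
    "device_id": "cloud-game-server-01",  # device ID of the cloud game server
    "user_id": "user_42",                 # ID of the collected user
    "width": 1280,                        # width of the camera image
    "height": 720,                        # height of the camera image
})
```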
  • the streaming sub-server sends a streaming message to the image recognition sub-server.
• After the streaming sub-server receives the streaming message from the client, it can establish a TCP connection with the image recognition sub-server and forward the streaming message to the image recognition sub-server.
• The client sends encoded data to the streaming sub-server.
  • the streaming sub-server sends the decoded data to the image recognition sub-server.
  • the image recognition sub-server converts the format of the decoded data, and performs image recognition.
  • the image recognition sub-server sends the recognition data to the cloud game server.
  • the cloud game server renders the identification data to obtain rendering data.
  • the cloud game server sends the rendering data to the client.
  • FIG. 13 is a schematic structural diagram of a data processing device provided by an embodiment of the present application.
  • the data processing device may be a computer program (including program code) running in a computer device, for example, the data processing device is an application software; the data processing device may be used to execute the method shown in FIG. 3 .
  • the data processing device 1 may include: a data acquisition module 11 , an image recognition module 12 , a queue update module 13 and an area sending module 14 .
• The data acquisition module 11 is configured to acquire the first image data sent by the first client and store the first image data in the receiving queue; the first image data is image data containing the object, obtained by the first client when running the cloud application;
  • An image recognition module 12 configured to perform image recognition processing on the first image data in the receiving queue
  • the queue update module 13 is used to store the continuously obtained second image data sent by the first client in the receiving queue during the image recognition processing of the first image data, so as to obtain an updated receiving queue;
  • the area sending module 14 is used to send the first object image data contained in the first object area to the target cloud application server when the first object area where the object is located in the first image data is extracted through image recognition processing;
  • the target cloud application server is used to render the first object image data to obtain rendering data, and send the rendering data to the first client;
  • the area sending module 14 is further configured to synchronously perform image recognition processing on the second image data with the latest receiving time stamp in the update receiving queue.
• For the specific implementation manners of the data acquisition module 11, the image recognition module 12, the queue update module 13, and the area sending module 14, refer to the description of S101-S103 in the embodiment corresponding to FIG. 3 above; details will not be repeated here.
  • the data acquisition module 11 may include: an image receiving unit 111 and a storage unit 112 .
  • the image receiving unit 111 is configured to receive the first image data sent by the first client; the first image data is the data obtained after the first client encodes the original image frame; the original image frame is the first client Collected when running cloud applications;
  • the storage unit 112 is configured to acquire a receiving time stamp of receiving the first image data, and associate and store the first image data and the receiving time stamp in a receiving queue.
  • the image recognition module 12 may include: a data decoding unit 121 , a format conversion unit 122 and an image recognition unit 123 .
  • a data decoding unit 121 configured to decode the first image data to obtain decoded image data in an original image format
  • a format conversion unit 122 configured to perform format conversion on the decoded image data to obtain an original image frame with a standard image format
  • the image recognition unit 123 is configured to perform image recognition processing on the original image frame with a standard image format.
• For specific implementations of the data decoding unit 121 , the format conversion unit 122 and the image recognition unit 123 , refer to the description of S101 in the embodiment corresponding to FIG. 3 ; details will not be repeated here.
  • the image recognition unit 123 may include: a key point recognition subunit 1231 , a curve connection subunit 1232 and an area determination subunit 1233 .
  • a key point identification subunit 1231 configured to identify object edge key points of the object in the original image frame
  • the curve connection subunit 1232 is used to connect the key points of the object edge to obtain the object edge curve of the object;
  • the area determining subunit 1233 is configured to determine the initial object area where the object is located in the original image frame from the area covered by the object edge curve in the original image frame; and determine the first object area according to the initial object area.
  • the specific implementation manners of the key point identification subunit 1231, the curve connection subunit 1232, and the region determination subunit 1233 can refer to the description of S102 in the embodiment corresponding to FIG. 3 above, and will not be repeated here.
  • the image recognition unit 123 is also used to obtain the object recognition configuration information for the object, and the object recognition configuration parts indicated by the object recognition configuration information, and match the object recognition configuration parts with the key parts of the object; if the object recognition If the configuration part matches the key part of the object, the step of determining the first object region according to the initial object region is performed; if the object recognition configuration part does not match the key part of the object, it is determined that the first object region cannot be extracted through image recognition processing.
• The image recognition unit 123 is also configured to determine the image data with the earliest receiving time stamp in the updated receiving queue as the image data to be recognized, and to perform image recognition processing on the image data to be recognized; when the to-be-processed object area of the object in the image data to be recognized is extracted through the image recognition processing, the area sending module 14 is also configured to send the to-be-processed object area to the target cloud application server.
• Optionally, the image recognition unit 123 is specifically configured to: obtain the object key parts presented by the initial object area; if the object key parts have part integrity, determine the initial object area as the first object area; if the object key parts do not have part integrity, obtain N sample image frames in the sample database (N is a positive integer), acquire the sample image frame to be processed corresponding to the object from the N sample image frames, and determine the first object area according to the sample image frame to be processed and the initial object area.
• Optionally, the image recognition unit 123 is further specifically configured to: obtain the overall part information in the sample image frame to be processed; determine, according to the object key parts, the to-be-fused part area in the overall part information; and fuse the to-be-fused part area with the initial object area to obtain the first object area.
  • the first image data carries a device identifier to be confirmed;
  • the device identifier to be confirmed is a device identifier bound to a cloud application server, and the bound cloud application server has a binding relationship with the first client;
  • the area sending module 14 includes: a set acquiring unit 141 and an identifier matching unit 142 .
  • the set acquisition unit 141 is used to acquire a set of stored device identifiers;
  • the stored device identifier set includes M stored device identifiers, one stored device identifier corresponds to a registered cloud application server, and M is a positive integer;
  • the identification matching unit 142 is configured to determine that the bound cloud application server indicated by the equipment identification to be confirmed belongs to a registered cloud application server if there is an existing equipment identification matching the equipment identification to be confirmed among the M stored equipment identifications, The bound cloud application server indicated by the identification of the device to be confirmed is determined as the target cloud application server, and the first object image data is sent to the target cloud application server.
  • the data processing device 1 may further include: a registration request receiving module 15 , an indicator detection module 16 and an identification adding module 17 .
  • the registration request receiving module 15 is used to receive the registration request sent by the cloud application server to be registered; the registration request is generated by the cloud application server to be registered after receiving the application opening notification sent by the second client; the application opening notification is the second Generated by the client in response to the application start operation for the cloud application;
  • the index detection module 16 is used for detecting the device index information of the cloud application server to be registered according to the registration request;
• The identifier adding module 17 is configured to: when the device index information satisfies the processing quality condition, obtain the to-be-stored device identifier of the to-be-registered cloud application server, store the to-be-stored device identifier into the stored device identifier set, convert the to-be-registered cloud application server into a registered cloud application server, and convert the to-be-stored device identifier into a stored device identifier.
  • the device index information includes network quality parameters and device version
  • the index detection module 16 may include: a parameter acquisition unit 161 and an index determination unit 162 .
  • the parameter obtaining unit 161 is used to obtain the network quality parameter and device version of the cloud application server to be registered according to the registration request;
  • An index determining unit 162 configured to determine that the device index information meets the processing quality condition if the network quality parameter reaches the parameter threshold and the device version matches the quality standard version;
  • the index determining unit 162 is further configured to determine that the device index information does not meet the processing quality condition if the network quality parameter does not reach the parameter threshold, or the device version does not match the quality standard version.
• When the client (such as the first client) obtains the first image data containing the object, it can send the first image data to the relevant computer equipment (such as the service server); image recognition processing does not need to be performed locally on the client, and the first image data can be processed by a business server with higher computing power, which can improve the efficiency and accuracy of image recognition.
• At the same time, the business server can store the received first image data in the receiving queue, continuously obtain the second image data synchronously from the first client while performing image recognition processing on the first image data, and store the second image data in the receiving queue to obtain an updated receiving queue.
• That is, while the business server performs image recognition processing on the first image data, it does not suspend the reception of the second image data; through the receiving queue, image processing and image reception proceed synchronously, which reduces the image transmission delay.
• When the first object area is extracted, the service server may send the first object image data contained in the first object area to the target cloud application server, which renders it and sends the rendering data obtained by rendering to the first client for display in the cloud application.
• Synchronously, the service server may acquire the second image data with the latest receiving time stamp in the updated receiving queue and continue processing it.
• That is to say, the next step is to obtain the image data with the latest receiving time stamp from the receiving queue for processing, instead of recognizing the image data one by one in the time order of their receiving time stamps; this improves the recognition efficiency of the image data.
• Because image recognition is performed on the image data with the latest receiving time stamp, the recognized data, once rendered and displayed, also matches the current behavior of the object.
• In all, the present application can improve image recognition efficiency, reduce image transmission delay, and ensure that the virtual behavior of the virtual object displayed by the cloud application matches the current behavior state of the object.
  • FIG. 14 is a schematic structural diagram of another data processing device provided by an embodiment of the present application.
  • the data processing device can be a computer program (including program code) running in the computer equipment, for example, the data processing device is an application software; the data processing device can be used to execute the method shown in FIG. 8 .
  • the data processing device 2 may include: an area storage module 21 , an area rendering module 22 , an area receiving module 23 and a state adjustment module 24 .
• The area storage module 21 is configured to receive the first object image data sent by the service server and store it in the first buffer, whose working state is the storage state, in the buffer set; the first object image data is the image data contained in the first object area, and the first object area is the area where the object is located in the first image data, obtained after the business server performs image recognition processing on the first image data; the first image data is sent by the first client and is image data containing the object, obtained by the first client when running the cloud application;
• The area rendering module 22 is configured to, when the second buffer in the buffer set whose working state is the reading state does not contain unprocessed object image data, adjust the working state of the first buffer to the reading state and the working state of the second buffer to the storage state, read the first object image data from the first buffer now in the reading state, and render the first object image data;
• The area receiving module 23 is configured to receive the second object image data sent by the service server during the rendering of the first object image data, and store the second object image data in the second buffer whose working state is the storage state; the second object image data is the image data included in the second object area, obtained by the service server by performing image recognition processing on the second image data after extracting the first object area; the second object area is the area where the object is located in the second image data; the second image data is the image data with the latest receiving time stamp obtained by the service server from the receiving queue when extracting the first object area; the second image data in the receiving queue is obtained continuously from the first client while the business server performs image recognition processing on the first image data;
  • the state adjustment module 24 is configured to, when the rendering data corresponding to the first object image data is acquired, adjust the working state of the first buffer to the storage state and the working state of the second buffer to the read state, read the second object image data from the second buffer whose working state is now the read state, and render the second object image data.
  • the state adjustment module 24 may include: an identifier obtaining unit 241 and an identifier switching unit 242.
  • the identifier obtaining unit 241 is configured to, when the rendering data corresponding to the first object image data is obtained, obtain the read pointer identifier of the first buffer, which represents the read state, and the storage pointer identifier of the second buffer, which represents the storage state;
  • the identifier switching unit 242 is configured to switch the read pointer identifier of the first buffer to a storage pointer identifier; the working state of the first buffer holding the storage pointer identifier is the storage state;
  • the identifier switching unit 242 is further configured to switch the storage pointer identifier of the second buffer to a read pointer identifier; the working state of the second buffer holding the read pointer identifier is the read state.
  • the data processing device 2 may further include: a buffer allocation module 25, an identifier setting module 26 and a set generation module 27.
  • the buffer allocation module 25 is configured to allocate the first buffer and the second buffer;
  • the identifier setting module 26 is configured to set the initial pointer identifier of the first buffer to the storage pointer identifier and the initial pointer identifier of the second buffer to the read pointer identifier; the working state of the first buffer holding the storage pointer identifier is the storage state, and the working state of the second buffer holding the read pointer identifier is the read state;
  • the set generation module 27 is configured to generate the buffer set from the first buffer whose working state is the storage state and the second buffer whose working state is the read state.
  • through the frame-skipping processing of the service server, the image receiving delay on the service server side can be reduced and the image recognition efficiency improved; through the double-buffer allocation processing of the cloud game server, no data needs to be copied, only the working states of the two buffers need to be switched, and there is no need to allocate a buffer each time (a minimal sketch of this switch follows below).
  • receiving data and processing data (such as reading and rendering it) can proceed at the same time without waiting on each other, which reduces delay. That is to say, on the basis of the delay reduction on the service server side, the double-buffer arrangement reduces the delay further.
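  • as an illustration of this double-buffer switch, the sketch below pre-allocates two buffers and exchanges their storage/read roles by swapping pointer identifiers. The names are hypothetical and synchronization is omitted; it is a sketch of the idea, not the embodiment's reference code.

```python
class DoubleBuffer:
    """Two fixed buffers whose roles swap via pointer identifiers (sketch)."""

    def __init__(self):
        # Allocate both buffers once; no per-frame allocation afterwards.
        self._buffers = [[], []]
        self._store_idx = 0  # buffer currently holding the storage pointer identifier
        self._read_idx = 1   # buffer currently holding the read pointer identifier

    def store(self, object_image_data):
        # Receiving side writes into the buffer whose working state is "storage".
        self._buffers[self._store_idx].append(object_image_data)

    def read_all(self):
        # Rendering side reads from the buffer whose working state is "read".
        return list(self._buffers[self._read_idx])

    def swap(self):
        # Called when the read buffer holds no unprocessed object image data
        # (or its rendering data has been obtained): clear the consumed data
        # and exchange the two identifiers; no data is copied between buffers.
        # A production version would synchronize swap() against store()/read_all().
        self._buffers[self._read_idx].clear()
        self._store_idx, self._read_idx = self._read_idx, self._store_idx
```

  • because only the two identifiers are exchanged, receiving into one buffer and rendering from the other can proceed at the same time, which is the further delay reduction the embodiment describes.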
  • FIG. 15 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • the device 1 in the embodiment corresponding to FIG. 13, or the device 2 in the embodiment corresponding to FIG. 14, can be applied to the computer device 8000. The computer device 8000 may include: a processor 8001, a network interface 8004 and a memory 8005; in addition, the computer device 8000 also includes a user interface 8003 and at least one communication bus 8002.
  • the communication bus 8002 is used to realize connection and communication between these components.
  • the user interface 8003 may include a display screen (Display) and a keyboard (Keyboard); optionally, the user interface 8003 may also include standard wired and wireless interfaces.
  • the network interface 8004 may include a standard wired interface and a wireless interface (such as a Wi-Fi interface).
  • the memory 8005 may be a high-speed RAM memory, or a non-volatile memory, such as at least one disk memory.
  • the memory 8005 may also be at least one storage device located away from the aforementioned processor 8001.
  • as a computer-readable storage medium, the memory 8005 may include an operating system, a network communication module, a user interface module, and a device control application program.
  • in the computer device 8000, the network interface 8004 can provide a network communication function, the user interface 8003 is mainly used to provide an input interface for the user, and the processor 8001 can be used to call the device control application program stored in the memory 8005 to implement the data processing methods provided in the foregoing embodiments.
  • the computer device 8000 described in the embodiments of the present application can execute the description of the data processing method in the embodiments corresponding to FIG. 3 to FIG. 8, and can also execute the description of the data processing device 1 in the embodiment corresponding to FIG. 13 or of the data processing device 2 in the embodiment corresponding to FIG. 14, which will not be repeated here.
  • the description of the beneficial effects of adopting the same method is likewise not repeated.
  • the embodiments of the present application also provide a computer-readable storage medium, which stores the computer program executed by the aforementioned data processing computer device 8000; the computer program includes program instructions.
  • when the processor executes the program instructions, it can execute the description of the data processing method in the embodiments corresponding to FIG. 3 to FIG. 8, so the details are not repeated here.
  • the description of the beneficial effects of adopting the same method is likewise not repeated.
  • the computer-readable storage medium may be the data processing apparatus provided in any of the foregoing embodiments, or an internal storage unit of the computer device, such as a hard disk or memory of the computer device.
  • the computer-readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the computer device.
  • the computer-readable storage medium may also include both an internal storage unit of the computer device and an external storage device.
  • the computer-readable storage medium is used to store the computer program and other programs and data required by the computer device.
  • the computer-readable storage medium can also be used to temporarily store data that has been output or will be output.
  • One aspect of the present application provides a computer program product or computer program, where the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the computer device executes the method provided in one aspect of the embodiments of the present application.
  • each flow and/or block of the method flowcharts and/or structural diagrams, and combinations of flows and/or blocks in the flowcharts and/or structural diagrams, can be implemented by computer program instructions.
  • these computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing equipment to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing equipment produce a device for realizing the functions specified in one or more flows of the flowchart and/or one or more blocks of the structural diagram.
  • these computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction device, and the instruction device implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the structural diagram.
  • these computer program instructions can also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce a computer-implemented process, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the structural diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Information Transfer Between Computers (AREA)
  • Processing Or Creating Images (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Image Analysis (AREA)

Abstract

This application discloses a data processing method, apparatus and device, and a readable storage medium. The method includes: acquiring first image data sent by a first client and storing the first image data in a receiving queue; performing image recognition processing on the first image data in the receiving queue and, during that processing, storing second image data continuously acquired from the first client into the receiving queue to obtain an updated receiving queue; and, when the first object area where the object is located in the first image data is extracted through the image recognition processing, sending the first object image data contained in the first object area to the target cloud application server while synchronously performing image recognition processing on the second image data with the latest receive timestamp in the updated receiving queue. This application can reduce image transmission delay and improve image recognition efficiency.

Description

一种数据处理方法、装置、设备以及可读存储介质
本申请要求于2021年09月24日提交中国专利局、申请号202111123508.7、申请名称为“一种数据处理方法、装置、设备以及可读存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及计算机技术领域,尤其涉及一种数据处理方法、装置、设备以及可读存储介质。
背景技术
电子设备的相关技术的迅猛发展和互联网的普及,使得依靠电子设备而存在和运行的游戏行业得到了突飞猛进的发展机会。尤其是以智能手机、平板电脑等为代表的智能终端出现之后,游戏行业的发展潜能得到了更大的凸显。
为了给用户提供沉浸式体验,可以根据用户人体形象在云游戏中创建对应的虚拟对象(例如,虚拟动画对象),并在云游戏中显示该虚拟对象,也就是说,可以通过虚拟对象将用户置身于虚拟的云游戏场景中,给予用户对于云游戏的沉浸式体验。通常情况下,对于此过程,通常是终端通过摄像头采集到用户画面后,直接在终端上进行用户人像的识别与提取,从而得到对应的虚拟对象并显示。
由于终端的计算能力并不高,很可能产生因计算能力不足而导致图像识别的效率低下的问题,进而导致终端在将人像识别结果发送至云端的过程中,也会存在较大的时延,从而会使得游戏在显示虚拟对象时也存在时延,导致游戏所显示的虚拟对象的虚拟行为与用户当前的行为状态并不匹配。
发明内容
本申请实施例提供一种数据处理方法、装置、设备以及可读存储介质,可以减少图像传输时延,提高图像识别效率。
本申请实施例一方面提供了一种数据处理方法,该方法由计算机设备执行,该方法包括:
获取第一客户端发送的第一图像数据,将第一图像数据存储至接收队列中;第一图像数据是第一客户端在运行云应用时,所获取到的包含对象的图像数据;
对接收队列中的第一图像数据进行图像识别处理,在第一图像数据的图像识别处理过程中,将持续获取到的第一客户端所发送的第二图像数据,存储至接收队列中,得到更新接收队列;
当通过图像识别处理提取出对象在第一图像数据中所处的第一对象区域时,将第一对象区域所包含的第一对象图像数据发送至目标云应用服务器,同步对更新接收队列中具有最晚接收时间戳的第二图像数据进行图像识别处理;目标云应用服务器用于对第一对象图像数据进行渲染得到渲染数据,并将渲染数据发送至第一客户端。
本申请实施例一方面提供了一种数据处理装置,该装置部署在计算机设备上,该装置包括:
数据获取模块,用于获取第一客户端发送的第一图像数据,将第一图像数据存储至接收队列中;第一图像数据是第一客户端在运行云应用时,所获取到的包含对象的图像数据;
图像识别模块,用于对接收队列中的第一图像数据进行图像识别处理;
队列更新模块,用于在第一图像数据的图像识别处理过程中,将持续获取到的第一客户端所发送的第二图像数据,存储至接收队列中,得到更新接收队列;
区域发送模块,用于当通过图像识别处理提取出对象在第一图像数据中所处的第一对象区域时,将第一对象区域所包含的第一对象图像数据发送至目标云应用服务器;目标云应用服务器用于对第一对象图像数据进行渲染得到渲染数据,并将渲染数据发送至第一客户端;
区域发送模块,还用于同步对更新接收队列中具有最晚接收时间戳的第二图像数据进行图像识别处理。
本申请实施例一方面提供了另一种数据处理方法,该方法由计算机设备执行,该方法包括:
接收业务服务器发送的第一对象图像数据,将第一对象图像数据存储至缓冲区集合中的工作状态处于存储状态的第一缓冲区中;第一对象图像数据为第一对象区域所包含的图像数据,第一对象区域为业务服务器对第一图像数据进行图像识别处理后,得到的对象在第一图像数据中所处的区域;第一图像数据是由第一客户端发送至业务服务器的,第一图像数据是第一客户端在运行云应用时,所获取到的包含对象的图像数据;
当缓冲区集合中工作状态处于读取状态的第二缓冲区未包含未处理对象图像数据时,将第一缓冲区的工作状态调整为读取状态,将第二缓冲区的工作状态调整为存储状态,从工作状态处于读取状态的第一缓冲区中读取第一对象区域,将第一对象区域进行渲染处理;
在第一对象区域的渲染过程中,接收业务服务器发送的第二对象图像数据,将第二对象图像数据存储至工作状态处于存储状态的第二缓冲区中;第二对象图像数据是第二对象区域所包含的图像数据,第二对象区域为业务服务器在提取出第一对象区域后对第二图像数据进行图像识别处理所得到,第二对象区域为对象在第二图像数据中所处的区域;第二图像数据为业务服务器在提取出第一对象区域时,从更新接收队列中所获取到的具有最晚接收时间戳的图像数据;更新接收队列中的第二图像数据是业务服务器对第一图像数据进行图像识别处理的过程中,从第一客户端所持续获取得到;
在获取到第一对象图像数据对应的渲染数据时,将第一缓冲区的工作状态调整为存储状态,将第二缓冲区的工作状态调整为读取状态,从工作状态处于读取状态的第二缓冲区中读取第二对象图像数据,将第二对象图像数据进行渲染处理。
本申请实施例一方面提供了另一种数据处理装置,该装置部署在计算机设备上,该装置包括:
区域存储模块,用于接收业务服务器发送的第一对象图像数据,将第一对象图像数据存储至缓冲区集合中的工作状态处于存储状态的第一缓冲区中;第一对象图像数据为第一对象区域所包含的图像数据,第一对象区域为业务服务器对第一图像数据进行图像识别处理后,得到的对象在第一图像数据中所处的区域;第一图像数据是由第一客户端发送至业 务服务器的,第一图像数据是第一客户端在运行云应用时,所获取到的包含对象的图像数据;
区域渲染模块,用于当缓冲区集合中工作状态处于读取状态的第二缓冲区未包含未处理对象图像数据时,将第一缓冲区的工作状态调整为读取状态,将第二缓冲区的工作状态调整为存储状态,从工作状态处于读取状态的第一缓冲区中读取第一对象区域,将第一对象区域进行渲染处理;
区域接收模块,用于在第一对象区域的渲染过程中,接收业务服务器发送的第二对象图像数据,将第二对象图像数据存储至工作状态处于存储状态的第二缓冲区中;第二对象图像数据是第二对象区域所包含的图像数据,第二对象区域为业务服务器在提取出第一对象区域后对第二图像数据进行图像识别处理所得到,第二对象区域为对象在第二图像数据中所处的区域;第二图像数据为业务服务器在提取出第一对象区域时,从更新接收队列中所获取到的具有最晚接收时间戳的图像数据;更新接收队列中的第二图像数据是业务服务器对第一图像数据进行图像识别处理的过程中,从第一客户端所持续获取得到;
状态调整模块,用于在获取到第一对象图像数据对应的渲染数据时,将第一缓冲区的工作状态调整为存储状态,将第二缓冲区的工作状态调整为读取状态,从工作状态处于读取状态的第二缓冲区中读取第二对象图像数据,将第二对象图像数据进行渲染处理。
本申请实施例一方面提供了一种计算机设备,包括:处理器和存储器;
存储器存储有计算机程序,计算机程序被处理器执行时,使得处理器执行本申请实施例中的方法。
本申请实施例一方面提供了一种计算机可读存储介质,计算机可读存储介质存储有计算机程序,计算机程序包括程序指令,程序指令当被处理器执行时,执行本申请实施例中的方法。
本申请的一个方面,提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行本申请实施例中一方面提供的方法。
在本申请实施例中,客户端(如第一客户端)在获取到包含对象的第一图像数据时,可以将该第一图像数据发送至相关计算机设备(如业务服务器),由该业务服务器进行图像识别处理,无需在客户端本地进行图像识别,可以使得第一图像数据由具备较高计算能力的业务服务器来进行图像识别处理,可以提高图像识别效率与清晰度;同时,在本申请中,业务服务器可以将接收到的第一图像数据存储至接收队列中,在对第一图像数据进行图像识别处理的过程中,可以持续从第一客户端同步获取到第二图像数据,并将该第二图像数据存储至接收队列中,得到更新接收队列。也就是说,本申请中的业务服务器在对第一图像数据进行图像识别处理时,并不会暂停第二图像数据的接收,通过接收队列可以实现图像处理与图像接收的同步进行,由此可以减少图像传输时延。进一步地,业务服务器在通过图像识别处理提取出对象在第一图像数据中所处的第一对象区域时,业务服务器可以将该第一对象区域所包含的第一对象图像数据发送至目标云应用服务器,由目标云应用 服务器进行渲染并将渲染得到的渲染数据发送至第一客户端,由此可以在云应用中进行显示。同时,在提取出第一对象区域后,业务服务器可以获取到接收队列中具有最晚接收时间戳的第二图像数据,并继续对该第二图像数据进行处理。可以看出,本申请在对某个图像数据进行图像识别处理后,接下来是从接收队列中获取到具有最晚接收时间戳的图像数据进行处理,并非是按照接收时间戳的时间顺序对图像数据进行一一识别,可以提高对图像数据的识别效率,同时由于具有最晚接收时间戳的图像数据是根据对象当前的行为所采集得到,那么对具有最晚接收时间戳的图像数据进行图像识别并显示时,也是与对象当前的行为相匹配的。综上,本申请可以提高图像识别效率,减少图像传输时延,保证云应用所显示的虚拟对象的虚拟行为与对象当前的行为状态匹配。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的一种网络架构图;
图2a是本申请实施例提供的一种场景示意图;
图2b是本申请实施例提供的一种场景示意图;
图3是本申请实施例提供的一种数据处理方法的流程示意图;
图4是本申请实施例提供的一种跳帧处理示意图;
图5是本申请实施例提供的一种数据处理方法的流程示意图;
图6是本申请实施例提供的一种进行部位融合的场景示意图;
图7是本申请实施例提供一种将第一对象图像数据发送至目标云应用服务器的流程示意图;
图8是本申请实施例提供的一种数据处理方法的流程示意图;
图9是本申请实施例提供的一种双缓冲区状态变化的示意图;
图10是本申请实施例提供的一种系统架构图;
图11是本申请实施例提供的一种系统流程示意图;
图12是本申请实施例提供的一种交互流程图;
图13是本申请实施例提供的一种数据处理装置的结构示意图;
图14是本申请实施例提供的另一种数据处理装置的结构示意图;
图15是本申请实施例提供的一种计算机设备的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
请参见图1,图1是本申请实施例提供的一种网络架构图。如图1所示,该网络架构可以包括业务服务器1000、终端设备集群以及云应用服务器集群10000,终端设备集群可 以包括一个或者多个终端设备,这里将不对终端设备的数量进行限制。如图1所示,多个终端设备可以包括终端设备100a、终端设备100b、终端设备100c、…、终端设备100n;如图1所示,终端设备100a、终端设备100b、终端设备100c、…、终端设备100n可以分别与业务服务器1000进行网络连接,以便于每个终端设备可以通过该网络连接与业务服务器1000之间进行数据交互。云应用服务器集群10000可以包括一个或者多个云应用服务器,这里将不对云应用服务器的数量进行限制。如图1所示,多个云应用服务器可以包括云应用服务器10001、云应用服务器10002、…、云应用服务器1000n;如图1所示,云应用服务器10001、云应用服务器10002、…、云应用服务器1000n可以分别与业务服务器1000进行网络连接,以便于每个云应用服务器可以通过该网络连接与业务服务器1000之间进行数据交互。其中,每个云应用服务器可以为云应用服务器,一个终端设备可以对应于一个云应用服务器(多个终端设备可以对应相同的云应用服务器),当终端设备运行云应用时,其对应的云应用服务器为其提供相应的功能服务(如计算服务)。例如,云应用为云游戏应用时,该云应用服务器可为云游戏服务器,当终端设备运行云游戏应用时,其对应的云游戏服务器为其提供相应的功能服务。
可以理解的是,如图1所示的每个终端设备均可以安装有云应用,当该云应用运行于各终端设备中时,可以分别与图1所示的业务服务器1000之间进行数据交互,使得业务服务器1000可以接收来自于每个终端设备的业务数据。
其中,该云应用可以包括具有显示文字、图像、音频以及视频等数据信息功能的应用。如,云应用可以为娱乐类应用(例如,游戏应用),该娱乐类应用可以用于用户进行游戏娱乐。本申请中的业务服务器1000可以根据这些云应用获取到业务数据,如,该业务数据可以为终端设备通过摄像头组件所采集到的包含用户(可称为对象)的图像数据(可称之为第一图像数据)。
随后,业务服务器1000在获取到第一图像数据后,可以将该第一图像数据存储至接收队列中,再从接收队列中获取到该第一图像数据,业务服务器1000可对该第一图像数据进行图像识别处理。应当理解,终端设备在获取到第一图像数据并发送至业务服务器1000后,可以持续获取包含对象的图像数据(可称之为第二图像数据),而业务服务器1000在对第一图像数据进行图像识别处理的过程中,也可以从终端设备处持续获取到终端设备所获取到的第二图像数据。同第一图像数据一样,业务服务器1000也可将该第二图像数据存储至接收队列中,由此可以得到包含一个或多个第二图像数据的更新接收队列。应当理解,当第一图像数据为业务服务器1000所接收到的首个图像数据时,业务服务器1000可以不用将第一图像数据存储至接收队列中,业务服务器1000可直接对该第一图像数据进行图像识别处理,并在该图像识别处理的过程中,持续从终端设备处获取到终端设备所采集到的第二图像数据(也就是首个图像数据之后的第二个图像数据、第三个图像数据、第四个图像数据……),将该第二图像数据存储至接收队列中。
业务服务器1000在通过图像识别处理,提取出对象在第一图像数据中所处的区域(可称为第一对象区域)后,业务服务器1000可在该第一图像数据中获取到该第一对象区域所包含的图像数据(可称之为第一对象图像数据),业务服务器1000可将该第一对象图像数 据发送至该终端设备所对应的云应用服务器中,该云应用服务器可对该第一对象图像数据进行读取并渲染,在渲染完成得到渲染数据后可以发送至终端设备,而终端设备可将该渲染数据在云应用中进行显示输出。同时,应当理解,在业务服务器1000提取出对象在第一图像数据中所处的第一对象区域时,业务服务器1000可接着对其余的图像数据进行图像识别处理,例如,业务服务器1000可在包含第二图像数据的更新接收队列中,获取到具有最晚接收时间戳的第二图像数据(也就是最晚接收到的图像数据),业务服务器1000可以接着对该具有最晚接收时间戳的第二图像数据(可称为目标图像数据)进行图像识别处理。应当理解,在业务服务器1000对目标图像数据进行图像识别处理时,业务服务器1000可持续获取来自终端设备的包含对象的图像数据(可称为第三图像数据),并将该第三图像数据存储至更新接收队列中,得到新的接收队列。在业务服务器1000提取出对象在目标图像数据中所处的区域(可称为第二对象区域)时,业务服务器1000也可在目标图像数据中获取到该第二对象区域所包含的图像数据(可称为第二对象图像数据),业务服务器也可将该第二对象图像数据发送至终端设备对应的云应用服务器;同时,该业务服务器1000可在当前的新的接收队列中,获取到具有最晚接收时间戳的第三图像数据(可称为新目标图像数据),业务服务器1000可接着对该新目标图像数据进行图像识别处理。
应当理解,本申请中的业务服务器1000在对某个图像数据进行图像识别处理的过程中,可持续接收其余的图像数据,由此可以实现识别与接收的同步,无需等待识别完成再接收,由此可以减少图像数据的接收时延。同时,在每识别完成某个图像数据后(提取出对象所处的区域后),业务服务器会进行跳帧处理(也就是说,会获取到当前具有最晚接收时间戳的图像数据,并对其进行图像识别处理;而不是获取到当前处理完成的图像数据的下一个图像数据(接收时间戳最为接近的图像数据)对其进行图像识别处理),通过跳帧处理可以减少图像数据的排队时延,具有最晚接收时间戳的图像数据是采集的用户当前的行为,那么对具有最晚接收时间戳的图像数据进行图像识别处理并显示后,可以使得在云应用中所显示的渲染数据,与用户当前的行为是同步的且匹配的。
本申请实施例可以在多个终端设备中选择一个终端设备与业务服务器1000进行数据交互,该终端设备可以包括:智能手机、平板电脑、笔记本电脑、桌上型电脑、智能电视、智能音箱、台式计算机、智能手表、智能车载等携带多媒体数据处理功能(例如,视频数据播放功能、音乐数据播放功能)的智能终端,但并不局限于此。例如,本申请实施例可以在图1所示的终端设备100a中集成有上述云应用,此时,该终端设备100a可以通过该云应用与业务服务器1000之间进行数据交互。
可以理解的是,本申请实施例提供的方法可以由计算机设备执行,计算机设备包括但不限于用户终端或业务服务器。其中,业务服务器可以是独立的物理服务器,也可以是多个物理服务器构成的服务器集群或者分布式系统,还可以是提供云服务、云数据库、云计算、云函数、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、CDN、以及大数据和人工智能平台等基础云计算服务的云应用服务器。
其中,终端设备以及业务服务器可以通过有线或无线通信方式进行直接或间接地连接,本申请在此不做限制。
可以理解的是,上述计算机设备(如上述业务服务器1000、终端设备100a、终端设备100b等等)可以是一个分布式系统中的一个节点,其中,该分布式系统可以为区块链系统,该区块链系统可以是由该多个节点通过网络通信的形式连接形成的分布式系统。其中,节点之间可以组成的点对点(P2P,Peer To Peer)网络,P2P协议是一个运行在传输控制协议(TCP,Transmission Control Protocol)协议之上的应用层协议。在分布式系统中,任意形式的计算机设备,比如业务服务器、终端设备等电子设备都可以通过加入该点对点网络而成为该区块链系统中的一个节点。为便于理解,以下将对区块链的概念进行说明:区块链是一种分布式数据存储、点对点传输、共识机制以及加密算法等计算机技术的新型应用模式,主要用于对数据按时间顺序进行整理,并加密成账本,使其不可被篡改和伪造,同时可进行数据的验证、存储和更新。当计算机设备为区块链节点时,由于区块链的不可被篡改特性与防伪造特性,可以使得本申请中的数据(如第一图像数据、第一对象区域、第二对象图像数据等等)具备真实性与安全性,从而可以使得基于这些数据进行相关数据处理后,得到的结果更为可靠。
为便于理解,请参见图2a,图2a是本申请实施例提供的一种场景示意图。其中,如图2a所示的终端设备100a可以为上述图1所对应实施例中终端设备集群100中的终端设备100a;如图2a所示的业务服务器1000可为上述图1所对应实施例中业务服务器1000;如图2a所示的云应用服务器10001可为上述图1所对应实施例中云应用服务器10001。
如图2a所示,终端设备100a中可包含游戏应用,当用户a(可称为对象a)开启游戏应用后,终端设备100a可通过摄像头组件200a采集包含对象a的画面(可称为原始图像帧20a),终端设备可以对该原始图像帧进行编码处理(如H264编码处理),得到图像数据。终端设备100a可将该图像数据发送至业务服务器1000。业务服务器1000可将该图像数据存储至接收队列中,随后,业务服务器1000可从该接收队列中获取到该图像数据,业务服务器1000可将该图像数据进行解码处理,得到该原始图像帧20a。可以理解的是,对于接收到的每一个图像数据(包括首个图像数据),业务服务器1000均会将之存储至接收队列中,而对于业务服务器1000接收到的首个图像数据,当然也可以选择不进行存储,而是直接将其进行解码处理。例如,若上述原始图像帧20a对应的图像数据为业务服务器1000接收到的第一个图像数据,业务服务器1000可按照存储规则先将其存储至接收队列中,再在接收队列中获取到;业务服务器1000也可不对其进行存储,可直接将其进行解码处理,得到原始图像帧20a。
业务服务器1000可对该原始图像帧20a进行图像识别处理,业务服务器1000可通过图像识别处理确定出对象a在该原始图像帧20a中所对应的对象边缘曲线P1。应当理解,用户a会持续产生动作行为(如双手举高、摆头、下蹲等等),那么终端设备100a在采集到原始图像帧20a后,可持续通过摄像头组件200a采集到包含对象a的原始图像帧,终端设备100a每成功获取到一个包含对象a的原始图像帧,终端设备100a均可将其进行编码处理,得到图像数据,并将其发送至业务服务器1000。而业务服务器1000即使目前在对原始图像帧20a进行图像识别处理,但是并不影响对图像数据的接收,业务服务器1000可 在图像识别处理的过程中,持续从终端设备100a处获取到不同的图像数据,业务服务器1000可将这些图像数据暂时存储至接收队列中。
在确定出对象边缘曲线P1后,业务服务器1000可在原始图像帧20a中,提取出对象边缘曲线P1所覆盖到的全部区域(可称为对象区域P2),可在原始图像帧20a中获取到该对象区域所包含的全部图像内容,由此可得到该对象区域所包含的全部图像内容(可称为对象图像数据);业务服务器1000可获取到该终端设备100a所对应的云应用服务器(如云应用服务器10001),业务服务器1000可将该对象区域P2所包含的对象图像数据发送至该云应用服务器10001。而云应用服务器10001在获取到该对象图像数据后,可对该对象图像数据进行渲染处理,由此可得到渲染数据P3,云游戏服务器可将该渲染数据P3发送至其对应的终端设备100a。
请参见图2b,图2b是本申请实施例提供的一种场景示意图。如图2b所示,终端设备100a在接收到该渲染数据P3后,可将该渲染数据P3显示于游戏应用中。如图2b所示,在游戏对应的虚拟环境(可理解为游戏场景)中,包含有虚拟背景(虚拟房屋背景)、正在跳舞的虚拟对象2000a(正在跳舞)与正在跳舞的虚拟对象2000b,在该虚拟环境中,可显示渲染数据P3。
可以理解的是,在上述业务服务器1000从原始图像帧20a中提取出对象区域P2时,业务服务器1000还可以接着对接收队列中的图像数据进行处理。为了使得终端设备100a所显示的渲染数据与对象a当前的动作行为相符合,业务服务器1000可以进行跳帧处理,即业务服务器1000可以获取到接收队列中具有最晚接收时间戳的图像数据,将其进行解码以及图像识别处理。具有最晚接收时间戳的图像数据可以理解为终端设备100a在当前时刻下,所发送的最后的一个图像数据,该图像数据可对应于对象a最新的实时的动作行为。那么提取出其对应的对象区域并进行渲染输出后,所呈现的渲染数据是与对象实际的动作行为相符合的。
应当理解,若云应用为游戏应用,对象a可以为游戏玩家,将其对应的人像渲染数据(如渲染数据P3)显示于该游戏应用中,即是将玩家人像投射于游戏场景中,由此可以使得游戏玩家能够“置身于”游戏场景中,能够提高游戏玩家的沉浸感。同时,本申请实施例通过接收队列可以实现图像识别与接收的同步,可以减少图像数据的接收时延;此外,通过跳帧处理,可以在加快对图像数据的识别效率、进一步减少时延的同时,还可以提高游戏中所显示的玩家人像与玩家之间的匹配率。
请参见图3,图3是本申请实施例提供的一种数据处理方法的流程示意图。其中,该方法可以由计算机设备执行,该计算机设备可以是终端设备(例如,上述图1所示的终端设备集群中的任一终端设备,如终端设备100a)或业务服务器(如,上述图1所示的业务服务器1000)执行,该计算机设备也可以包括终端设备和业务服务器,从而由终端设备和业务服务器共同执行。为便于理解,本实施例以该方法由上述业务服务器执行为例进行说明。其中,该数据处理方法至少可以包括以下S101-S103:
S101,获取第一客户端发送的第一图像数据,将第一图像数据存储至接收队列中;第一图像数据是第一客户端在运行云应用时,所获取到的包含对象的图像数据。
本申请中,第一客户端可以理解为终端设备,第一客户端中可以部署有应用程序,该应用程序可以为云应用(如游戏应用)等等。以云应用为例,当用户使用第一客户端,用户可以在第一客户端中启动该云应用,例如,用户可以点击该云应用,并点击启动控件,以运行该云应用。第一客户端可以指任一客户端。
应当理解,当用户启动第一客户端中的云应用后,也就是第一客户端在运行云应用时,第一客户端可以通过摄像头组件采集到包含用户(可称为对象)的画面,包含用户的画面可称为原始图像帧。第一客户端可将该原始图像帧进行编码处理,由此可以得到图像编码文件,这里可将图像编码文件称为图像数据。第一客户端可以将图像数据发送至业务服务器(业务服务器可以是指具有图像解码功能与图像识别功能的服务器,可用于获取第一客户端发送的编码文件,并进行解码以及图像识别处理)。其中,在不同的编码方式中,H264编码方式作为一种编码格式,其具有较高的压缩比,对于同样的图像经过H264编码后,在传输中会占用更少的带宽,因而H264在移动视频应用中具有广泛应用。那么在本申请中,为了减少第一客户端与业务服务器之间的传输带宽,可将对原始图像帧的编码方式优先选择为该H264编码方式。当然,对于第一客户端对原始图像帧进行编码的编码方式,还可以为除H264以外的任意一种能够对原始图像帧进行编码的方式,如该编码方式可为H262编码方式、H263编码方式、H265编码方式等等,本申请将不对其进行限制。
在本申请中,业务服务器在获取到第一客户端发送的图像数据后,可将该图像数据存储至接收队列中。第一图像数据为例,业务服务器在获取到第一图像数据后,可将该第一图像数据存储至接收队列中,其具体方法可为:接收第一客户端发送的第一图像数据(第一图像数据是由第一客户端对原始图像帧进行编码处理后得到的数据);随后,业务服务器可获取接收到第一图像数据的接收时间戳,可将第一图像数据与接收时间戳关联存储至接收队列中。也就是说,在存储每一个图像数据时,业务服务器可一并存储其接收时刻。例如,对于图像数据A,业务服务器的接收时刻(可以作为接收时间戳)为2021年9月5日19:09,那么业务服务器可将该图像数据A与该接收时刻2021年9月5日19:09关联存储至接收队列中。应当理解,当该接收队列还未存储有图像数据时,该接收队列可为空。
S102,对接收队列中的第一图像数据进行图像识别处理,在第一图像数据的图像识别处理过程中,将持续获取到的第一客户端所发送的第二图像数据,存储至接收队列中,得到更新接收队列。
本申请中,在将第一图像数据存储至接收队列后,可从该接收队列中获取到该第一图像数据并对其进行图像识别处理。其中,因该第一图像数据实际为图像编码文件,则可先对其进行解码处理还原得到原始图像帧后,再对原始图像帧进行图像识别处理。其具体方法可为:可对第一图像数据进行解码处理,得到具有初始图像格式的解码图像数据;随后,可对解码图像数据进行格式转换,得到具有标准图像格式的原始图像帧;随后,可对具有标准图像格式的原始图像帧进行图像识别处理。
应当理解,标准图像格式可以是指规定的统一进行图像识别处理的图像格式,例如,规定进行图像识别处理的图像需具备色彩格式(Red Green Blue color mode,RGB color mode),则该RGB格式即可称为标准图像格式。当对第一图像数据进行解码处理后,可 得到具备初始图像格式的解码图像数据;若该初始图像格式为该标准图像格式,则可将该解码图像数据确定为具有标准图像格式的原始图像帧;而若该初始图像格式与标准图像格式不同,则可将其进行格式转换,转换为标准图像格式,得到具有标准图像格式的原始图像帧。如,初始图像格式为YUV格式时,可将YUV格式转换为RGB格式,由此可得到具有RGB格式的原始图像帧。
在业务服务器解码得到第一图像数据对应的原始图像帧后,可对该具有标准图像格式的原始图像帧进行图像识别处理,确定出对象在该原始图像帧中所处的区域(可称为第一对象区域),对于进行图像识别处理确定第一对象区域的具体实现方式可以参见后续图5所对应实施例中的描述。
可以理解的是,在本申请中,第一客户端在向业务服务器发送第一图像数据后,第一客户端可持续采集到包含对象的画面(新的原始图像帧),而第一客户端可将每个原始图像帧进行编码处理,得到图像编码文件(可称为第二图像数据)。第一客户端可持续将每个第二图像数据发送至业务服务器。而业务服务器在对第一图像数据进行图像识别处理的过程中,也并不会暂停对图像数据的接收,业务服务器可持续从第一客户端处获取到第二图像数据,业务服务器可将该第二图像数据暂时存储至接收队列中,由此可得到更新接收队列。
S103,当通过图像识别处理提取出对象在第一图像数据中所处的第一对象区域时,将第一对象区域所包含的第一对象图像数据发送至目标云应用服务器,同步对更新接收队列中具有最晚接收时间戳的第二图像数据进行图像识别处理;目标云应用服务器用于对第一对象图像数据进行渲染得到渲染数据,将渲染数据发送至第一客户端。
本申请中,通过上述图像识别处理可确定出对象在原始图像帧中所处的第一对象区域,则在确定出第一对象区域后,可在该原始图像帧中获取到该第一对象区域所包含的图像内容(可称为第一对象图像数据),业务服务器可提取出第一对象区域以及该第一对象区域所包含的第一对象图像数据。在提取出第一对象区域以及该第一对象图像数据时,业务服务器可获取到第一客户端所对应的目标云应用服务器,业务服务器可将该第一对象图像数据发送至该目标云应用服务器。其中,该目标云应用服务器可以是指第一客户端所对应的云应用服务器,当该第一客户端运行云应用时,该云应用服务器为第一客户端提供计算服务,计算服务如中央处理器(Central Processing Unit,CPU)运算服务、图形处理器(Graphics Processing Unit,GPU)运算服务等等。目标云应用服务器可对该第一对象图像数据进行渲染处理,由此可得到第一对象图像数据对应的渲染数据,目标云应用服务器可将该渲染数据发送至第一客户端,第一客户端可在云应用中显示该渲染数据。
同时,应当理解,在业务服务器提取出第一对象区域后,业务服务器也可继续对其余的图像数据进行解码以及图像识别处理。例如,业务服务器可获取到更新接收队列中具有最晚接收时间戳的第二图像数据,业务服务器可将该第二图像数据进行解码以及图像识别处理。
其中,业务服务器在更新接收队列中获取具有最晚接收时间戳的第二图像数据,并对其进行解码以及图像识别处理的过程,可称之为跳帧处理。为便于理解,请一并参见图4, 图4是本申请实施例提供的一种跳帧处理示意图。如图4所示,接收队列40a中可包含图像数据1、图像数据2、图像数据3、图像数据4、图像数据5、图像数据6、图像数据7、图像数据8、图像数据9,其中,该图像数据1至图像数据9是按照接收时间戳从早至晚的顺序进行排序的,在图4中依次通过标号1、2、3、4、5、6、7、8、9表示,即图像数据1的接收时间戳为最早接收时间戳,图像数据9的接收时间戳为最晚时间戳。其中,图像数据1、图像数据2、图像数据3、图像数据4、图像数据5为已经经过处理的图像数据,图像数据6为当前业务服务器正在处理的图像数据,图像数据7、图像数据8、图像数据9为业务服务器在处理图像数据6时,所接收到的图像数据,图像数据7、图像数据8、图像数据9正在排队等待处理。
如图4所示,当业务服务器提取出对象在图像数据6中所处的对象区域时,业务服务器可从接收队列40a的末尾处获取到图像数据9(即,获取到具有最晚接收时间戳的图像数据),业务服务器可跳过图像数据7与图像数据8,紧接着对图像数据9进行解码以及图像识别处理,这也就是跳帧处理过程。为便于理解,以下将进行举例详细阐述,假设业务服务器解码并进行图像识别处理所需时长为30ms,而业务服务器接收到前后两个图像数据的时长间隔为10ms(即,在第0ms时接收到一个图像数据,在第10ms时接收到下一个图像数据,在第20ms时接收到下一个图像数据);那么当业务服务器对图像数据6进行处理时,在此过程中,业务服务器会持续接收到图像数据7(此时可将之存储至接收队列的末尾处:图像数据6之后)、图像数据8(此时可将之存储至接收队列的末尾处:图像数据7之后)、图像数据9(此时可将之存储至接收队列的末尾处:图像数据8之后)。那么在处理完图像数据6时,此时接收队列如接收队列40a所示,此时业务服务器可直接从接收队列的末尾处获取到最新的图像数据(即图像数据9),跳过图像数据7与图像数据8(虽然图像数据7与图像数据8并未经过处理,但实际已经将之跳过不会再对其进行处理,所以可将图像数据7与图像数据8确定为已经处理过的图像数据)。
同理,如图4所示,当在处理图像数据9时,业务服务器也可以持续接收到图像数据10与图像数据11(在图4中用标号10、11表示),得到接收队列40b;那么当提取出对象在图像数据9中的对象区域时,可以获取到接收队列40b中的队列末尾处的图像数据(即具有最晚接收时间戳的图像数据11),业务服务器可跳过图像数据10,对图像数据11进行解码以及图像识别处理。同理,在对图像数据11进行处理时,业务服务器可持续接收到其余的图像数据得到接收队列40c,当处理完图像数据11时,又可获取到接收队列40c中的队列末尾处的图像数据,如此反复执行,对于此过程,这里将不再进行重复赘述。
可以理解的是,每处理一个图像数据,可对排列于之前的图像数据(即已经处理过的图像数据)进行清空,由此可提高接收队列的存储空间。例如,接收队列40a中已经处理过的图像数据包含图像数据1至图像数据5,那么可将图像数据1至图像数据5进行删除,此时接收队列40a中只包含图像数据6至图像数据9。或者说,每处理完一个图像数据,可在接收队列中获取到具有最晚接收时间戳的图像数据,可在获取到该图像数据后,将排列于该图像数据之前(即接收时间戳早于该图像数据)的图像数据(可称为历史图像数据) 进行删除。例如,在接收队列40a中,获取到图像数据9为即将要进行图像处理的图像数据,此时已将位于图像数据9之前的历史图像数据(包含图像数据1-图像数据8)。
也就是说,在提取出第一图像数据中的第一对象区域时,可以先清空掉已经处理过的图像数据(包含第一图像数据),再在剩下的未处理图像数据中获取具有最晚接收时间戳的第二图像数据。同样的,在提取出第一图像数据中的第一对象区域时,可以先获取到具有最晚接收时间戳的第二图像数据,再将接收时间戳早于该第二图像数据的历史图像数据进行删除清空(即先获取更新接收队列中具有最晚接收时间戳的第二图像数据,随后,再对第二图像数据进行图像识别处理,同步删除更新接收队列中的历史图像数据;其中,历史图像数据为更新接收队列中接收时间戳早于第二图像数据的图像数据)。应当理解,无论是哪一种队列清空方式,均是为了改善队列的存储空间,本申请对于队列清空方式将不进行具体限定。
应当理解,业务服务器在接收到第一客户端发送的编码码流(即图像数据)后,如果解码以及图像识别处理(以下将称之为图像处理)的时长,大于了接收到两帧图像数据之间的间隔时长(如图像处理的时长为30ms,两帧图像数据之间的接收间隔时长为10ms),那么如果不存在接收队列,那么第一客户端会一直等待业务服务器对当前的图像数据进行图像处理,这大大增加了图像数据的传输时延,会严重影响图像数据的传输效率。而本申请可通过接收队列来存储图像数据,使得业务服务器可以在进行图像处理的过程中,持续接收第一客户端发送的图像数据,然而若业务服务器对图像数据依次进行图像处理,那么会使得业务服务器所识别的图像数据与对象最新的状态严重不匹配,业务服务器所识别的图像数据严重落后,所以通过跳帧处理,可以使得业务服务器每次进行图像处理时,均是对最新的图像数据进行图像处理,可以减少图像识别时延,同时由于业务服务器的高计算能力,也可以提高图像识别效率。
在本申请实施例中,客户端(如第一客户端)在获取到包含对象的第一图像数据时,可以将该第一图像数据发送至相关计算机设备(如业务服务器),由该业务服务器进行图像识别处理,无需在第一客户端本地进行图像识别,可以使得第一图像数据由具备较高计算能力的业务服务器来进行图像识别处理,可以提高图像识别效率与清晰度;同时,在本申请中,业务服务器可以将接收到的第一图像数据存储至接收队列中,在对第一图像数据进行图像识别处理的过程中,可以持续从第一客户端同步获取到第二图像数据,并将该第二图像数据存储至接收队列中,得到更新接收队列。也就是说,本申请中的业务服务器在对第一图像数据进行图像识别处理时,并不会暂停第二图像数据的接收,通过接收队列可以实现图像处理与图像接收的同步进行,由此可以减少图像传输时延。进一步地,业务服务器在通过图像识别处理提取出对象在第一图像数据中所处的第一对象区域时,业务服务器可以将该第一对象区域所包含的第一对象图像数据发送至目标云应用服务器,由目标云应用服务器进行渲染并将渲染得到的渲染数据发送至第一客户端,由此可以在云应用中进行显示。同时,在提取出第一对象区域后,业务服务器可以获取到接收队列中具有最晚接收时间戳的第二图像数据,并继续对该第二图像数据进行处理。可以看出,本申请在对某个图像数据进行图像识别处理后,接下来是从接收队列中获取到具有最晚接收时间戳的图 像数据进行处理,并非是按照接收时间戳的时间顺序对图像数据进行一一识别,可以提高对图像数据的识别效率,同时由于具有最晚接收时间戳的图像数据是根据对象当前的行为所采集得到,那么对具有最晚接收时间戳的图像数据进行图像识别并显示时,也是与对象当前的行为相匹配的。综上,本申请可以提高图像识别效率,减少图像传输时延,保证云应用所显示的虚拟对象的虚拟行为与对象当前的行为状态匹配。
请参见图5,图5是本申请实施例提供的一种数据处理方法的流程示意图。该流程可以对应于上述图3所对应实施例中,对原始图像帧进行图像识别处理确定第一对象区域的流程,如图5所示,该流程可以包括至少以下S501-S503:
S501,识别对象在原始图像帧中的对象边缘关键点。
这里的对象边缘关键点可以是指对象的对象轮廓关键点,原始图像帧中包含对象的关键部位,则这里的对象轮廓关键点可以是指关键部位的轮廓关键点。如关键部位为头部,则对象边缘关键点可以是指头部轮廓的关键点,如关键部位为颈部,则对象边缘关键点可以是指颈部轮廓的关键点。
识别对象的对象边缘关键点可以由人工智能算法来识别、专用的图形处理器(Graphics Processing Unit,GPU)来识别等等方式,本申请将不对其进行限制。
S502,将对象边缘关键点进行连接,得到对象的对象边缘曲线。
在确定出对象边缘关键点后,通过将这些对象边缘点进行连接(如,每相邻两个点则进行连接),即可得到对象对应的对象边缘曲线(可以理解为对象轮廓)。示例性地,可参见上述图2a的场景实施例,如图2a所示的曲线P1即可认为是对象a的对象轮廓。
S503,将原始图像帧中对象边缘曲线所覆盖的区域,确定对象在原始图像帧中的初始对象区域,并根据初始对象区域确定第一对象区域。
在确定出对象a的对象边缘曲线后,即可在原始图像帧中确定出对象边缘曲线所覆盖的区域,该区域即可作为该对象在原始图像帧中所处的第一对象区域。例如,参见上述图2a的场景实施例,如图2a所示,在原始图像帧20a中,对象边缘曲线P1所覆盖到的区域为区域P2(该区域P2即为对象a所在的区域),该区域P2即可确定为对象在原始图像帧中所处的区域(这里可称之为第一对象区域)。
在一种可行的实施例中,可将上述对象边缘曲线所覆盖的区域,称之为初始对象区域,在确定出初始对象区域后,可暂时不将初始对象区域确定为最终的第一对象区域,而是根据初始对象区域确定出第一对象区域,其具体方法可为:可获取初始对象区域所呈现的对象的对象关键部位;随后,可获取针对对象的对象识别配置信息,以及对象识别配置信息所指示的对象识别配置部位,可将对象识别配置部位与对象关键部位进行匹配;若对象识别配置部位与对象关键部位相匹配,则可执行根据初始对象区域确定第一对象区域的步骤;而若对象识别配置部位与对象关键部位不匹配,则可确定通过图像识别处理未能提取出第一对象区域。
应当理解,在确定出对象的初始对象区域后,可获取到初始对象区域中所呈现的对象的对象关键部位(对象关键部位可以是指对象的身体部位,如,头部、颈部、手臂部位、腹部、腿部、脚部等);随后,可获取到原始图像帧中所需要包含的对象的对象识别配置 部位(也就是识别规则,该识别规则规定了终端设备所采集的原始图像帧中,所需要包含的对象的部位)。以对象识别配置部位为腿部为例,假设规定终端设备所采集的原始图像帧中需要包含用户的腿部,而在通过对接收到的图像数据进行解码并进行图像识别处理后,所提取出来的初始对象区域所呈现的对象关键部位为头部与颈部,则可确定该对象关键部位(头部与颈部)与对象识别配置部位(腿部)并不匹配,该原始图像帧中所呈现的对象的关键部位是不符合要求的,那么此时可直接确定通过图像识别处理无法提取出第一对象区域(也就是部位不符合要求,提取失败)。而若假设规定终端设备所采集的原始图像帧中需要包含用户的腿部,而在通过对接收到的图像数据进行解码并进行图像识别处理后,所提取出来的初始对象区域所呈现的对象关键部位为腿部,那么此时可确定该原始图像帧中所呈现的对象的关键部位是符合要求的,此时可将初始对象区域确定为第一对象区域。
在一种可行的实施例中,通过上述介绍,若确定出对象识别配置部位与对象关键部位不匹配,确定通过图像识别处理未能提取出所述第一对象区域之后,业务服务器可以获取到更新接收队列中当前图像数据的下一个图像数据,然后接着对下一个图像数据进行图像处理。例如,当前图像数据为第一图像数据,可以获取到接收队列中第一图像数据的下一个图像数据(即在接收时间戳晚于第一图像数据的图像数据中,具有最早接收时间戳的图像数据),业务服务器可接着对该下一个图像数据进行图像处理。其具体方法可为:可将更新接收队列中具有最早接收时间戳的第二图像数据,确定为待识别图像数据;随后,可对待识别图像数据进行图像识别处理,当通过图像识别处理提取出对象在待识别图像数据中所处的待处理对象区域时,可将待处理对象区域发送至目标云应用服务器。
实际上,在业务服务器对当前的图像数据(如第一图像数据)的图像处理时长足够短、效率足够快的情况下,在确定通过图像识别处理未能提取出所述第一对象区域之后,可获取到当前图像数据的下一个图像数据(具有最早接收时间戳的图像数据),对下一个图像数据进行图像处理。其目的在于,当用户执行一个动作,第一客户端获取到了图像帧进行编码后发送至了业务服务器,业务服务器快速识别到其包括的对象关键部位并不符合规范,无法进行对象区域及其对象区域包含的图像数据的提取,则云应用服务器无法接收到提取出的对象图像数据,也就无法对其进行渲染并显示,那么此时业务服务器可以对下一个图像数据进行图像处理提取出下一个图像数据的对象区域,再将其所包含的对象图像数据发送至云应用服务器进行渲染输出。由此可以减少云应用中所显示的用户人像的跳跃性,增大其连贯性。当然,在上述确定通过图像识别处理未能提取出第一对象区域之后,也可直接在当前更新接收队列中,获取到具有最晚接收时间戳的第二图像数据(而不是获取到具有最早接收时间戳的图像数据),并对其进行图像处理。无论是哪一种获取方式,均是对于在确定通过图像识别处理未能提取出所述第一对象区域之后的一种可行的处理方式,而对于其具体的处理方式,可以由人工经验按照实际情况所设置,本申请将不对其进行限制。
在一种可行的实施例中,通过上述介绍,在确定对象识别配置部位与对象关键部位相匹配的时候,可直接将初始对象区域确定为第一对象区域。除此之外,在确定对象识别配置部位与对象关键部位相匹配的时候,对于确定第一对象区域的具体方法还可为:可获取初始对象区域所呈现的对象的对象关键部位;若对象关键部位具备部位完整性,则可将初 始对象区域确定为第一对象区域;而若对象关键部位不具备部位完整性,则可获取样本数据库中的N(N为正整数)个样本图像帧,可从N个样本图像帧中获取与对象对应的待处理样本图像帧,根据待处理样本图像帧与初始对象区域确定第一对象区域。
其中,对于根据待处理样本图像帧与初始对象区域确定第一对象区域的具体方法可为:可获取待处理样本图像帧中的整体部位信息;随后,可根据对象关键部位,在整体部位信息中确定待融合部位区域;可将待融合部位区域与初始对象区域进行融合,由此可得到第一对象区域。
应当理解,本申请可预先收集到用户整体的完整的人像样本数据(从头部至脚部的完整的人像样本数据),一个用户可对应一个样本图像帧,一个样本图像帧中即可呈现一个用户完整的整体的人像数据。那么当提取出初始对象区域,并确定对象识别配置部位与对象关键部位相匹配的时候,此时可确定该对象关键部位是否具备部位完整性,若具备部位完整性,则可直接将该初始对象区域确定为第一对象区域;而若该对象关键部位并不具备完整性,则可获取到样本数据库该对象所对应的待处理样本图像帧,再在该待处理样本图像帧中获取到对象的整体部位信息,按照该整体部位信息将该初始对象区域补充完整,得到一个完整的包含完整部位的第一对象区域。
为便于理解,请一并参见图6,图6是本申请实施例提供的一种进行部位融合的场景示意图。如图6所示,假设初始对象区域为初始对象区域600a,初始对象区域600a中所呈现的对象关键部位包括头部、颈部、手臂部位、胸部、腹部(也就是用户的上半身部位);假设对象识别配置部位也为用户的上半身部位,也就是说第一客户端需要采集用户的上半身部位,那么可见该初始对象区域是符合要求的。那么进一步地,可确定该对象关键部位是否具备部位完整性,此时,假设我们规定部位完整性是指的用户的整体人像完整性(即需要包含上半身部位以及下半身部位,也就是从头部至脚部),那么可见该初始对象区域600a所呈现的对象关键部位时不具备部位完整性的,那么此时业务服务器1000可获取到样本数据库中该对象所对应的待处理样本图像帧(假设为待处理样本图像帧600b)。如图6所示,该待处理样本图像帧中所呈现的整体部位信息包含对象的从头部至脚部的完整的信息,此时因为初始对象区域600a中已经包含有上半身部位,那么可将该待处理样本图像帧中的下半身部位确定为待融合部位区域(即区域600c),可提取出该待融合部位区域600c。进一步地,可将该待融合部位区域600c与初始对象区域600a进行融合(例如,拼接),由此可得到包含上半身部位与下半身部位的第一对象区域600d。应当理解,通过提前采集用户的整体部位信息(例如,从头部脚部),可以使得第一客户端在每次获取用户的画面时,无需严格要求用户每次都需要站立于固定的能够采集到完整部位的位置,用户可以灵活进行移动,第一客户端只需获取到部分部位信息即可,业务服务器在获取到第一客户端的部分部位信息后,可根据提前采集的整体部位信息来对其进行补充拼接,由此也可得到完整的部位,通过这种方式可以增加用户的体验感与沉浸感。
需要说明的是,在上述过程中,假设我们规定部位完整性是指的用户的上半身部位的完整性,那么此时初始对象区域600a所呈现的对象关键部位实际上已经具备了部位完整性,那么此时可以直接将该初始对象区域600a确定为第一对象区域。
其中,对于上述获取样本数据库中该对象所对应的待处理样本图像帧的具体方法可为:可通过人脸匹配的方式;也可以在采集用户的样本图像帧时,利用用户标识(如用户名称、用户编号等)对其对应的样本图像帧进行标识,使得每个样本图像帧均具备一个用户标识(可称为样本标识);而第一客户端在向业务服务器发送图像数据时,可携带发送该图像数据中所包含的用户的用户标识,那么业务服务器可通过携带的用户标识与样本图像帧的样本标识,来匹配出对应的待处理样本图像帧。对于获取样本数据库中该对象所对应的待处理样本图像帧的具体实现方式,当然并不仅限于上述所描述的方式,本申请对于其具体实现方式不进行限制。
在本申请实施例中,客户端(如第一客户端)在获取到包含对象的第一图像数据时,可以将该第一图像数据发送至相关计算机设备(如业务服务器),由该业务服务器进行图像识别处理,无需在客户端本地进行图像识别,可以使得第一图像数据由具备较高计算能力的业务服务器来进行图像识别处理,可以提高图像识别效率与清晰度;同时,在本申请中,业务服务器可以将接收到的第一图像数据存储至接收队列中,在对第一图像数据进行图像识别处理的过程中,可以持续从第一客户端同步获取到第二图像数据,并将该第二图像数据存储至接收队列中,得到更新接收队列。也就是说,本申请中的业务服务器在对第一图像数据进行图像识别处理时,并不会暂停第二图像数据的接收,通过接收队列可以实现图像处理与图像接收的同步进行,由此可以减少图像传输时延。进一步地,业务服务器在通过图像识别处理提取出对象在第一图像数据中所处的第一对象区域时,业务服务器可以将该第一对象区域所包含的第一对象图像数据发送至目标云应用服务器,由目标云应用服务器进行渲染并将渲染得到的渲染数据发送至第一客户端,由此可以在云应用中进行显示。同时,在提取出第一对象区域后,业务服务器可以获取到接收队列中具有最晚接收时间戳的第二图像数据,并继续对该第二图像数据进行处理。可以看出,本申请在对某个图像数据进行图像识别处理后,接下来是从接收队列中获取到具有最晚接收时间戳的图像数据进行处理,并非是按照接收时间戳的时间顺序对图像数据进行一一识别,可以提高对图像数据的识别效率,同时由于具有最晚接收时间戳的图像数据是根据对象当前的行为所采集得到,那么对具有最晚接收时间戳的图像数据进行图像识别并显示时,也是与对象当前的行为相匹配的。综上,本申请可以提高图像识别效率,减少图像传输时延,保证云应用所显示的虚拟对象的虚拟行为与对象当前的行为状态匹配。
在一种可行的实施例中,在每个客户端首次运行云应用时,每个客户端所对应的云应用服务器可向业务服务器发送注册请求,该注册请求用于请求向业务服务器注册设备,而在经过注册后,业务服务器可将云应用服务器对应的设备标识添加至已存设备标识集合中,由此可证明该云应用服务器为已注册云应用服务器。当其为已注册云应用服务器时,可表明其为合法的云应用服务器,此时业务服务器可与合法的云应用服务器之间进行数据交互。那么第一客户端在向业务服务器发送图像数据(如第一图像数据时),可携带发送具有绑定关系的云应用服务器(可称为绑定云应用服务器)的设备标识(可称为待确认设备标识),用以业务服务器通过设备标识来确认其是否经过注册(是否合法),在确定该绑定云应用服务器经过注册时,再将该绑定云应用服务器确定为目标云应用服务器,然后将第一对象 图像数据发送至目标云应用服务器。也就是说,在通过上述确定出第一对象区域后,业务服务器在向目标云应用服务器发送第一对象图像数据前,首先可以确定该第一客户端对应的云应用服务器是否经过注册,在确定其经过注册时,再将该第一对象图像数据发送至其对应的目标云应用服务器。
为便于理解,请参见图7,图7是本申请实施例提供一种将第一对象图像数据发送至目标云应用服务器的流程示意图。该流程以第一图像数据携带待确认设备标识(待确认设备标识是绑定云应用服务器的设备标识,绑定云应用服务器与第一客户端具有绑定关系)为例进行说明,如图7所示,该流程可以包括至少以下S701-S704:
S701,获取已存设备标识集合;已存设备标识集合包含M个已存设备标识,一个已存设备标识对应一个已注册云应用服务器,M为正整数。
在每个客户端运行云应用时(一般是首次运行云应用时),每个客户端所对应的云应用服务器可向业务服务器发送注册请求,该注册请求用于请求向业务服务器注册设备,而在经过注册后,业务服务器可将云应用服务器对应的设备标识添加至已存设备标识集合中,由此可证明该云应用服务器为已注册云应用服务器。以客户端为第二客户端为例,其具体方法可为:在用户使用客户端开启云应用时,该第二客户端可响应这一应用开启操作,生成应用开启通知,第二客户端可将该应用开启通知发送至其对应的云应用服务器(可称为待注册云应用服务器),而该待注册云应用服务器此时可基于应用开启通知向业务服务器发送注册请求;而业务服务器可接收待注册云应用服务器发送的注册请求;随后,业务服务器可根据注册请求,检测待注册云应用服务器的设备指标信息;当设备指标信息满足处理质量条件时,再获取待注册云应用服务器的待存储设备标识,并将待存储设备标识存储至已存设备标识集合,将待注册云应用服务器转换为已注册云应用服务器,将待存储设备标识转换为已存设备标识。
其中,设备指标信息可包括网络质量参数、设备版本、功能模块质量指标、存储空间指标等等,这里对设备指标信息进行检测可以是检测某一个指标是否合格,例如,可检测网络质量参数是否合格,该网络质量参数合格了那么即可认为该待注册云应用服务器的设备指标信息满足了处理质量条件;对设备指标信息进行检测还可以是检测两个或两个以上的指标是否均合格,只有在均合格的条件下,才确认该待注册云应用服务器的设备指标信息满足了处理质量条件。
以下将以设备指标信息包括网络质量参数与设备版本为例,对检测待注册云应用服务器的设备指标信息的具体方法进行说明,其具体方法可为:根据注册请求,获取待注册云应用服务器的网络质量参数与设备版本;若网络质量参数达到参数阈值,且设备版本与质量标准版本(可理解为质量合格版本)相匹配,则可确定设备指标信息满足处理质量条件;而若网络质量参数未达到参数阈值,或设备版本与质量标准版本不匹配,则可确定设备指标信息不满足处理质量条件。
通过上述可知,已存设备集合中存储有不同的已注册云应用服务器所对应的已存设备标识,那么在获取到第一客户端所发送的待确认设备标识后,可获取到已存设备标识集合,再将待确认设备标识与已存设备标识集合进行匹配。
S702,确定M个已存设备标识中是否存在与待确认设备标识相匹配的已存设备标识。
若存在,则可执行后续S703;若不存在,则可执行后续S704。
S703,若M个已存设备标识中,存在与待确认设备标识相匹配的已存设备标识,则确定待确认设备标识所指示的绑定云应用服务器属于已注册云应用服务器,将待确认设备标识所指示的绑定云应用服务器确定为目标云应用服务器,将第一对象图像数据发送至目标云应用服务器。
将待确认设备标识与M个已存设备标识进行匹配,若M个已存设备标识中存在与待确认设备标识相匹配的已存设备标识,则可确定待确认设备标识所指示的绑定云应用服务器属于已注册云应用服务器,那么此时可将该绑定云应用服务器确定为该目标云应用服务器,可将该第一对象图像数据发送至该目标云应用服务器。
S704,若M个已存设备标识中不存在与待确认设备标识相匹配的已存设备标识,则生成设备异常提示信息,将设备异常提示信息发送至第一客户端。
将待确认设备标识与M个已存设备标识进行匹配,若已存设备标识集合中不存在与待确认设备标识相匹配的已存设备标识,则可确定待确认设备标识所指示的绑定云应用服务器属于未注册云应用服务器,该绑定云应用服务器未经过注册,该业务服务器无法将该第一对象图像数据发送至该绑定云应用服务器。那么此时,业务服务器可生成设备异常提示信息(可以是指服务器未注册提示信息),业务服务器可将该设备异常提示信息返回至第一客户端,而第一客户端可基于该设备异常提示信息向其对应的绑定云应用服务器发送注册通知,绑定云应用服务器可基于该注册通知向业务服务器申请注册。
应当理解,实际上,在一种可行的情况下,并不是仅有上述第一客户端这一个客户端向业务服务器发送图像数据,而是会有不同的客户端向业务服务器发送图像数据,由于不同的客户端所对应的云应用服务器也不同,那么在业务服务器进行注册的云应用服务器也会存在多个(会存在多个云应用服务连接到业务服务器)。那么本申请通过预先存储已存设备标识集合,以及客户端在发送图像数据时携带发送其对应的云游戏服务器的设备标识,由此可以确定出客户端与云应用服务器之间的对应关系,同时也可以确定该云应用服务器是否已经过注册,从而可以将客户端采集的用户画面发送给正确的已注册的云应用服务器,可以提高云应用显示的用户画面的正确性。
可以理解的是,对于每个云应用服务器而言,如对于目标云应用服务器而言,其在接收业务服务器传输的人像数据(也就是对象图像数据,如第一对象图像数据)时,通常处理步骤可以包括以下3个步骤:
1、分配接收缓冲区并将人像数据(如上述第一对象图像数据)写入接收缓冲区。
2、待接收完成后,再将人像数据拷贝出来进行处理和渲染。
3、待渲染完成后,回到步骤1。
也就是说,目标云应用服务器需要先将人像数据写入缓冲区,再读取并渲染人像数据,渲染完成后再继续接收人像数据并写入缓冲区。对于分辨率较高的人像数据,接收数据量会较大,那么目标云应用服务器在分配缓冲区及数据拷贝时,会消耗大量的时间,严重影响后续接收人像数据的时间,造成大量的时延,由此可见,虽然通过上述所述,通过业务 服务器的跳帧处理,可以减少业务服务器一侧的图像接收时延,但是在云应用服务器一侧,仍然存在时延问题。而为了进一步减少时延,本申请提供一种数据处理方法,该方法即为在云应用服务器一侧分配双缓冲区。为便于理解,请一并参见图8,图8是本申请实施例提供的一种数据处理方法的流程示意图,该流程由计算机设备执行,该计算机设备可以是目标云应用服务器例如云游戏服务器,该流程可以对应于目标云应用服务器在接收到对象图像数据后的数据处理过程。如图8所示,该流程可以包括至少以下S801-S804:
S801,接收业务服务器发送的第一对象图像数据,将第一对象图像数据存储至缓冲区集合中的工作状态处于存储状态的第一缓冲区中;第一对象图像数据为第一对象区域所包含的图像数据,第一对象区域为业务服务器对第一图像数据进行图像识别处理后,得到的对象在第一图像数据中所处的区域;第一图像数据是由第一客户端发送至业务服务器的,第一图像数据是第一客户端在运行云应用时,所获取到的包含对象的图像数据。
对于第一对象区域、第一对象图像数据的确定以及提取过程,可以参见上述图3所对应实施例中S102中的描述,这里将不再进行赘述。业务服务器在提取出第一对象区域,并获取到第一对象图像数据后,可将该第一对象图像数据发送至目标云应用服务器,目标云应用服务器可将该第一对象图像数据存储至缓冲区集合中,工作状态处于存储状态的第一缓冲区。
其中,目标云应用服务器可预先分配同样大小的两个接收缓冲区(Buffer),并将其中的一个缓冲区的工作状态设置为存储状态,也就是该缓冲区实际为存储缓冲区,目标云应用服务器可将接收到的数据存储至该存储缓冲区;同时,可将另一个缓冲区的工作状态设置为读取状态,也就是将该缓冲区实际为读取缓冲区,目标云应用服务器在需要读取及渲染数据时,可从该读取缓冲区进行读取。
以两个缓冲区为例,其分配双缓冲区生成缓冲区集合的具体方法可为:可预先分配第一缓冲区与第二缓冲区;随后,可将第一缓冲区的初始指针标识设置为存储指针标识,将第二缓冲区的初始指针标识设置为读取指针标识;应当理解,具有存储指针标识的第一缓冲区的工作状态为存储状态;具有读取指针标识的第二缓冲区的工作状态为读取状态;随后,可根据工作状态处于存储状态的第一缓冲区,与工作状态处于读取状态的第二缓冲区,生成缓冲区集合。那么此时,在接收到第一对象图像数据时,因第一缓冲区的工作状态处于存储状态,则此时可将该第一对象图像数据存储至第一缓冲区。
S802,当缓冲区集合中工作状态处于读取状态的第二缓冲区未包含未处理对象图像数据时,将第一缓冲区的工作状态调整为读取状态,将第二缓冲区的工作状态调整为存储状态,从工作状态处于读取状态的第一缓冲区中读取第一对象图像数据,将第一对象图像数据进行渲染处理。
通过上述介绍,第一缓冲区初始工作状态为存储状态,该第二缓冲区的初始工作状态为读取状态,那么目标云应用服务器在接收存储该第一图像数据时,可以同步对第二缓冲区中已经存储的数据(可称为已存图像数据)进行读取并渲染。若该第二缓冲区中此时并没有存储有图像数据,那么该第二缓冲区即未包含未处理对象图像数据,那么在将该第一对象图像数据存储至第一缓冲区后,可将该第一缓冲区的存储指针标识切换为读取指针标 识、将该第二缓冲区的存储指针标识切换为存储指针标识,那么该第一缓冲区与第二缓冲区的工作状态就进行了互换,第一缓冲区的当前工作状态变为读取状态,第二缓冲区的当前工作状态变为存储状态,此时可从第一缓冲区中读取该第一对象图像数据,并对其进行渲染处理,同时也可以继续接收第二对象图像数据并将之存储至第二缓冲区中。
应当理解,通过双缓冲区的设置,目标云应用服务器可实现读取与接收的同步,无需等待渲染完成即可接收其余的数据,可以大大减少接收时延。
应当理解,通过上述介绍,第一缓冲区初始工作状态为存储状态,该第二缓冲区的初始工作状态为读取状态,那么目标云应用服务器在接收存储该第一图像数据时,可以同步对第二缓冲区中已经存储的数据(可称为已存图像数据)进行读取并渲染。若该第二缓冲区中存储有图像数据,但是所有的图像数据已经被读取渲染完成了,那么此时可将该第二缓冲区中的已经处理了的图像数据进行清空,那么此时也可以确定该第二缓冲区中未包含有未处理图像数据,可通过指针标识的切换将第一缓冲区的工作状态调整为读取状态,将第二缓冲区的工作状态调整为存储状态。然后再从处于读取状态的第一缓冲区中读取到该第一对象图像数据,对其进行渲染处理。
S803,在第一对象图像数据的渲染过程中,接收业务服务器发送的第二对象图像数据,将第二对象图像数据存储至工作状态处于存储状态的第二缓冲区中;第二对象图像数据是第二对象区域所包含的图像数据,第二对象区域为业务服务器在提取出第一对象区域后对第二图像数据进行图像识别处理所得到,第二对象区域为对象在第二图像数据中所处的区域;第二图像数据为业务服务器在提取出第一对象区域时,从更新接收队列中所获取到的具有最晚接收时间戳的图像数据;更新接收队列中的第二图像数据是业务服务器对第一图像数据进行图像识别处理的过程中,从第一客户端所持续获取得到。
第二对象区域可以是指上述业务服务器,对第二图像数据进行图像识别处理后,所提取出的区域,第二对象图像数据可以是指第二图像数据中,第二对象区域所包含的图像数据。对于其具体提取方式,可以与提取第一对象区域的方式相同,这里将不再进行赘述。
在第一对象图像数据的渲染过程中,可接收业务服务器发送的第二对象图像数据,实际上就是目标云应用服务器可以在读取数据的过程中,同步接收数据,并将之存储至当前处于存储状态的第二缓冲区,由此可以减少时延。
S804,在获取到第一对象图像数据对应的渲染数据时,将第一缓冲区的工作状态调整为存储状态,将第二缓冲区的工作状态调整为读取状态,从工作状态处于读取状态的第二缓冲区中读取第二对象图像数据,将第二对象图像数据进行渲染处理。
当对第一缓冲区中的已存储数据读取并渲染完成时(如,第一缓冲区仅包含第一对象图像数据,那么即是在获取到第一对象图像数据对应的渲染数据时),可将第一缓冲区的工作状态调整为存储状态,将第二缓冲区的工作状态调整为读取状态,此时,可以从处于读取状态的第二缓冲区中读取第二对象图像数据,将第二对象图像数据进行渲染处理;也可以同步接收其余的对象图像数据,并将之存储至处于存储状态的第一缓冲区中。其中,将第一缓冲区的工作状态调整为存储状态,将第二缓冲区的工作状态调整为读取状态的具体实现方式,也可为指针标识的切换方式。其具体方法可为:在获取到第一对象区域对应 的第一渲染数据时,可获取第一缓冲区所对应的用于表征读取状态的读取指针标识,以及第二缓冲区所对应的用于表征存储状态的存储指针标识;将第一缓冲区所对应的读取指针标识切换为存储指针标识;具有存储指针标识的第一缓冲区的工作状态为存储状态;将第二缓冲区的存储指针标识切换为读取指针标识;具有读取指针标识的第二缓冲区的工作状态为读取状态。
为便于理解,请一并参见图9,图9是本申请实施例提供的一种双缓冲区状态变化的示意图。如图9所示,以第一缓冲区为缓冲区900a、第二缓冲区为缓冲区900b为例,此时,缓冲区900a的工作状态为读取状态,该缓冲区900a中所存储的对象图像数据可包括有对象图像数据1至对象图像数据10,在图9中依次通过标号1、2、3、4、5、6、7、8、9、10表示。其中,该对象图像数据1至对象图像数据7为已经读取了的数据,而对象图像数据8至对象图像数据10为待读取的数据。同时,该缓冲区900b的工作状态为存储状态,在目标云应用服务器从缓冲区900a读取数据的过程中,可持续接收对象图像数据并将之存储至缓冲区900b中,此时,缓冲区900b中已经接收的数据包括对象图像数据11至对象图像数据14(缓冲区900b中还剩余6个剩余空间位置用于接收对象图像数据)。
如图9所示,当将缓冲区900a中的数据读取完时(即读取完对象图像数据7至对象图像数据9)时,可将缓冲区900a清空,而此时该缓冲区900b中所接收到的数据包含对象图像数据11至对象图像数据20(在图9中依次通过标号11、12、13、14、15、16、17、18、19、20表示)。进一步地,可将缓冲区900a的工作状态切换为存储状态、将缓冲区900b的工作状态切换为读取状态,由此,目标云应用服务器可从缓冲区900b中读取数据(如,从对象图像数据11开始依次读取);同时,目标云应用服务器可同步接收对象图像数据,并将之存储至缓冲区900a中。如,接收到新的对象图像数据1至新的对象图像数据3后,可将之存储至缓冲区900a中。
需要说明的是,上述缓冲区900a与缓冲区900b,均是为便于理解所作出的举例说明,并不具备实际参考意义。
在本申请实施例中,通过业务服务器的跳帧处理,可以减少业务服务器一侧的图像接收时延,提高图像识别效率;通过目标云应用服务器的双缓冲区分配处理,无需进行数据拷贝,只需要将两个缓冲区的工作状态进行切换即可(如指针切换),也不需要每次均进行缓冲区分配,同时,接收数据与处理数据(如读取并渲染数据)可以同时进行,不需要相互等待,可以减少时延。也就是说,在业务服务器侧减少时延的基础上,通过双缓冲区的设置,可以进一步地减少时延。
为便于理解,请一并参见图10,图10是本申请实施例提供的一种系统架构图。如图10所示的系统架构图是以云应用为例,其云应用对应的云应用服务器可为云游戏服务器。如图10所示,该系统架构可以包括客户端集群(可包括客户端1、客户端2、…、客户端n)、业务服务器(可包括推流子服务器与图像识别子服务器。该推流子服务器可用于接收客户端上传的图像编码文件并对其进行解码处理;该图像识别子服务器可对推流子服务器所解码得到的解码图像数据进行图像识别处理)、云游戏服务器。为便于理解,以下将进行具体阐述。
客户端集群:每个客户端运行云应用(如云游戏应用)时,可以展示云应用的画面(如云游戏应用的画面)。当运行云游戏应用时,可通过摄像头采集用户画面,并进行编码处理,得到图像数据上传至业务服务器中的推流子服务器(该推流子服务器可为任一具备数据接收功能与解码功能的服务器,主要用于接收客户端上传的图像编码文件,并对其进行解码处理)中。
推流子服务器:可接收客户端上传的图像数据并进行解码处理,得到具有初始图像格式(如YUV格式)的解码图像,可将该解码图像发送至图像识别子服务器。
图像识别子服务器:可将解码图像从YUV格式转换成RGB格式,随后可识别并提取出图像中的用户人像数据或人体关键点,并将该用户人像或人体关键点发送至云游戏服务器。应当理解,推流子服务器与图像识别子服务器可共同组成业务服务器,使得业务服务器可以具备图像解码、图像识别功能。当然,推流子服务器与图像识别子服务器也可以作为独立服务器,各自执行相应的任务(即推流子服务器接收图像编码文件并进行解码;图像识别子服务器对解码数据进行图像识别处理)。应当理解,为减少数据接收时延,推流子服务器与图像识别子服务器均可以进行跳帧处理。
云游戏服务器:该云游戏服务器可是指客户端所对应的云应用服务器,当客户端运行云游戏应用时,云游戏服务器为其提供相应的计算服务器。云游戏服务器可接收用户人像数据或人体关键点,可将用户人像数据进行渲染显示。或者,云游戏服务器可利用人体关键点在云游戏应用中操纵虚拟卡通玩偶来实现动画(即不是将用户人像投射于云游戏应用中,而是操作虚拟卡通玩偶来同步用户的真实动作状态)。
为便于理解,请一并参见图11,图11是本申请实施例提供的一种系统流程示意图。该流程可以对应于图10所对应的系统架构。如图11所示,该流程可以包括S31-S36:
S31:客户端采集摄像头图像。
S32,客户端对采集图像进行编码。
S33,推流子服务器对编码数据进行解码。
S34,图像识别子服务器将解码图像从YUV格式转换为RGB格式。
S35,图像识别子服务器识别用户人像或人体关键点。
S36,展示人像或展示虚拟动画。
其中,对于S31-S36的具体实现方式,可以参见前述图3、图5、图7、图8所对应实施例中的描述,这里将不再进行赘述;其带来的有益效果这里也将不再进行赘述。
可以理解的是,当客户端在采集对象时,若同时采集到了另一个对象(可以理解为另一个用户入镜),那么此时,客户端可生成对象选择提示信息,由用户选择将谁作为最终采集对象。或者,客户端自动根据对象的清晰度与所占面积来确定。例如,客户端同时采集到了对象1与对象2,但对象2距离镜头较远、采集画面并不清晰,对象1距离镜头较近、采集画面清晰,那么客户端可自动将对象1作为最终采集对象。
请参见图12,图12是本申请实施例提供的一种交互流程图。该交互流程可为客户端、推流子服务器、图像识别子服务器以及云应用服务器(是以云游戏服务器为例)之间的交互流程。如图12所示,该交互流程可以至少包括以下S41-S54:
S41,客户端与推流子服务器之间建立连接。
用户在通过客户端打开云游戏应用时,客户端可以与推流子服务器之间建立连接(如,建立Websocket长连接)。
S42,图像识别子服务器与云游戏服务器之间建立连接。
用户通过客户端打开云游戏应用时,云游戏服务器(可集成有云游戏软件工具开发包(SDK))可以与图像识别子服务器之间建立连接(如,建立传输控制协议(Transmission Control Protocol,TCP)连接)。
S43,云游戏服务器向其对应的客户端发送采集通知。
采集通知可以是图像开始采集的通知消息。
S44,客户端向推流子服务器发送推流消息。
客户端可基于采集通知打开摄像头,并将云游戏服务器的设备标识、所采集的用户的标识以及摄像头采集图像的宽(width)和高(height),一并通知给推流子服务器,让推流子服务器准备接收数据。其中,推流消息即可包括云游戏服务器的设备标识、所采集的用户的标识以及摄像头采集图像的宽和高。
S45,推流子服务器向图像识别子服务器发送推流消息。
推流子服务器收到客户端的推流消息后,可以与图像识别服务器建立TCP连接,并将推流消息发送给图像识别子服务器。
S46,客户端开始采集图像并进行编码。
S47,客户端向推流子服务器发送编码数据。
S48,推流子服务器进行解码,得到解码数据。
S49,推流子服务器将解码数据发送至图像识别子服务器。
S50,图像识别子服务器将解码数据进行格式转换,并进行图像识别。
S51,图像识别子服务器将识别数据发送至云游戏服务器。
S52,云游戏服务器将识别数据进行渲染得到渲染数据。
S53,云游戏服务器将渲染数据发送至客户端。
其中,对于S46-S53的具体实现方式,可以参见上述图3、图5、图7与图8所对应实施例中的描述,这里将不再进行赘述,其带来的有益效果也将不再进行赘述。
请参见图13,图13是本申请实施例提供的一种数据处理装置的结构示意图。该数据处理装置可以是运行于计算机设备中的一个计算机程序(包括程序代码),例如该数据处理装置为一个应用软件;该数据处理装置可以用于执行图3所示的方法。如图13所示,该数据处理装置1可以包括:数据获取模块11、图像识别模块12、队列更新模块13以及区域发送模块14。
数据获取模块11,用于获取第一客户端发送的第一图像数据,将第一图像数据存储至接收队列中;第一图像数据是第一客户端在运行云应用时,所获取到的包含对象的图像数据;
图像识别模块12,用于对接收队列中的第一图像数据进行图像识别处理;
队列更新模块13,用于在第一图像数据的图像识别处理过程中,将持续获取到的第一客户端所发送的第二图像数据,存储至接收队列中,得到更新接收队列;
区域发送模块14,用于当通过图像识别处理提取出对象在第一图像数据中所处的第一对象区域时,将第一对象区域所包含的第一对象图像数据发送至目标云应用服务器;目标云应用服务器用于对第一对象图像数据进行渲染得到渲染数据,并将渲染数据发送至第一客户端;
区域发送模块14,还用于同步对更新接收队列中具有最晚接收时间戳的第二图像数据进行图像识别处理。
其中,数据获取模块11、图像识别模块12、队列更新模块13以及区域发送模块14的具体实现方式,可以参见上述图3所对应实施例中S101-S103的描述,这里将不再进行赘述。
在一个实施例中,数据获取模块11可以包括:图像接收单元111与存储单元112。
图像接收单元111,用于接收第一客户端发送的第一图像数据;第一图像数据是由第一客户端对原始图像帧进行编码处理后得到的数据;原始图像帧是第一客户端在运行云应用时所采集得到;
存储单元112,用于获取接收到第一图像数据的接收时间戳,将第一图像数据与接收时间戳关联存储至接收队列中。
其中,图像接收单元111与存储单元112的具体实现方式,可以参见上述图3所对应实施例中S101的描述,这里将不再进行赘述。
在一个实施例中,图像识别模块12可以包括:数据解码单元121、格式转换单元122以及图像识别单元123。
数据解码单元121,用于对第一图像数据进行解码处理,得到具有初始图像格式的解码图像数据;
格式转换单元122,用于对解码图像数据进行格式转换,得到具有标准图像格式的原始图像帧;
图像识别单元123,用于对具有标准图像格式的原始图像帧进行图像识别处理。
其中,数据解码单元121、格式转换单元122以及图像识别单元123的具体实现方式,可以参见上述图3所对应实施例中S101的描述,这里将不再进行赘述。
在一个实施例中,图像识别单元123可以包括:关键点识别子单元1231、曲线连接子单元1232以及区域确定子单元1233。
关键点识别子单元1231,用于识别对象在原始图像帧中的对象边缘关键点;
曲线连接子单元1232,用于将对象边缘关键点进行连接,得到对象的对象边缘曲线;
区域确定子单元1233,用于将原始图像帧中对象边缘曲线所覆盖的区域,确定对象在原始图像帧中所处的初始对象区域;并根据初始对象区域确定第一对象区域。
其中,关键点识别子单元1231、曲线连接子单元1232以及区域确定子单元1233的具体实现方式,可以参见上述图3所对应实施例中S102的描述,这里将不再进行赘述。
在一个实施例中,图像识别单元123还用于获取针对对象的对象识别配置信息,以及对象识别配置信息所指示的对象识别配置部位,将对象识别配置部位与对象关键部位进行匹配;若对象识别配置部位与对象关键部位相匹配,则执行根据初始对象区域确定第一对象区域的步骤;若对象识别配置部位与对象关键部位不匹配,则确定通过图像识别处理未能提取出第一对象区域。
在一个实施例中,图像识别单元123还用于将更新接收队列中具有最早接收时间戳的图像数据确定为待识别图像数据;对待识别图像数据进行图像识别处理,当通过图像识别处理提取出对象在待识别图像数据中所处的待处理对象区域时,区域发送模块14还用于将待处理对象区域发送至目标云应用服务器。
在一个实施例中,图像识别单元123具体用于:
获取初始对象区域所呈现的对象的对象关键部位;
若对象关键部位具备部位完整性,则将初始对象区域确定为第一对象区域;
若对象关键部位不具备部位完整性,则获取样本数据库中的N个样本图像帧,从N个样本图像帧中获取与对象对应的待处理样本图像帧,根据待处理样本图像帧与初始对象区域确定第一对象区域;N为正整数。
在一个实施例中,图像识别单元123具体用于:
获取待处理样本图像帧中的整体部位信息;
根据对象关键部位,在整体部位信息中确定待融合部位区域;
将待融合部位区域与初始对象区域进行融合,得到第一对象区域。
在一个实施例中,第一图像数据携带待确认设备标识;待确认设备标识是绑定云应用服务器的设备标识,绑定云应用服务器与第一客户端具有绑定关系;
区域发送模块14包括:集合获取单元141与标识匹配单元142。
集合获取单元141,用于获取已存设备标识集合;已存设备标识集合包含M个已存设备标识,一个已存设备标识对应一个已注册云应用服务器,M为正整数;
标识匹配单元142,用于若M个已存设备标识中存在与待确认设备标识相匹配的已存设备标识,则确定待确认设备标识所指示的绑定云应用服务器属于已注册云应用服务器,将待确认设备标识所指示的绑定云应用服务器确定为目标云应用服务器,将第一对象图像数据发送至目标云应用服务器。
其中,集合获取单元141与标识匹配单元142的具体实现方式,可以参见上述图3所对应实施例中S103的描述,这里将不再进行赘述。
在一个实施例中,该数据处理装置1还可以包括:注册请求接收模块15、指标检测模块16以及标识添加模块17。
注册请求接收模块15,用于接收待注册云应用服务器发送的注册请求;注册请求是待注册云应用服务器在接收到第二客户端发送的应用开启通知后所生成的;应用开启通知是第二客户端响应针对云应用的应用开启操作所生成的;
指标检测模块16,用于根据注册请求,检测待注册云应用服务器的设备指标信息;
标识添加模块17,用于当设备指标信息满足处理质量条件时,获取待注册云应用服务器的待存储设备标识,将待存储设备标识存储至已存设备标识集合,将待注册云应用服务器转换为已注册云应用服务器,将待存储设备标识转换为已存设备标识。
其中,注册请求接收模块15、指标检测模块16以及标识添加模块17的具体实现方式,可以参见上述图3所对应实施例中S103的描述,这里将不再进行赘述。
在一个实施例中,设备指标信息包括网络质量参数与设备版本;
指标检测模块16可以包括:参数获取单元161与指标确定单元162。
参数获取单元161,用于根据注册请求,获取待注册云应用服务器的网络质量参数与设备版本;
指标确定单元162,用于若网络质量参数达到参数阈值,且设备版本与质量标准版本相匹配,则确定设备指标信息满足处理质量条件;
指标确定单元162,还用于若网络质量参数未达到参数阈值,或设备版本与质量标准版本不匹配,则确定设备指标信息不满足处理质量条件。
其中,参数获取单元161与指标确定单元162的具体实现方式,可以参见上述图3所对应实施例中S103的描述,这里将不再进行赘述。
在本申请实施例中,客户端(如第一客户端)在获取到包含对象的第一图像数据时,可以将该第一图像数据发送至相关计算机设备(如业务服务器),由该业务服务器进行图像识别处理,无需在客户端本地进行图像识别,可以使得第一图像数据由具备较高计算能力的业务服务器来进行图像识别处理,可以提高图像识别效率与清晰度;同时,在本申请中,业务服务器可以将接收到的第一图像数据存储至接收队列中,在对第一图像数据进行图像识别处理的过程中,可以持续从第一客户端同步获取到第二图像数据,并将该第二图像数据存储至接收队列中,得到更新接收队列。也就是说,本申请中的业务服务器在对第一图像数据进行图像识别处理时,并不会暂停第二图像数据的接收,通过接收队列可以实现图像处理与图像接收的同步进行,由此可以减少图像传输时延。进一步地,业务服务器在通过图像识别处理提取出对象在第一图像数据中所处的第一对象区域时,业务服务器可以将该第一对象区域所包含的第一对象图像数据发送至目标云应用服务器,由目标云应用服务器进行渲染并将渲染得到的渲染数据发送至第一客户端,由此可以在云应用中进行显示。同时,在提取出第一对象区域后,业务服务器可以获取到接收队列中具有最晚接收时间戳的第二图像数据,并继续对该第二图像数据进行处理。可以看出,本申请在对某个图像数据进行图像识别处理后,接下来是从接收队列中获取到具有最晚接收时间戳的图像数据进行处理,并非是按照接收时间戳的时间顺序对图像数据进行一一识别,可以提高对图像数据的识别效率,同时由于具有最晚接收时间戳的图像数据是根据对象当前的行为所采集得到,那么对具有最晚接收时间戳的图像数据进行图像识别并显示时,也是与对象当前的行为相匹配的。综上,本申请可以提高图像识别效率,减少图像传输时延,保证云应用所显示的虚拟对象的虚拟行为与对象当前的行为状态匹配。
请参见图14,图14是本申请实施例提供的另一种数据处理装置的结构示意图。该数据处理装置可以是运行于计算机设备中的一个计算机程序(包括程序代码),例如该数据 处理装置为一个应用软件;该数据处理装置可以用于执行图8所示的方法。如图14所示,该数据处理装置2可以包括:区域存储模块21、区域渲染模块22、区域接收模块23以及状态调整模块24。
区域存储模块21,用于接收业务服务器发送的第一对象图像数据,将第一对象图像数据存储至缓冲区集合中的工作状态处于存储状态的第一缓冲区中;第一对象图像数据为第一对象区域所包含的图像数据,第一对象区域为业务服务器对第一图像数据进行图像识别处理后,得到的对象在第一图像数据中所处的区域;第一图像数据是由第一客户端发送至业务服务器的,第一图像数据是第一客户端在运行云应用时,所获取到的包含对象的图像数据;
区域渲染模块22,用于当缓冲区集合中工作状态处于读取状态的第二缓冲区未包含未处理对象图像数据时,将第一缓冲区的工作状态调整为读取状态,将第二缓冲区的工作状态调整为存储状态,从工作状态处于读取状态的第一缓冲区中读取第一对象区域,将第一对象区域进行渲染处理;
区域接收模块23,用于在第一对象区域的渲染过程中,接收业务服务器发送的第二对象图像数据,将第二对象图像数据存储至工作状态处于存储状态的第二缓冲区中;第二对象图像数据是第二对象区域所包含的图像数据,第二对象区域为业务服务器在提取出第一对象区域后对第二图像数据进行图像识别处理所得到,第二对象区域为对象在第二图像数据中所处的区域;第二图像数据为业务服务器在提取出第一对象区域时从接收队列中所获取到的具有最晚接收时间戳的图像数据;接收队列中的第二图像数据是业务服务器对第一图像数据进行图像识别处理的过程中,从第一客户端所持续获取得到;
状态调整模块24,用于在获取到第一对象图像数据对应的第一渲染数据时,将第一缓冲区的工作状态调整为存储状态,将第二缓冲区的工作状态调整为读取状态,从工作状态处于读取状态的第二缓冲区中读取第二对象图像数据,将第二对象图像数据进行渲染处理。
其中,区域存储模块21、区域渲染模块22、区域接收模块23以及状态调整模块24的具体实现方式,可以参见上述图8所对应实施例中S801-S804的描述,这里将不再进行赘述。
在一个实施例中,状态调整模块24可以包括:标识获取单元241与标识切换单元242。
标识获取单元241,用于在获取到第一对象区域对应的第一渲染数据时,获取第一缓冲区所对应的用于表征读取状态的读取指针标识,以及第二缓冲区所对应的用于表征存储状态的存储指针标识;
标识切换单元242,用于将第一缓冲区所对应的读取指针标识切换为存储指针标识;具有存储指针标识的第一缓冲区的工作状态为存储状态;
标识切换单元242,还用于将第二缓冲区的存储指针标识切换为读取指针标识;具有读取指针标识的第二缓冲区的工作状态为读取状态。
其中,标识获取单元241与标识切换单元242的具体实现方式,可以参见上述图8所对应实施例中S804的描述,这里将不再进行赘述。
在一个实施例中,数据处理装置2还可以包括:缓冲区分配模块25、标识设置模块26以及集合生成模块27。
缓冲区分配模块25,用于分配第一缓冲区与第二缓冲区;
标识设置模块26,用于将第一缓冲区的初始指针标识设置为存储指针标识,将第二缓冲区的初始指针标识设置为读取指针标识;具有存储指针标识的第一缓冲区的工作状态为存储状态;具有读取指针标识的第二缓冲区的工作状态为读取状态;
集合生成模块27,用于根据工作状态处于存储状态的第一缓冲区,与工作状态处于读取状态的第二缓冲区,生成缓冲区集合。
其中,缓冲区分配模块25、标识设置模块26以及集合生成模块27的具体实现方式,可以参见上述图8所对应实施例中S801中的描述。
在本申请实施例中,通过业务服务器的跳帧处理,可以减少业务服务器一侧的图像接收时延,提高图像识别效率;通过云游戏服务器的双缓冲区分配处理,无需进行数据拷贝,只需要将两个缓冲区的工作状态进行切换即可,也不需要每次均进行缓冲区分配,同时,接收数据与处理数据(如读取并渲染数据)可以同时进行,不需要相互等待,可以减少时延。也就是说,在业务服务器侧减少时延的基础上,通过双缓冲区的设置,可以进一步地减少时延。
请参见图15,图15是本申请实施例提供的一种计算机设备的结构示意图。如图15所示,上述图13所对应实施例中的装置1或图14所对应实施例中的装置2可以应用于上述计算机设备8000,上述计算机设备8000可以包括:处理器8001,网络接口8004和存储器8005,此外,上述计算机设备8000还包括:用户接口8003,和至少一个通信总线8002。其中,通信总线8002用于实现这些组件之间的连接通信。其中,用户接口8003可以包括显示屏(Display)、键盘(Keyboard),可选用户接口8003还可以包括标准的有线接口、无线接口。网络接口8004可选的可以包括标准的有线接口、无线接口(如WI-FI接口)。存储器8005可以是高速RAM存储器,也可以是非不稳定的存储器(non-volatile memory),例如至少一个磁盘存储器。存储器8005可选的还可以是至少一个位于远离前述处理器8001的存储装置。如图15所示,作为一种计算机可读存储介质的存储器8005中可以包括操作系统、网络通信模块、用户接口模块以及设备控制应用程序。
在图15所示的计算机设备8000中,网络接口8004可提供网络通讯功能;而用户接口8003主要用于为用户提供输入的接口;而处理器8001可以用于调用存储器8005中存储的设备控制应用程序,以实现前述实施例提供的数据处理方法。
应当理解,本申请实施例中所描述的计算机设备8000可执行前文图3到图8所对应实施例中对该数据处理方法的描述,也可执行前文图13所对应实施例中对该数据处理装置1,或图14所对应实施例中对该数据处理装置2的描述,在此不再赘述。另外,对采用相同方法的有益效果描述,也不再进行赘述。
此外,这里需要指出的是:本申请实施例还提供了一种计算机可读存储介质,且上述计算机可读存储介质中存储有前文提及的数据处理的计算机设备8000所执行的计算机程序,且上述计算机程序包括程序指令,当上述处理器执行上述程序指令时,能够执行前文 图3到图8所对应实施例中对上述数据处理方法的描述,因此,这里将不再进行赘述。另外,对采用相同方法的有益效果描述,也不再进行赘述。对于本申请所涉及的计算机可读存储介质实施例中未披露的技术细节,请参照本申请方法实施例的描述。
上述计算机可读存储介质可以是前述任一实施例提供的数据处理装置或者上述计算机设备的内部存储单元,例如计算机设备的硬盘或内存。该计算机可读存储介质也可以是该计算机设备的外部存储设备,例如该计算机设备上配备的插接式硬盘,智能存储卡(smart media card,SMC),安全数字(secure digital,SD)卡,闪存卡(flash card)等。进一步地,该计算机可读存储介质还可以既包括该计算机设备的内部存储单元也包括外部存储设备。该计算机可读存储介质用于存储该计算机程序以及该计算机设备所需的其他程序和数据。该计算机可读存储介质还可以用于暂时地存储已经输出或者将要输出的数据。
本申请的一个方面,提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行本申请实施例中一方面提供的方法。
本申请实施例的说明书和权利要求书及附图中的术语“第一”、“第二”等是用于区别不同对象,而非用于描述特定顺序。此外,术语“包括”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、装置、产品或设备没有限定于已列出的步骤或模块,而是可选地还包括没有列出的步骤或模块,或可选地还包括对于这些过程、方法、装置、产品或设备固有的其他步骤单元。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、计算机软件或者二者的结合来实现,为了清楚地说明硬件和软件的可互换性,在上述说明中已经按照功能一般性地描述了各示例的组成及步骤。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本申请实施例提供的方法及相关装置是参照本申请实施例提供的方法流程图和/或结构示意图来描述的,具体可由计算机程序指令实现方法流程图和/或结构示意图的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。这些计算机程序指令可提供到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或结构示意图一个方框或多个方框中指定的功能的装置。这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或结构示意图一个方框或多个方框中指定的功能。这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计 算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或结构示意一个方框或多个方框中指定的功能的步骤。
以上所揭露的仅为本申请较佳实施例而已,当然不能以此来限定本申请之权利范围,因此依本申请权利要求所作的等同变化,仍属本申请所涵盖的范围。

Claims (20)

  1. 一种数据处理方法,所述方法由计算机设备执行,所述方法包括:
    获取第一客户端发送的第一图像数据,将所述第一图像数据存储至接收队列中;所述第一图像数据是所述第一客户端在运行云应用时,所获取到的包含对象的图像数据;
    对所述接收队列中的所述第一图像数据进行图像识别处理,在所述第一图像数据的图像识别处理过程中,将持续获取到的所述第一客户端所发送的第二图像数据,存储至所述接收队列中,得到更新接收队列;
    当通过图像识别处理提取出所述对象在所述第一图像数据中所处的第一对象区域时,将所述第一对象区域所包含的第一对象图像数据发送至目标云应用服务器,同步对所述更新接收队列中具有最晚接收时间戳的第二图像数据进行图像识别处理;所述目标云应用服务器用于对所述第一对象图像数据进行渲染得到渲染数据,并将所述渲染数据发送至所述第一客户端。
  2. 根据权利要求1所述的方法,所述获取第一客户端发送的第一图像数据,将所述第一图像数据存储至接收队列中,包括:
    接收所述第一客户端发送的所述第一图像数据;所述第一图像数据是由所述第一客户端对原始图像帧进行编码处理后得到的数据;所述原始图像帧是所述第一客户端在运行所述云应用时所采集得到;
    获取接收到所述第一图像数据的接收时间戳,将所述第一图像数据与所述接收时间戳关联存储至所述接收队列中。
  3. 根据权利要求2所述的方法,所述对所述接收队列中的所述第一图像数据进行图像识别处理,包括:
    对所述第一图像数据进行解码处理,得到具有初始图像格式的解码图像数据;
    对所述解码图像数据进行格式转换,得到具有标准图像格式的所述原始图像帧;
    对具有所述标准图像格式的所述原始图像帧进行图像识别处理。
  4. 根据权利要求3所述的方法,所述对具有所述标准图像格式的所述原始图像帧进行图像识别处理,包括:
    识别所述对象在所述原始图像帧中的对象边缘关键点;
    将所述对象边缘关键点进行连接,得到所述对象的对象边缘曲线;
    将所述原始图像帧中所述对象边缘曲线所覆盖的区域,确定所述对象在所述原始图像帧中所处的初始对象区域;
    根据所述初始对象区域确定所述第一对象区域。
  5. 根据权利要求4所述的方法,所述根据所述初始对象区域确定所述第一对象区域之前,所述方法还包括:
    获取所述初始对象区域所呈现的所述对象的对象关键部位;
    获取针对所述对象的对象识别配置信息,以及所述对象识别配置信息所指示的对象识别配置部位,将所述对象识别配置部位与所述对象关键部位进行匹配;
    若所述对象识别配置部位与所述对象关键部位相匹配,则执行所述根据所述初始对象区域确定所述第一对象区域的步骤;
    若所述对象识别配置部位与所述对象关键部位不匹配,则确定通过图像识别处理未能提取出所述第一对象区域。
  6. 根据权利要求5所述的方法,在所述若所述对象识别配置部位与所述对象关键部位不匹配,则确定通过图像识别处理未能提取出所述第一对象区域之后,所述方法还包括:
    将所述更新接收队列中具有最早接收时间戳的图像数据确定为待识别图像数据;
    对所述待识别图像数据进行图像识别处理,当通过图像识别处理提取出所述对象在所述待识别图像数据中所处的待处理对象区域时,将所述待处理对象区域发送至所述目标云应用服务器。
  7. 根据权利要求4所述的方法,所述根据所述初始对象区域确定所述第一对象区域,包括:
    获取所述初始对象区域所呈现的所述对象的对象关键部位;
    若所述对象关键部位具备部位完整性,则将所述初始对象区域确定为所述第一对象区域;
    若所述对象关键部位不具备部位完整性,则获取样本数据库中的N个样本图像帧,从所述N个样本图像帧中获取与所述对象对应的待处理样本图像帧,根据所述待处理样本图像帧与所述初始对象区域确定所述第一对象区域;N为正整数。
  8. 根据权利要求7所述的方法,所述根据所述待处理样本图像帧与所述初始对象区域确定所述第一对象区域,包括:
    获取所述待处理样本图像帧中的整体部位信息;
    根据所述对象关键部位,在所述整体部位信息中确定待融合部位区域;
    将所述待融合部位区域与所述初始对象区域进行融合,得到所述第一对象区域。
  9. 根据权利要求1所述的方法,所述第一图像数据携带待确认设备标识;所述待确认设备标识是绑定云应用服务器的设备标识,所述绑定云应用服务器与所述第一客户端具有绑定关系;
    所述将所述第一对象区域所包含的第一对象图像数据发送至目标云应用服务器,包括:
    获取已存设备标识集合;所述已存设备标识集合包含M个已存设备标识,一个已存设备标识对应一个已注册云应用服务器,M为正整数;
    若所述M个已存设备标识中存在与所述待确认设备标识相匹配的已存设备标识,则确定所述待确认设备标识所指示的所述绑定云应用服务器属于已注册云应用服务器,将所述待确认设备标识所指示的所述绑定云应用服务器确定为所述目标云应用服务器,将所述第一对象区域所包含的第一对象图像数据发送至所述目标云应用服务器。
  10. 根据权利要求9所述的方法,所述方法还包括:
    接收待注册云应用服务器发送的注册请求;所述注册请求是所述待注册云应用服务器在接收到第二客户端发送的应用开启通知后所生成的;所述应用开启通知是所述第二客户端响应针对所述云应用的应用开启操作所生成的;
    根据所述注册请求,检测所述待注册云应用服务器的设备指标信息;
    当所述设备指标信息满足处理质量条件时,获取所述待注册云应用服务器的待存储设备标识,将所述待存储设备标识存储至所述已存设备标识集合,将所述待注册云应用服务器转换为已注册云应用服务器,将所述待存储设备标识转换为已存设备标识。
  11. 根据权利要求10所述的方法,所述设备指标信息包括网络质量参数与设备版本;
    所述根据所述注册请求,检测所述待注册云应用服务器的设备指标信息,包括:
    根据所述注册请求,获取所述待注册云应用服务器的网络质量参数与设备版本;
    若所述网络质量参数达到参数阈值,且所述设备版本与质量标准版本相匹配,则确定所述设备指标信息满足所述处理质量条件;
    若所述网络质量参数未达到所述参数阈值,或所述设备版本与所述质量标准版本不匹配,则确定所述设备指标信息不满足所述处理质量条件。
  12. 根据权利要求1所述的方法,所述对所述更新接收队列中具有最晚接收时间戳的第二图像数据进行图像识别处理,包括:
    获取所述更新接收队列中具有最晚接收时间戳的第二图像数据;
    对所述第二图像数据进行图像识别处理,同步删除所述更新接收队列中的历史图像数据;所述历史图像数据为所述更新接收队列中接收时间戳早于所述第二图像数据的图像数据。
  13. 一种数据处理方法,所述方法由计算机设备执行,所述方法包括:
    接收业务服务器发送的第一对象图像数据,将所述第一对象图像数据存储至缓冲区集合中的工作状态处于存储状态的第一缓冲区中;所述第一对象图像数据为第一对象区域所包含的图像数据,所述第一对象区域为所述业务服务器对第一图像数据进行图像识别处理后,得到的对象在所述第一图像数据中所处的区域;所述第一图像数据是由第一客户端发送至所述业务服务器的,所述第一图像数据是所述第一客户端在运行云应用时,所获取到的包含所述对象的图像数据;
    当所述缓冲区集合中工作状态处于读取状态的第二缓冲区未包含未处理对象图像数据时,将所述第一缓冲区的工作状态调整为所述读取状态,将所述第二缓冲区的工作状态调整为所述存储状态,从工作状态处于所述读取状态的所述第一缓冲区中读取所述第一对象图像数据,将所述第一对象图像数据进行渲染处理;
    在所述第一对象图像数据的渲染过程中,接收所述业务服务器发送的第二对象图像数据,将所述第二对象图像数据存储至工作状态处于所述存储状态的所述第二缓冲区中;所述第二对象图像数据是第二对象区域所包含的图像数据,所述第二对象区域为所述业务服务器在提取出所述第一对象区域后对第二图像数据进行图像识别处理所得到,所述第二对象区域为所述对象在所述第二图像数据中所处的区域;所述第二图像数据为所述业务服务器在提取出所述第一对象区域时,从更新接收队列中所获取到的具有最晚接收时间戳的图像数据;所述更新接收队列中的第二图像数据是所述业务服务器对所述第一图像数据进行图像识别处理的过程中,从所述第一客户端所持续获取得到;
    在获取到所述第一对象图像数据对应的渲染数据时,将所述第一缓冲区的工作状态调整为所述存储状态,将所述第二缓冲区的工作状态调整为所述读取状态,从工作状态处于所述读取状态的所述第二缓冲区中读取所述第二对象图像数据,将所述第二对象图像数据进行渲染处理。
  14. 根据权利要求13所述的方法,所述在获取到所述第一对象图像数据对应的渲染数据时,将所述第一缓冲区的工作状态调整为所述存储状态,将所述第二缓冲区的工作状态调整为所述读取状态,包括:
    在获取到所述第一对象图像数据对应的渲染数据时,获取所述第一缓冲区所对应的用于表征所述读取状态的读取指针标识,以及所述第二缓冲区所对应的用于表征所述存储状态的存储指针标识;
    将所述第一缓冲区所对应的读取指针标识切换为所述存储指针标识;具有所述存储指针标识的所述第一缓冲区的工作状态为所述存储状态;
    将所述第二缓冲区的所述存储指针标识切换为所述读取指针标识;具有所述读取指针标识的所述第二缓冲区的工作状态为所述读取状态。
  15. 根据权利要求13所述的方法,所述方法还包括:
    分配第一缓冲区与第二缓冲区;
    将所述第一缓冲区的初始指针标识设置为存储指针标识,将所述第二缓冲区的初始指针标识设置为读取指针标识;具有所述存储指针标识的所述第一缓冲区的工作状态为所述存储状态;具有所述读取指针标识的所述第二缓冲区的工作状态为所述读取状态;
    根据工作状态处于所述存储状态的所述第一缓冲区,与工作状态处于所述读取状态的第二缓冲区,生成所述缓冲区集合。
  16. 一种数据处理装置,所述装置部署在计算机设备上,所述装置包括:
    数据获取模块,用于获取第一客户端发送的第一图像数据,将所述第一图像数据存储至接收队列中;所述第一图像数据是所述第一客户端在运行云应用时,所获取到的包含对象的图像数据;
    图像识别模块,用于对所述接收队列中的所述第一图像数据进行图像识别处理;
    队列更新模块,用于在所述第一图像数据的图像识别处理过程中,将持续获取到的所述第一客户端所发送的第二图像数据,存储至所述接收队列中,得到更新接收队列;
    区域发送模块,用于当通过图像识别处理提取出所述对象在所述第一图像数据中所处的第一对象区域时,将所述第一对象区域所包含的第一对象图像数据发送至目标云应用服务器;
    所述区域发送模块,还用于同步对所述更新接收队列中具有最晚接收时间戳的第二图像数据进行图像识别处理;所述目标云应用服务器用于对所述第一对象图像数据进行渲染得到渲染数据,并将所述渲染数据发送至所述第一客户端。
  17. 一种数据处理装置,所述装置部署在计算机设备上,所述装置包括:
    区域存储模块,用于接收业务服务器发送的第一对象图像数据,将第一对象图像数据存储至缓冲区集合中的工作状态处于存储状态的第一缓冲区中;第一对象图像数据为第一 对象区域所包含的图像数据,第一对象区域为业务服务器对第一图像数据进行图像识别处理后,得到的对象在第一图像数据中所处的区域;第一图像数据是由第一客户端发送至业务服务器的,第一图像数据是第一客户端在运行云应用时,所获取到的包含对象的图像数据;
    区域渲染模块,用于当缓冲区集合中工作状态处于读取状态的第二缓冲区未包含未处理对象图像数据时,将第一缓冲区的工作状态调整为读取状态,将第二缓冲区的工作状态调整为存储状态,从工作状态处于读取状态的第一缓冲区中读取第一对象区域,将第一对象区域进行渲染处理;
    区域接收模块,用于在第一对象区域的渲染过程中,接收业务服务器发送的第二对象图像数据,将第二对象图像数据存储至工作状态处于存储状态的第二缓冲区中;第二对象图像数据是第二对象区域所包含的图像数据,第二对象区域为业务服务器在提取出第一对象区域后对第二图像数据进行图像识别处理所得到,第二对象区域为对象在第二图像数据中所处的区域;第二图像数据为业务服务器在提取出第一对象区域时,从更新接收队列中所获取到的具有最晚接收时间戳的图像数据;更新接收队列中的第二图像数据是业务服务器对第一图像数据进行图像识别处理的过程中,从第一客户端所持续获取得到;
    状态调整模块,用于在获取到第一对象图像数据对应的渲染数据时,将第一缓冲区的工作状态调整为存储状态,将第二缓冲区的工作状态调整为读取状态,从工作状态处于读取状态的第二缓冲区中读取第二对象图像数据,将第二对象图像数据进行渲染处理。
  18. 一种计算机设备,包括:处理器、存储器以及网络接口;
    所述处理器与所述存储器、所述网络接口相连,其中,所述网络接口用于提供网络通信功能,所述存储器用于存储程序代码,所述处理器用于调用所述程序代码,以使所述计算机设备执行权利要求1-15任一项所述的方法。
  19. 一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机程序,所述计算机程序适于由处理器加载并执行权利要求1-15任一项所述的方法。
  20. 一种计算机程序产品或计算机程序,所述计算机程序产品或计算机程序包括计算机指令,所述计算机指令存储在计算机可读存储介质中,所述计算机指令适于由处理器读取并执行,以使得具有所述处理器的计算机设备执行权利要求1-15任一项所述的方法。
PCT/CN2022/112398 2021-09-24 2022-08-15 一种数据处理方法、装置、设备以及可读存储介质 WO2023045619A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP22871677.5A EP4282499A1 (en) 2021-09-24 2022-08-15 Data processing method and apparatus, and device and readable storage medium
JP2023555773A JP2024518227A (ja) 2021-09-24 2022-08-15 データ処理方法、装置、機器及びコンピュータプログラム
US18/196,364 US20230281861A1 (en) 2021-09-24 2023-05-11 Data processing method and apparatus, device, and readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111123508.7 2021-09-24
CN202111123508.7A CN113559497B (zh) 2021-09-24 2021-09-24 一种数据处理方法、装置、设备以及可读存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/196,364 Continuation US20230281861A1 (en) 2021-09-24 2023-05-11 Data processing method and apparatus, device, and readable storage medium

Publications (1)

Publication Number Publication Date
WO2023045619A1 true WO2023045619A1 (zh) 2023-03-30

Family

ID=78174395

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/112398 WO2023045619A1 (zh) 2021-09-24 2022-08-15 一种数据处理方法、装置、设备以及可读存储介质

Country Status (5)

Country Link
US (1) US20230281861A1 (zh)
EP (1) EP4282499A1 (zh)
JP (1) JP2024518227A (zh)
CN (1) CN113559497B (zh)
WO (1) WO2023045619A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113559497B (zh) * 2021-09-24 2021-12-21 腾讯科技(深圳)有限公司 一种数据处理方法、装置、设备以及可读存储介质
CN115022204B (zh) * 2022-05-26 2023-12-05 阿里巴巴(中国)有限公司 Rtc的传输时延检测方法、装置以及设备
CN115460189B (zh) * 2022-11-09 2023-04-11 腾讯科技(深圳)有限公司 处理设备测试方法、装置、计算机及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130147819A1 (en) * 2011-06-09 2013-06-13 Ciinow, Inc. Method and mechanism for performing both server-side and client-side rendering of visual data
CN108810554A (zh) * 2018-06-15 2018-11-13 腾讯科技(深圳)有限公司 虚拟场景的场景图像传输方法、计算机设备及存储介质
EP3634005A1 (en) * 2018-10-05 2020-04-08 Nokia Technologies Oy Client device and method for receiving and rendering video content and server device and method for streaming video content
CN111729293A (zh) * 2020-08-28 2020-10-02 腾讯科技(深圳)有限公司 一种数据处理方法、装置及存储介质
CN113559497A (zh) * 2021-09-24 2021-10-29 腾讯科技(深圳)有限公司 一种数据处理方法、装置、设备以及可读存储介质

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7274368B1 (en) * 2000-07-31 2007-09-25 Silicon Graphics, Inc. System method and computer program product for remote graphics processing
JP2009171994A (ja) * 2008-01-21 2009-08-06 Sammy Corp 画像生成装置、遊技機、及びプログラム
CN103294439B (zh) * 2013-06-28 2016-03-02 华为技术有限公司 一种图像更新方法、系统及装置
KR102407691B1 (ko) * 2018-03-22 2022-06-10 구글 엘엘씨 온라인 인터랙티브 게임 세션들에 대한 콘텐츠를 렌더링 및 인코딩하기 위한 방법들 및 시스템들
CN111767503B (zh) * 2020-07-29 2024-05-28 腾讯科技(深圳)有限公司 一种游戏数据处理方法、装置、计算机及可读存储介质
CN112233419B (zh) * 2020-10-10 2023-08-25 腾讯科技(深圳)有限公司 一种数据处理方法、装置、设备及存储介质
CN112316424B (zh) * 2021-01-06 2021-03-26 腾讯科技(深圳)有限公司 一种游戏数据处理方法、装置及存储介质
CN112689142A (zh) * 2021-01-19 2021-04-20 青岛美购传媒有限公司 一种便于虚拟现实对象控制的低延迟控制方法
CN112569591B (zh) * 2021-03-01 2021-05-18 腾讯科技(深圳)有限公司 一种数据处理方法、装置、设备及可读存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130147819A1 (en) * 2011-06-09 2013-06-13 Ciinow, Inc. Method and mechanism for performing both server-side and client-side rendering of visual data
CN108810554A (zh) * 2018-06-15 2018-11-13 腾讯科技(深圳)有限公司 虚拟场景的场景图像传输方法、计算机设备及存储介质
EP3634005A1 (en) * 2018-10-05 2020-04-08 Nokia Technologies Oy Client device and method for receiving and rendering video content and server device and method for streaming video content
CN111729293A (zh) * 2020-08-28 2020-10-02 腾讯科技(深圳)有限公司 一种数据处理方法、装置及存储介质
CN113559497A (zh) * 2021-09-24 2021-10-29 腾讯科技(深圳)有限公司 一种数据处理方法、装置、设备以及可读存储介质

Also Published As

Publication number Publication date
CN113559497A (zh) 2021-10-29
EP4282499A1 (en) 2023-11-29
JP2024518227A (ja) 2024-05-01
US20230281861A1 (en) 2023-09-07
CN113559497B (zh) 2021-12-21

Similar Documents

Publication Publication Date Title
WO2023045619A1 (zh) 一种数据处理方法、装置、设备以及可读存储介质
CN113423018B (zh) 一种游戏数据处理方法、装置及存储介质
US10419618B2 (en) Information processing apparatus having whiteboard and video conferencing functions
EP2940940B1 (en) Methods for sending and receiving video short message, apparatus and handheld electronic device thereof
WO2017084174A1 (zh) 一种图像同步显示方法及装置
CN109085950B (zh) 基于电子白板的多屏互动方法、装置及电子白板
CN108737884B (zh) 一种内容录制方法及其设备、存储介质、电子设备
US20160014193A1 (en) Computer system, distribution control system, distribution control method, and computer-readable storage medium
CN113225585A (zh) 一种视频清晰度的切换方法、装置、电子设备以及存储介质
CN114938408B (zh) 一种云手机的数据传输方法、系统、设备及介质
CN104639501B (zh) 一种数据流传输的方法、设备及系统
WO2024159932A1 (zh) 设备配对方法、装置、计算机设备及计算机可读存储介质
JP2016143236A (ja) 配信制御装置、配信制御方法、及びプログラム
CN114598931A (zh) 一种多开云游戏的串流方法、系统、装置及介质
CN114139491A (zh) 一种数据处理方法、装置及存储介质
CN111880756B (zh) 在线课堂投屏方法、装置、电子设备及存储介质
WO2023024832A1 (zh) 数据处理方法、装置、计算机设备和存储介质
WO2023279919A1 (zh) 游戏更新方法、系统、服务器、电子设备、程序产品及存储介质
JP2020109896A (ja) 動画配信システム
CN110798700B (zh) 视频处理方法、视频处理装置、存储介质与电子设备
CN112702625B (zh) 视频处理方法、装置、电子设备及存储介质
CN113784094A (zh) 视频数据处理方法、网关、终端设备及存储介质
JP2020109895A (ja) 動画配信システム
CN111800455A (zh) 一种基于局域网内不同主机数据源共享卷积神经网络的方法
WO2024139724A1 (zh) 图像处理方法、装置、计算机设备、计算机可读存储介质及计算机程序产品

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22871677; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 202347055056; Country of ref document: IN)
WWE Wipo information: entry into national phase (Ref document number: 2022871677; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 2022871677; Country of ref document: EP; Effective date: 20230824)
WWE Wipo information: entry into national phase (Ref document number: 2023555773; Country of ref document: JP)
WWE Wipo information: entry into national phase (Ref document number: 11202306198U; Country of ref document: SG)
NENP Non-entry into the national phase (Ref country code: DE)