WO2023045619A1 - Data processing method, apparatus, device, and readable storage medium - Google Patents
Data processing method, apparatus, device, and readable storage medium
- Publication number
- WO2023045619A1 (PCT/CN2022/112398)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image data
- image
- buffer
- data
- area
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/30—Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
- A63F13/31—Communication aspects specific to video games, e.g. between several handheld game devices at close range
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/213—Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/30—Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
- A63F13/35—Details of game servers
- A63F13/355—Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an encoded video stream for transmitting to a mobile phone or a thin client
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/60—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
- A63F13/65—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
- A63F13/655—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/457—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by analysing connectivity, e.g. edge linking, connected component analysis or slices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/94—Hardware or software architectures specially adapted for image or video understanding
- G06V10/95—Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/50—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers
- A63F2300/53—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing
- A63F2300/534—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by details of game servers details of basic data processing for network load management, e.g. bandwidth optimization, latency reduction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Definitions
- the present application relates to the field of computer technology, and in particular to a data processing method, device, equipment and readable storage medium.
- In a cloud game, a corresponding virtual object (for example, a virtual animation object) can be displayed for the user.
- In the related art, the terminal collects the user's picture through its camera, and the user's portrait is recognized and extracted directly on the terminal so as to obtain and display the corresponding virtual object.
- Since the computing power of the terminal is not high, insufficient computing power is likely to make image recognition inefficient, which in turn introduces a large delay when the terminal sends the portrait recognition result to the cloud. As a result, the game displays the virtual object with a time lag, so that the virtual behavior of the displayed virtual object does not match the user's current behavior state.
- Embodiments of the present application provide a data processing method, apparatus, device, and readable storage medium, which can reduce image transmission delay and improve image recognition efficiency.
- In one aspect, an embodiment of the present application provides a data processing method, executed by a computer device, the method including:
- acquiring the first image data sent by a first client and storing the first image data in a receiving queue; the first image data is image data containing an object, obtained by the first client when running a cloud application;
- performing image recognition processing on the first image data in the receiving queue, and during the image recognition processing, storing the continuously obtained second image data sent by the first client into the receiving queue to obtain an updated receiving queue;
- when the first object area where the object is located in the first image data is extracted through image recognition processing, sending the first object image data contained in the first object area to the target cloud application server, and performing image recognition processing on the second image data with the latest receiving timestamp in the updated receiving queue; the target cloud application server is configured to render the first object image data to obtain rendered data, and send the rendered data to the first client.
- In one aspect, an embodiment of the present application provides a data processing device, deployed on a computer device, the device including:
- a data acquisition module, configured to acquire the first image data sent by the first client and store the first image data in the receiving queue; the first image data is image data containing an object, obtained by the first client when running the cloud application;
- an image recognition module, configured to perform image recognition processing on the first image data in the receiving queue;
- a queue update module, configured to store the second image data, continuously obtained from the first client during the image recognition processing of the first image data, into the receiving queue to obtain an updated receiving queue;
- an area sending module, configured to send the first object image data contained in the first object area to the target cloud application server when the first object area where the object is located in the first image data is extracted through image recognition processing; the target cloud application server is configured to render the first object image data to obtain rendering data, and send the rendering data to the first client;
- the area sending module is further configured to synchronously perform image recognition processing on the second image data with the latest receiving timestamp in the updated receiving queue.
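The receive-while-recognizing behaviour that the data acquisition and queue update modules describe can be sketched as follows. This is a minimal Python illustration; the thread layout, names, and frame format are assumptions for the sketch, not part of the patent:

```python
import queue
import threading

receive_queue = queue.Queue()  # thread-safe receiving queue

def receiver(frames):
    # Runs on its own thread: reception of later frames is never
    # suspended while recognition of an earlier frame is in progress.
    for timestamp, image_data in frames:
        receive_queue.put((timestamp, image_data))

def recognize(image_data):
    # Placeholder for the image recognition step that extracts
    # the object area from one frame.
    return "area(%s)" % image_data

frames = [(ts, "frame%d" % ts) for ts in range(5)]
t = threading.Thread(target=receiver, args=(frames,))
t.start()

first_ts, first_frame = receive_queue.get()   # first image data
result = recognize(first_frame)               # recognition runs...
t.join()                                      # ...while later frames keep arriving

print(result)                  # area(frame0)
print(receive_queue.qsize())   # 4 later frames queued, none dropped
```

The queue decouples reception from recognition: the receiver thread only enqueues, so recognition of one frame never blocks the arrival of the next.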
- In another aspect, an embodiment of the present application provides another data processing method, executed by a computer device, the method including:
- receiving the first object image data sent by the service server, and storing the first object image data in the first buffer, in a buffer set, whose working state is the storage state; the first object image data is the image data contained in the first object area, and the first object area is the area where the object is located in the first image data, obtained after the service server performs image recognition processing on the first image data; the first image data is sent to the service server by the first client, and is image data containing the object, obtained by the first client when running the cloud application;
- adjusting the working state of the first buffer to the reading state and the working state of the second buffer to the storage state, reading the first object image data from the first buffer whose working state is the reading state, and rendering the first object image data;
- during the rendering of the first object image data, receiving the second object image data sent by the service server, and storing the second object image data in the second buffer whose working state is the storage state; the second object image data is the image data contained in the second object area, and the second object area is the area where the object is located in the second image data, obtained after the service server extracts the first object area and performs image recognition processing on the second image data; the second image data is the image data with the latest receiving timestamp, obtained from the updated receiving queue when the service server extracts the first object area; the second image data in the updated receiving queue is continuously obtained from the first client while the service server performs image recognition processing on the first image data;
- when the rendering data corresponding to the first object image data is obtained, adjusting the working state of the first buffer to the storage state and the working state of the second buffer to the reading state, reading the second object image data from the second buffer whose working state is the reading state, and rendering the second object image data.
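The double-buffer state changes described above can be sketched as follows. The `DoubleBuffer` class and its method names are illustrative assumptions, not part of the patent:

```python
# One buffer is in the "storage" state (receiving new object image data)
# while the other is in the "reading" state (being rendered from); the
# working states swap once rendering of the current data finishes.
STORAGE, READING = "storage", "reading"

class DoubleBuffer:
    def __init__(self):
        self.buffers = [{"state": STORAGE, "data": None},
                        {"state": READING, "data": None}]

    def _find(self, state):
        return next(b for b in self.buffers if b["state"] == state)

    def store(self, object_image_data):
        # New data from the service server always goes into the buffer
        # whose working state is the storage state.
        self._find(STORAGE)["data"] = object_image_data

    def swap_and_read(self):
        # Adjust working states (storage -> reading, reading -> storage),
        # then read from the buffer now in the reading state.
        for b in self.buffers:
            b["state"] = READING if b["state"] == STORAGE else STORAGE
        return self._find(READING)["data"]

buf = DoubleBuffer()
buf.store("first_object_image")     # first buffer: storage state
frame1 = buf.swap_and_read()        # render the first object image data
buf.store("second_object_image")    # arrives during rendering, into the other buffer
frame2 = buf.swap_and_read()        # swap again once rendering is done
print(frame1, frame2)               # first_object_image second_object_image
```

Because storing and reading always target different buffers, incoming object image data never overwrites the data currently being rendered.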
- In another aspect, an embodiment of the present application provides another data processing device, deployed on a computer device, the device including:
- an area storage module, configured to receive the first object image data sent by the service server and store the first object image data in the first buffer, in the buffer set, whose working state is the storage state; the first object image data is the image data contained in the first object area, and the first object area is the area where the object is located in the first image data, obtained after the service server performs image recognition processing on the first image data; the first image data is sent to the service server by the first client, and is image data containing the object, obtained by the first client when running the cloud application;
- an area rendering module, configured to adjust the working state of the first buffer to the reading state and the working state of the second buffer to the storage state, read the first object image data from the first buffer whose working state is the reading state, and render the first object image data;
- an area receiving module, configured to receive, during the rendering of the first object image data, the second object image data sent by the service server, and store the second object image data in the second buffer whose working state is the storage state; the second object image data is the image data contained in the second object area, and the second object area is the area where the object is located in the second image data, obtained after the service server extracts the first object area and performs image recognition processing on the second image data; the second image data is the image data with the latest receiving timestamp, obtained from the updated receiving queue when the service server extracts the first object area; the second image data in the updated receiving queue is continuously obtained from the first client while the service server performs image recognition processing on the first image data;
- a state adjustment module, configured to adjust, when the rendering data corresponding to the first object image data is obtained, the working state of the first buffer to the storage state and the working state of the second buffer to the reading state, read the second object image data from the second buffer whose working state is the reading state, and render the second object image data.
- An embodiment of the present application provides a computer device, including: a processor and a memory;
- the memory stores a computer program, and when the computer program is executed by the processor, the processor executes the method in the embodiment of the present application.
- In one aspect, embodiments of the present application provide a computer-readable storage medium.
- the computer-readable storage medium stores a computer program, and the computer program includes program instructions.
- the program instructions are executed by a processor, the method in the embodiment of the present application is executed.
- One aspect of the present application provides a computer program product or computer program, where the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
- the processor of the computer device reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the computer device executes the method provided in one aspect of the embodiments of the present application.
- As can be seen, in the embodiments of the present application, when a client (such as the first client) obtains the first image data containing an object, it sends the first image data to a related computer device (such as the service server); image recognition processing does not need to be performed locally on the client, and the first image data is processed by a service server with relatively high computing power, which improves the efficiency of image recognition. At the same time, the service server stores the received first image data in the receiving queue, continuously obtains the second image data from the first client while performing image recognition processing on the first image data, and stores the received second image data in the receiving queue, thereby updating the receiving queue.
- In other words, when the service server in this application performs image recognition processing on the first image data, it does not suspend the reception of the second image data; image processing and image reception are synchronized through the receiving queue, which reduces the image transmission delay.
- After extracting the first object area, the service server sends the first object image data contained in the first object area to the target cloud application server, which renders it and sends the rendering data obtained by rendering to the first client, so that it can be displayed in the cloud application.
- Furthermore, the service server may acquire the second image data with the latest receiving timestamp in the receiving queue, and continue to process the second image data.
- That is, the next image data to be processed is the one with the latest receiving timestamp in the receiving queue; instead of recognizing the image data one by one in the chronological order of their receiving timestamps, this improves the recognition efficiency of the image data. Moreover, because image recognition is performed on the image data with the latest receiving timestamp, the result, when displayed, also matches the current behavior of the object.
- the present application can improve image recognition efficiency, reduce image transmission delay, and ensure that the virtual behavior of the virtual object displayed by the cloud application matches the current behavior state of the object.
- FIG. 1 is a network architecture diagram provided by an embodiment of the present application.
- FIG. 2a is a schematic diagram of a scene provided by an embodiment of the present application.
- FIG. 2b is a schematic diagram of a scene provided by an embodiment of the present application.
- FIG. 3 is a schematic flowchart of a data processing method provided by an embodiment of the present application.
- FIG. 4 is a schematic diagram of frame-skip processing provided by an embodiment of the present application.
- FIG. 5 is a schematic flowchart of a data processing method provided by an embodiment of the present application.
- FIG. 6 is a schematic diagram of a scene for part fusion provided by an embodiment of the present application.
- FIG. 7 is a schematic flowchart of sending the first object image data to the target cloud application server provided by an embodiment of the present application.
- FIG. 8 is a schematic flowchart of a data processing method provided by an embodiment of the present application.
- FIG. 9 is a schematic diagram of a state change of a double buffer provided by an embodiment of the present application.
- FIG. 10 is a system architecture diagram provided by an embodiment of the present application.
- FIG. 11 is a schematic flowchart of a system provided by an embodiment of the present application.
- FIG. 12 is an interaction flowchart provided by an embodiment of the present application.
- FIG. 13 is a schematic structural diagram of a data processing device provided by an embodiment of the present application.
- FIG. 14 is a schematic structural diagram of another data processing device provided by an embodiment of the present application.
- FIG. 15 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
- FIG. 1 is a network architecture diagram provided by an embodiment of the present application.
- the network architecture may include a service server 1000, a terminal device cluster, and a cloud application server cluster 10000.
- the terminal device cluster may include one or more terminal devices, and the number of terminal devices will not be limited here.
- The plurality of terminal devices may include a terminal device 100a, a terminal device 100b, a terminal device 100c, ..., and a terminal device 100n. As shown in FIG. 1, the terminal device 100a, the terminal device 100b, the terminal device 100c, ..., and the terminal device 100n can each be connected to the service server 1000 through a network, so that each terminal device can perform data interaction with the service server 1000 through the network connection.
- the cloud application server cluster 10000 may include one or more cloud application servers, and the number of cloud application servers will not be limited here.
- The plurality of cloud application servers may include a cloud application server 10001, a cloud application server 10002, ..., and a cloud application server 1000n. As shown in FIG. 1, the cloud application server 10001, the cloud application server 10002, ..., and the cloud application server 1000n can each be connected to the service server 1000 through a network, so that each cloud application server can exchange data with the service server 1000 through the network connection.
- One terminal device can correspond to one cloud application server (and multiple terminal devices can correspond to the same cloud application server).
- When a terminal device runs a cloud application, its corresponding cloud application server provides corresponding functional services (such as computing services) for it.
- When the cloud application is a cloud game application, the cloud application server may be a cloud game server, and when the terminal device runs the cloud game application, its corresponding cloud game server provides corresponding functional services for it.
- Each terminal device shown in FIG. 1 can be installed with a cloud application, and when the cloud application runs in each terminal device, it can perform data interaction with the service server 1000 shown in FIG. 1, so that the service server 1000 can receive service data from each terminal device.
- the cloud application may include an application having a function of displaying data information such as text, image, audio and video.
- the cloud application may be an entertainment application (for example, a game application), and the entertainment application may be used for game entertainment by the user.
- the service server 1000 in this application can obtain service data according to these cloud applications.
- The service data can be the image data containing the object, obtained by the terminal device when running the cloud application (which can be called the first image data).
- After receiving the first image data, the service server 1000 can store it in the receiving queue, then obtain the first image data from the receiving queue and perform image recognition processing on it. It should be understood that after the terminal device acquires the first image data and sends it to the service server 1000, it continues to acquire image data containing the object (which can be referred to as the second image data), and while the service server 1000 performs image recognition processing on the first image data, it can also continuously obtain from the terminal device the second image data acquired by the terminal device. Like the first image data, the service server 1000 can store the second image data in the receiving queue, thereby obtaining an updated receiving queue containing one or more items of second image data.
- Optionally, the service server 1000 may not store the first image data in the receiving queue but directly perform image recognition processing on it, and during the image recognition processing, continuously obtain from the terminal device the image data collected by the terminal device (that is, the second image data, the third image data, and so on after the first image data) and store that image data in the receiving queue.
- After the service server 1000 extracts, through image recognition processing, the area where the object is located in the first image data (which may be referred to as the first object area), it can obtain the image data contained in the first object area of the first image data (which may be referred to as the first object image data), and send the first object image data to the cloud application server corresponding to the terminal device. The cloud application server can read and render the first object image data and, after the rendering is completed, send the rendered data to the terminal device, which displays and outputs the rendered data in the cloud application.
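As a toy illustration of obtaining the object image data contained in an extracted object area, a crop over a bounding box could look like the following. Representing the area as `(top, left, bottom, right)` and the image as a 2-D list of pixels are assumptions for the sketch, not details from the patent:

```python
def crop_object_area(image, area):
    """Return only the pixels inside the extracted object area.

    image: 2-D list of pixel rows; area: (top, left, bottom, right),
    as a recognition step might produce for the region where the object sits.
    """
    top, left, bottom, right = area
    return [row[left:right] for row in image[top:bottom]]

image = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
object_area = (1, 1, 3, 3)  # hypothetical output of the recognition step
object_image_data = crop_object_area(image, object_area)
print(object_image_data)  # [[1, 1], [1, 1]]
```

Sending only the cropped object image data, rather than the whole frame, is what keeps the payload to the cloud application server small.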
- When the service server 1000 has extracted the first object area where the object is located in the first image data, it can continue with the rest of the image data: for example, from the updated receiving queue containing second image data, the service server 1000 can obtain the second image data with the latest receiving timestamp (that is, the most recently received image data), and then perform image recognition processing on that second image data (which may be referred to as the target image data).
- Likewise, while performing image recognition processing on the target image data, the service server 1000 can continuously acquire image data containing the object (which can be called the third image data) from the terminal device, and store the third image data in the updated receiving queue to obtain a new receiving queue.
- Similarly, after extracting the second object area in the target image data, the service server 1000 can obtain the image data contained in the second object area (which may be referred to as the second object image data) and send it to the cloud application server corresponding to the terminal device; at the same time, the service server 1000 can acquire from the new receiving queue the third image data with the latest receiving timestamp (which may be referred to as the new target image data), and then perform image recognition processing on the new target image data.
- It can be seen that the service server 1000 in this application can continuously receive the remaining image data while performing image recognition processing on a given piece of image data, so that recognition and reception are synchronized and there is no need to wait for recognition to complete before receiving; this reduces the reception delay of image data.
- Moreover, the service server performs frame-skip processing: after finishing the current image data, it obtains the image data with the latest receiving timestamp and performs image recognition processing on it, instead of obtaining the next image data after the currently processed one (the image data with the closest receiving timestamp). Frame-skip processing reduces the queuing delay of image data; since the image data with the latest receiving timestamp captures the user's current behavior, after that image data is recognized and displayed, the rendered data shown in the cloud application is synchronized and matched with the user's current behavior.
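The frame-skip choice described above, contrasted with strict in-order processing, might look like this. The timestamps and frame names are illustrative:

```python
def next_frame_in_order(received):
    # In-order processing: take the frame with the closest (oldest)
    # receiving timestamp; every queued frame adds waiting delay.
    return min(received, key=lambda f: f["ts"])

def next_frame_skip(received):
    # Frame-skip processing: take the frame with the latest receiving
    # timestamp, so the result matches the user's current behavior.
    return max(received, key=lambda f: f["ts"])

# Frames that accumulated while the previous frame was being recognized:
received = [{"ts": 101, "img": "frame_a"},
            {"ts": 102, "img": "frame_b"},
            {"ts": 103, "img": "frame_c"}]

print(next_frame_in_order(received)["img"])  # frame_a: oldest, adds queuing delay
print(next_frame_skip(received)["img"])      # frame_c: newest, current behavior
```

The skipped frames (`frame_a`, `frame_b` here) are simply never recognized, which is acceptable in this setting because only the most recent behavior of the user needs to be reflected in the cloud application.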
- In the embodiment of the present application, one terminal device may be selected from the multiple terminal devices to perform data interaction with the service server 1000. The terminal device may include, but is not limited to, smart terminals carrying multimedia data processing functions (e.g., video data playback functions, music data playback functions), such as smart phones, tablet computers, notebook computers, desktop computers, smart TVs, smart speakers, smart watches, and smart vehicles.
- the above-mentioned cloud application may be integrated in the terminal device 100a shown in FIG. 1.
- the method provided in the embodiment of the present application can be executed by a computer device, and the computer device includes but is not limited to a user terminal or a service server.
- the business server can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud application server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms.
- The terminal device and the service server may be connected directly or indirectly through wired or wireless communication, which is not limited in this application.
- the above-mentioned computer equipment may be a node in a distributed system, wherein the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by connecting multiple nodes through network communication.
- A peer-to-peer (P2P) network can be formed between the nodes. The P2P protocol is an application layer protocol that runs on top of the Transmission Control Protocol (TCP).
- Any form of computer equipment, such as business servers, terminal devices, and other electronic equipment, can become a node in the blockchain system by joining the peer-to-peer network.
- Blockchain is a new application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. Data blocks are organized and encrypted into a ledger so that they cannot be tampered with or forged, while the data can still be verified, stored, and updated.
- When the computer device is a blockchain node, the data in this application (such as the first image data, the first object area, the second object image data, etc.) has authenticity and security, which can make the results obtained after relevant data processing based on these data more reliable.
- FIG. 2a is a schematic diagram of a scenario provided by an embodiment of the present application.
- the terminal device 100a shown in FIG. 2a may be the terminal device 100a in the terminal device cluster 100 in the embodiment corresponding to FIG. 1;
- the service server 1000 shown in FIG. 2a may be the service server 1000 in the embodiment corresponding to FIG. 1;
- the cloud application server 10001 shown in FIG. 2a may be the cloud application server 10001 in the embodiment corresponding to FIG. 1 above.
- the terminal device 100a may contain a game application.
- the terminal device 100a may capture a picture containing object a through the camera component 200a (the picture may be referred to as original image frame 20a), and the terminal device may perform encoding processing (such as H264 encoding processing) on the original image frame to obtain image data.
- the terminal device 100a can send the image data to the service server 1000 .
- the service server 1000 can store the image data in the receiving queue, and then, the service server 1000 can obtain the image data from the receiving queue, and the service server 1000 can decode the image data to obtain the original image frame 20a.
- For image data received while other image data is being processed, the service server 1000 will store it in the receiving queue according to the storage rules and later obtain it from the receiving queue; for the first image data received by the service server 1000, the server may choose not to store it and instead decode it directly to obtain the original image frame 20a.
- the service server 1000 can perform image recognition processing on the original image frame 20a and, through this processing, determine the object edge curve P1 corresponding to object a in the original image frame 20a. It should be understood that user a will continue to produce action behaviors (such as raising his hands, shaking his head, squatting, etc.), so after the terminal device 100a collects the original image frame 20a, it can continue to collect original image frames containing object a through the camera component 200a. Every time the terminal device 100a successfully acquires an original image frame containing object a, it can encode it to obtain image data and send it to the service server 1000.
- the service server 1000 can continuously acquire different image data from the terminal device 100a during image recognition processing, and the service server 1000 may temporarily store these image data in the receiving queue.
- the service server 1000 can extract, in the original image frame 20a, the entire area covered by the object edge curve P1 (which can be called the object area P2), and can thereby obtain all the image content contained in the object area (which can be referred to as object image data); the service server 1000 can obtain the cloud application server corresponding to the terminal device 100a (such as cloud application server 10001) and send the object image data included in the object area P2 to the cloud application server 10001. After acquiring the object image data, the cloud application server 10001 may perform rendering processing on it to obtain rendering data P3, and the cloud application server may send the rendering data P3 to its corresponding terminal device 100a.
- FIG. 2b is a schematic diagram of a scenario provided by an embodiment of the present application.
- the terminal device 100a may display the rendering data P3 in the game application.
- the virtual environment corresponding to the game (which can be understood as a game scene) includes a virtual background (a virtual house background), a dancing virtual object 2000a, and a dancing virtual object 2000b.
- The rendering data P3 can be displayed in this game scene.
- the service server 1000 may further process the image data in the receiving queue.
- the service server 1000 can perform frame-skipping processing, that is, the service server 1000 can obtain the image data with the latest receiving timestamp in the receiving queue and perform decoding and image recognition processing on it.
- the image data with the latest receiving timestamp can be understood as the latest image data sent by the terminal device 100a as of the current moment, and that image data corresponds to the latest real-time behavior of object a. After the corresponding object area is extracted and rendered for output, the rendered data presented is consistent with the object's actual behavior.
- object a may be a game player, and the corresponding portrait rendering data (such as rendering data P3) is displayed in the game application; that is, the player's portrait is projected into the game scene, enabling the game player to "place themselves" in the game scene and improving the player's sense of immersion.
- the embodiment of the present application can realize the synchronization of image recognition and reception through the receiving queue, which can reduce the receiving delay of image data; in addition, frame-skipping processing can speed up the recognition of image data and further reduce the delay, and can also improve the matching rate between the player portrait displayed in the game and the player.
- FIG. 3 is a schematic flowchart of a data processing method provided in an embodiment of the present application.
- the method may be executed by a computer device, which may be a terminal device (for example, any terminal device in the terminal device cluster shown in FIG. 1 above, such as terminal device 100a) or a service server (such as the service server 1000 shown in FIG. 1); the computer device may also include both a terminal device and a service server, so that the method is executed jointly by the terminal device and the service server.
- this embodiment takes the method executed by the above-mentioned service server as an example for description.
- the data processing method may at least include the following S101-S103:
- S101: Acquire first image data sent by a first client, and store the first image data in a receiving queue; the first image data is image data including an object, acquired by the first client when running a cloud application.
- the first client may be understood as a terminal device, and an application may be deployed in the first client, and the application may be a cloud application (such as a game application) or the like.
- Taking the cloud application as an example, when the user uses the first client, the user can start the cloud application in the first client; for example, the user can click the cloud application and then click the start control to run it.
- the first client may refer to any client.
- the first client can capture a picture containing the user (the user may be called an object) through the camera component, and the picture containing the user may be referred to as an original image frame.
- the first client can perform coding processing on the original image frame, thereby obtaining a coded image file, which can be referred to as image data here.
- the first client can send the image data to the service server (the service server can refer to a server with image decoding function and image recognition function, which can be used to obtain the encoded file sent by the first client, and perform decoding and image recognition processing).
- As an encoding format, the H264 encoding method has a higher compression ratio: after the same image is encoded with H264, it occupies less bandwidth in transmission, which is why H264 is widely used in mobile video applications. In this application, in order to reduce the transmission bandwidth between the first client and the service server, the H264 encoding method may be preferentially selected as the coding method for the original image frame.
- the encoding method used by the first client to encode the original image frame can also be any encoding method other than H264, such as the encoding method can be H262 encoding method, H263 encoding method , H265 encoding method, etc., this application will not limit them.
- the service server may store the image data in the receiving queue.
- Taking the first image data as an example, after the service server acquires the first image data, it can store the first image data in the receiving queue.
- the specific method can be: receiving the first image data sent by the first client (the first image data is the data obtained after the first client encodes the original image frame); then the service server can obtain the receiving timestamp of the first image data and store the first image data in the receiving queue in association with the receiving timestamp. That is to say, when storing each piece of image data, the service server also stores its receiving time.
- For example, if the receiving time of image data A at the business server (which can be used as its receiving timestamp) is 19:09:09 on September 5, 2021, then the business server can store image data A in the receiving queue in association with the receiving time 19:09:09 on September 5, 2021. It should be understood that when no image data is stored in the receiving queue, the receiving queue may be empty.
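The associated storage of image data and receiving timestamps described above can be sketched as follows (a minimal illustration in Python; the class and field names are our own, not part of this application):

```python
import collections
import time

class ReceiveQueue:
    """Stores each piece of image data together with its receiving timestamp."""

    def __init__(self):
        # Entries are (receive_timestamp, image_data) in arrival order.
        self._queue = collections.deque()

    def store(self, image_data, receive_timestamp=None):
        # Associate the image data with the moment it was received.
        if receive_timestamp is None:
            receive_timestamp = time.time()
        self._queue.append((receive_timestamp, image_data))

    def __len__(self):
        return len(self._queue)

# e.g. image data A is stored together with its receiving timestamp
q = ReceiveQueue()
q.store(b"image data A", receive_timestamp=1630840149.0)
```

An empty queue simply has length zero, matching the note above that the receiving queue may be empty before any image data arrives.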
- the first image data may be acquired from the receiving queue and subjected to image recognition processing.
- Since the first image data is actually an image encoding file, it can first be decoded and restored to obtain the original image frame, and image recognition processing can then be performed on the original image frame.
- the specific method can be as follows: the first image data can be decoded to obtain decoded image data with an original image format; format conversion can then be performed on the decoded image data to obtain an original image frame with a standard image format; image recognition processing can then be performed on the original image frame with the standard image format.
- the standard image format may refer to an image format specified for unified image recognition processing. For example, if it is specified that an image for image recognition processing must have the RGB color format (Red Green Blue color mode), then the RGB format may be referred to as the standard image format.
- After decoding, the decoded image data with the original image format is obtained. If the original image format is the standard image format, the decoded image data can be determined as the original image frame with the standard image format; if the original image format is different from the standard image format, it can be converted into the standard image format to obtain the original image frame with the standard image format. For example, when the original image format is the YUV format, the YUV format can be converted into the RGB format, thereby obtaining the original image frame in RGB format.
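The per-pixel YUV-to-RGB conversion mentioned above can be illustrated with a small sketch (using the common BT.601 full-range coefficients as an assumption; a real service would convert whole frames with a library such as OpenCV or libswscale rather than pixel by pixel):

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range YUV pixel to RGB using BT.601-style
    coefficients (an illustrative assumption, not this application's
    mandated conversion)."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)

# A neutral pixel (U = V = 128) maps to equal R, G and B values.
grey = yuv_to_rgb(128, 128, 128)
```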
- After the service server decodes the first image data to obtain the original image frame, it can perform image recognition processing on the original image frame with the standard image format and determine the area where the object is located in the original image frame (which can be called the first object area). For a specific implementation of performing image recognition processing to determine the first object area, reference may be made to the subsequent description in the embodiment corresponding to FIG. 5.
- The first client can continue to capture pictures (new original image frames) containing the object, and the first client may encode each original image frame to obtain an image encoding file (which may be called second image data).
- the first client can continuously send each second image data to the service server.
- The service server will not suspend the reception of image data; it can continuously obtain the second image data from the first client and temporarily store the second image data in the receiving queue, so that the receiving queue is updated.
- The first object area where the object is located in the original image frame can be determined through the above-mentioned image recognition processing; after the first object area is determined, the image content included in the first object area (which may be referred to as the first object image data) can be obtained from the original image frame.
- the service server may extract the first object area and the first object image data included in the first object area.
- the service server can obtain the target cloud application server corresponding to the first client, and the service server can send the first object image data to the target cloud application server.
- the target cloud application server may refer to the cloud application server corresponding to the first client.
- When the first client runs the cloud application, the cloud application server provides computing services for the first client, such as Central Processing Unit (CPU) computing services and Graphics Processing Unit (GPU) computing services.
- The target cloud application server can render the first object image data to obtain the corresponding rendering data and send the rendering data to the first client, and the first client can display this rendering data in the cloud application.
- the service server may also continue to perform decoding and image recognition processing on the rest of the image data.
- For example, the service server may acquire the second image data with the latest receiving timestamp in the updated receiving queue, and the service server may perform decoding and image recognition processing on that second image data.
- FIG. 4 is a schematic diagram of frame skipping processing provided by an embodiment of the present application.
- The receiving queue 40a may include image data 1 through image data 9, where image data 1 to image data 9 are sorted in order of receiving timestamp from earliest to latest, represented in sequence by labels 1 through 9 in FIG. 4.
- Image data 1 has the earliest receiving timestamp, and image data 9 has the latest receiving timestamp.
- Image data 1 to image data 5 are image data that have already been processed; image data 6 is the image data currently being processed by the business server; image data 7, image data 8, and image data 9 are image data received by the service server while it was processing image data 6, and they are queued for processing.
- When the service server finishes extracting the object area where the object is located in image data 6, the service server can obtain image data 9 from the end of the receiving queue 40a (that is, obtain the image data with the latest receiving timestamp), skip image data 7 and image data 8, and then decode and process image data 9; this is the frame-skipping process.
- Suppose the time required for the business server to decode and perform image recognition processing on one image data is 30 ms, and the interval between the business server receiving two successive image data is 10 ms (that is, after receiving one image data, the next is received at 10 ms, and the one after that at 20 ms). Then while the business server processes image data 6, it will continue to receive image data 7 (stored at the end of the receiving queue, after image data 6), image data 8 (stored at the end of the receiving queue, after image data 7), and image data 9 (stored at the end of the receiving queue, after image data 8).
- At this point the receiving queue is as shown as receiving queue 40a, and the service server can directly obtain the latest image data (that is, image data 9) from the end of the receiving queue, skipping image data 7 and image data 8 (although image data 7 and image data 8 have not been processed, they have been skipped and will not be processed again, so they can be regarded as processed image data).
- While processing image data 9, the business server can continue to receive image data 10 and image data 11 (represented by labels 10 and 11 in FIG. 4), obtaining receiving queue 40b; when the object area of the object in image data 9 has been extracted, the image data at the end of receiving queue 40b (that is, image data 11 with the latest receiving timestamp) can be obtained, and the service server can skip image data 10 and perform decoding and image recognition processing on image data 11.
- Similarly, the service server can continue to receive the remaining image data, obtaining receiving queue 40c; when image data 11 has been processed, the image data at the end of receiving queue 40c can again be obtained, and so on repeatedly, which will not be repeated here.
- The previous image data (that is, the processed image data) can be cleared, thereby freeing storage space in the receiving queue.
- For example, the processed image data in receiving queue 40a includes image data 1 to image data 5; image data 1 to image data 5 can then be deleted, after which receiving queue 40a only includes image data 6 to image data 9.
- The image data with the latest receiving timestamp can be obtained from the receiving queue, and after that image data is obtained, the image data arranged before it (that is, image data whose receiving timestamps are earlier, which may be referred to as historical image data) is deleted.
- For example, the acquired image data 9 is the image data to be processed, and the historical image data before image data 9 (including image data 1 to image data 8) can be deleted.
- Alternatively, the processed image data (including the first image data) can be cleared first, and then the second image data with the latest receiving timestamp can be obtained.
- Or, the second image data with the latest receiving timestamp can be obtained first, and then the historical image data whose receiving timestamps are earlier than that of the second image data can be deleted and cleared (that is, first obtain the second image data with the latest receiving timestamp in the updated receiving queue, then perform image recognition processing on the second image data while synchronously deleting the historical image data in the updated receiving queue; the historical image data is the image data in the updated receiving queue whose receiving timestamps are earlier than that of the second image data).
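The frame-skip behaviour described above, taking the entry with the latest receiving timestamp and deleting the earlier historical image data, can be sketched as follows (an illustrative simplification, not the application's actual implementation):

```python
import collections

class FrameSkipQueue:
    """Receiving queue with frame-skip behaviour: processing always takes
    the most recently received image data and deletes the historical image
    data received before it."""

    def __init__(self):
        # Entries are (receive_timestamp, image_data) in arrival order, so
        # the queue tail always holds the latest receiving timestamp.
        self._queue = collections.deque()

    def store(self, receive_timestamp, image_data):
        self._queue.append((receive_timestamp, image_data))

    def take_latest(self):
        # Take the tail entry (latest receiving timestamp) and clear the
        # rest of the queue: those entries are the historical image data.
        latest = self._queue[-1]
        self._queue.clear()
        return latest

# Image data 7, 8 and 9 arrive while image data 6 is being processed; the
# next frame taken for processing is image data 9, skipping 7 and 8.
q = FrameSkipQueue()
for ts, data in [(70, "img7"), (80, "img8"), (90, "img9")]:
    q.store(ts, data)
latest = q.take_latest()
```

Whether the historical entries are cleared before or after taking the latest one is an implementation choice, matching the two orderings described in the text.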
- When the service server receives the encoded code stream (that is, image data) sent by the first client, if the duration of decoding and image recognition processing (hereinafter referred to as image processing) is longer than the interval between receiving two frames of image data (for example, the image processing time is 30 ms and the receiving interval between two frames is 10 ms), then without a receiving queue the first client would always have to wait for the service server to finish image processing on the current image data, which greatly increases the transmission delay of image data and seriously affects transmission efficiency.
- this application can store the image data through the receiving queue, so that the service server can continuously receive the image data sent by the first client during the process of image processing.
- If the service server performed image processing on the image data strictly in sequence, the image data being recognized would seriously lag behind the latest state of the object. Through frame-skipping processing, the business server can perform image processing on the latest image data each time, which can reduce the delay of image recognition; at the same time, owing to the high computing power of the business server, the efficiency of image recognition can also be improved.
- When the client (such as the first client) obtains the first image data containing the object, it can send the first image data to the relevant computer device (such as the service server); image recognition processing does not need to be performed locally on the first client, and the first image data can be processed by a business server with high computing power, which can improve the efficiency and clarity of image recognition. At the same time, in this application, the service server can store the received first image data in the receiving queue, and during the image recognition processing of the first image data it can continuously and synchronously obtain the second image data from the first client and store the second image data in the receiving queue to obtain an updated receiving queue.
- That is to say, when the business server in this application performs image recognition processing on the first image data, it does not suspend the reception of the second image data; the synchronization of image processing and image reception can be realized through the receiving queue, thereby reducing the image transmission delay.
- The service server may send the first object image data contained in the first object area to the target cloud application server, so that the target cloud application server can render it and send the resulting rendering data to the first client for display in the cloud application.
- the service server may acquire the second image data with the latest receiving time stamp in the receiving queue, and continue to process the second image data.
- That is, the next step is to obtain the image data with the latest receiving timestamp from the receiving queue for processing, instead of recognizing the image data one by one in the time order of their receiving timestamps, which can improve the recognition efficiency of image data.
- Image recognition is performed on the image data with the latest receiving timestamp, so that when the result is displayed, it also matches the current behavior of the object.
- the present application can improve image recognition efficiency, reduce image transmission delay, and ensure that the virtual behavior of the virtual object displayed by the cloud application matches the current behavior state of the object.
- FIG. 5 is a schematic flowchart of a data processing method provided by an embodiment of the present application. This process may correspond to the process of performing image recognition processing on the original image frame to determine the first object area in the above-mentioned embodiment corresponding to FIG. 3 , as shown in FIG. 5 , the process may include at least the following S501-S503:
- The object edge key points here may refer to the key points of the object's contour; since the original image frame contains the key parts of the object, the object edge key points may refer to the key points of the contours of those key parts. If the key part is the head, the object edge key points may refer to the key points of the head contour; if the key part is the neck, the object edge key points may refer to the key points of the neck contour.
- The object edge key points of the object can be identified by an artificial intelligence algorithm, by a dedicated Graphics Processing Unit (GPU), and so on, which this application does not limit.
- the object edge curve corresponding to the object (which can be understood as the object outline) can be obtained.
- the curve P1 shown in FIG. 2a can be regarded as the object profile of the object a.
- the area covered by the object edge curve can be determined in the original image frame, and this area can be used as the first object area where the object is located in the original image frame.
- For example, in the embodiment corresponding to FIG. 2a, the area covered by the object edge curve P1 is area P2 (area P2 is the area where object a is located); area P2 can then be determined as the area where the object is located in the original image frame (herein referred to as the first object area).
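The extraction of the area covered by the edge curve, and of the image content it contains, can be illustrated with a simplified sketch that approximates the covered area by the bounding box of the edge key points (a real implementation would build a pixel mask from the closed curve; the function names and the row-major frame representation are our own):

```python
def object_area_from_edge_points(edge_points):
    """Given object-edge key points (x, y), return the covered area as a
    bounding box (x0, y0, x1, y1). The bounding box is a simplified
    stand-in for the area enclosed by the object edge curve."""
    xs = [p[0] for p in edge_points]
    ys = [p[1] for p in edge_points]
    return min(xs), min(ys), max(xs), max(ys)

def extract_object_image_data(frame, area):
    """Extract the image content contained in the object area from the
    frame (the frame is a row-major list of pixel rows)."""
    x0, y0, x1, y1 = area
    return [row[x0:x1 + 1] for row in frame[y0:y1 + 1]]

# A 4x4 frame whose object occupies the central 2x2 block.
frame = [list(row) for row in ["....", ".ab.", ".cd.", "...."]]
area = object_area_from_edge_points([(1, 1), (2, 1), (2, 2), (1, 2)])
object_image_data = extract_object_image_data(frame, area)
```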
- The area covered by the above-mentioned object edge curve can be called the initial object area. After the initial object area is determined, it need not immediately be taken as the final first object area; instead, the first object area is determined according to the initial object area. The specific method can be: the key parts of the object presented in the initial object area can be obtained; then the object recognition configuration information for the object can be obtained, and the object recognition configuration part indicated by that information can be matched against the key parts of the object. If the object recognition configuration part matches the object key parts, the step of determining the first object area according to the initial object area can be performed; if the object recognition configuration part does not match the object key parts, it can be determined that the first object area cannot be extracted through image recognition processing.
- The object key parts of the object presented in the initial object area can be obtained (the object key parts may refer to body parts of the object, such as the head, neck, arms, abdomen, legs, feet, etc.); then the object recognition configuration parts that the original image frame needs to contain can be obtained (that is, the recognition rule, which specifies the parts of the object that the original image frame collected by the terminal device needs to include).
- Taking the object recognition configuration part being the legs as an example, assume the original image frame collected by the terminal device needs to contain the user's legs. If, after decoding the received image data and performing image recognition processing, the key parts presented in the extracted initial object area of the original image frame are the head and neck, then it can be determined that the object key parts (head and neck) do not match the object recognition configuration part (legs), and the key parts presented in the original image frame do not meet the requirements; it can then be directly determined that the first object area cannot be extracted through image recognition processing (that is, the parts do not meet the requirements and the extraction fails).
- If the key part presented in the extracted initial object area is a leg, then it can be determined that the key parts presented in the original image frame meet the requirements, and the initial object area can be determined as the first object area.
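The matching of detected object key parts against the object recognition configuration parts can be sketched as a simple set check (illustrative only; the part names and the function name are hypothetical, not part of this application):

```python
def can_extract_object_area(detected_key_parts, configured_parts):
    """Check whether the key parts detected in the initial object area
    satisfy the object-recognition configuration. If any configured part
    is missing, the first object area cannot be extracted and the frame
    is treated as a failed extraction."""
    return set(configured_parts).issubset(detected_key_parts)

# Recognition rule requires legs: a frame showing only head and neck fails,
# while a frame that also shows legs passes.
failed = can_extract_object_area({"head", "neck"}, {"legs"})
passed = can_extract_object_area({"head", "neck", "legs"}, {"legs"})
```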
- The service server can obtain, from the updated receiving queue, the next image data after the current image data, and then perform image processing on that next image data.
- For example, if the current image data is the first image data, the next image data after the first image data in the receiving queue can be acquired (that is, among the image data whose receiving timestamps are later than that of the first image data, the one with the earliest receiving timestamp), and the service server can then perform image processing on it.
- The specific method can be as follows: the second image data with the earliest receiving timestamp in the updated receiving queue can be determined as the image data to be recognized; image recognition processing can then be performed on the image data to be recognized, and when the object area to be processed is extracted from the image data to be recognized through image recognition processing, the object area to be processed can be sent to the target cloud application server.
- That is, when the service server processes the current image data (such as the first image data) with a sufficiently short duration and sufficiently high efficiency, after determining that the first object area cannot be extracted through image recognition processing, it can acquire the next image data after the current image data (the image data with the earliest receiving timestamp) and perform image processing on it.
- the purpose is that, when the user performs an action, the first client obtains the image frame, encodes it, and sends it to the service server; if the service server quickly recognizes that the key object parts contained in it do not meet the specifications and the object area (and the object image data it contains) cannot be extracted, the cloud application server cannot receive the extracted object image data and cannot render and display it.
- the service server can then perform image processing on the next image data to extract the object area of that next image data, and send the object image data it contains to the cloud application server for rendering and output.
- in this way, the jumpiness of the user portrait displayed in the cloud application can be reduced, and its coherence can be increased.
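- the frame-skipping choice described above can be sketched as follows. This is a minimal illustration, assuming a queue of dicts with a `ts` receiving time stamp; the data layout and the `next_frame_on_failure` helper are assumptions for illustration, not the patented implementation.

```python
def next_frame_on_failure(receive_queue, current_ts):
    """On recognition failure, return the queued frame whose receiving
    time stamp is the earliest among those later than the failed frame."""
    later = [f for f in receive_queue if f["ts"] > current_ts]
    return min(later, key=lambda f: f["ts"]) if later else None

# frames may arrive out of order in the updated receiving queue
queue = [{"ts": 1}, {"ts": 3}, {"ts": 2}, {"ts": 5}]
nxt = next_frame_on_failure(queue, current_ts=1)
```

Here `nxt` is the frame with time stamp 2, i.e. the earliest frame received after the failed one.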
- alternatively, the second image data with the latest receiving time stamp can be processed instead of the second image data with the earliest receiving time stamp
- this can be set manually according to actual circumstances and experience, and this application does not limit it.
- when it is determined that the object recognition configuration parts match the key object parts, the initial object area can be directly determined as the first object area.
- the specific method for determining the first object area can also be: the key object parts presented by the initial object area can be obtained; if the key object parts have part integrity, the initial object area can be determined as the first object area; and if the key object parts do not have part integrity, N (N is a positive integer) sample image frames in the sample database can be obtained, a sample image frame to be processed corresponding to the object can be acquired from the N sample image frames, and the first object area can be determined according to the sample image frame to be processed and the initial object area.
- the specific method for determining the first object area according to the sample image frame to be processed and the initial object area can be: the overall part information in the sample image frame to be processed can be obtained; then, according to the key object parts, the part area to be fused is determined in the overall part information; the part area to be fused can be fused with the initial object area, thereby obtaining the first object area.
- that is to say, this application can collect the user's complete portrait sample data in advance (complete portrait sample data from head to feet); one user can correspond to one sample image frame, and one sample image frame can present that user's complete overall portrait data. Then, when the initial object area is extracted and it is determined that the object recognition configuration parts match the key object parts, it can be determined whether the key object parts have part integrity. If they have part integrity, the initial object area can be directly determined as the first object area; if the key object parts do not have part integrity, the sample image frame to be processed corresponding to the object can be obtained from the sample database, the overall part information can be obtained from that sample image frame, and the initial object area can be completed according to the overall part information to obtain a complete first object area including the complete parts.
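- the part-fusion step above can be sketched as follows. This is an illustrative model only: the part names (`"upper"`, `"lower"`), the dict-based areas, and `fuse_parts` are assumptions standing in for the pixel-level fusion (splicing) the application describes.

```python
REQUIRED_PARTS = {"upper", "lower"}          # part integrity: head to foot

def fuse_parts(initial_area, sample_frame):
    """Complete an object area that lacks part integrity by splicing the
    missing part regions taken from the user's pre-collected sample frame."""
    if REQUIRED_PARTS <= set(initial_area):  # already has part integrity
        return dict(initial_area)
    fused = dict(initial_area)
    for part in REQUIRED_PARTS - set(initial_area):
        fused[part] = sample_frame[part]     # part area to be fused (cf. 600c)
    return fused

area_600a = {"upper": "user-upper-pixels"}                      # only upper body captured
sample_600b = {"upper": "sample-upper", "lower": "sample-lower"}  # complete sample frame
area_600d = fuse_parts(area_600a, sample_600b)                  # complete first object area
```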
- FIG. 6 is a schematic diagram of a scene for performing part fusion provided by an embodiment of the present application.
- the initial object area is the initial object area 600a
- the key object parts presented in the initial object area 600a include the head, neck, arms, chest, and abdomen (that is, the user's upper body parts);
- the object recognition configuration parts are also the user's upper body parts, that is to say, the first client needs to collect the user's upper body parts, so it can be seen that the initial object area meets the requirements. Further, it can be determined whether the key object parts have part integrity.
- part integrity refers to the integrity of the user's overall portrait (that is, it needs to include both the upper body parts and the lower body parts, that is, from head to foot)
- the service server 1000 can obtain the sample image frame to be processed corresponding to the object from the sample database (assumed to be the sample image frame to be processed 600b).
- the overall part information presented in the sample image frame to be processed contains complete information from the object's head to feet.
- the lower body parts in the sample image frame to be processed are determined as the part area to be fused (that is, the area 600c), and the part area to be fused 600c can be extracted. Further, the part area to be fused 600c can be fused (for example, spliced) with the initial object area 600a, so as to obtain the first object area 600d including both the upper body parts and the lower body parts. It should be understood that by collecting the user's overall part information in advance (for example, from head to feet), the first client can obtain the user's picture each time without strictly requiring the user to stand at a fixed position where the complete parts can be collected.
- the user can move flexibly; even if the first client only obtains partial part information, supplementary splicing can be performed so that the complete parts can still be obtained. In this way, the user's sense of experience and immersion can be increased.
- if part integrity refers to the integrity of the user's upper body parts, then the key object parts presented in the initial object area 600a actually have part integrity, and at this time the initial object area 600a may be directly determined as the first object area.
- the specific method for obtaining the sample image frame to be processed corresponding to the object from the sample database can be: through face matching; or, when collecting the user's sample image frame, using the user identifier (such as a user name, user number, etc.) to label the corresponding sample image frame, so that each sample image frame carries a user identifier (which can be called a sample identifier). When the first client sends image data to the service server, it can carry the user identifier of the user contained in the image data, and the service server can then match the corresponding sample image frame to be processed by comparing the carried user identifier with the sample identifiers of the sample image frames.
- the specific implementation manner of obtaining the sample image frame to be processed corresponding to the object in the sample database is of course not limited to the above-described manner, and the present application does not limit the specific implementation manner.
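- the identifier-based lookup described above amounts to a simple match between the carried user ID and the stored sample IDs; a minimal sketch, assuming a dict-shaped sample database (the field names and `find_sample_frame` are illustrative assumptions):

```python
# sample database: sample identifier (user ID) -> sample image frame
sample_db = {"user-001": "frame-A", "user-002": "frame-B"}

def find_sample_frame(carried_user_id, db):
    """Match the user ID carried with the image data against the sample IDs."""
    return db.get(carried_user_id)  # None if no sample frame was collected

frame = find_sample_frame("user-002", sample_db)
```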
- in this application, when the client (such as the first client) obtains the first image data containing the object, it can send the first image data to the relevant computer equipment (such as the service server); image recognition processing does not need to be performed locally on the client side, and the first image data can be processed by a service server with relatively high computing power, which can improve the efficiency and accuracy of the image recognition. At the same time, in this application, the service server can store the received first image data in the receiving queue, can continuously obtain the second image data synchronously from the first client during the process of performing image recognition processing on the first image data, and can store the received second image data in the receiving queue, thereby updating the receiving queue.
- when the service server in this application performs image recognition processing on the first image data, it does not suspend the reception of the second image data; the synchronization of image processing and image reception can be realized through the receiving queue, thereby reducing the image transmission delay.
- the service server may send the first object image data contained in the first object area to the target cloud application server, so that the target cloud application server renders it and sends the rendering data obtained by rendering to the first client for display in the cloud application.
- the service server may acquire the second image data with the latest receiving time stamp in the receiving queue, and continue to process the second image data.
- the next step is to obtain the image data with the latest receiving time stamp from the receiving queue for processing, instead of recognizing the image data one by one in the time order of their receiving time stamps, which can improve the recognition efficiency of the image data.
- image recognition is performed on the image data with the latest receiving time stamp, so that when the result is displayed, it also matches the current behavior of the object.
- the present application can improve image recognition efficiency, reduce image transmission delay, and ensure that the virtual behavior of the virtual object displayed by the cloud application matches the current behavior state of the object.
- the cloud application server corresponding to each client may send a registration request to the service server; the registration request is used to request device registration with the service server. After registration, the service server can add the device identifier corresponding to the cloud application server to the stored device identifier set, thereby establishing that the cloud application server is a registered cloud application server.
- a registered cloud application server can be regarded as a legal cloud application server, and the service server can exchange data with it.
- when the first client sends image data (such as the first image data) to the service server, it can carry the device identifier bound to it (which can be called the device identifier to be confirmed); the service server uses this device identifier to confirm whether the bound cloud application server has been registered (whether it is legal), and when it determines that the bound cloud application server is registered, it determines the bound cloud application server as the target cloud application server and sends the first object image data to it. That is to say, after the first object area is determined as described above, and before sending the first object image data to the target cloud application server, the service server can first determine whether the cloud application server corresponding to the first client has been registered; only when it is registered is the first object image data sent to the corresponding target cloud application server.
- FIG. 7 is a schematic flowchart of sending the first object image data to a target cloud application server according to an embodiment of the present application.
- this process is illustrated by taking as an example the case where the first image data carries the device identifier to be confirmed (the device identifier to be confirmed is the device identifier bound to the cloud application server, and the bound cloud application server has a binding relationship with the first client), as shown in Figure 7
- the process may include at least the following S701-S704:
- the stored device identifier set includes M stored device identifiers, one stored device identifier corresponds to one registered cloud application server, and M is a positive integer.
- the specific method can be: when the user uses the client to open the cloud application, the second client can respond to the application opening operation, generate an application opening notification, and send the application opening notification to its corresponding cloud application server (which may be referred to as the cloud application server to be registered); the cloud application server to be registered can then send a registration request to the service server based on the application opening notification, and the service server can receive the registration request sent by the cloud application server to be registered. Next, the service server can detect the device index information of the cloud application server to be registered according to the registration request; when the device index information meets the processing quality conditions, the service server obtains the device identifier to be stored, stores it in the stored device identifier set, converts the cloud application server to be registered into a registered cloud application server, and converts the device identifier to be stored into a stored device identifier.
- the device index information may include network quality parameters, device version, function module quality index, storage space index, etc.
- the detection of the device index information here may be to detect whether a certain index is qualified; for example, to detect whether the network quality parameter is qualified, and if the network quality parameter is qualified, it can be considered that the device index information of the cloud application server to be registered meets the processing quality conditions. The detection of the device index information may also be to detect whether two or more indexes are qualified, in which case it is confirmed that the device index information of the cloud application server to be registered satisfies the processing quality conditions only when all of them are qualified.
- the following will take the device index information including network quality parameters and device version as an example to describe the specific method of detecting the device index information of the cloud application server to be registered.
- the specific method can be: according to the registration request, obtain the network quality parameter and device version of the cloud application server to be registered; if the network quality parameter reaches the parameter threshold and the device version matches the quality standard version (which can be understood as a qualified quality version), it can be determined that the device index information meets the processing quality conditions; if the network quality parameter does not reach the parameter threshold, or the device version does not match the quality standard version, it can be determined that the device index information does not meet the processing quality conditions.
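- the two-index check just described can be sketched as a simple predicate. The threshold value and version string below are illustrative assumptions; the application does not specify concrete values.

```python
PARAM_THRESHOLD = 0.8                 # assumed network-quality threshold
QUALITY_STANDARD_VERSION = "2.0"      # assumed qualified quality version

def meets_processing_quality(network_quality, device_version):
    """Device index info qualifies only when ALL checked indexes pass."""
    return (network_quality >= PARAM_THRESHOLD
            and device_version == QUALITY_STANDARD_VERSION)

ok = meets_processing_quality(0.9, "2.0")   # both indexes qualified
bad = meets_processing_quality(0.9, "1.0")  # version mismatch -> rejected
```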
- since the stored device identifier set contains the stored device identifiers corresponding to different registered cloud application servers, after obtaining the device identifier to be confirmed sent by the first client, the stored device identifier set can be obtained, and the device identifier to be confirmed can be matched against the stored device identifier set.
- if it matches, subsequent S703 may be performed; if not, subsequent S704 may be performed.
- determine that the bound cloud application server indicated by the device identifier to be confirmed belongs to the registered cloud application servers, determine the bound cloud application server indicated by the device identifier to be confirmed as the target cloud application server, and send the first object image data to the target cloud application server.
- if the device identifier to be confirmed matches a stored device identifier, it can be determined that the bound cloud application server indicated by the device identifier to be confirmed belongs to the registered cloud application servers; at this time, the bound cloud application server can be determined as the target cloud application server, and the first object image data can be sent to the target cloud application server.
- if the device identifier to be confirmed does not match any stored device identifier, it can be determined that the bound cloud application server indicated by the device identifier to be confirmed is an unregistered cloud application server; since the bound cloud application server is not registered, the service server cannot send the first object image data to it.
- at this time, the service server can generate device abnormality prompt information (which may refer to server-unregistered prompt information) and return it to the first client; the first client can send a registration notification to its corresponding bound cloud application server based on the device abnormality prompt information, and the bound cloud application server can apply to the service server for registration based on the registration notification.
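- the matching and dispatch of S702-S704 can be sketched as follows. The identifier values and the `route_object_image` helper are illustrative assumptions; only the set-membership decision mirrors the flow described above.

```python
# stored device identifier set: one entry per registered cloud application server
stored_device_ids = {"cloud-app-01", "cloud-app-02"}

def route_object_image(device_id_to_confirm, object_image_data):
    """S702: match the to-be-confirmed ID against the stored set.
    S703: registered -> send the object image data to that server.
    S704: unregistered -> return device abnormality prompt information."""
    if device_id_to_confirm in stored_device_ids:
        return ("send", device_id_to_confirm, object_image_data)
    return ("device_abnormality_prompt", device_id_to_confirm)

result_ok = route_object_image("cloud-app-01", b"portrait")
result_bad = route_object_image("cloud-app-99", b"portrait")
```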
- it can be seen that this application can determine the correspondence between the client and the cloud application server by pre-storing the device identifier set; the client sends the image data carrying the device identifier of the corresponding cloud game server, and it can also be determined whether that cloud application server has been registered, so that the user picture collected by the client can be sent to the correct registered cloud application server, which can improve the correctness of the user picture displayed by the cloud application.
- Steps can include the following 3 steps:
- the target cloud application server needs to write the portrait data into the buffer first, then read and render the portrait data, and then continue to receive the portrait data and write it into the buffer after the rendering is completed.
- if the amount of received data is large, the target cloud application server will consume a lot of time in buffer allocation and data copying, seriously affecting the time available for subsequent reception of portrait data and resulting in a large delay. It can be seen that although the frame skipping processing of the service server described above can reduce the image receiving delay on the service server side, a delay problem still exists on the cloud application server side.
- based on this, the present application provides a data processing method, which allocates double buffers on the cloud application server side.
- FIG. 8 is a schematic flowchart of a data processing method provided by an embodiment of the present application.
- the process is executed by a computer device, which may be a target cloud application server such as a cloud game server.
- This process may correspond to the data processing process of the target cloud application server after receiving the object image data.
- the process may include at least the following S801-S804:
- after the service server extracts the first object area and obtains the first object image data, it can send the first object image data to the target cloud application server, and the target cloud application server can store the first object image data in the first buffer, the buffer in the buffer set whose working state is the storage state.
- the target cloud application server can pre-allocate two receiving buffers (Buffer) of the same size and set the working state of one buffer to the storage state; that is, that buffer is actually a storage buffer, and the target cloud application server can store the received data in it. At the same time, the working state of the other buffer can be set to the reading state; that is, that buffer is actually a reading buffer, and the target cloud application server reads from it when reading and rendering data.
- the specific method of allocating the double buffers to generate the buffer set can be: the first buffer and the second buffer can be pre-allocated; then, the initial pointer identifier of the first buffer can be set as the storage pointer identifier, and the initial pointer identifier of the second buffer can be set as the reading pointer identifier. It should be understood that the working state of the first buffer carrying the storage pointer identifier is the storage state, and the working state of the second buffer carrying the reading pointer identifier is the reading state; subsequently, the buffer set can be generated from the first buffer whose working state is the storage state and the second buffer whose working state is the reading state. At this time, when the first object image data is received, because the working state of the first buffer is the storage state, the first object image data can be stored in the first buffer.
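- the pointer-identifier scheme above can be sketched as follows: the two buffers never move, only the identifiers that mark which one stores and which one is read are exchanged, so no data copy is needed. A minimal model; the class and identifier names are assumptions for illustration.

```python
STORE, READ = "store_ptr", "read_ptr"   # assumed pointer identifiers

class DoubleBuffer:
    def __init__(self):
        # first buffer starts with the storage pointer, second with the reading pointer
        self.buffers = {STORE: [], READ: []}

    def store(self, item):
        # received data always goes to whichever buffer holds the storage pointer
        self.buffers[STORE].append(item)

    def swap(self):
        # switching the pointer identifiers exchanges the two working states;
        # the underlying buffers are untouched (no allocation, no copy)
        self.buffers[STORE], self.buffers[READ] = self.buffers[READ], self.buffers[STORE]

db = DoubleBuffer()
db.store("first_object_image_data")
db.swap()                         # first buffer becomes the reading buffer
readable = db.buffers[READ]
```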
- it is assumed that the initial working state of the first buffer is the storage state, and the initial working state of the second buffer is the reading state.
- while receiving and storing the first object image data, the target cloud application server can synchronously read and render the data already stored in the second buffer (which may be referred to as stored image data); when the second buffer no longer contains unprocessed object image data, then after the first object image data is stored in the first buffer, the storage pointer identifier of the first buffer is switched to the reading pointer identifier, and the reading pointer identifier of the second buffer is switched to the storage pointer identifier, so that the working states of the first buffer and the second buffer are exchanged. In other words, the current working state of the first buffer becomes the reading state, and the current working state of the second buffer becomes the storage state.
- at this time, the first object image data can be read from the first buffer and rendered, while the second object image data continues to be received and stored in the second buffer.
- in this way, the target cloud application server can read and receive synchronously, and can receive the remaining data without waiting for rendering to complete, which can greatly reduce the receiving delay.
- it is assumed that the initial working state of the first buffer is the storage state, and the initial working state of the second buffer is the reading state. When the target cloud application server receives and stores the first object image data, it can synchronously read and render the data already stored in the second buffer (which may be referred to as stored image data). If image data has been stored in the second buffer but has all been read and rendered, the processed image data in the second buffer can be cleared; at this time, it can also be determined that the second buffer contains no unprocessed image data, and by switching the pointer identifiers, the working state of the first buffer can be adjusted to the reading state and the working state of the second buffer to the storage state. The first object image data is then read from the first buffer, now in the reading state, and rendered.
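- the swap condition described here (exchange roles only once the reading buffer holds no unprocessed data, clearing the processed data first) can be sketched as follows; `try_swap` and the cursor-based model are assumptions used to make the rule concrete.

```python
def try_swap(store_buf, read_buf, read_cursor):
    """Return (new_store, new_read, swapped). read_cursor counts the items
    of read_buf that have already been read and rendered."""
    if read_cursor < len(read_buf):       # unprocessed data remains: no swap
        return store_buf, read_buf, False
    read_buf.clear()                      # drop the processed image data
    return read_buf, store_buf, True      # pointer switch: roles exchanged

first = ["obj_img_1"]                     # storage buffer, just filled
second = ["old_a", "old_b"]               # reading buffer, fully rendered
new_store, new_read, swapped = try_swap(first, second, read_cursor=2)
```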
- the second object image data is the image data included in the second object area.
- the second object area is obtained by the business server after extracting the first object area and performing image recognition processing on the second image data.
- the second object area is the area where the object is located in the second image data; the second image data is the image data with the latest receiving time stamp obtained from the updated receiving queue when the service server extracts the first object area; and the second image data in the updated receiving queue is obtained continuously from the first client while the service server performs image recognition processing on the first image data.
- the second target area may refer to the area extracted by the above-mentioned service server after performing image recognition processing on the second image data
- the second object image data may refer to the image data contained in the second object area of the second image data.
- the specific extraction method may be the same as the method for extracting the first object region, which will not be repeated here.
- the second object image data sent by the service server can be received.
- the target cloud application server can receive data synchronously during the process of reading data and store it in the second buffer, which is currently in the storage state, thereby reducing the delay.
- the working state of the first buffer can be adjusted to the storage state, and the working state of the second buffer can be adjusted to the reading state.
- at this time, the second object image data can be read from the second buffer in the reading state and rendering processing can be performed on it; the remaining object image data can also be received synchronously and stored in the first buffer, which is in the storage state.
- the specific implementation manner of adjusting the working state of the first buffer to the storage state and adjusting the working state of the second buffer to the reading state may also be a switching manner of pointer identification.
- the specific method can be: when the first rendering data corresponding to the first object area is obtained, the reading pointer identifier of the first buffer, which is used to represent the reading state, can be switched to the storage pointer identifier, and the storage pointer identifier of the second buffer can be switched to the reading pointer identifier; the working state of the first buffer carrying the storage pointer identifier is the storage state, and the working state of the second buffer carrying the reading pointer identifier is the reading state.
- FIG. 9 is a schematic diagram of a state change of a double buffer provided by an embodiment of the present application.
- the working state of the buffer 900a is the read state
- the object image data stored in the buffer 900a may include object image data 1 to object image data 10, which are denoted by numbers 1, 2, 3, 4, 5, 6, 7, 8, 9 and 10 in sequence in FIG. 9.
- the object image data 1 to the object image data 7 are data that have been read, and the object image data 8 to the object image data 10 are data to be read.
- the working state of the buffer 900b is a storage state.
- the target cloud application server can continuously receive object image data and store it in the buffer 900b.
- the data received in the buffer 900b includes object image data 11 to object image data 14 (there are still 6 remaining spaces in the buffer 900b for receiving object image data).
- when all the data in the buffer 900a has been read (that is, the object image data 8 to the object image data 10 have also been read), the buffer 900a can be emptied; at this time, the data received in the buffer 900b includes object image data 11 to object image data 20 (indicated by reference numerals 11, 12, 13, 14, 15, 16, 17, 18, 19 and 20 in order in FIG. 9).
- at this time, the working state of the buffer 900a can be switched to the storage state, and the working state of the buffer 900b can be switched to the reading state; thus, the target cloud application server can read data from the buffer 900b (for example, reading sequentially starting from the object image data 11). At the same time, the target cloud application server can receive object image data synchronously and store it in the buffer 900a; for example, after receiving new object image data 1 to new object image data 3, it can store them in the buffer 900a.
- it should be understood that the amounts of data in the buffer 900a and the buffer 900b are examples for ease of understanding and have no practical reference significance.
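- the Figure 9 cycle can be simulated in a few lines under the same assumptions: 900a is drained while 900b fills; once 900a is empty, the working states switch (a pointer switch, with no data copy) and reception continues into 900a. The list-based model is illustrative only.

```python
buf_900a = list(range(8, 11))    # object image data 8..10, still to be read
buf_900b = list(range(11, 21))   # object image data 11..20, received meanwhile

while buf_900a:                  # finish reading buffer 900a
    buf_900a.pop(0)

# 900a is empty -> switch working states; the lists themselves are reused
read_buf, store_buf = buf_900b, buf_900a
rendered_first = read_buf[0]     # reading resumes from object image data 11
store_buf.extend([1, 2, 3])      # new object image data 1..3 stored in 900a
```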
- through the frame skipping processing of the service server, the image receiving delay on the service server side can be reduced and the image recognition efficiency can be improved; through the double-buffer allocation processing of the target cloud application server, no data copying is required, it is only necessary to switch the working states of the two buffers (for example, by pointer switching), and it is not necessary to allocate a buffer every time.
- receiving data and processing data can be performed at the same time without waiting for each other, which can reduce the delay. That is to say, on the basis of reducing the delay on the service server side, the delay can be further reduced by setting up the double buffers.
- FIG. 10 is a system architecture diagram provided by an embodiment of the present application.
- the system architecture diagram shown in FIG. 10 takes a cloud application as an example, and the cloud application server corresponding to the cloud application may be a cloud game server.
- the system architecture can include a client cluster (which may include client 1, client 2, ..., client n), a service server (which may include a streaming sub-server and an image recognition sub-server; the streaming sub-server can be used to receive the encoded image file uploaded by the client and decode it, and the image recognition sub-server can perform image recognition processing on the image data decoded by the streaming sub-server), and a cloud game server.
- Client cluster: when each client runs a cloud application (such as a cloud game application), it can display the screen of the cloud application (such as the screen of the cloud game application).
- when the cloud game application is running, the user's picture can be collected through the camera and encoded, and the obtained image data is uploaded to the streaming sub-server in the service server (the streaming sub-server can be any server with a data receiving function and a decoding function, and is mainly used to receive the encoded image file uploaded by the client and decode it).
- Streaming sub-server: it can receive the image data uploaded by the client and perform decoding processing to obtain a decoded image in the original image format (such as the YUV format), and can send the decoded image to the image recognition sub-server.
- Image recognition sub-server: it can convert the decoded image from the YUV format to the RGB format, then recognize and extract the user portrait data or the key points of the human body in the image, and send the user portrait or the key points of the human body to the cloud game server.
- the streaming sub-server and the image recognition sub-server can jointly form a service server, so that the service server can have image decoding and image recognition functions.
- the streaming sub-server and the image recognition sub-server can also be used as independent servers, each performing corresponding tasks (that is, the streaming sub-server receives and decodes the encoded image file; the image recognition sub-server performs image recognition processing on the decoded data). It should be understood that, in order to reduce the data receiving delay, both the streaming sub-server and the image recognition sub-server can perform frame skipping processing.
- the cloud game server can refer to the cloud application server corresponding to the client.
- the cloud game server provides the corresponding computing services.
- the cloud game server can receive user portrait data or key points of the human body, and can render and display the user portrait data.
- The cloud game server can use the human body key points to drive the virtual cartoon character in the cloud game application to realize animation (that is, instead of projecting the user's portrait into the cloud game application, the virtual cartoon character is manipulated to synchronize with the user's real action state).
- FIG. 11 is a schematic flowchart of a system provided by an embodiment of the present application. This process may correspond to the system architecture shown in FIG. 10. As shown in FIG. 11, the process may include S31-S36:
- S31: The client collects camera images.
- S32: The client encodes the collected image.
- S33: The streaming sub-server decodes the encoded data.
- S34: The image recognition sub-server converts the decoded image from the YUV format to the RGB format.
- S35: The image recognition sub-server identifies the user portrait or human body key points.
- S36: The image recognition sub-server sends the recognition result to the cloud game server.
- When the client is collecting an object, if another object is collected at the same time (which can be understood as another user entering the frame), the client can generate object-selection prompt information, and the user can choose which one to use as the final collection object.
- Alternatively, the client automatically decides based on the clarity and the area occupied by each object. For example, if the client captures object 1 and object 2 at the same time, but object 2 is far from the lens and its captured image is not clear, while object 1 is relatively close to the lens and its captured image is clear, the client can automatically use object 1 as the final capture object.
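The automatic choice between simultaneously captured objects can be illustrated with a simple score combining clarity and occupied area. The equal weights and field names below are hypothetical, chosen only to make the example concrete.

```python
def pick_capture_object(candidates):
    """Choose the final capture object from simultaneously detected
    candidates, scoring by sharpness and occupied area (both 0..1).
    The 0.5/0.5 weights are illustrative, not from the patent."""
    return max(candidates, key=lambda c: 0.5 * c["sharpness"] + 0.5 * c["area"])

objects = [
    {"name": "object-1", "sharpness": 0.9, "area": 0.4},  # near the lens, clear
    {"name": "object-2", "sharpness": 0.3, "area": 0.1},  # far away, blurry
]
print(pick_capture_object(objects)["name"])  # -> object-1
```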
- FIG. 12 is an interaction flowchart provided by an embodiment of the present application.
- the interaction process may be an interaction process between the client, the streaming sub-server, the image recognition sub-server, and the cloud application server (taking the cloud game server as an example).
- the interaction process may at least include the following S41-S54:
- the client can establish a connection with the streaming sub-server (for example, establish a Websocket persistent connection).
- The cloud game server (which can be integrated with a cloud game software development kit (SDK)) can establish a connection with the image recognition sub-server (for example, a Transmission Control Protocol (TCP) connection).
- the cloud game server sends a collection notification to its corresponding client.
- the acquisition notification may be a notification message for the start of image acquisition.
- the client sends a streaming message to the streaming sub-server.
- The client can turn on the camera based on the collection notification and notify the streaming sub-server of the device identifier of the cloud game server, the identifier of the collected user, and the width and height of the image captured by the camera, so that the streaming sub-server can prepare to receive data.
- the streaming message may include the device ID of the cloud game server, the collected user ID, and the width and height of the image captured by the camera.
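A possible shape for such a streaming message is sketched below. The JSON encoding and field names are assumptions; only the listed contents (device ID, collected user ID, capture width and height) come from the description above.

```python
import json

def build_streaming_message(device_id, user_id, width, height):
    """Hypothetical wire format for the streaming message; the patent
    names the fields but not the encoding."""
    return json.dumps({
        "device_id": device_id,  # identifies the cloud game server
        "user_id": user_id,      # identifies the collected user
        "width": width,          # camera capture width in pixels
        "height": height,        # camera capture height in pixels
    })

msg = build_streaming_message("srv-001", "user-42", 1280, 720)
print(json.loads(msg)["width"])  # -> 1280
```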
- the streaming sub-server sends a streaming message to the image recognition sub-server.
- After receiving the streaming message from the client, the streaming sub-server can establish a TCP connection with the image recognition sub-server and forward the streaming message to it.
- The client sends encoded data to the streaming sub-server.
- the streaming sub-server sends the decoded data to the image recognition sub-server.
- the image recognition sub-server converts the format of the decoded data, and performs image recognition.
- the image recognition sub-server sends the recognition data to the cloud game server.
- the cloud game server renders the identification data to obtain rendering data.
- the cloud game server sends the rendering data to the client.
- FIG. 13 is a schematic structural diagram of a data processing device provided by an embodiment of the present application.
- The data processing device may be a computer program (including program code) running in a computer device; for example, the data processing device is application software. The data processing device may be used to execute the method shown in FIG. 3.
- the data processing device 1 may include: a data acquisition module 11 , an image recognition module 12 , a queue update module 13 and an area sending module 14 .
- the data acquisition module 11 is configured to acquire the first image data sent by the first client, and store the first image data in the receiving queue; the first image data is obtained by the first client when running the cloud application, including image data of the object;
- An image recognition module 12 configured to perform image recognition processing on the first image data in the receiving queue
- the queue update module 13 is used to store the continuously obtained second image data sent by the first client in the receiving queue during the image recognition processing of the first image data, so as to obtain an updated receiving queue;
- the area sending module 14 is used to send the first object image data contained in the first object area to the target cloud application server when the first object area where the object is located in the first image data is extracted through image recognition processing;
- the target cloud application server is used to render the first object image data to obtain rendering data, and send the rendering data to the first client;
- the area sending module 14 is further configured to synchronously perform image recognition processing on the second image data with the latest receiving time stamp in the update receiving queue.
- For specific implementations of the data acquisition module 11, the image recognition module 12, the queue update module 13, and the area sending module 14, refer to the description of S101-S103 in the embodiment corresponding to FIG. 3 above; details are not repeated here.
- the data acquisition module 11 may include: an image receiving unit 111 and a storage unit 112 .
- The image receiving unit 111 is configured to receive the first image data sent by the first client; the first image data is the data obtained after the first client encodes the original image frame; the original image frame is collected by the first client when running the cloud application;
- the storage unit 112 is configured to acquire a receiving time stamp of receiving the first image data, and associate and store the first image data and the receiving time stamp in a receiving queue.
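The associated storage of image data and receiving timestamp performed by the storage unit 112 can be sketched as follows; this is an illustrative Python sketch, and the `(timestamp, data)` tuple layout is an assumption.

```python
import time
from collections import deque

class ReceiveQueue:
    """Stores each received image payload together with its receive
    timestamp, as the storage unit 112 is described as doing."""
    def __init__(self):
        self._q = deque()

    def store(self, image_data, recv_ts=None):
        ts = time.time() if recv_ts is None else recv_ts
        self._q.append((ts, image_data))  # associate data with its timestamp
        return ts

    def latest(self):
        """Entry with the latest receiving timestamp, or None if empty."""
        return max(self._q, key=lambda e: e[0]) if self._q else None

rq = ReceiveQueue()
rq.store(b"frame-1", recv_ts=100.0)
rq.store(b"frame-2", recv_ts=101.5)
print(rq.latest())  # -> (101.5, b'frame-2')
```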
- the image recognition module 12 may include: a data decoding unit 121 , a format conversion unit 122 and an image recognition unit 123 .
- a data decoding unit 121 configured to decode the first image data to obtain decoded image data in an original image format
- a format conversion unit 122 configured to perform format conversion on the decoded image data to obtain an original image frame with a standard image format
- the image recognition unit 123 is configured to perform image recognition processing on the original image frame with a standard image format.
- For specific implementations of the data decoding unit 121, the format conversion unit 122, and the image recognition unit 123, refer to the description of S101 in the embodiment corresponding to FIG. 3; details are not repeated here.
- the image recognition unit 123 may include: a key point recognition subunit 1231 , a curve connection subunit 1232 and an area determination subunit 1233 .
- a key point identification subunit 1231 configured to identify object edge key points of the object in the original image frame
- the curve connection subunit 1232 is used to connect the key points of the object edge to obtain the object edge curve of the object;
- the area determining subunit 1233 is configured to determine the initial object area where the object is located in the original image frame from the area covered by the object edge curve in the original image frame; and determine the first object area according to the initial object area.
- the specific implementation manners of the key point identification subunit 1231, the curve connection subunit 1232, and the region determination subunit 1233 can refer to the description of S102 in the embodiment corresponding to FIG. 3 above, and will not be repeated here.
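A much-simplified stand-in for the subunits above: connect the object edge key points into a closed curve and derive an initial object area from it. Using the bounding box of the curve as the area is an assumption for illustration only; the application does not specify this representation.

```python
def initial_object_region(edge_keypoints):
    """Connect object edge key points (in order) into a closed edge
    curve and take its bounding box as the initial object area.
    A simplified stand-in for subunits 1231-1233."""
    xs = [x for x, _ in edge_keypoints]
    ys = [y for _, y in edge_keypoints]
    curve = edge_keypoints + edge_keypoints[:1]   # close the edge curve
    bbox = (min(xs), min(ys), max(xs), max(ys))   # area covered by the curve
    return curve, bbox

pts = [(2, 1), (8, 1), (8, 9), (2, 9)]
curve, bbox = initial_object_region(pts)
print(bbox)  # -> (2, 1, 8, 9)
```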
- The image recognition unit 123 is further configured to acquire object recognition configuration information for the object and the object recognition configuration part indicated by the object recognition configuration information, and match the object recognition configuration part with the object key parts; if the object recognition configuration part matches the object key parts, the step of determining the first object area according to the initial object area is performed; if the object recognition configuration part does not match the object key parts, it is determined that the first object area cannot be extracted through image recognition processing.
- The image recognition unit 123 is further configured to determine the image data with the earliest receiving timestamp in the updated receiving queue as the to-be-recognized image data, and perform image recognition processing on the to-be-recognized image data; when the to-be-processed object area where the object is located in the to-be-recognized image data is extracted through the image recognition processing, the area sending module 14 is further configured to send the to-be-processed object area to the target cloud application server.
- The image recognition unit 123 is specifically configured to: acquire the object key parts of the object presented in the initial object area; if the object key parts have part integrity, determine the initial object area as the first object area; if the object key parts do not have part integrity, acquire N sample image frames from the sample database, acquire the to-be-processed sample image frame corresponding to the object from the N sample image frames, and determine the first object area according to the to-be-processed sample image frame and the initial object area; N is a positive integer.
- The image recognition unit 123 is further specifically configured to: acquire the overall part information in the to-be-processed sample image frame; determine the to-be-fused part area in the overall part information according to the object key parts; and fuse the to-be-fused part area with the initial object area to obtain the first object area.
- the first image data carries a device identifier to be confirmed;
- the device identifier to be confirmed is a device identifier bound to a cloud application server, and the bound cloud application server has a binding relationship with the first client;
- the area sending module 14 includes: a set acquiring unit 141 and an identifier matching unit 142 .
- the set acquisition unit 141 is used to acquire a set of stored device identifiers;
- the stored device identifier set includes M stored device identifiers, one stored device identifier corresponds to a registered cloud application server, and M is a positive integer;
- The identifier matching unit 142 is configured to: if a stored device identifier matching the to-be-confirmed device identifier exists among the M stored device identifiers, determine that the bound cloud application server indicated by the to-be-confirmed device identifier is a registered cloud application server, determine that bound cloud application server as the target cloud application server, and send the first object image data to the target cloud application server.
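The identifier matching performed by unit 142 can be sketched as a set lookup; the `servers` mapping and the None-on-miss convention below are hypothetical.

```python
def resolve_target_server(pending_device_id, stored_ids, servers):
    """Match the to-be-confirmed device identifier against the stored
    identifier set; only registered servers may receive object data.
    `servers` maps identifier -> server handle (hypothetical)."""
    if pending_device_id in stored_ids:
        return servers[pending_device_id]  # the target cloud application server
    return None                            # not registered: do not send

stored = {"srv-001", "srv-002"}
handles = {"srv-001": "cloud-game-server-1", "srv-002": "cloud-game-server-2"}
print(resolve_target_server("srv-002", stored, handles))  # -> cloud-game-server-2
print(resolve_target_server("srv-999", stored, handles))  # -> None
```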
- the data processing device 1 may further include: a registration request receiving module 15 , an indicator detection module 16 and an identification adding module 17 .
- The registration request receiving module 15 is configured to receive the registration request sent by the to-be-registered cloud application server; the registration request is generated by the to-be-registered cloud application server after receiving the application start notification sent by the second client; the application start notification is generated by the second client in response to an application start operation for the cloud application;
- the index detection module 16 is used for detecting the device index information of the cloud application server to be registered according to the registration request;
- The identifier adding module 17 is configured to: when the device indicator information satisfies the processing quality condition, acquire the to-be-stored device identifier of the to-be-registered cloud application server, store the to-be-stored device identifier in the stored device identifier set, convert the to-be-registered cloud application server into a registered cloud application server, and convert the to-be-stored device identifier into a stored device identifier.
- the device index information includes network quality parameters and device version
- the index detection module 16 may include: a parameter acquisition unit 161 and an index determination unit 162 .
- the parameter obtaining unit 161 is used to obtain the network quality parameter and device version of the cloud application server to be registered according to the registration request;
- An index determining unit 162 configured to determine that the device index information meets the processing quality condition if the network quality parameter reaches the parameter threshold and the device version matches the quality standard version;
- the index determining unit 162 is further configured to determine that the device index information does not meet the processing quality condition if the network quality parameter does not reach the parameter threshold, or the device version does not match the quality standard version.
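The processing quality condition checked by the index determining unit 162 can be sketched as below; the concrete threshold and standard version are placeholder values, not taken from the application.

```python
def meets_quality_condition(network_quality, device_version,
                            threshold=0.8, standard_version="2.0"):
    """Registration check: the network quality parameter must reach the
    parameter threshold AND the device version must match the quality
    standard version. Threshold and version are illustrative."""
    return network_quality >= threshold and device_version == standard_version

print(meets_quality_condition(0.9, "2.0"))  # -> True
print(meets_quality_condition(0.5, "2.0"))  # -> False  (network too poor)
print(meets_quality_condition(0.9, "1.0"))  # -> False  (version mismatch)
```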
- When the client (such as the first client) obtains the first image data containing the object, it can send the first image data to the relevant computer equipment (such as the service server); image recognition processing does not need to be performed locally on the client side, and the first image data can be processed by a service server with relatively high computing power, which can improve the efficiency and clarity of image recognition. At the same time, in this application, the service server can store the received first image data in the receiving queue, continuously and synchronously obtain the second image data from the first client during the image recognition processing of the first image data, store the received second image data in the receiving queue, and thereby update the receiving queue.
- That is, when the service server in this application performs image recognition processing on the first image data, it does not suspend the reception of the second image data; the synchronization of image processing and image reception is realized through the receiving queue, which reduces the image transmission delay.
- The service server may send the first object image data contained in the first object area to the target cloud application server, which renders it and sends the resulting rendering data to the first client, so that it can be displayed in the cloud application.
- the service server may acquire the second image data with the latest receiving time stamp in the receiving queue, and continue to process the second image data.
- The next step is to obtain the image data with the latest receiving timestamp from the receiving queue for processing, instead of recognizing the image data one by one in the time order of the receiving timestamps, which can improve the recognition efficiency of the image data.
- Because image recognition is performed on the image data with the latest receiving timestamp, the displayed result also matches the current behavior of the object.
- the present application can improve image recognition efficiency, reduce image transmission delay, and ensure that the virtual behavior of the virtual object displayed by the cloud application matches the current behavior state of the object.
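The latest-timestamp selection with deletion of historical image data (as also recited in claim 12) can be sketched as follows; this is an illustrative Python sketch with an assumed `(timestamp, data)` list layout.

```python
def process_latest_and_prune(queue):
    """Pick the entry with the latest receive timestamp for recognition
    and delete all historical entries with earlier timestamps,
    as described for the updated receiving queue."""
    latest = max(queue, key=lambda e: e[0])
    history = [e for e in queue if e[0] < latest[0]]
    queue[:] = [latest]  # historical image data is removed from the queue
    return latest, len(history)

q = [(100.0, "img-A"), (101.0, "img-B"), (102.0, "img-C")]
latest, dropped = process_latest_and_prune(q)
print(latest, dropped)  # -> (102.0, 'img-C') 2
```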
- FIG. 14 is a schematic structural diagram of another data processing device provided by an embodiment of the present application.
- the data processing device can be a computer program (including program code) running in the computer equipment, for example, the data processing device is an application software; the data processing device can be used to execute the method shown in FIG. 8 .
- the data processing device 2 may include: an area storage module 21 , an area rendering module 22 , an area receiving module 23 and a state adjustment module 24 .
- The area storage module 21 is configured to receive the first object image data sent by the service server and store it in the first buffer, whose working state is the storage state, in the buffer set; the first object image data is the image data contained in the first object area; the first object area is the area where the object is located in the first image data, obtained after the service server performs image recognition processing on the first image data; the first image data is sent by the first client and is the image data containing the object obtained by the first client when running the cloud application;
- The area rendering module 22 is configured to: when the second buffer, whose working state is the reading state, in the buffer set contains no unprocessed object image data, adjust the working state of the first buffer to the reading state, adjust the working state of the second buffer to the storage state, read the first object area from the first buffer whose working state is the reading state, and render the first object area;
- the area receiving module 23 is used to receive the second object image data sent by the service server during the rendering process of the first object area, and store the second object image data in the second buffer whose working state is in the storage state;
- The second object image data is the image data contained in the second object area, which is obtained by the service server by performing image recognition processing on the second image data after extracting the first object area; the second object area is the area where the object is located in the second image data; the second image data is the image data with the latest receiving timestamp obtained by the service server from the receiving queue when the first object area is extracted; the second image data in the receiving queue is continuously obtained from the first client while the service server performs image recognition processing on the first image data;
- a state adjustment module 24 configured to adjust the working state of the first buffer to a storage state, and adjust the working state of the second buffer to a reading state when the first rendering data corresponding to the first object image data is acquired, The second object image data is read from the second buffer in the read state, and the second object image data is rendered.
- the state adjustment module 24 may include: an identification obtaining unit 241 and an identification switching unit 242 .
- The identifier obtaining unit 241 is configured to: when the first rendering data corresponding to the first object area is obtained, obtain the read pointer identifier corresponding to the first buffer and used to represent the reading state, and the storage pointer identifier corresponding to the second buffer and used to represent the storage state;
- An identifier switching unit 242 configured to switch the read pointer identifier corresponding to the first buffer to a storage pointer identifier; the working state of the first buffer with the storage pointer identifier is a storage state;
- the identification switching unit 242 is further configured to switch the storage pointer identification of the second buffer to the reading pointer identification; the working state of the second buffer with the reading pointer identification is the reading state.
- the data processing device 2 may further include: a buffer allocation module 25 , an identification setting module 26 and a set generation module 27 .
- a buffer allocation module 25 configured to allocate the first buffer and the second buffer
- the identification setting module 26 is used to set the initial pointer identification of the first buffer zone as the storage pointer identification, and set the initial pointer identification of the second buffer area as the reading pointer identification; the working state of the first buffer area with the storage pointer identification is the storage state; the working state of the second buffer with the read pointer identification is the read state;
- the set generating module 27 is configured to generate a buffer set according to the first buffer whose working state is in the storage state and the second buffer whose working state is in the reading state.
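The double-buffer scheme built by modules 25-27 can be sketched as two fixed buffers whose roles swap by flipping pointer identifiers, so no data is copied and no new buffer is allocated per frame. This Python sketch is illustrative; the method names are assumptions.

```python
class DoubleBuffer:
    """Two fixed buffers whose roles swap by switching pointer
    identifiers, so no data copy or re-allocation is needed."""
    def __init__(self):
        self.buffers = {0: [], 1: []}
        self.store_idx = 0  # buffer holding the storage pointer identifier
        self.read_idx = 1   # buffer holding the read pointer identifier

    def store(self, item):
        """Receive data into the buffer whose state is 'storage'."""
        self.buffers[self.store_idx].append(item)

    def swap(self):
        """Switch the pointer identifiers instead of copying data."""
        self.store_idx, self.read_idx = self.read_idx, self.store_idx

    def read_all(self):
        """Drain the buffer whose state is 'reading' (e.g. for rendering)."""
        buf = self.buffers[self.read_idx]
        items, buf[:] = list(buf), []
        return items

db = DoubleBuffer()
db.store("object-image-1")  # written to the storage buffer
db.swap()                   # roles flip: stored data becomes readable
print(db.read_all())        # -> ['object-image-1']
```

Because storing always targets one buffer and reading the other, receiving and rendering can proceed concurrently without waiting on each other.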
- Through the receiving queue, the image receiving delay on the service server side can be reduced and the image recognition efficiency improved; through the double-buffer allocation of the cloud game server, there is no need to copy data: it is enough to switch the working states of the two buffers, and buffers do not need to be allocated each time. Receiving data and processing data (such as reading and rendering data) can therefore proceed at the same time without waiting for each other, which reduces delay. That is to say, on the basis of reducing the delay on the service server side, setting the double buffer further reduces the delay.
- FIG. 15 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
- The device 1 in the embodiment corresponding to FIG. 13 or the device 2 in the embodiment corresponding to FIG. 14 can be applied to the above computer device 8000. The computer device 8000 may include: a processor 8001, a network interface 8004, and a memory 8005; in addition, the computer device 8000 also includes a user interface 8003 and at least one communication bus 8002.
- the communication bus 8002 is used to realize connection and communication between these components.
- the user interface 8003 may include a display screen (Display) and a keyboard (Keyboard), and the optional user interface 8003 may also include a standard wired interface and a wireless interface.
- the network interface 8004 may include a standard wired interface and a wireless interface (such as a WI-FI interface).
- the memory 8005 can be a high-speed RAM memory, or a non-volatile memory, such as at least one disk memory.
- The memory 8005 may also be at least one storage device located remotely from the aforementioned processor 8001.
- the memory 8005 as a computer-readable storage medium may include an operating system, a network communication module, a user interface module, and a device control application program.
- the network interface 8004 can provide a network communication function; the user interface 8003 is mainly used to provide an input interface for the user; and the processor 8001 can be used to call the device control application stored in the memory 8005 program to implement the data processing methods provided in the foregoing embodiments.
- The computer device 8000 described in this embodiment of the present application can execute the description of the data processing method in the embodiments corresponding to FIG. 3 to FIG. 8, and can also execute the description of the data processing device 1 in the embodiment corresponding to FIG. 13 or the data processing device 2 in the embodiment corresponding to FIG. 14; details are not repeated here.
- the description of the beneficial effect of adopting the same method will not be repeated here.
- An embodiment of the present application also provides a computer-readable storage medium storing the computer program executed by the aforementioned data processing computer device 8000; the computer program includes program instructions. When the processor executes the program instructions, it can execute the description of the data processing method in the embodiments corresponding to FIG. 3 to FIG. 8; details are not repeated here.
- the description of the beneficial effect of adopting the same method will not be repeated here.
- the above-mentioned computer-readable storage medium may be the data processing apparatus provided in any one of the foregoing embodiments or an internal storage unit of the above-mentioned computer equipment, such as a hard disk or memory of the computer equipment.
- the computer-readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk equipped on the computer device, a smart memory card (smart media card, SMC), a secure digital (secure digital, SD) card, Flash card (flash card), etc.
- the computer-readable storage medium may also include both an internal storage unit of the computer device and an external storage device.
- the computer-readable storage medium is used to store the computer program and other programs and data required by the computer device.
- the computer-readable storage medium can also be used to temporarily store data that has been output or will be output.
- One aspect of the present application provides a computer program product or computer program, where the computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
- the processor of the computer device reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the computer device executes the method provided in one aspect of the embodiments of the present application.
- Each flow and/or block of the method flowcharts and/or structural diagrams, and combinations of flows and/or blocks in the flowcharts and/or structural diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a general-purpose computer, a special-purpose computer, an embedded processor, or a processor of other programmable data processing equipment to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing equipment produce a device for realizing the functions specified in one or more flows of the flowchart and/or one or more blocks of the structural diagram.
- These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the structural diagram.
- These computer program instructions can also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce a computer-implemented process, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the structural diagram.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Software Systems (AREA)
- Information Transfer Between Computers (AREA)
- Processing Or Creating Images (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims (20)
- A data processing method, executed by a computer device, the method comprising: acquiring first image data sent by a first client, and storing the first image data in a receiving queue, the first image data being image data containing an object that is acquired by the first client when running a cloud application; performing image recognition processing on the first image data in the receiving queue, and during the image recognition processing of the first image data, storing continuously acquired second image data sent by the first client in the receiving queue to obtain an updated receiving queue; and when a first object area where the object is located in the first image data is extracted through the image recognition processing, sending first object image data contained in the first object area to a target cloud application server, and synchronously performing image recognition processing on second image data with the latest receiving timestamp in the updated receiving queue; the target cloud application server being configured to render the first object image data to obtain rendering data and send the rendering data to the first client.
- The method according to claim 1, wherein acquiring the first image data sent by the first client and storing the first image data in the receiving queue comprises: receiving the first image data sent by the first client, the first image data being data obtained after the first client encodes an original image frame, the original image frame being collected by the first client when running the cloud application; and acquiring a receiving timestamp of receiving the first image data, and storing the first image data in association with the receiving timestamp in the receiving queue.
- The method according to claim 2, wherein performing image recognition processing on the first image data in the receiving queue comprises: decoding the first image data to obtain decoded image data in an initial image format; performing format conversion on the decoded image data to obtain the original image frame in a standard image format; and performing image recognition processing on the original image frame in the standard image format.
- The method according to claim 3, wherein performing image recognition processing on the original image frame in the standard image format comprises: identifying object edge key points of the object in the original image frame; connecting the object edge key points to obtain an object edge curve of the object; determining, from the area covered by the object edge curve in the original image frame, an initial object area where the object is located in the original image frame; and determining the first object area according to the initial object area.
- The method according to claim 4, wherein before determining the first object area according to the initial object area, the method further comprises: acquiring object key parts of the object presented in the initial object area; acquiring object recognition configuration information for the object and an object recognition configuration part indicated by the object recognition configuration information, and matching the object recognition configuration part with the object key parts; if the object recognition configuration part matches the object key parts, performing the step of determining the first object area according to the initial object area; and if the object recognition configuration part does not match the object key parts, determining that the first object area cannot be extracted through the image recognition processing.
- The method according to claim 5, wherein after determining that the first object area cannot be extracted through the image recognition processing if the object recognition configuration part does not match the object key parts, the method further comprises: determining the image data with the earliest receiving timestamp in the updated receiving queue as to-be-recognized image data; and performing image recognition processing on the to-be-recognized image data, and when a to-be-processed object area where the object is located in the to-be-recognized image data is extracted through the image recognition processing, sending the to-be-processed object area to the target cloud application server.
- The method according to claim 4, wherein determining the first object area according to the initial object area comprises: acquiring object key parts of the object presented in the initial object area; if the object key parts have part integrity, determining the initial object area as the first object area; and if the object key parts do not have part integrity, acquiring N sample image frames from a sample database, acquiring a to-be-processed sample image frame corresponding to the object from the N sample image frames, and determining the first object area according to the to-be-processed sample image frame and the initial object area, N being a positive integer.
- The method according to claim 7, wherein determining the first object area according to the to-be-processed sample image frame and the initial object area comprises: acquiring overall part information in the to-be-processed sample image frame; determining a to-be-fused part area in the overall part information according to the object key parts; and fusing the to-be-fused part area with the initial object area to obtain the first object area.
- The method according to claim 1, wherein the first image data carries a to-be-confirmed device identifier, the to-be-confirmed device identifier being a device identifier of a bound cloud application server, the bound cloud application server having a binding relationship with the first client; and sending the first object image data contained in the first object area to the target cloud application server comprises: acquiring a stored device identifier set, the stored device identifier set containing M stored device identifiers, each stored device identifier corresponding to one registered cloud application server, M being a positive integer; and if a stored device identifier matching the to-be-confirmed device identifier exists among the M stored device identifiers, determining that the bound cloud application server indicated by the to-be-confirmed device identifier is a registered cloud application server, determining the bound cloud application server indicated by the to-be-confirmed device identifier as the target cloud application server, and sending the first object image data contained in the first object area to the target cloud application server.
- The method according to claim 9, further comprising: receiving a registration request sent by a to-be-registered cloud application server, the registration request being generated by the to-be-registered cloud application server after receiving an application start notification sent by a second client, the application start notification being generated by the second client in response to an application start operation for the cloud application; detecting device indicator information of the to-be-registered cloud application server according to the registration request; and when the device indicator information satisfies a processing quality condition, acquiring a to-be-stored device identifier of the to-be-registered cloud application server, storing the to-be-stored device identifier in the stored device identifier set, converting the to-be-registered cloud application server into a registered cloud application server, and converting the to-be-stored device identifier into a stored device identifier.
- The method according to claim 10, wherein the device indicator information includes a network quality parameter and a device version; and detecting the device indicator information of the to-be-registered cloud application server according to the registration request comprises: acquiring the network quality parameter and the device version of the to-be-registered cloud application server according to the registration request; if the network quality parameter reaches a parameter threshold and the device version matches a quality standard version, determining that the device indicator information satisfies the processing quality condition; and if the network quality parameter does not reach the parameter threshold, or the device version does not match the quality standard version, determining that the device indicator information does not satisfy the processing quality condition.
- The method according to claim 1, wherein performing image recognition processing on the second image data with the latest receiving timestamp in the updated receiving queue comprises: acquiring the second image data with the latest receiving timestamp in the updated receiving queue; and performing image recognition processing on the second image data, and synchronously deleting historical image data in the updated receiving queue, the historical image data being image data in the updated receiving queue whose receiving timestamp is earlier than that of the second image data.
- A data processing method, executed by a computer device, the method comprising: receiving first object image data sent by a service server, and storing the first object image data in a first buffer whose working state is a storage state in a buffer set; the first object image data being image data contained in a first object area, the first object area being an area where an object is located in first image data, obtained after the service server performs image recognition processing on the first image data; the first image data being sent by a first client to the service server and being image data containing the object that is acquired by the first client when running a cloud application; when a second buffer whose working state is a reading state in the buffer set contains no unprocessed object image data, adjusting the working state of the first buffer to the reading state, adjusting the working state of the second buffer to the storage state, reading the first object image data from the first buffer whose working state is the reading state, and rendering the first object image data; during the rendering of the first object image data, receiving second object image data sent by the service server, and storing the second object image data in the second buffer whose working state is the storage state; the second object image data being image data contained in a second object area, the second object area being obtained by the service server by performing image recognition processing on second image data after extracting the first object area, the second object area being an area where the object is located in the second image data; the second image data being the image data with the latest receiving timestamp acquired by the service server from an updated receiving queue when the first object area is extracted; the second image data in the updated receiving queue being continuously acquired from the first client during the image recognition processing of the first image data by the service server; and when rendering data corresponding to the first object image data is acquired, adjusting the working state of the first buffer to the storage state, adjusting the working state of the second buffer to the reading state, reading the second object image data from the second buffer whose working state is the reading state, and rendering the second object image data.
- The method according to claim 13, wherein when the rendering data corresponding to the first object image data is acquired, adjusting the working state of the first buffer to the storage state and adjusting the working state of the second buffer to the reading state comprises: when the rendering data corresponding to the first object image data is acquired, acquiring a read pointer identifier corresponding to the first buffer and used to represent the reading state, and a storage pointer identifier corresponding to the second buffer and used to represent the storage state; switching the read pointer identifier corresponding to the first buffer to the storage pointer identifier, the working state of the first buffer with the storage pointer identifier being the storage state; and switching the storage pointer identifier of the second buffer to the read pointer identifier, the working state of the second buffer with the read pointer identifier being the reading state.
- The method according to claim 13, further comprising: allocating a first buffer and a second buffer; setting the initial pointer identifier of the first buffer to the storage pointer identifier, and setting the initial pointer identifier of the second buffer to the read pointer identifier, the working state of the first buffer with the storage pointer identifier being the storage state, and the working state of the second buffer with the read pointer identifier being the read state; and generating the buffer set from the first buffer whose working state is the storage state and the second buffer whose working state is the read state.
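Claims 13-15 describe a classic double-buffer (ping-pong) scheme: one buffer accepts incoming object image data while the other is drained for rendering, and swapping the two pointer identifiers swaps the working states. A minimal sketch under those assumptions (all names illustrative):

```python
class DoubleBuffer:
    """Two buffers whose roles are tracked by pointer identifiers: the buffer
    holding the storage pointer accepts writes (storage state); the buffer
    holding the read pointer is drained for rendering (read state)."""

    def __init__(self):
        self._buffers = {"A": [], "B": []}
        self._store_id = "A"   # initial storage pointer identifier
        self._read_id = "B"    # initial read pointer identifier

    def store(self, object_image_data):
        # Writes always go to the buffer in the storage state.
        self._buffers[self._store_id].append(object_image_data)

    def swap_pointers(self):
        # Switching the pointer identifiers switches the working states.
        self._store_id, self._read_id = self._read_id, self._store_id

    def read_all(self):
        # Drain the buffer in the read state for rendering.
        items = list(self._buffers[self._read_id])
        self._buffers[self._read_id].clear()
        return items

db = DoubleBuffer()
db.store("obj-frame-1")   # written into buffer A (storage state)
db.swap_pointers()        # A becomes readable, B becomes writable
db.store("obj-frame-2")   # arrives during rendering, lands in buffer B
print(db.read_all())      # ['obj-frame-1']
```

The point of the swap is that storing and reading never touch the same buffer at the same time, so incoming object image data cannot corrupt the frame currently being rendered.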
- A data processing apparatus, deployed on a computer device, the apparatus comprising: a data acquisition module, configured to acquire first image data sent by a first client and store the first image data into a receive queue, the first image data being image data containing an object that the first client acquired while running a cloud application; an image recognition module, configured to perform image recognition processing on the first image data in the receive queue; a queue update module, configured to store, during the image recognition processing of the first image data, continuously acquired second image data sent by the first client into the receive queue to obtain an updated receive queue; and a region sending module, configured to send, when the first object region in which the object is located in the first image data is extracted through image recognition processing, the first object image data contained in the first object region to a target cloud application server; the region sending module being further configured to synchronously perform image recognition processing on the second image data with the latest receiving timestamp in the updated receive queue; and the target cloud application server being configured to render the first object image data to obtain rendering data and send the rendering data to the first client.
- A data processing apparatus, deployed on a computer device, the apparatus comprising: a region storage module, configured to receive first object image data sent by a service server and store the first object image data into a first buffer, in a buffer set, whose working state is the storage state; the first object image data being the image data contained in a first object region, the first object region being the region in which an object is located in first image data, obtained after the service server performs image recognition processing on the first image data; the first image data being sent to the service server by a first client, and being image data containing the object that the first client acquired while running a cloud application; a region rendering module, configured to, when a second buffer, in the buffer set, whose working state is the read state contains no unprocessed object image data, adjust the working state of the first buffer to the read state, adjust the working state of the second buffer to the storage state, read the first object image data from the first buffer whose working state is now the read state, and render the first object image data; a region receiving module, configured to receive, during the rendering of the first object image data, second object image data sent by the service server, and store the second object image data into the second buffer whose working state is the storage state; the second object image data being the image data contained in a second object region, the second object region being obtained by the service server performing image recognition processing on second image data after extracting the first object region, and being the region in which the object is located in the second image data; the second image data being the image data with the latest receiving timestamp that the service server acquired from an updated receive queue upon extracting the first object region; the second image data in the updated receive queue being continuously acquired from the first client while the service server performs image recognition processing on the first image data; and a state adjustment module, configured to, upon obtaining the rendering data corresponding to the first object image data, adjust the working state of the first buffer to the storage state, adjust the working state of the second buffer to the read state, read the second object image data from the second buffer whose working state is now the read state, and render the second object image data.
- A computer device, comprising a processor, a memory, and a network interface; the processor being connected to the memory and the network interface, wherein the network interface is configured to provide a network communication function, the memory is configured to store program code, and the processor is configured to invoke the program code to cause the computer device to perform the method according to any one of claims 1-15.
- A computer-readable storage medium, storing a computer program adapted to be loaded by a processor to perform the method according to any one of claims 1-15.
- A computer program product or computer program, comprising computer instructions stored in a computer-readable storage medium, the computer instructions being adapted to be read and executed by a processor to cause a computer device having the processor to perform the method according to any one of claims 1-15.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22871677.5A EP4282499A1 (en) | 2021-09-24 | 2022-08-15 | Data processing method and apparatus, and device and readable storage medium |
JP2023555773A JP2024518227A (ja) | 2021-09-24 | 2022-08-15 | データ処理方法、装置、機器及びコンピュータプログラム |
US18/196,364 US20230281861A1 (en) | 2021-09-24 | 2023-05-11 | Data processing method and apparatus, device, and readable storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111123508.7 | 2021-09-24 | ||
CN202111123508.7A CN113559497B (zh) | 2021-09-24 | 2021-09-24 | Data processing method, apparatus, device, and readable storage medium
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/196,364 Continuation US20230281861A1 (en) | 2021-09-24 | 2023-05-11 | Data processing method and apparatus, device, and readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023045619A1 true WO2023045619A1 (zh) | 2023-03-30 |
Family
ID=78174395
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/112398 WO2023045619A1 (zh) | 2021-09-24 | 2022-08-15 | 一种数据处理方法、装置、设备以及可读存储介质 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20230281861A1 (zh) |
EP (1) | EP4282499A1 (zh) |
JP (1) | JP2024518227A (zh) |
CN (1) | CN113559497B (zh) |
WO (1) | WO2023045619A1 (zh) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113559497B (zh) * | 2021-09-24 | 2021-12-21 | Tencent Technology (Shenzhen) Co., Ltd. | Data processing method, apparatus, device, and readable storage medium |
CN115022204B (zh) * | 2022-05-26 | 2023-12-05 | Alibaba (China) Co., Ltd. | RTC transmission delay detection method, apparatus, and device |
CN115460189B (zh) * | 2022-11-09 | 2023-04-11 | Tencent Technology (Shenzhen) Co., Ltd. | Processing device test method and apparatus, computer, and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130147819A1 (en) * | 2011-06-09 | 2013-06-13 | Ciinow, Inc. | Method and mechanism for performing both server-side and client-side rendering of visual data |
CN108810554A (zh) * | 2018-06-15 | 2018-11-13 | Tencent Technology (Shenzhen) Co., Ltd. | Scene image transmission method for virtual scenes, computer device, and storage medium |
EP3634005A1 (en) * | 2018-10-05 | 2020-04-08 | Nokia Technologies Oy | Client device and method for receiving and rendering video content and server device and method for streaming video content |
CN111729293A (zh) * | 2020-08-28 | 2020-10-02 | Tencent Technology (Shenzhen) Co., Ltd. | Data processing method, apparatus, and storage medium |
CN113559497A (zh) * | 2021-09-24 | 2021-10-29 | Tencent Technology (Shenzhen) Co., Ltd. | Data processing method, apparatus, device, and readable storage medium |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7274368B1 (en) * | 2000-07-31 | 2007-09-25 | Silicon Graphics, Inc. | System method and computer program product for remote graphics processing |
JP2009171994A (ja) * | 2008-01-21 | 2009-08-06 | Sammy Corp | Image generation device, gaming machine, and program |
CN103294439B (zh) * | 2013-06-28 | 2016-03-02 | Huawei Technologies Co., Ltd. | Image update method, system, and apparatus |
KR102407691B1 (ko) * | 2018-03-22 | 2022-06-10 | Google LLC | Methods and systems for rendering and encoding content for online interactive gaming sessions |
CN111767503B (zh) * | 2020-07-29 | 2024-05-28 | Tencent Technology (Shenzhen) Co., Ltd. | Game data processing method and apparatus, computer, and readable storage medium |
CN112233419B (zh) * | 2020-10-10 | 2023-08-25 | Tencent Technology (Shenzhen) Co., Ltd. | Data processing method, apparatus, device, and storage medium |
CN112316424B (zh) * | 2021-01-06 | 2021-03-26 | Tencent Technology (Shenzhen) Co., Ltd. | Game data processing method, apparatus, and storage medium |
CN112689142A (zh) * | 2021-01-19 | 2021-04-20 | Qingdao Meigou Media Co., Ltd. | Low-latency control method facilitating control of virtual reality objects |
CN112569591B (zh) * | 2021-03-01 | 2021-05-18 | Tencent Technology (Shenzhen) Co., Ltd. | Data processing method, apparatus, device, and readable storage medium |
- 2021-09-24: CN application CN202111123508.7A filed; granted as CN113559497B (Active)
- 2022-08-15: JP application JP2023555773A filed; published as JP2024518227A (Pending)
- 2022-08-15: EP application EP22871677.5A filed; published as EP4282499A1 (Pending)
- 2022-08-15: PCT application PCT/CN2022/112398 filed as WO2023045619A1 (Application Filing)
- 2023-05-11: US application US18/196,364 filed; published as US20230281861A1 (Pending)
Also Published As
Publication number | Publication date |
---|---|
CN113559497A (zh) | 2021-10-29 |
EP4282499A1 (en) | 2023-11-29 |
JP2024518227A (ja) | 2024-05-01 |
US20230281861A1 (en) | 2023-09-07 |
CN113559497B (zh) | 2021-12-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2023045619A1 (zh) | Data processing method, apparatus, device, and readable storage medium | |
CN113423018B (zh) | Game data processing method, apparatus, and storage medium | |
US10419618B2 (en) | Information processing apparatus having whiteboard and video conferencing functions | |
EP2940940B1 (en) | Methods for sending and receiving video short message, apparatus and handheld electronic device thereof | |
WO2017084174A1 (zh) | Image synchronous display method and apparatus | |
CN109085950B (zh) | Multi-screen interaction method and apparatus based on an electronic whiteboard, and electronic whiteboard | |
CN108737884B (zh) | Content recording method and device, storage medium, and electronic device | |
US20160014193A1 (en) | Computer system, distribution control system, distribution control method, and computer-readable storage medium | |
CN113225585A (zh) | Video definition switching method and apparatus, electronic device, and storage medium | |
CN114938408B (zh) | Cloud phone data transmission method, system, device, and medium | |
CN104639501B (zh) | Data stream transmission method, device, and system | |
WO2024159932A1 (zh) | Device pairing method and apparatus, computer device, and computer-readable storage medium | |
JP2016143236A (ja) | Distribution control device, distribution control method, and program | |
CN114598931A (zh) | Streaming method, system, apparatus, and medium for running multiple cloud game instances | |
CN114139491A (zh) | Data processing method, apparatus, and storage medium | |
CN111880756B (zh) | Online classroom screen-casting method and apparatus, electronic device, and storage medium | |
WO2023024832A1 (zh) | Data processing method and apparatus, computer device, and storage medium | |
WO2023279919A1 (zh) | Game update method, system, server, electronic device, program product, and storage medium | |
JP2020109896A (ja) | Video distribution system | |
CN110798700B (zh) | Video processing method, video processing apparatus, storage medium, and electronic device | |
CN112702625B (zh) | Video processing method and apparatus, electronic device, and storage medium | |
CN113784094A (zh) | Video data processing method, gateway, terminal device, and storage medium | |
JP2020109895A (ja) | Video distribution system | |
CN111800455A (zh) | Method for sharing a convolutional neural network across data sources of different hosts in a local area network | |
WO2024139724A1 (zh) | Image processing method and apparatus, computer device, computer-readable storage medium, and computer program product |
Legal Events
Code | Title | Description
---|---|---
121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 22871677; Country of ref document: EP; Kind code of ref document: A1
WWE | WIPO information: entry into national phase | Ref document number: 202347055056; Country of ref document: IN
WWE | WIPO information: entry into national phase | Ref document number: 2022871677; Country of ref document: EP
ENP | Entry into the national phase | Ref document number: 2022871677; Country of ref document: EP; Effective date: 2023-08-24
WWE | WIPO information: entry into national phase | Ref document number: 2023555773; Country of ref document: JP
WWE | WIPO information: entry into national phase | Ref document number: 11202306198U; Country of ref document: SG
NENP | Non-entry into the national phase | Ref country code: DE