CN113538083A - Data processing method, system, space and equipment - Google Patents

Data processing method, system, space and equipment

Info

Publication number
CN113538083A
CN113538083A
Authority
CN
China
Prior art keywords
image
target object
user
determining
display content
Prior art date
Legal status
Granted
Application number
CN202010323311.7A
Other languages
Chinese (zh)
Other versions
CN113538083B (en)
Inventor
卢金波
朱思语
谭平
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN202010323311.7A
Priority to PCT/CN2021/088872 (published as WO2021213457A1)
Publication of CN113538083A
Application granted
Publication of CN113538083B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0282 Rating or review of business operators or products
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0623 Item investigation
    • G06Q30/0641 Shopping interfaces
    • G06Q30/0643 Graphical representation of items or shoppers

Abstract

Embodiments of the present application provide a data processing method, system, space, and device. The method includes the following steps: displaying an image of a space; acquiring, from a network side, related information of at least one user viewing the image; and displaying, in the image, interface elements corresponding to at least some of the at least one user according to the related information of the at least one user. With the technical solution provided by the embodiments of the present application, online-offline integration can be effectively achieved and the user experience is improved.

Description

Data processing method, system, space and equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data processing method, system, space, and device.
Background
In recent years, with the rapid development of the mobile Internet industry, online-offline merging (OMO) has emerged and been applied in many fields, for example shopping malls and road monitoring. Taking a shopping-mall application system as an example, such a system mostly displays two-dimensional information about the goods in the mall, and interaction between users in the system is mostly limited to comment information in a comment area, so the user experience is lacking.
Disclosure of Invention
In view of the above, the present application is proposed to provide a data processing method, system, space and device that solve, or at least partially solve, the above problems.
Thus, in one embodiment of the present application, a data processing method is provided. The method comprises the following steps:
displaying an image of a space;
acquiring related information of at least one user viewing the image from a network side;
and displaying interface elements corresponding to at least part of the at least one user in the image according to the related information of the at least one user.
In another embodiment of the present application, a data processing method is provided. The method comprises the following steps:
receiving a data request sent by a client aiming at an image;
acquiring related information of at least one user viewing the image;
and feeding back the related information of the at least one user to the client so as to display interface elements corresponding to at least part of the at least one user in the image displayed by the client.
In yet another embodiment of the present application, a data processing method is provided. The method comprises the following steps:
displaying an image of a space;
determining at least one target object in the image;
acquiring data information of the at least one target object;
and determining the display content in the image according to the data information.
In yet another embodiment of the present application, a data processing method is provided. The method comprises the following steps:
acquiring data information of at least one target object in an image;
determining display content in the image according to the data information;
and sending the display content to at least one client displaying the image so that the received client displays the display content in the displayed image.
In yet another embodiment of the present application, a data processing system is provided. The data processing system includes:
the client is used for displaying an image of a space; acquiring related information of at least one user viewing the image from a network side; displaying interface elements corresponding to at least part of the at least one user in the image according to the related information of the at least one user;
the server is used for receiving a data request sent by the client aiming at an image; acquiring related information of at least one user viewing the image; and feeding back the related information of the at least one user to the client so as to display interface elements corresponding to at least part of the at least one user in the image displayed by the client.
In yet another embodiment of the present application, a data processing system is provided. The data processing system includes:
the client is used for displaying an image of a space;
the server is used for acquiring data information of at least one target object in the image; determining display content in the image according to the data information; sending the display content to at least one client displaying the image, so that the received client displays the display content in the displayed image;
the client is further used for receiving the display content fed back by the server and displaying the display content in the image.
In another embodiment of the present application, a space is provided. The space includes:
the first acquisition equipment, which is arranged in the space and used for acquiring first data information related to at least one target object and sending the first data information to a server side; the server side determines, according to the first data information, display content to be displayed in the image corresponding to the space;
the second acquisition equipment is arranged in the space and used for acquiring the three-dimensional panoramic data of the space and sending the three-dimensional panoramic data to the server; and the server side generates an image of the space according to the three-dimensional panoramic data, acquires the related information of at least one user viewing the image, and feeds the related information of the at least one user back to at least one client side.
In another embodiment of the present application, a client device is provided. The client device includes: a memory, a processor, and a display, wherein,
the memory is used for storing programs;
the processor, coupled with the memory, to execute the program stored in the memory to:
displaying an image of a space through the display;
acquiring related information of at least one user viewing the image from a network side;
and displaying interface elements corresponding to at least part of the at least one user in the image according to the related information of the at least one user.
In another embodiment of the present application, a server device is provided. The server side equipment comprises: a memory and a processor, wherein,
the memory is used for storing programs;
the processor, coupled with the memory, to execute the program stored in the memory to:
receiving a data request sent by a client aiming at an image;
acquiring related information of at least one user viewing the image;
and feeding back the related information of the at least one user to the client so as to display interface elements corresponding to at least part of the at least one user in the image displayed by the client.
In another embodiment of the present application, a client device is provided. The client device includes: a memory, a processor, and a display, wherein,
the memory is used for storing programs;
the processor, coupled with the memory, to execute the program stored in the memory to:
displaying an image of a space through the display;
determining at least one target object in the image;
acquiring data information of the at least one target object;
and determining the display content in the image according to the data information.
According to the technical solution provided by the embodiments of the present application, while an image of a space is displayed, related information of at least one user viewing the image on the network side can be acquired in real time, so that the interface elements displayed in the image for at least some of the at least one user are determined based on the related information of the at least one user. This facilitates interaction between users (such as consumers) and improves user satisfaction. In addition, data information of at least one target object determined in the image can be acquired, and the display content in the image can be determined based on the data information and displayed in the image, so that a user (such as a merchant) can conveniently view changes in the data information of the target object in real time and adjust the target object in time.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below illustrate some embodiments of the present application, and other drawings can be derived from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flow chart of a data processing method according to an embodiment of the present application;
FIG. 2a is a partial image according to an embodiment of the present disclosure;
FIG. 2b is a schematic view of a spatial location provided by an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating the display of image content in an image provided by an embodiment of the present application;
fig. 4 is a schematic flowchart of a data processing method according to another embodiment of the present application;
fig. 5a is a schematic flowchart of a data processing method according to another embodiment of the present application;
fig. 5b is a schematic diagram of a specific implementation principle of the data processing method provided in the embodiment of the present application in a specific application scenario;
fig. 5c is a schematic specific flowchart of a data processing method in an application scenario according to an embodiment of the present application;
FIG. 6 is a schematic flowchart of a data processing method according to yet another embodiment of the present application;
FIG. 7 is a block diagram of a data processing system according to an embodiment of the present application;
FIG. 8 is a block diagram of a data processing system according to another embodiment of the present application;
FIG. 9 is a block diagram of a space according to an embodiment of the present disclosure;
fig. 10 is a block diagram of a data processing apparatus according to an embodiment of the present application;
fig. 11 is a block diagram of a data processing apparatus according to another embodiment of the present application;
fig. 12 is a block diagram of a data processing apparatus according to another embodiment of the present application;
fig. 13 is a block diagram of a device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
In some of the flows described in the specification, claims and drawings of the present application, a number of operations appear in a particular order; these operations may, however, be performed out of the order in which they appear herein, or in parallel. The sequence numbers of the operations (e.g., 101, 102) are merely used to distinguish the operations and do not by themselves represent any required order of execution. In addition, the flows may include more or fewer operations, and these operations may be performed sequentially or in parallel. It should be noted that the terms "first", "second" and the like herein are used to distinguish different messages, devices, modules, etc.; they do not indicate a sequence, nor do they require that "first" and "second" be of different types. The term "or/and" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A or/and B" means that A may exist alone, A and B may exist simultaneously, or B may exist alone. The character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship. In addition, the embodiments described below are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments herein without creative effort fall within the protection scope of the present application.
In the technical solutions provided by the embodiments of the present application, a corresponding image is generated for a space in order to fuse the online and offline data of the space, so that interactions can be displayed online in real time. The offline data of the space include radar data and image data, which may be acquired in real time by dedicated hardware, for example a fused radar-camera device. The image can support indoor monitoring of homes, shopping malls and the like, and in a shopping-mall application scenario it also enables real-time interaction between users, improving their shopping satisfaction.
Fig. 1 shows a schematic flowchart of a data processing method according to an embodiment of the present application. The method may be executed by a client, and the client may be any terminal device such as a mobile phone, a tablet computer or a smart wearable device (e.g., a virtual reality device); the embodiments of the present application place no particular limitation on this.
As shown in fig. 1, the method includes:
101. displaying an image of a space;
102. acquiring related information of at least one user viewing the image from a network side;
103. and displaying interface elements corresponding to at least part of the at least one user in the image according to the related information of the at least one user.
In the foregoing 101, the image may be obtained based on image data and radar data of the space. The image data may be collected by an image sensor (e.g., a panoramic camera), and the radar data may be obtained by scanning and measuring the same spatial area with a radar, such as a lidar. Specifically, three-dimensional information of the space (i.e., radar data, or point cloud data) is acquired through radar scanning, and surface texture information of the space (i.e., a panoramic picture, or image data) is acquired with the image sensor; the point cloud data and the panoramic picture are then fused to obtain picture information that combines both, namely a panoramic picture containing three-dimensional information, which may be referred to as a three-dimensional panorama for short. The image in this embodiment may be such a three-dimensional panorama, or may be a three-dimensional scene map generated based on the three-dimensional information of the space. The image is consistent in position and size with the corresponding real-world space, is intuitive and contains rich information, and can be delivered to a user-side client and/or a merchant-side client for display, so that users and/or merchants can browse and view it. Of course, the image may also be a two-dimensional photograph of the space, a video image of the space, and so on.
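The fusion of the radar point cloud with the panoramic picture can be illustrated with a short sketch. The example below is a minimal illustration only, assuming an equirectangular panorama captured from the same origin and orientation as the lidar scan; the patent does not fix a projection model or coordinate convention.

```python
import numpy as np

def colorize_point_cloud(points, panorama):
    """Attach panorama colours to lidar points (minimal illustrative sketch).

    points:   N x 3 array of (x, y, z) coordinates from the radar scan.
    panorama: H x W x 3 equirectangular image from the panoramic camera,
              assumed to share the lidar's origin and orientation.
    Returns an N x 6 array of (x, y, z, r, g, b), i.e. a textured point cloud.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.maximum(np.linalg.norm(points, axis=1), 1e-9)
    lon = np.arctan2(y, x)                                   # horizontal angle, [-pi, pi]
    lat = np.arcsin(np.clip(z / r, -1.0, 1.0))               # vertical angle, [-pi/2, pi/2]
    h, w = panorama.shape[:2]
    u = ((lon + np.pi) / (2 * np.pi) * (w - 1)).astype(int)  # panorama column
    v = ((np.pi / 2 - lat) / np.pi * (h - 1)).astype(int)    # panorama row
    return np.concatenate([points, panorama[v, u]], axis=1)
```

A point cloud textured in this way pairs each surface point with its appearance, which is the raw material from which either a three-dimensional panorama or a three-dimensional scene map can be rendered.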
When some users click on, view or browse the image through a client application, a server side communicatively connected to those user-side clients can acquire related information of these users, so that the related information can be displayed in the image. Based on this, one implementation of 102 above, "acquiring the relevant information of at least one user viewing the image at the network side", is as follows: acquiring at least one client displaying the image at the network side; and obtaining relevant information of at least one user associated with the at least one client. In a specific implementation, the server side may determine, according to client IP addresses, the at least one client currently displaying the image at the network side, and further acquire the related information of the at least one user according to the association between users and the at least one client. The server side may be an ordinary server, a cloud, a virtual server, or the like, which is not specifically limited in the embodiments of the present application; the client is installed with an APP application, or accesses a website, that provides the image display function. For example, if the image corresponds to a three-dimensional panorama of a certain shopping-mall scene, the three-dimensional panorama can be displayed in the mall's online shop, and the online shop may be a shopping APP owned by the mall, a shop on an e-commerce platform, the shop's official website on the Internet, or the like.
One implementation of the above-mentioned 103 "displaying, in the image, the interface elements corresponding to at least some of the at least one user according to the related information of the at least one user" is: acquiring a first spatial position corresponding to a local image displayed on an interface by the image; acquiring a target user with a spatial position in a target area from the at least one user according to the related information of the at least one user; wherein the target region is determined by the first spatial location; and displaying the interface element corresponding to the target user in the local image.
Here, it should be noted that for some larger spaces, for example spaces divided into many regions or areas, or spaces divided into multiple floors, when an image of the space, especially a three-dimensional panorama, is displayed on a display interface, only a partial image may be shown at a time. The user can perform corresponding operations on the interface, by touch, with a mouse and so on, to display other, currently hidden partial images of the image in the interface. By operating continuously, the user can browse the entire image of the space.
For example, a user may only be interested in the image content (e.g., an exhibited item) in a certain partial image of the image; when the user clicks and browses that partial image, the partial image is displayed on the client interface. At this time, a corresponding first spatial position may be acquired based on the partial image, and a target user whose spatial position lies within a target area may be determined from the at least one user according to the acquired related information, where the target area is determined by the first spatial position. The target area may be, for example, a region with a radius of m metres (e.g., 2 metres or 5 metres, which is not specifically limited in this embodiment) centred on the first spatial position, or a regular (e.g., square) or irregularly shaped region centred on the first spatial position. The interface element corresponding to the target user is then displayed in the partial image. For example, in the interface shown in fig. 2b, which displays a partial image 100' of the image corresponding to a certain shop, the interface element corresponding to the target user displayed on the partial image 100' may include at least one of the following: a user avatar, a user nickname, and content input by the user (such as an evaluation of a certain commodity). The content input by the user may relate to the shopping experience, for example: referring to fig. 2b, user b has purchased a commodity in the partial image 100 and posted an evaluation of it, "The quality of the computer is really good, and it does not lag." For other users who have not posted an evaluation, such as user c1 and user c2, only their avatars may be displayed in the partial image. In this way, target users can communicate with one another, which improves purchase satisfaction, and merchants can intuitively see the popularity of each commodity and adjust the commodities in time.
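A minimal sketch of this filtering step is given below. The circular target area, the data shapes and the radius value are illustrative assumptions; the patent leaves the exact shape of the target area open.

```python
from dataclasses import dataclass
import math

@dataclass
class Viewer:
    user_id: str
    avatar_url: str
    comment: str      # content input by the user, e.g. an evaluation
    x: float          # the viewer's second spatial position (metres)
    y: float

def users_in_target_area(viewers, first_position, radius_m=5.0):
    """Return the target users whose spatial position lies in the target area.

    Minimal sketch: the target area is taken as a circle of radius_m metres
    around the first spatial position; square or irregular regions would
    only change the containment test.
    """
    fx, fy = first_position
    return [v for v in viewers if math.hypot(v.x - fx, v.y - fy) <= radius_m]
```

Each returned Viewer would then be rendered into the partial image as an avatar, a nickname and, when present, the content the user has input.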
In 103, the related information includes at least one of the following: a user identification, a user avatar, user input information, a second spatial location of the user. Correspondingly, the "displaying the interface element corresponding to the first user in the image according to the related information of the first user in the at least one user" may specifically include:
1031. determining an interface display position according to the second spatial position of the first user;
1032. and displaying an interface element corresponding to at least one item of user identification, user head portrait and user input information of the first user at the interface display position.
In 1031 above, the first user may be an online user who has logged, on a client, into the APP or website that displays the image, in order to view the image. The second spatial position is the spatial position corresponding to the partial image displayed on the first user's interface while the first user browses the image of the space. Accordingly, "determining the interface display position according to the second spatial position of the first user" may specifically include: determining azimuth information of the second spatial position relative to the first spatial position; and determining the interface display position according to the azimuth information.
In 1032 above, after the interface display position is determined, an interface element corresponding to at least one of the user identifier, the user avatar and the user input information of the first user is displayed at that position. The user identifier is the login identifier, on the client, of the current user of the APP or website that displays the image, and it uniquely identifies the user within the account system of that APP or website. For example, the login identifier may be obtained by the user by registering in the APP application or website, may be an identifier belonging to another application that can be used to log into the APP application or website, or may be an identifier of an instant messaging tool that can be used for such login; these are not listed one by one here.
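As an illustration of mapping the azimuth information to an interface display position, the sketch below places the element on a ring inside the viewport in the direction of the other user. The mapping, the names and the screen convention are assumptions for illustration, not the patent's prescribed layout.

```python
import math

def interface_display_position(first_position, second_position,
                               viewport_w, viewport_h):
    """Minimal sketch of 1031: derive an interface display position from the
    azimuth of the second spatial position relative to the first spatial
    position (screen y grows downwards by assumption)."""
    dx = second_position[0] - first_position[0]
    dy = second_position[1] - first_position[1]
    bearing = math.atan2(dy, dx)                     # azimuth information
    cx, cy = viewport_w / 2.0, viewport_h / 2.0
    ring = 0.45 * min(viewport_w, viewport_h)        # keep the element on screen
    return cx + ring * math.cos(bearing), cy - ring * math.sin(bearing)
```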
Further, the method provided by this embodiment may further include the following steps:
displaying a spatial structure model diagram corresponding to the image;
mapping the first spatial position to the spatial structure model diagram to obtain a corresponding mark position;
at the marked position, an image element is displayed.
Specifically, with continued reference to fig. 2a, the spatial structure model map may be a top view 11 of the spatial structure of the space, or a three-dimensional scene map 12 of the space. The user can click the "3D scene" control in fig. 2a to switch to displaying the three-dimensional scene map of the space. A graphic element representing the user's current first spatial position is displayed in the spatial structure model map, so that while browsing the image the user knows where the currently browsed partial image lies within the overall spatial structure model map, which can guide the user's subsequent browsing direction.
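Mapping the first spatial position onto the top-view model map amounts to a simple change of coordinates. The sketch below assumes a floor plan with a known origin, a uniform metres-per-pixel scale, and a world y-axis that points up while image rows grow downwards; all of these are illustrative assumptions.

```python
def world_to_floorplan(first_position, plan_origin, metres_per_pixel,
                       plan_height_px):
    """Map a first spatial position (metres) to a mark position (pixels) on
    the top-view spatial structure model map. Minimal illustrative sketch."""
    px = (first_position[0] - plan_origin[0]) / metres_per_pixel
    py = plan_height_px - (first_position[1] - plan_origin[1]) / metres_per_pixel
    return int(round(px)), int(round(py))
```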
Further, the method provided by this embodiment may further include at least one of the following:
104a, responding to the operation of a user on one image content in the image, and displaying a first display content associated with the image content;
104b, responding to the operation of the user on the operable interface element displayed on the interface of the image, and displaying the second display content associated with the image.
In one embodiment, with continued reference to fig. 2a, if the user operates the image content 13 (a computer) in a partial image 100 of the image shown in fig. 2a, the first display content associated with the image content 13, such as its price and brand model, is displayed on the client interface. Part of the information related to the image content can be shown in the first display content, and the user can enter the detail interface corresponding to the image content by operating an operable element in the first display content. For example, referring to fig. 3, if the image content is the computer in the partial image 100, then when the user triggers an operation on the computer primitive, the client sends the user's request message to the server side; the server side responds to the request and displays part of the information about the computer 13 (i.e., the first display content 200), including a brief introduction and the price of the computer 13, on the interface, after which the user can enter the computer purchasing interface 300 by clicking the "view details" module 21.
Alternatively, if the user clicks an operable element of the partial image 100, such as the commodity quantity display module 14 contained in the partial image 100, a third display content associated with the image (such as the content shown on the display interface 400) is displayed on the interface; the third display content may be the commodities associated with the commodity quantity display module and their related information. For example, when the number of commodities contained in the displayed local scene shown by the commodity quantity display module 14 is 3, and the commodities associated with the module are a wine cabinet, a sheer curtain and goblets, the user clicks the commodity quantity display module 14, the third display content 400 is displayed on the interface, the related information of the wine cabinet, the sheer curtain and the goblets is shown in the third display content 400, and the user can select and purchase commodities through the third display content 400.
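The interaction in 104a can be pictured with a small handler on the client side. This is a minimal sketch under assumed data shapes; the product database and all field names are illustrative, not part of the patent.

```python
def on_image_content_click(clicked_primitive, product_db):
    """Minimal sketch of 104a: the user operates an image content (e.g. the
    computer primitive) and the client shows the associated first display
    content. product_db and the field names are illustrative assumptions."""
    product = product_db.get(clicked_primitive.get("product_id"))
    if product is None:
        return None
    return {
        "title": product["name"],
        "price": product["price"],
        "summary": product["short_intro"],           # partial information only
        "detail_link": f"/products/{product['id']}"  # the "view details" entry point
    }
```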
Further, the method provided by this embodiment may further include the following steps:
105a, identifying the image content in the image to obtain an identification result;
105b, determining at least one target object according to the recognition result;
105c, acquiring data information of the at least one target object;
and 105d, determining the display content in the image according to the data information.
105a, the image content may be a system default content. For example, in a situation where the image corresponds to a mall monitoring scene, the image content may be a target person, a display item in a shop; in a home monitoring scenario, the image content may be a target person. The identification of the image content in the image can be accomplished by using the existing image identification technology, for example: the recognition of the image content can be completed by adopting a machine learning technology (such as a deep learning technology) to obtain a recognition result. Specifically, as shown in fig. 2a, a local image 100 in an image corresponding to a mall is input into a pre-trained deep learning network model, and according to an output result of the deep learning network model, contents contained in the local image 100, such as a target person, a displayed article, and the like, can be determined. For how to use other existing image recognition technologies to complete the recognition of the image content, reference may be made to existing technologies, and details are not described herein.
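A concrete stand-in for this recognition step is sketched below. The use of torchvision's pre-trained Faster R-CNN detector is purely an illustrative assumption (any recognition model would do), as is the version requirement (torchvision 0.13 or later for the weights argument).

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

# Illustrative only: the patent merely requires *some* recognition model
# (e.g. a deep learning network); an off-the-shelf detector stands in here.
_model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def recognize_image_content(pil_image, score_threshold=0.6):
    """Return (label id, bounding box) pairs for recognised image content."""
    with torch.no_grad():
        out = _model([to_tensor(pil_image)])[0]
    keep = out["scores"] >= score_threshold
    return list(zip(out["labels"][keep].tolist(), out["boxes"][keep].tolist()))
```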
One possible implementation of the above-mentioned 105b "determining at least one target object according to the recognition result" is: and in response to the operation of a user on a target image content in the image, taking a recognition result corresponding to the target image content as a target object. For example, with continued reference to fig. 2a, when the user clicks on the target image content 13 in the local image 100, the recognition result corresponding to the target image content, i.e. the computer, may be used as the target object.
The above 105c "acquiring data information of the at least one target object" may include at least one of the following:
a1, acquiring first data information related to at least one target object through at least one first acquisition device arranged in a space;
and A2, acquiring second data information related to the at least one target object from a local or server side.
In the above a1, the first capturing device may be a camera, such as a panoramic camera; the target object may be any form of object, such as a person, an animal, a displayed item, and the like; dynamic information of people and/or animals and image information of displayed items can be acquired by the first acquisition device (such as a panoramic camera).
In a2 above, for example: in a market monitoring scene, if the target object is a person, second data information related to the at least one target object is acquired from a local or server side, such as historical purchased products of a user, user preferences, user personalized information and the like; if the target object is an article, the second data information related to the at least one target object, which is acquired from a local or a server, is the inventory quantity of the article, the recent sale condition, and the like.
Here, it should be noted that: the content shown in the above embodiments can be displayed simultaneously on the client sides corresponding to the merchant side and the user side. And when the target object is a person in the space or displayed goods in the space, the server side can determine the display content in the image based on the data information corresponding to the target object and send the display content to the merchant-side client only for displaying at the merchant-side client. How to determine the display contents in the image based on the data information of the person or the displayed item in the space will be described in detail below.
In one implementation, the target object is a person in the space. Accordingly, the step 105d "determining the display content in the image according to the data information" may specifically include:
105d1, identifying the behavior of the at least one target object based on the data information;
105d2, determining the display content in the image according to the behavior recognition result.
The step 105d1 of recognizing the behavior of the at least one target object is to extract effective motion features from the image sequence and analyze the features to determine the category to which the behavior belongs. Based on the data information corresponding to the at least one target object obtained from the related content 105a to 105b, the characteristics representing the motion of the target object can be determined, and the classification result of the target object behavior can be obtained by comparing the motion behavior characteristics of the target object with the existing behavior classes through a classifier. Detailed behavior recognition of the target object can be found in the prior art, for example, machine learning technology, and is not described herein.
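As a minimal illustration of this feature-then-classifier pipeline, the sketch below derives a few motion features from a person's position track and fits a support-vector classifier over them. The feature set, the SVM and the example class names are assumptions; the patent only requires that motion features be compared against existing behaviour classes by some classifier.

```python
import numpy as np
from sklearn.svm import SVC

def motion_features(track):
    """track: T x 2 array of a person's positions over an image sequence.
    The particular features are illustrative assumptions."""
    steps = np.diff(track, axis=0)
    speed = np.linalg.norm(steps, axis=1)
    return np.array([
        speed.mean(),                             # average walking speed
        speed.std(),                              # how erratic the movement is
        np.linalg.norm(track[-1] - track[0]),     # net displacement (dwell vs. pass-through)
        float(len(track)),                        # length of the observation
    ])

def train_behavior_classifier(tracks, labels):
    """Fit a classifier mapping motion features to behaviour classes,
    e.g. 'browsing', 'queueing', 'picking_up_item'."""
    X = np.stack([motion_features(t) for t in tracks])
    return SVC(kernel="rbf").fit(X, labels)
```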
The step 105d2 of determining the display content in the image according to the behavior recognition result may specifically include:
determining intention information of the target object according to the behavior recognition result;
generating display content for display in the image based on the intention information;
and displaying the display content in association with the image content corresponding to the target object in the image.
For example, if the intention information of the target object is to purchase a computer, the server side may generate display content related to the computer and send it to the merchant-side client interface for display, so that the merchant can serve the target object offline, that is, in the physical store, for example by introducing the computer to the target user, offering a promotion, and the like. The merchant can also learn the popularity of each commodity in the store from such display content and then plan targeted promotions.
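The step from a behaviour recognition result to intention information and then to display content can be sketched as a simple lookup plus a content template. Everything named below (behaviours, intentions, fields) is an illustrative assumption.

```python
# Illustrative mapping from a behaviour recognition result to intention
# information, and from intention information to merchant-side display content.
_BEHAVIOR_TO_INTENTION = {
    "lingering_at_shelf": "interested in the item",
    "picking_up_item": "considering a purchase",
}

def display_content_from_behavior(behavior, target_object_id, catalog):
    intention = _BEHAVIOR_TO_INTENTION.get(behavior)
    if intention is None:
        return None                        # nothing worth surfacing to the merchant
    item = catalog[target_object_id]
    return {
        "anchor_object": target_object_id, # shown next to this item in the image
        "text": f"A shopper appears {intention}: {item['name']}",
        "suggested_action": "offer assistance or a promotion",
    }
```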
In another implementation, the target object is an item displayed in the space. Accordingly, the step 105d "determining the display content in the image according to the data information" may specifically include:
determining, according to the data information, at least one of the following parameters of the target object: its display quantity in the space, its inventory quantity, and its outbound (delivery) quantity within a preset time period;
generating display content for displaying in the image according to the determined at least one parameter;
and displaying the display content in association with the image content corresponding to the target object in the image.
As a specific embodiment, with continued reference to fig. 2a, suppose the target object is the computer 13. When the server side identifies the content in the partial image 100 and determines a target object (i.e., the computer 13) from the identification result, it may obtain data information related to the computer 13 from a corresponding database; this data information may include the display quantity of the computer in the space, the inventory quantity in the database, the outbound quantity within a preset time period, and the like. The server side sends these quantities as display content to the merchant-side client, where they are displayed in association at a preset position in the image, so that the merchant can query the data information related to the target object in real time and adjust the target object based on it. For example: when the display quantity of the target object in the space is found to be low, the merchant is reminded to restock the shelf; when the total inventory quantity of the target object in the database is detected to be insufficient, the merchant can be reminded to reorder; and from the outbound quantity of the target object within the preset time period, the merchant can judge the popularity of the commodity.
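The reminders described in this example reduce to a couple of threshold checks over the three parameters. A minimal sketch, with assumed thresholds and field names, might look like this:

```python
def merchandise_display_content(item_name, displayed, in_stock, outbound_period,
                                min_display=2, min_stock=10):
    """Minimal sketch: turn the display quantity, inventory quantity and
    outbound quantity over a preset period into merchant-side display
    content. Thresholds and field names are illustrative assumptions."""
    alerts = []
    if displayed < min_display:
        alerts.append("display quantity low: restock the shelf")
    if in_stock < min_stock:
        alerts.append("inventory low: reorder from the supplier")
    return {
        "item": item_name,
        "displayed": displayed,
        "in_stock": in_stock,
        "outbound_last_period": outbound_period,   # a proxy for the item's popularity
        "alerts": alerts,
    }
```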
According to the technical scheme provided by the embodiment, the related information of at least one user viewing the image at the network side is acquired in real time while the image of a space is displayed, so that the interface elements displayed in the image corresponding to at least part of the at least one user are determined based on the related information of the at least one user, mutual interaction among the users is facilitated, and the satisfaction degree of the users is improved.
Fig. 4 is a schematic flowchart of a data processing method according to another embodiment of the present application. The method may be executed by a server side, which may be an ordinary server, a cloud, a virtual server or the like; the embodiments of the present application place no particular limitation on this. As shown in fig. 4, the method includes:
201. receiving a data request sent by a client aiming at an image;
202. acquiring related information of at least one user viewing the image;
203. and feeding back the related information of the at least one user to the client so as to display interface elements corresponding to at least part of the at least one user in the image displayed by the client.
The specific implementation of the steps 201 to 203 can refer to the corresponding content in the above embodiments, and is not described herein again.
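Steps 201 to 203 amount to a lookup keyed by the requested image, followed by a response carrying the related information of its viewers. The sketch below is one possible shape for that exchange; the request and response fields are assumptions, not the patent's protocol.

```python
def handle_image_data_request(request, viewers_by_image):
    """Minimal sketch of steps 201-203 on the server side: look up the users
    currently viewing the requested image and feed their related information
    back to the requesting client. The data shapes are assumed."""
    image_id = request["image_id"]
    viewers = viewers_by_image.get(image_id, [])
    return {
        "image_id": image_id,
        "viewers": [
            {
                "user_id": v["user_id"],
                "avatar": v["avatar"],
                "input": v.get("input", ""),   # e.g. an evaluation of a commodity
                "position": v["position"],     # the user's second spatial position
            }
            for v in viewers
        ],
    }
```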
Further, the method provided by this embodiment may further include the following steps:
204a, receiving first acquisition data sent by second acquisition equipment arranged in the space;
204b, updating the image according to the first acquisition data, so that when a client requests to display the image corresponding to the space, the updated image is sent to the client.
In a specific implementation, the second acquisition device includes a radar and a camera, where the radar may be a lidar and the camera may be a panoramic camera. The second acquisition device can acquire first acquisition data of the space area, namely radar data and image data, and then send the collected first acquisition data to a server side communicatively connected to the second acquisition device; the server side can reconstruct an image from the first acquisition data so as to update the previous image. In this way, when a user requests, through a client, to display the image corresponding to the space, the server side can send the updated image to that client.
Further, the method provided by this embodiment may further include the following steps:
205a, identifying the image content in the image to obtain an identification result;
205b, determining at least one target object according to the recognition result;
205c, obtaining data information of the at least one target object;
205d, determining the display content in the image according to the data information;
205e, sending the display content to at least one client displaying the image, so as to display the display content in the image displayed on the client interface.
The above 205c "acquiring data information of the at least one target object" may include at least one of the following: acquiring first data information related to at least one target object through at least one first acquisition device arranged in a space; and acquiring second data information related to the at least one target object from a local or server side.
In the above, the target object is a person in the space. Accordingly, the "determining the display content in the image according to the data information" may specifically include:
205d1, identifying the behavior of the target object based on the data information;
205d2, determining the display content in the image according to the behavior recognition result.
The aforementioned 205d2 "determining the display content in the image according to the behavior recognition result" may specifically include: determining intention information of the target object according to the behavior recognition result; generating display content for display in the image based on the intention information.
In the above, the target object is an article displayed in the space. Accordingly, the "determining the display content in the image according to the data information" may specifically include:
determining, according to the data information, at least one of the following parameters of the target object: its display quantity in the space, its inventory quantity, and its outbound (delivery) quantity within a preset time period;
and generating display content for displaying in the image according to the determined at least one parameter.
According to the technical scheme provided by the embodiment, the relevant information of at least one user viewing an image is acquired based on a data request sent by a received client for the image, and the relevant information of the at least one user is fed back to the client, so that interface elements corresponding to at least part of the at least one user are displayed in the image displayed by the client. Based on this, mutual interaction between the users is easily realized, the user experience is improved, and further the user satisfaction is improved.
Here, it should be noted that: the content of each step in the data processing method provided in the embodiment of the present application, which is not described in detail in the foregoing embodiments, may be referred to corresponding content in the foregoing embodiments, and is not described herein again. In addition, the method provided in the embodiment of the present application may further include, in addition to the above steps, other parts or all of the steps in the above embodiments, and specific reference may be made to corresponding contents in the above embodiments, which is not described herein again.
Fig. 5a is a schematic flow chart illustrating a data processing method according to another embodiment of the present application. The execution subject of the method can be a client, such as a merchant-side client; the client can be any terminal device such as a mobile phone, a tablet computer, an intelligent wearable device, and the like, and the embodiment of the application is not particularly limited thereto.
As shown in fig. 5a, the method comprises:
301. displaying an image of a space;
302. determining at least one target object in the image;
303. acquiring data information of the at least one target object;
304. and determining the display content in the image according to the data information.
The aforementioned 302 "determining at least one target object in the image" may specifically include:
3021. identifying the image content in the image to obtain an identification result;
3022. and determining the at least one target object according to the recognition result.
The 3022 "determining at least one target object according to the recognition result" may specifically include: and in response to the operation of a user on a target image content in the image, taking a recognition result corresponding to the target image content as a target object.
The above 303 "acquiring data information of the at least one target object" may specifically include at least one of the following: acquiring first data information related to at least one target object through at least one first acquisition device arranged in a space; and acquiring second data information related to the at least one target object from a local or server side.
In the above, the target object is a person in the space. Accordingly, the "determining the display content in the image according to the data information" may specifically include: identifying the behavior of the at least one target object based on the data information; and determining the display content in the image according to the behavior recognition result.
The "determining the display content in the image according to the behavior recognition result" may specifically include: determining intention information of the target object according to the behavior recognition result; generating display content for display in the image based on the intention information; and displaying the display content in association with the image content corresponding to the target object in the image.
In the above, the target object may instead be an article displayed in the space. Accordingly, "determining the display content in the image according to the data information" may specifically include: determining the display quantity and the inventory quantity of the target object in the space according to the data information; generating display content for display in the image according to the display quantity and the inventory quantity; and displaying the display content in association with the image content corresponding to the target object in the image.
According to the technical scheme provided by the embodiment, the data information of at least one determined target object in the image is acquired while the image of a space is displayed, and the display content of the image is determined based on the data information, so that the display content is conveniently displayed in the image, and therefore a user (such as a merchant) can conveniently check the data information change of the target object in real time and timely adjust the target object.
Here, it should be noted that: the content of each step in the data processing method provided in the embodiment of the present application, which is not described in detail in the foregoing embodiments, may be referred to corresponding content in the foregoing embodiments, and is not described herein again. In addition, the method provided in the embodiment of the present application may further include, in addition to the above steps, other parts or all of the steps in the above embodiments, and specific reference may be made to corresponding contents in the above embodiments, which is not described herein again.
In summary, the data processing method provided in the above embodiments can be summarized as the process shown in fig. 5 b. That is, the image sensor 504 (e.g., panoramic camera) collects data from the same spatial region 001 as the radar 503. The image acquired by the image sensor 504 and the radar data acquired by the radar 503 are sent to the server 502 for data fusion processing, so as to obtain an image of the space region, such as a three-dimensional panoramic image. Sending the image to a client 501 so as to be displayed on an interface of the client 501 for a user to browse and view; meanwhile, when the user browses the image, the server side can also acquire related information of other users on the network side and send the related information to the client side so as to be displayed in the image displayed by the client side. In addition, based on the image acquired by the image sensor 504, the server may further identify a target object in the image, acquire data information of the target object according to the identification result, and send the data information to the client 501 to be displayed on an image interface displayed by the client.
Referring to fig. 5c, the technical solution of the image displayed on the client based on fig. 5b can be briefly described as the following process:
based on image information and radar data of a space (such as a shopping mall), three-dimensional reconstruction of the space is completed online to obtain an image corresponding to the space, and the image is sent to the client of the merchant side and/or the user side for display. At the same time, related information of online users who browse the image to shop, and data information of target objects in the image (such as people in the mall space, displayed articles, and the like), can be obtained in real time and displayed in the image, so that online and offline are fused. For the merchant-side client, the merchant can adjust commodity information in time based on the data that are synchronised online and offline in real time; for the user of the user-side client, the user can select and purchase commodities with reference to the real-time visit volume.
Further, taking the space mentioned in the above embodiments as a physical store as an example: besides displaying on the client the image of the physical store, the interface elements corresponding to at least one user browsing the image on the network side, the display content (such as inventory quantity, sales volume, etc.) corresponding to one or more commodities, the preferences of users in the store, and so on, more offline information, such as information about offline commodity promotion activities in the physical store and the number of customers in the offline store, can be converted to online information, that is, displayed in the image. By fusing the online information and the offline information corresponding to the physical store, users can select and purchase commodities more conveniently, and the merchant can learn the online and offline information of the commodities at any time so as to make more reasonable decisions.
In addition, besides the shop scene, the technical scheme provided by the embodiment can also be applied to a virtual decoration scene. For example, a merchant may construct a three-dimensional model image of a user's home for three-dimensional spatial data of a space provided by the user, or for collecting three-dimensional spatial data from the user's home using a mobile data collection device; and then rendering a virtual decoration image of the user home, such as a three-dimensional panoramic image, based on the three-dimensional model image and a virtual decoration scheme formulated according to the actual needs of the user. The user can check the virtual decoration image through the corresponding client side, and can check detailed information (such as price, product brand, material and the like) of any commodity in the virtual decoration image by clicking the commodity, such as a curtain, a dining table, a bookcase and the like. Under the condition that the user allows, other users at the network side can also check the virtual decoration image of the user through corresponding clients, and the user at the network side uploads the evaluation of the virtual decoration image or one or more commodities in the image. The evaluation information can be displayed on the virtual decoration image, so that a user can decide whether to adopt the scheme in the virtual decoration image for decoration or not based on the evaluation of the user at the network side; for a merchant providing a virtual finishing plan, the preferences of most users may be learned based on the ratings of the online users in order to adjust the formulation of subsequent finishing plans.
Fig. 6 shows a schematic flowchart of a data processing method according to an embodiment of the present application. The method provided by this embodiment may be executed by a server side, for example a server, a virtual server deployed on a server cluster, or a cloud computing center. As shown in fig. 6, the data processing method includes:
401. acquiring data information of at least one target object in an image;
402. determining display content in the image according to the data information;
403. and sending the display content to at least one client displaying the image so that the received client displays the display content in the displayed image.
In 401, the target object may be a person in the space or an article displayed in the space, and based on this, the "acquiring data information of at least one target object in the image" may include at least one of:
acquiring first data information related to at least one target object through at least one first acquisition device arranged in a space;
and acquiring second data information related to the at least one target object from a local or server side.
In one implementable aspect, the target object is a person within the space; accordingly, the step 402 of determining the display content in the image according to the data information may specifically include: identifying the behavior of the at least one target object based on the data information; and determining the display content in the image according to the behavior recognition result.
The "determining the display content in the image according to the behavior recognition result" may specifically include:
determining intention information of the target object according to the behavior recognition result;
generating display content for display in the image based on the intention information;
and displaying the display content in association with the image content corresponding to the target object in the image.
In another implementable aspect, the target object is an item displayed within the space; accordingly, the step 402 of determining the display content in the image according to the data information may specifically include: determining the display quantity and the stock quantity of the target object in the space according to the data information; generating display content for displaying in the image according to the display quantity and the inventory quantity; and displaying the display content in association with the image content corresponding to the target object in the image.
According to the technical scheme provided by the embodiment, the display content in the image determined based on the data information of at least one target object in the image is sent to at least one client side displaying the image, so that the received client side can display the display content in the displayed image, a user (such as a merchant) can conveniently check the data information change of the target object in real time, and the target object can be adjusted in time.
Here, it should be noted that: the content of each step in the data processing system provided in the embodiment of the present application, which is not described in detail in the foregoing embodiments, may be referred to corresponding content in the foregoing embodiments, and is not described herein again. In addition, the data processing system provided in the embodiment of the present application may further include, in addition to the above steps, other parts or all of the steps in the above embodiments, and for details, reference may be made to corresponding contents in the above embodiments, and details are not described here.
The technical solutions provided by the above method embodiments can be implemented based on the following hardware system. Specifically, fig. 7 shows a block diagram of a data processing system according to an embodiment of the present application. As shown in fig. 7, the data processing system includes:
a client 501 for displaying an image of a space; acquiring related information of at least one user viewing the image from a network side; displaying interface elements corresponding to at least part of the at least one user in the image according to the related information of the at least one user;
a server 502, configured to receive a data request sent by a client for an image; acquiring related information of at least one user viewing the image; and feeding back the related information of the at least one user to the client so as to display interface elements corresponding to at least part of the at least one user in the image displayed by the client.
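The interaction between the client 501 and the server 502 can be pictured with the following sketch. It assumes an in-process call in place of a real network transport, and the class and field names (Server, Client, UserInfo, spatial_position) are invented for illustration only.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class UserInfo:
    user_id: str
    avatar: str
    spatial_position: Tuple[float, float]

@dataclass
class Server:
    # viewers of each image, keyed by an image identifier (illustrative storage)
    viewers: Dict[str, List[UserInfo]] = field(default_factory=dict)

    def handle_data_request(self, image_id: str) -> List[UserInfo]:
        """Receive a data request for an image and feed back the related user information."""
        return self.viewers.get(image_id, [])

@dataclass
class Client:
    server: Server

    def show_image(self, image_id: str) -> None:
        users = self.server.handle_data_request(image_id)   # related info from the network side
        for u in users:
            # stand-in for rendering an interface element in the displayed image
            print(f"render element for {u.user_id} ({u.avatar}) at {u.spatial_position}")

if __name__ == "__main__":
    srv = Server(viewers={"store_panorama": [UserInfo("u1", "avatar_1.png", (2.0, 5.5))]})
    Client(server=srv).show_image("store_panorama")
```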
Further, the system provided in this embodiment may further include:
the second acquisition device 5031 is configured to acquire three-dimensional panoramic data of the space and send the three-dimensional panoramic data to the server;
accordingly, the server 502 is configured to generate the image based on the three-dimensional panoramic data.
Further, the system provided in this embodiment may further include:
a first acquisition device 5041 for acquiring first data information related to at least one target object; sending the first data information to a server;
correspondingly, the server 502 is configured to determine the display content in the image according to the first data information and second data information, the second data information being information related to the at least one target object that is acquired by the server locally or from a network-side device.
According to the technical scheme provided by the embodiment, the relevant information of at least one user viewing an image is acquired based on a data request sent by a received client for the image, and the relevant information of the at least one user is fed back to the client, so that interface elements corresponding to at least part of the at least one user are displayed in the image displayed by the client. Based on this, mutual interaction between the users is easily realized, the user experience is improved, and further the user satisfaction is improved.
Here, it should be noted that: for the content of each step in the data processing system provided in this embodiment of the present application that is not described in detail above, reference may be made to the corresponding content in the foregoing embodiments, which is not repeated here. In addition, the data processing system provided in this embodiment of the present application may further include, in addition to the above steps, some or all of the steps in the foregoing embodiments; for details, reference may be made to the corresponding content in the foregoing embodiments, which is not repeated here.
Fig. 8 is a block diagram illustrating a data processing system according to another embodiment of the present application. As shown in fig. 8, the data processing system includes:
a client 501 for displaying an image of a space;
the server 502 is used for acquiring data information of at least one target object in the image; determining display content in the image according to the data information; sending the display content to at least one client displaying the image;
the client 501 is further configured to receive display content fed back by the server 502, and display the display content in the image.
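One possible way to realize "sending the display content to at least one client displaying the image" is a simple registry of the clients currently showing each image, as sketched below. The registry and the send callback are assumptions of this sketch rather than elements of the embodiment; in a real deployment the callback would typically wrap whatever transport the system uses, which the embodiment does not fix.

```python
from typing import Callable, Dict, List

class DisplayContentBroadcaster:
    """Illustrative push model: forward newly determined display content to registered clients."""

    def __init__(self) -> None:
        self._clients: Dict[str, List[Callable[[str], None]]] = {}

    def register_client(self, image_id: str, send: Callable[[str], None]) -> None:
        """Remember that a client is currently displaying the image identified by image_id."""
        self._clients.setdefault(image_id, []).append(send)

    def push_display_content(self, image_id: str, content: str) -> None:
        """Send the display content to every client that is displaying the image."""
        for send in self._clients.get(image_id, []):
            send(content)

if __name__ == "__main__":
    broadcaster = DisplayContentBroadcaster()
    broadcaster.register_client("store_panorama", lambda c: print("client A shows:", c))
    broadcaster.register_client("store_panorama", lambda c: print("client B shows:", c))
    broadcaster.push_display_content("store_panorama", "On display: 5, in stock: 18")
```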
According to the technical solution provided by this embodiment, while an image of a space is displayed on the client interface, the server side acquires data information of at least one target object in the image and sends the display content determined based on the data information to the client, so that the display content can be displayed in the image shown by the client. This makes it convenient for a user (such as a merchant) to view changes in the data information of the target object in real time and to adjust the target object in a timely manner.
Here, it should be noted that: for the content of each step in the data processing system provided in this embodiment of the present application that is not described in detail above, reference may be made to the corresponding content in the foregoing embodiments, which is not repeated here. In addition, the data processing system provided in this embodiment of the present application may further include, in addition to the above steps, some or all of the steps in the foregoing embodiments; for details, reference may be made to the corresponding content in the foregoing embodiments, which is not repeated here.
Fig. 9 shows a block diagram of a space provided by an embodiment of the present application. As shown in fig. 9, the space includes:
a first acquisition device 5041, disposed in the space and configured to acquire first data information related to at least one target object and send the first data information to a server side, wherein the server side determines, according to the first data information, display content to be displayed in the image corresponding to the space;
the second acquisition device 5031 is arranged in the space and configured to acquire three-dimensional panoramic data of the space and send the three-dimensional panoramic data to the server; and the server side generates an image of the space according to the three-dimensional panoramic data, acquires the related information of at least one user viewing the image, and feeds the related information of the at least one user back to at least one client side.
According to the technical solution provided by this embodiment, real-time acquisition of spatial data can be realized by arranging the first acquisition device and the second acquisition device in the space; the acquired data information is sent to the server side in time, so that the image can be updated effectively.
Here, it should be noted that: for the content related to the space provided in this embodiment of the present application that is not described in detail above, reference may be made to the corresponding content in the foregoing embodiments, which is not repeated here. In addition, the space provided in this embodiment of the present application may further include, in addition to the above devices, some or all of the devices in the foregoing embodiments; for details, reference may be made to the corresponding content in the foregoing embodiments, which is not repeated here.
The technical solutions provided by the embodiments of the present application can be applied to various application scenarios, such as indoor monitoring of homes, shopping malls, museums and the like, and outdoor monitoring of tourist attractions, streets and the like. The embodiments described above are described in conjunction with a shopping mall space scenario. The technical solution provided by the embodiment of the present application is described below with reference to a home application scenario. Cameras and radars (namely the first acquisition devices and the second acquisition devices) arranged around the home can acquire image data and radar data of every corner of the home; an image can be obtained by fusing the image data and the radar data, and the image is displayed at the client corresponding to each member of the household, so that a user can conveniently monitor, anytime and anywhere, the relevant situations occurring in the home.
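The embodiment does not specify how the image data and the radar data are fused. Purely as an assumed preprocessing stage, the sketch below pairs the two streams by timestamp before they would be handed to a fusion routine; the tolerance value and the data layout are illustrative choices of this sketch, not part of the described solution.

```python
from bisect import bisect_left
from typing import Dict, List, Tuple

def pair_by_timestamp(image_frames: List[Tuple[float, Dict]],
                      radar_frames: List[Tuple[float, Dict]],
                      tolerance: float = 0.05) -> List[Tuple[Dict, Dict]]:
    """Pair each image frame with the radar frame closest in time, within a tolerance."""
    radar_times = [t for t, _ in radar_frames]
    pairs = []
    for t, img in image_frames:
        i = bisect_left(radar_times, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(radar_frames)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(radar_times[k] - t))
        if abs(radar_times[j] - t) <= tolerance:
            pairs.append((img, radar_frames[j][1]))   # aligned pair handed to the fusion step
    return pairs

if __name__ == "__main__":
    imgs = [(0.00, {"frame": "img_0"}), (0.10, {"frame": "img_1"})]
    radar = [(0.01, {"points": 120}), (0.11, {"points": 118})]
    print(pair_by_timestamp(imgs, radar))
```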
Fig. 10 shows a block diagram of a data processing apparatus according to an embodiment of the present application. As shown in fig. 10, the data processing apparatus includes: a display module 801, an obtaining module 802 and a displaying module 803; wherein:
the display module 801 is configured to display an image of a space;
the obtaining module 802 is configured to obtain relevant information of at least one user viewing the image at a network side;
the displaying module 803 is configured to display, in the image, interface elements corresponding to at least some of the at least one user according to the related information of the at least one user.
According to the technical solution provided by this embodiment, while an image of a space is displayed, the related information of at least one user viewing the image at the network side is acquired in real time, so that the interface elements to be displayed in the image for at least part of the at least one user can be determined based on the related information of the at least one user. This facilitates interaction among the users and improves user satisfaction.
Further, the obtaining module 802 is specifically configured to:
acquiring at least one client displaying the image at a network side;
obtaining relevant information of at least one user associated with the at least one client.
Further, the displaying module 803 is specifically configured to:
acquiring a first spatial position corresponding to a local image of the image that is displayed on the interface;
acquiring, from the at least one user according to the related information of the at least one user, a target user whose spatial position is within a target area; wherein the target area is determined by the first spatial position;
and displaying the interface element corresponding to the target user in the local image, as illustrated by the sketch following this list.
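The selection of target users whose spatial position lies in the target area can be sketched as follows, assuming for illustration that the target area is a circle of fixed radius around the first spatial position; the embodiment leaves the exact shape of the area open, and the names Viewer and spatial_position are invented here.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Viewer:
    user_id: str
    spatial_position: Tuple[float, float]   # the user's second spatial position

def users_in_target_area(first_spatial_position: Tuple[float, float],
                         viewers: List[Viewer],
                         radius: float = 5.0) -> List[Viewer]:
    """Keep only the viewers whose spatial position falls inside the assumed circular target area."""
    cx, cy = first_spatial_position
    return [v for v in viewers
            if math.hypot(v.spatial_position[0] - cx, v.spatial_position[1] - cy) <= radius]

if __name__ == "__main__":
    viewers = [Viewer("u1", (1.0, 2.0)), Viewer("u2", (40.0, 2.0))]
    for v in users_in_target_area((0.0, 0.0), viewers):
        print("show interface element for", v.user_id)   # only u1 is inside the area
```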
In the above, the related information includes at least one of the following: a user identification, a user avatar, user input information, and a second spatial position of the user. Correspondingly, the displaying module 803 is further specifically configured to: determining an interface display position according to the second spatial position of the first user; and displaying, at the interface display position, an interface element corresponding to at least one item of the user identification, the user avatar and the user input information of the first user.
Further, the displaying module 803 is further configured to: determining orientation information of the second spatial position relative to the first spatial position; and determining the interface display position according to the orientation information.
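A possible reading of "determining the interface display position according to the orientation information" is sketched below: the orientation of the second spatial position relative to the first is computed as an angle and mapped linearly onto the horizontal axis of a panoramic view. Both the angle convention and the linear mapping are assumptions of this sketch.

```python
import math
from typing import Tuple

def orientation_degrees(first_pos: Tuple[float, float],
                        second_pos: Tuple[float, float]) -> float:
    """Orientation of the second spatial position relative to the first, in degrees [0, 360)."""
    dx = second_pos[0] - first_pos[0]
    dy = second_pos[1] - first_pos[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def interface_display_position(angle_deg: float,
                               view_width_px: int = 1920,
                               fov_deg: float = 360.0) -> int:
    """Map the orientation onto a horizontal pixel offset of an assumed panoramic interface."""
    return int(round(angle_deg / fov_deg * view_width_px)) % view_width_px

if __name__ == "__main__":
    angle = orientation_degrees((0.0, 0.0), (3.0, 3.0))   # 45 degrees
    print(angle, interface_display_position(angle))
```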
Further, the displaying module 803 is further specifically configured to: displaying a spatial structure model diagram corresponding to the image; mapping the first spatial position to the spatial structure model diagram to obtain a corresponding marked position; and displaying an image element at the marked position.
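The mapping of the first spatial position onto the spatial structure model diagram can be sketched as a simple scaling from world coordinates to the pixel coordinates of a floor-plan-like diagram; the world bounds and diagram resolution below are assumptions of this sketch, not values given by the embodiment.

```python
from typing import Tuple

def to_model_diagram(first_spatial_position: Tuple[float, float],
                     world_min: Tuple[float, float] = (0.0, 0.0),
                     world_max: Tuple[float, float] = (20.0, 10.0),
                     diagram_size_px: Tuple[int, int] = (800, 400)) -> Tuple[int, int]:
    """Map a position in the space onto a marked position in the spatial structure model diagram."""
    x, y = first_spatial_position
    sx = (x - world_min[0]) / (world_max[0] - world_min[0])
    sy = (y - world_min[1]) / (world_max[1] - world_min[1])
    # Clamp so positions slightly outside the modelled area still land on the diagram edge.
    sx = min(max(sx, 0.0), 1.0)
    sy = min(max(sy, 0.0), 1.0)
    return int(sx * (diagram_size_px[0] - 1)), int(sy * (diagram_size_px[1] - 1))

if __name__ == "__main__":
    print(to_model_diagram((5.0, 2.5)))   # marked position where an image element is displayed
```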
Further, the data processing apparatus provided in this embodiment further includes: a response module 804, where the response module 804 is configured to display, in response to a user operation on image content in the image, first display content associated with the image content; and/or to display, in response to a user operation on an operable interface element displayed on the interface of the image, second display content associated with the image.
Further, the data processing apparatus provided in this embodiment further includes: the identification module 805, wherein the identification module 805 is specifically configured to: identifying the image content in the image to obtain an identification result; determining at least one target object according to the recognition result; acquiring data information of the at least one target object; and determining the display content in the image according to the data information.
Further, the identifying module 805 is further specifically configured to, in response to an operation performed by a user on a target image content in the image, take an identification result corresponding to the target image content as a target object.
Further, the identifying module 805 is further specifically configured to: acquiring first data information related to at least one target object through at least one first acquisition device arranged in a space; and acquiring second data information related to the at least one target object from a local or server side.
The target object in the above is a person in the space. Correspondingly, the identifying module 805 is further specifically configured to: identifying the behavior of the at least one target object based on the data information; and determining the display content in the image according to the behavior recognition result.
Further, the identifying module 805 is further specifically configured to: determining intention information of the target object according to the behavior recognition result; generating display content for display in the image based on the intention information. Correspondingly, the display module 801 is further configured to associate and display the display content with image content corresponding to the target object in the image.
The target object is an item in the space. Correspondingly, the identifying module 805 is further specifically configured to: determining at least one parameter among the display quantity, the inventory quantity and the delivery quantity of the target object in the space within a preset time period according to the data information; generating display content for displaying in the image according to the determined at least one parameter;
and displaying the display content in association with the image content corresponding to the target object in the image.
Here, it should be noted that: the data processing apparatus provided in this embodiment may implement the technical solution described in the data processing method embodiment shown in fig. 1, and the specific implementation principle of each module or unit may refer to the corresponding content in the data processing method embodiment shown in fig. 1, and is not described herein again.
Fig. 11 shows a block diagram of a data processing apparatus according to another embodiment of the present application. As shown in fig. 11, the data processing apparatus includes: a receiving module 901, an obtaining module 902 and a feedback module 903; wherein:
the receiving module 901 is configured to receive a data request sent by a client for an image;
the obtaining module 902 is configured to obtain information related to at least one user viewing the image;
the feedback module 903 is configured to feed back the relevant information of the at least one user to the client, so as to display an interface element corresponding to at least some users in the at least one user in the image displayed by the client.
According to the technical scheme provided by the embodiment, the relevant information of at least one user viewing an image is acquired based on a data request sent by a received client for the image, and the relevant information of the at least one user is fed back to the client, so that interface elements corresponding to at least part of the at least one user are displayed in the image displayed by the client. Based on this, mutual interaction between the users is easily realized, the user experience is improved, and further the user satisfaction is improved.
Further, the data processing apparatus provided in this embodiment further includes an updating module 904, where the updating module 904 is specifically configured to:
receiving first acquisition data sent by second acquisition equipment arranged in a space;
and updating the image according to the first acquisition data, so that the updated image is sent to the client when the client requests to display the image corresponding to the space (a minimal sketch of this update flow follows).
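The update module 904 described above can be sketched as a small cache keyed by space, as below. The _render placeholder stands in for whatever image generation the server side actually performs, and the class and method names are assumptions of this sketch.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ImageUpdater:
    """Illustrative update module: caches the latest image for each space."""
    cache: Dict[str, bytes] = field(default_factory=dict)

    def on_acquisition_data(self, space_id: str, acquisition_data: bytes) -> None:
        # Refresh the cached image from the newly received acquisition data.
        self.cache[space_id] = self._render(acquisition_data)

    def latest_image(self, space_id: str) -> Optional[bytes]:
        # Returned to a client that requests the image corresponding to the space.
        return self.cache.get(space_id)

    @staticmethod
    def _render(acquisition_data: bytes) -> bytes:
        return b"IMAGE:" + acquisition_data   # placeholder for the real image generation

if __name__ == "__main__":
    updater = ImageUpdater()
    updater.on_acquisition_data("store", b"frame_001")
    print(updater.latest_image("store"))
```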
Further, the data processing apparatus provided in this embodiment further includes an identification module 905, where the identification module 905 is specifically configured to:
identifying the image content in the image to obtain an identification result;
determining at least one target object according to the recognition result;
acquiring data information of the at least one target object;
determining display content in the image according to the data information;
and sending the display content to at least one client terminal with the image displayed, so that the display content is displayed in the image displayed on the client terminal interface.
Further, the obtaining module 902 is further specifically configured to:
acquiring first data information related to at least one target object through at least one first acquisition device arranged in a space;
and acquiring second data information related to the at least one target object from a local or server side.
The target object in the above is a person in the space. Correspondingly, the identifying module 905 is further specifically configured to: identifying the behavior of the target object based on the data information; and determining the display content in the image according to the behavior recognition result.
Further, the identifying module 905 is further specifically configured to: determining intention information of the target object according to the behavior recognition result; generating display content for display in the image based on the intention information.
The target object is an item in the space. Correspondingly, the identifying module 905 is further specifically configured to: determining at least one parameter among the display quantity, the inventory quantity and the delivery quantity of the target object in the space within a preset time period according to the data information; and generating display content for displaying in the image according to the determined at least one parameter.
Here, it should be noted that: the data processing apparatus provided in this embodiment may implement the technical solution described in the data processing method embodiment shown in fig. 4, and the specific implementation principle of each module or unit may refer to the corresponding content in the data processing method embodiment shown in fig. 4, which is not described herein again.
Fig. 12 is a block diagram illustrating a data processing apparatus according to another embodiment of the present application. As shown in fig. 12, the data processing apparatus includes: a display module 1001, a determination module 1002 and an acquisition module 1003; wherein:
the display module 1001 is configured to display an image of a space;
the determining module 1002 is configured to determine at least one target object in the image;
the obtaining module 1003 is configured to obtain data information of the at least one target object;
the determining module 1002 is further configured to determine display content in the image according to the data information.
According to the technical solution provided by this embodiment, while an image of a space is displayed, the data information of at least one target object determined in the image is acquired, and the display content of the image is determined based on the data information, so that the display content can be conveniently displayed in the image. This makes it convenient for a user (such as a merchant) to view changes in the data information of the target object in real time and to adjust the target object in a timely manner.
Further, the determining module 1002 is specifically configured to: identifying the image content in the image to obtain an identification result; and determining the at least one target object according to the recognition result.
Further, the determining module 1002 is further specifically configured to: and in response to the operation of a user on a target image content in the image, taking a recognition result corresponding to the target image content as a target object.
Further, the obtaining module 1003 is specifically configured to: acquiring first data information related to at least one target object through at least one first acquisition device arranged in a space; and/or acquiring second data information related to the at least one target object from a local or server side.
The target object in the above is a person in the space. Correspondingly, the determining module 1002 is further specifically configured to: identifying the behavior of the at least one target object based on the data information; and determining the display content in the image according to the behavior recognition result.
Further, the determining module 1002 is further specifically configured to: determining intention information of the target object according to the behavior recognition result; generating display content for display in the image based on the intention information; and displaying the display content in association with the image content corresponding to the target object in the image.
The target object is an item in the space. Correspondingly, the determining module 1002 is further specifically configured to: determining the display quantity and the inventory quantity of the target object in the space according to the data information; generating display content for displaying in the image according to the display quantity and the inventory quantity; and displaying the display content in association with the image content corresponding to the target object in the image.
Here, it should be noted that: the data processing apparatus provided in this embodiment may implement the technical solution described in the data processing method embodiment shown in fig. 5a, and the specific implementation principle of each module or unit may refer to the corresponding content in the data processing method embodiment shown in fig. 5a, which is not described herein again.
Fig. 13 is a schematic structural diagram of a client device according to an embodiment of the present application. As shown in fig. 13, the client device includes: a memory 1101, a processor 1102, and a display 1103. The memory 1101 may be configured to store various data to support operations on the client device. Examples of such data include instructions for any application or method operating on the client device. The memory 1101 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk or an optical disk.
The processor 1102, coupled to the memory 1101, is configured to execute the program stored in the memory 1101 to:
displaying an image of a space through the display 1103;
acquiring related information of at least one user viewing the image from a network side;
and displaying interface elements corresponding to at least part of the at least one user in the image according to the related information of the at least one user.
When the processor 1102 executes the program in the memory 1101, the processor 1102 may also implement other functions in addition to the above functions; for details, reference may be made to the description of the foregoing embodiments.
Further, as shown in fig. 13, the client device further includes: a communication component 1104, a power component 1105, and the like. Only some of the components are schematically shown in fig. 13, which does not mean that the client device includes only the components shown in fig. 13.
An embodiment of the present application further provides a server-side device, and the structure of the server-side device is similar to that in fig. 13. Specifically, the server-side device includes: a memory 1101 and a processor 1102. The memory 1101 may be configured to store various data to support operations on the server-side device. Examples of such data include instructions for any application or method operating on the server-side device. The memory 1101 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk or an optical disk.
The processor 1102, coupled to the memory 1101, is configured to execute the program stored in the memory 1101 to:
receiving a data request sent by a client aiming at an image;
acquiring related information of at least one user viewing the image;
and feeding back the related information of the at least one user to the client so as to display interface elements corresponding to at least part of the at least one user in the image displayed by the client.
When the processor 1102 executes the program in the memory 1101, the processor 1102 may also implement other functions in addition to the above functions; for details, reference may be made to the description of the foregoing embodiments.
Further, as shown in fig. 13, the server-side device further includes: a communication component 1104, a power component 1105, and the like. Only some of the components are schematically shown in fig. 13, which does not mean that the server-side device includes only the components shown in fig. 13.
An embodiment of the present application further provides another client device, which has a structure similar to that of fig. 13. Specifically, the client device includes: a memory 1101, a processor 1102, and a display 1103. The memory 1101 may be configured to store various data to support operations on the client device. Examples of such data include instructions for any application or method operating on the client device. The memory 1101 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk or an optical disk.
The processor 1102, coupled to the memory 1101, is configured to execute the program stored in the memory 1101 to:
displaying an image of a space through the display 1103;
determining at least one target object in the image;
acquiring data information of the at least one target object;
and determining the display content in the image according to the data information.
When the processor 1102 executes the program in the memory 1101, the processor 1102 may also implement other functions in addition to the above functions; for details, reference may be made to the description of the foregoing embodiments.
Further, as shown in fig. 13, the client device further includes: a communication component 1104, a power component 1105, and the like. Only some of the components are schematically shown in fig. 13, which does not mean that the client device includes only the components shown in fig. 13.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program can implement the steps or functions of the data processing method provided in the foregoing embodiments when executed by a computer.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that the embodiments may be implemented by software plus a necessary general hardware platform, and may also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (42)

1. A data processing method, comprising:
displaying an image of a space;
acquiring related information of at least one user viewing the image from a network side;
and displaying interface elements corresponding to at least part of the at least one user in the image according to the related information of the at least one user.
2. The method of claim 1, wherein obtaining information about at least one user viewing the image at a network side comprises:
acquiring at least one client displaying the image at a network side;
obtaining relevant information of at least one user associated with the at least one client.
3. The method of claim 1, wherein displaying interface elements corresponding to at least some of the at least one user in the image according to the related information of the at least one user comprises:
acquiring a first spatial position corresponding to a local image of the image that is displayed on an interface;
acquiring a target user with a spatial position in a target area from the at least one user according to the related information of the at least one user; wherein the target area is determined by the first spatial position;
and displaying the interface element corresponding to the target user in the local image.
4. The method of claim 3, wherein the related information comprises at least one of: a user identification, a user avatar, user input information, and a second spatial position of the user.
5. The method of claim 4, wherein presenting, in the image, interface elements corresponding to a first user of the at least one user according to information related to the first user comprises:
determining an interface display position according to the second spatial position of the first user;
and displaying, at the interface display position, an interface element corresponding to at least one item of the user identification, the user avatar and the user input information of the first user.
6. The method of claim 5, wherein determining an interface display position according to the second spatial position of the first user comprises:
determining orientation information of the second spatial position relative to the first spatial position;
and determining the interface display position according to the orientation information.
7. The method of claim 3, further comprising:
displaying a spatial structure model diagram of the space;
mapping the first spatial position to the spatial structure model diagram to obtain a corresponding marked position;
and displaying an image element at the marked position.
8. The method of any one of claims 1 to 3, further comprising at least one of:
responding to the operation of a user on image content in the image, and displaying first display content associated with the image content;
and displaying second display content associated with the image in response to the operation of the user on the operable interface element displayed on the interface of the image.
9. The method of any of claims 1 to 3, further comprising:
identifying the image content in the image to obtain an identification result;
determining at least one target object according to the recognition result;
acquiring data information of the at least one target object;
and determining the display content in the image according to the data information.
10. The method of claim 9, wherein determining at least one target object based on the recognition result comprises:
and in response to the operation of a user on a target image content in the image, taking a recognition result corresponding to the target image content as a target object.
11. The method of claim 9, wherein obtaining data information for at least one target object within the space comprises at least one of:
acquiring first data information related to at least one target object through at least one first acquisition device arranged in a space;
and acquiring second data information related to the at least one target object from a local or server side.
12. The method of claim 9, wherein the target object is a person within the space; and
determining display content in the image according to the data information, comprising:
identifying the behavior of the at least one target object based on the data information;
and determining the display content in the image according to the behavior recognition result.
13. The method of claim 12, wherein determining display content in the image according to the behavior recognition result comprises:
determining intention information of the target object according to the behavior recognition result;
generating display content for display in the image based on the intention information;
and displaying the display content in association with the image content corresponding to the target object in the image.
14. The method of claim 9, wherein the target object is an item displayed within the space; and
determining display content in the image according to the data information, comprising:
determining at least one parameter among the display quantity, the inventory quantity and the delivery quantity of the target object in the space within a preset time period according to the data information;
generating display content for displaying in the image according to the determined at least one parameter;
and displaying the display content in association with the image content corresponding to the target object in the image.
15. The method of claim 1, wherein the image is generated based on three-dimensional panoramic data of the space; the three-dimensional panoramic data is collected by a collection device disposed within the space.
16. A data processing method, comprising:
receiving a data request sent by a client aiming at an image;
acquiring related information of at least one user viewing the image;
and feeding back the related information of the at least one user to the client so as to display interface elements corresponding to at least part of the at least one user in the image displayed by the client.
17. The method of claim 16, further comprising:
receiving first acquisition data sent by second acquisition equipment arranged in a space;
and updating the image according to the first acquisition data so as to send the updated image to the client when the client requests to display the image corresponding to the space.
18. The method of claim 17, further comprising:
identifying the image content in the image to obtain an identification result;
determining at least one target object according to the recognition result;
acquiring data information of the at least one target object;
determining display content in the image according to the data information;
and sending the display content to at least one client terminal with the image displayed, so that the display content is displayed in the image displayed on the client terminal interface.
19. The method of claim 18, wherein obtaining data information of the at least one target object comprises at least one of:
acquiring first data information related to at least one target object through at least one first acquisition device arranged in a space;
and acquiring second data information related to the at least one target object from a local or server side.
20. The method of claim 18, wherein the target object is a person within the space; and
determining display content in the image according to the data information, comprising:
identifying the behavior of the target object based on the data information;
and determining the display content in the image according to the behavior recognition result.
21. The method of claim 20, wherein determining display content in the image according to the behavior recognition result comprises:
determining intention information of the target object according to the behavior recognition result;
generating display content for display in the image based on the intention information.
22. The method of claim 18, wherein the target object is an item displayed within the space; and
determining display content in the image according to the data information, comprising:
determining at least one parameter among the display quantity, the inventory quantity and the delivery quantity of the target object in the space within a preset time period according to the data information;
and generating display content for displaying in the image according to the determined at least one parameter.
23. A data processing method, comprising:
displaying an image of a space;
determining at least one target object in the image;
acquiring data information of the at least one target object;
and determining the display content in the image according to the data information.
24. The method of claim 23, wherein determining at least one target object in the image comprises:
identifying the image content in the image to obtain an identification result;
and determining the at least one target object according to the recognition result.
25. The method of claim 24, wherein determining at least one target object based on the recognition result comprises:
and in response to the operation of a user on a target image content in the image, taking a recognition result corresponding to the target image content as a target object.
26. The method of claim 23, wherein obtaining data information of the at least one target object comprises at least one of:
acquiring first data information related to at least one target object through at least one first acquisition device arranged in a space;
and acquiring second data information related to the at least one target object from a local or server side.
27. The method of claim 23, wherein the target object is a person within the space; and
determining display content in the image according to the data information, comprising:
identifying the behavior of the at least one target object based on the data information;
and determining the display content in the image according to the behavior recognition result.
28. The method of claim 27, wherein determining display content in the image according to the behavior recognition result comprises:
determining intention information of the target object according to the behavior recognition result;
generating display content for display in the image based on the intention information;
and displaying the display content in association with the image content corresponding to the target object in the image.
29. The method of claim 23, wherein the target object is an item displayed within the space; and
determining display content in the image according to the data information, comprising:
determining the display quantity and the inventory quantity of the target object in the space according to the data information;
generating display content for displaying in the image according to the display quantity and the inventory quantity;
and displaying the display content in association with the image content corresponding to the target object in the image.
30. A data processing system, comprising:
acquiring data information of at least one target object in an image;
determining display content in the image according to the data information;
and sending the display content to at least one client displaying the image so that the received client displays the display content in the displayed image.
31. The system of claim 30, wherein obtaining data information for at least one target object in the image comprises at least one of:
acquiring first data information related to at least one target object through at least one first acquisition device arranged in a space;
and acquiring second data information related to the at least one target object from a local or server side.
32. The system of claim 30, wherein the target object is a person within the space; and
determining display content in the image according to the data information, comprising:
identifying the behavior of the at least one target object based on the data information;
and determining the display content in the image according to the behavior recognition result.
33. The system of claim 32, wherein determining display content in the image based on the behavior recognition result comprises:
determining intention information of the target object according to the behavior recognition result;
generating display content for display in the image based on the intention information;
and displaying the display content in association with the image content corresponding to the target object in the image.
34. The system of claim 30, wherein the target object is an item displayed within the space; and
determining display content in the image according to the data information, comprising:
determining the display quantity and the inventory quantity of the target object in the space according to the data information;
generating display content for displaying in the image according to the display quantity and the inventory quantity;
and displaying the display content in association with the image content corresponding to the target object in the image.
35. A data processing system, comprising:
the client is used for displaying an image of a space; acquiring related information of at least one user viewing the image from a network side; displaying interface elements corresponding to at least part of the at least one user in the image according to the related information of the at least one user;
the server is used for receiving a data request sent by the client aiming at an image; acquiring related information of at least one user viewing the image; and feeding back the related information of the at least one user to the client.
36. The system of claim 35, further comprising:
the second acquisition equipment is used for acquiring the three-dimensional panoramic data of the space and sending the three-dimensional panoramic data to the server;
the server is used for generating the image based on the three-dimensional panoramic data.
37. The system of claim 35, further comprising:
the first acquisition equipment is used for acquiring first data information related to at least one target object; sending the first data information to a server;
the server is used for determining display content in the image according to the first data information and the second data information;
the second data information is information related to at least one target object, which is acquired by the server from local or network side equipment.
38. A data processing system, comprising:
the client is used for displaying an image of a space;
the server is used for acquiring data information of at least one target object in the image; determining display content in the image according to the data information; sending the display content to at least one client displaying the image;
the client is further used for receiving the display content fed back by the server and displaying the display content in the image.
39. A space, comprising:
the first acquisition equipment is arranged in the space and is used for acquiring first data information related to at least one target object and sending the first data information to a server side; the server side determines, according to the first data information, display content to be displayed in the image corresponding to the space;
the second acquisition equipment is arranged in the space and used for acquiring the three-dimensional panoramic data of the space and sending the three-dimensional panoramic data to the server; and the server side generates an image of the space according to the three-dimensional panoramic data, acquires the related information of at least one user viewing the image, and feeds the related information of the at least one user back to at least one client side.
40. A client device, comprising: a memory, a processor, and a display, wherein,
the memory is used for storing programs;
the processor, coupled with the memory, to execute the program stored in the memory to:
displaying an image of a space through the display;
acquiring related information of at least one user viewing the image from a network side;
and displaying interface elements corresponding to at least part of the at least one user in the image according to the related information of the at least one user.
41. A server-side device, comprising: a memory and a processor, wherein,
the memory is used for storing programs;
the processor, coupled with the memory, to execute the program stored in the memory to:
receiving a data request sent by a client aiming at an image;
acquiring related information of at least one user viewing the image;
and feeding back the related information of the at least one user to the client so as to display interface elements corresponding to at least part of the at least one user in the image displayed by the client.
42. A client device, comprising: a memory, a processor, and a display, wherein,
the memory is used for storing programs;
the processor, coupled with the memory, to execute the program stored in the memory to:
displaying an image of a space through the display;
determining at least one target object in the image;
acquiring data information of the at least one target object;
and determining the display content in the image according to the data information.
CN202010323311.7A 2020-04-22 2020-04-22 Data processing method and system, off-line shop space and equipment Active CN113538083B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010323311.7A CN113538083B (en) 2020-04-22 2020-04-22 Data processing method and system, off-line shop space and equipment
PCT/CN2021/088872 WO2021213457A1 (en) 2020-04-22 2021-04-22 Data processing method and system, space, and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010323311.7A CN113538083B (en) 2020-04-22 2020-04-22 Data processing method and system, off-line shop space and equipment

Publications (2)

Publication Number Publication Date
CN113538083A true CN113538083A (en) 2021-10-22
CN113538083B CN113538083B (en) 2023-02-03

Family

ID=78124019

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010323311.7A Active CN113538083B (en) 2020-04-22 2020-04-22 Data processing method and system, off-line shop space and equipment

Country Status (2)

Country Link
CN (1) CN113538083B (en)
WO (1) WO2021213457A1 (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104063796B (en) * 2013-03-19 2022-03-25 腾讯科技(深圳)有限公司 Object information display method, system and device
KR101773885B1 (en) * 2016-10-19 2017-09-01 (주)잼투고 A method and server for providing augmented reality objects using image authentication
CN107247510A (en) * 2017-04-27 2017-10-13 成都理想境界科技有限公司 A kind of social contact method based on augmented reality, terminal, server and system
CN107370801A (en) * 2017-07-07 2017-11-21 天脉聚源(北京)科技有限公司 A kind of method for displaying head portrait and device

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170142086A (en) * 2016-06-16 2017-12-27 주식회사 에이치투앤컴퍼니 Interaction-type double reality system by combining VR content and AR content and method thereof
CN106713840A (en) * 2016-06-28 2017-05-24 腾讯科技(深圳)有限公司 Virtual information display method and device
CN108932051A (en) * 2017-05-24 2018-12-04 腾讯科技(北京)有限公司 augmented reality image processing method, device and storage medium
CN109963163A (en) * 2017-12-26 2019-07-02 阿里巴巴集团控股有限公司 Internet video live broadcasting method, device and electronic equipment
JP6378850B1 (en) * 2018-03-22 2018-08-22 株式会社ドワンゴ Server and program
CN109087172A (en) * 2018-08-08 2018-12-25 百度在线网络技术(北京)有限公司 Commodity identifying processing method and device
CN110852770A (en) * 2018-08-21 2020-02-28 阿里巴巴集团控股有限公司 Data processing method and device, computing equipment and display equipment
CN110866796A (en) * 2018-08-28 2020-03-06 阿里巴巴集团控股有限公司 Information display method, information acquisition method, system and equipment
CN111008859A (en) * 2019-11-11 2020-04-14 北京迈格威科技有限公司 Information presentation method and device in virtual shop, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PAGES R et al.: "Affordable Content Creation for Free-Viewpoint Video and VR/AR Applications", Journal of Visual Communication & Image Representation *
杨建峰: "Construction of a Cesium-based Virtual Supervision Platform for Smart Scenic Areas", 度假旅游 (Vacation Tourism) *

Also Published As

Publication number Publication date
WO2021213457A1 (en) 2021-10-28
CN113538083B (en) 2023-02-03

Similar Documents

Publication Publication Date Title
US11593871B1 (en) Virtually modeling clothing based on 3D models of customers
CN106447365B (en) Picture display method and device of business object information
US10559019B1 (en) System for centralized E-commerce overhaul
CN102054247B (en) Method for building three-dimensional (3D) panoramic live-action network business platform
KR20190000397A (en) Fashion preference analysis
CN106779940B (en) Method and device for confirming display commodity
CN107492007A (en) The shop methods of exhibiting and device of a kind of virtual reality
CN109213310B (en) Information interaction equipment, data object information processing method and device
WO2018045937A1 (en) Page information processing system, and method and apparatus for generating pages and providing page information
KR20140119291A (en) Augmented Reality Cart system Based mirror world Marketing Platform.
KR101657585B1 (en) System for Suggesting Product Dealing based on Hash Tag using Mobile Application and Method therefor
CN110858375A (en) Data, display processing method and device, electronic equipment and storage medium
KR20150022064A (en) Sale Support System for Product of Interactive Online Store based Mirror World.
KR101657588B1 (en) System for Providing Product Dealing Information based on Product Image using Web and Method therefor
US20200226668A1 (en) Shopping system with virtual reality technology
CN113538083B (en) Data processing method and system, off-line shop space and equipment
KR20120057668A (en) System supporting communication between customers in off-line shopping mall and method thereof
KR101484736B1 (en) Shopping-mall System for Product of Interactive Online Store based Mirror World.
KR101622940B1 (en) Method and apparatus for customization fashion sale using customer image information
US20210142394A1 (en) Virtual shopping software system and method
US20190272585A1 (en) Virtual shopping software system and method
KR101657583B1 (en) System for Extracting Hash Tag of Product Image using Mobile Application and Method therefor
KR101657587B1 (en) System for Providing Individual Tendency Information based on Product Dealing Information using Mobile Application and Method therefor
KR101657586B1 (en) System for Providing Individual Tendency Information based on Product Dealing Information using Web and Method therefor
KR101657589B1 (en) System for Providing Product Dealing Information based on Product Image using Mobile Application and Method therefor

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant