CN112288881B - Image display method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN112288881B
Authority
CN
China
Prior art keywords
image
target user
target
equipment
activity area
Prior art date
Legal status
Active
Application number
CN202011193676.9A
Other languages
Chinese (zh)
Other versions
CN112288881A (en)
Inventor
侯欣如
刘杰靖
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd
Priority to CN202011193676.9A
Publication of CN112288881A
Application granted
Publication of CN112288881B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/20 Scenes; Scene-specific elements in augmented reality scenes

Abstract

The disclosure provides an image display method and apparatus, a computer device, and a storage medium. The method includes: acquiring an image to be recognized; when the image to be recognized contains an image of a target user carrying an Augmented Reality (AR) device, determining the target activity area where the target user is located based on the image to be recognized; and sending AR special effect data matching the target activity area to the AR device of the target user.

Description

Image display method and device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technology, and in particular to an image display method and apparatus, a computer device, and a storage medium.
Background
Augmented Reality (AR) technology is a relatively new technology that integrates real-world information with virtual-world information; a user can view the effect images corresponding to AR special effect data through an AR device the user wears.
At present, the content of the effect images corresponding to AR special effect data is relatively fixed and adapts poorly to changing scenes or requirements.
Disclosure of Invention
The embodiment of the disclosure at least provides an image display method, an image display device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides an image display method, including:
acquiring an image to be recognized;
under the condition that the image to be recognized comprises an image of a target user carrying Augmented Reality (AR) equipment, determining a target activity area where the target user is located based on the image to be recognized;
and sending the AR special effect data matched with the target activity area to the AR equipment of the target user.
In this aspect, the image recognition method makes it possible to quickly identify the specific activity area where the target user is located; AR special effect data matching that area can then be sent to a target user entering it, improving the real-time performance of displaying the effect images corresponding to the AR special effect data.
In a possible implementation manner, the determining, based on the image to be recognized, a target activity area in which the target user is located includes:
acquiring a prefabricated map;
matching the image to be recognized with the prefabricated map, and determining the position information of the target user based on the matching result;
and determining a target activity area where the target user is located based on the position information of the target user and the position information of a preset activity area.
In this way, matching the acquired prefabricated map against the image to be recognized and determining the target user's position from the matching result simplifies the positioning procedure and reduces its complexity, so the target activity area where the target user is located can be determined quickly and positioning efficiency is improved.
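The area lookup in the last step can be sketched as a point-in-rectangle test over the preset activity areas. This is a minimal illustrative sketch: the function name, the rectangle representation of the areas, and the coordinates are assumptions, not details from the disclosure.

```python
def find_target_area(user_pos, preset_areas):
    """Return the name of the preset activity area containing user_pos.

    user_pos: (x, y) position of the target user in map coordinates.
    preset_areas: dict mapping area name -> (x_min, y_min, x_max, y_max).
    Returns None when the user stands outside every preset area.
    """
    x, y = user_pos
    for name, (x0, y0, x1, y1) in preset_areas.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None
```

For example, with areas laid out as `{"A": (0, 0, 10, 10), "B": (10, 0, 20, 10)}`, a user at `(5, 5)` would be placed in area A.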
In one possible embodiment, the sending, to the AR device of the target user, AR special effect data matching the target activity area includes:
determining identification information of the target user based on the face feature information of the target user in the image to be recognized;
determining pre-stored identification information of the AR equipment matched with the identification information of the target user based on the identification information of the target user;
and sending AR special effect data matched with the target activity area to the AR equipment based on the identification information of the AR equipment.
In this way, the identification information of the target user can be determined accurately through face recognition, and that information can in turn accurately identify the AR device the target user carries, avoiding the situation in which the effect image corresponding to the AR special effect data cannot be displayed to the corresponding target user.
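The two lookups this route relies on (face identification to user identification, then user identification to device identification) can be sketched as chained dictionary lookups. The function name and the dictionary representation of the pre-stored correspondences are illustrative assumptions.

```python
def device_for_face(face_id, user_by_face, device_by_user):
    """Resolve the AR device to receive the effect data via face recognition.

    face_id: identifier produced by an upstream face-feature match (assumed).
    user_by_face: pre-stored mapping of face id -> target user identification.
    device_by_user: pre-stored mapping of user id -> AR device identification.
    Returns None when either lookup fails, so no effect data is sent to
    the wrong device.
    """
    user_id = user_by_face.get(face_id)
    if user_id is None:
        return None
    return device_by_user.get(user_id)
```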
In one possible embodiment, the sending, to the AR device of the target user, AR special effect data matching the target activity area includes:
determining image characteristic information corresponding to the AR equipment based on the image to be identified;
determining identification information of the AR equipment based on the image characteristic information corresponding to the AR equipment;
and sending AR special effect data matched with the target activity area to the AR equipment based on the identification information of the AR equipment.
In this way, the AR device carried by the target user can be determined quickly from the image feature information corresponding to the AR device, while likewise avoiding the situation in which the effect image corresponding to the AR special effect data cannot be displayed to the corresponding target user.
In one possible implementation, the sending, to the AR device, AR special effect data matching the target activity area based on the identification information of the AR device includes:
determining, based on the identification information of the AR device, whether the AR device is logged in to a target application;
and sending the AR special effect data matched with the target activity area to the AR device when the AR device is logged in to the target application.
In this way, by first checking whether the AR device corresponding to the identification information is logged in, and sending the AR special effect data only when it is, the resource waste of sending AR special effect data to a logged-out device is avoided and resources are used more rationally.
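The login check can be sketched as a guard in front of the send step. The set of logged-in devices and the outbox list stand in for whatever session tracking and transport the server actually uses; both are assumptions, not part of the disclosure.

```python
def send_effect(device_id, logged_in_devices, effect, outbox):
    """Queue AR effect data only for devices logged in to the target app.

    logged_in_devices: set of device ids with an active login session.
    outbox: list collecting (device_id, effect) pairs to deliver.
    Returns False (and sends nothing) for a logged-out device, avoiding
    the wasted transmission the text describes.
    """
    if device_id not in logged_in_devices:
        return False
    outbox.append((device_id, effect))
    return True
```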
In one possible embodiment, the AR special effects data is determined according to the following steps:
determining user attribute information of the target user;
and determining AR special effect data matched with the user attribute information from multiple AR special effect data matched with the target activity area.
In this way, determining the AR special effect data according to the target user's attribute information personalizes the data that is sent, improves the rationality of the special effect display, and improves the user's experience of the special effects.
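The attribute-matched selection can be sketched as a lookup from user attributes to one of several effect variants configured for the same area. The attribute key and the style names echo the children/adults example given later in the text but are otherwise assumptions.

```python
def pick_effect(area_effects, user_attrs, default_style="standard"):
    """Pick the variant of an area's AR special effect data that matches
    the target user's attribute information.

    area_effects: dict of style name -> effect payload for one activity area.
    user_attrs: attribute info, e.g. {"age_group": "child"}.
    Falls back to the default style for missing or unknown attributes.
    """
    style = {"child": "cartoon", "adult": "sci_fi"}.get(
        user_attrs.get("age_group"), default_style)
    return area_effects.get(style, area_effects.get(default_style))
```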
In a possible implementation, the acquiring the image to be recognized includes:
acquiring the image to be recognized collected by a camera device group whose shooting range covers a plurality of preset activity areas; the camera device group includes at least one camera device.
In this way, arranging at least one camera device achieves full coverage of the plurality of preset activity areas, and positioning the target user with the images these devices collect ensures the accuracy and real-time performance of the positioning result.
In a second aspect, an embodiment of the present disclosure further provides an image display apparatus, including:
the acquisition module is used for acquiring an image to be identified;
the determining module is used for determining a target activity area where a target user is located based on the image to be identified under the condition that the image to be identified comprises an image of the target user carrying Augmented Reality (AR) equipment;
and the sending module is used for sending the AR special effect data matched with the target activity area to the AR equipment of the target user.
In a possible implementation manner, the determining module is specifically configured to: obtain a prefabricated map; match the image to be recognized with the prefabricated map, and determine the position information of the target user based on the matching result; and determine the target activity area where the target user is located based on the position information of the target user and the position information of the preset activity areas.
In a possible implementation manner, the sending module is configured to: determine the identification information of the target user based on the face feature information of the target user in the image to be recognized; determine, based on the identification information of the target user, the pre-stored identification information of the AR device matching it; and send AR special effect data matching the target activity area to the AR device based on the identification information of the AR device.
In a possible implementation manner, the sending module is configured to: determine, based on the image to be recognized, the image feature information corresponding to the AR device; determine the identification information of the AR device based on that image feature information; and send AR special effect data matching the target activity area to the AR device based on the identification information of the AR device.
In a possible implementation manner, the sending module is configured to: determine, based on the identification information of the AR device, whether the AR device is logged in to a target application; and send the AR special effect data matching the target activity area to the AR device when the AR device is logged in to the target application.
In a possible embodiment, the apparatus further comprises:
a matching module for determining the AR special effect data according to the following steps:
determining user attribute information of the target user; and determining AR special effect data matched with the user attribute information from multiple AR special effect data matched with the target activity area.
In a possible implementation manner, the obtaining module is configured to obtain the image to be recognized collected by a camera device group whose shooting range covers a plurality of preset activity areas; the camera device group includes at least one camera device.
In a third aspect, the present disclosure further provides a computer device including a processor and a memory, the memory storing machine-readable instructions executable by the processor; when the machine-readable instructions are executed by the processor, the processor performs the steps of the first aspect or any one of its possible implementations.
In a fourth aspect, the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed, the steps of the first aspect or any one of its possible implementations are performed.
For a description of the effects of the image display apparatus, the computer device, and the computer-readable storage medium, reference is made to the description of the image display method above, which is not repeated here.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings required in the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain its technical solutions. The drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without inventive effort.
Fig. 1 is a schematic view illustrating an application scenario of an image presentation method provided by an embodiment of the present disclosure;
FIG. 2 is a flow chart illustrating an image displaying method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram illustrating a preset activity area with different themes divided by an activity field according to an embodiment of the disclosure;
fig. 4 is a flowchart illustrating a method for displaying, according to a change in a position of a target user, an effect image corresponding to AR special effect data corresponding to different preset activity areas to the target user correspondingly according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an image display apparatus provided by an embodiment of the present disclosure;
fig. 6 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments, as generally described and illustrated here, may be arranged and designed in a wide variety of configurations. The following detailed description is therefore not intended to limit the scope of the claimed disclosure but merely represents selected embodiments of it; all other embodiments that a person skilled in the art can derive from these embodiments without creative effort fall within the protection scope of the disclosure.
Furthermore, the terms "first", "second", and the like in the description, the claims, and the drawings of the embodiments of the present disclosure are used to distinguish similar objects and do not necessarily describe a particular sequence or chronological order. It should be understood that data so used may be interchanged under appropriate circumstances, so that the embodiments described here can be implemented in orders other than those illustrated or described.
Reference herein to "a plurality" or "a number" means two or more. "And/or" describes an association between objects and covers three cases: for example, "A and/or B" may mean A alone, A and B together, or B alone. The character "/" generally indicates an "or" relationship between the objects before and after it.
Research shows that Augmented Reality (AR) technology is a relatively new technology that integrates real-world information with virtual-world information; a user can view the effect images corresponding to AR special effect data through an AR device the user wears.
At present, the content of the effect images corresponding to AR special effect data is relatively fixed and adapts poorly to changing scenes or requirements.
Based on this research, the present disclosure provides an image display method and apparatus, a computer device, and a storage medium. A deployed camera device group collects images to be recognized in the preset activity areas; a server acquires a collected image and, when it determines that the image contains an image of a target user carrying an AR device, determines from it the target activity area where the target user is located. Because the target activity area can be located quickly by image recognition, the efficiency of locating the target user is improved; AR special effect data matching the specific activity area can then be sent to a target user entering that area, improving the real-time performance of displaying the corresponding effect images.
The drawbacks described above are the result of the inventor's practical and careful study; therefore, the discovery of these problems and the solutions the present disclosure proposes for them should both be regarded as the inventor's contribution to the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures.
To facilitate understanding of the present embodiment, the image display method disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of the method is generally a computing device with certain computing capability, for example a terminal device, a server, or another processing device; the terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, or a wearable device. In some possible implementations, the image display method may be implemented by a processor invoking computer-readable instructions stored in a memory.
It should be noted that "AR" below refers to Augmented Reality, a technology that ingeniously fuses virtual information with the real world. It makes wide use of technical means such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing, and applies computer-generated virtual information such as text, images, three-dimensional models, music, and video to the real world after simulation, so that the two kinds of information complement each other and the real world is "augmented".
To facilitate understanding of the embodiment, an application scenario of the image display method disclosed in the embodiment of the present disclosure is first introduced. Fig. 1 is a schematic diagram of an application scenario of the image display method provided in the embodiment of the present disclosure. A user carrying an AR device 11 enters an activity place that is divided into preset activity areas with different themes, where the preset activity area of each theme displays effect images corresponding to different AR special effect data. At least one camera device 12 capable of covering each preset activity area is arranged in the activity place and collects real-time images of the preset activity areas. A server 13 determines the target activity area where the user is located from the content of the image to be recognized collected by the camera device 12, and displays to the user, through the AR device 11, AR special effect data related to the theme of the target activity area. The camera device 12 and the server 13 communicate over a network, which may be a local area network, a cellular network, a wide area network, or the like; the server 13 may be any device capable of providing internet services.
The server 13 may provide different internet services for the user; for example, it may provide an image display service. In that case, when a user carrying the AR device 11 enters a preset activity area, the server 13 performs image processing on the image to be recognized that the camera device 12 collected and that contains the user's image: it matches the stored prefabricated map of the activity place against the image to be recognized to determine the user's position information, determines from that position the target activity area where the user is located and the AR special effect data of that area, further identifies the AR device 11 carried by the user from the acquired image, sends the AR special effect data of the target activity area to the AR device 11, and displays the corresponding effect image on it. The user can thus view, through the carried AR device, the effect image corresponding to the AR data of the target activity area where the user is located. In addition, after the user moves from one preset activity area into another, the server can determine from the acquired image to be recognized that the user has entered the other area and send that area's AR special effect data to the AR device, so that the AR device displays different effect images as the user's position changes. Different preset activity areas have different activity themes, and in the embodiments of the present disclosure different AR special effect data are configured in advance for the preset activity area of each theme.
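The server-side flow just described (locate the user, resolve the target area and its effect data, identify the device, push the data) can be sketched end to end. Each stage is injected as a callable so the sketch stays independent of any concrete vision or networking stack; every name here is a placeholder, not an API from the disclosure.

```python
def serve_frame(frame, locate, area_for, effect_for, device_for, push):
    """One pass of the server-side loop described above, with each stage
    injected as a callable:

      locate(frame)      -> user position or None (map matching)
      area_for(pos)      -> target activity area name or None
      effect_for(area)   -> AR special effect data for that area
      device_for(frame)  -> id of the AR device carried by the user, or None
      push(dev, effect)  -> deliver the effect data to the device

    Returns True when effect data was pushed for this frame.
    """
    pos = locate(frame)
    if pos is None:
        return False  # no AR-equipped user detected in this frame
    area = area_for(pos)
    if area is None:
        return False  # user stands outside every preset activity area
    device = device_for(frame)
    if device is None:
        return False  # carried AR device could not be identified
    push(device, effect_for(area))
    return True
```

A caller would wire in real implementations of each stage; the lambdas in a test double as the simplest possible stand-ins.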
The following describes an image display method provided by an embodiment of the present disclosure by taking an execution subject as a computer device as an example.
As shown in fig. 2, a flowchart of an image displaying method provided in the embodiment of the present disclosure may include the following steps:
s201: and acquiring an image to be identified.
In this step, after the target user carries the AR device into the activity place, and to ensure that the effect images corresponding to the AR special effect data of different preset activity areas can be displayed to the target user in time according to the target user's position, a camera device group may be deployed for full-coverage monitoring of the activity place. The camera device group comprises at least one camera device, and the server is communicatively connected with the camera devices through a network, so the server can acquire the images to be recognized that the camera devices collect.
In a specific implementation, the activity place may be divided into preset activity areas with different themes. Fig. 3 is a schematic diagram, provided in the embodiment of the present disclosure, of an activity place divided into preset activity areas with different themes; preset activity area A, preset activity area B, and preset activity area C correspond to different themes. To monitor each preset activity area in real time, the camera device group may be deployed in one of three ways. In the first mode, a single camera device is placed in the activity place and monitors all preset activity areas. In the second mode, camera devices are placed in each preset activity area, and each collects images of its own area. In the third mode, camera devices are placed at specific locations of the activity place, and the images they collect are combined into monitoring images of all preset activity areas. Taking the areas divided in Fig. 3 as an example, four camera devices (camera device 1 to camera device 4) may be deployed: camera device 1 collects real-time pictures of preset activity area A, camera device 2 those of preset activity area B, camera device 3 those of the left half of preset activity area C, and camera device 4 those of the right half of preset activity area C; the server combines the real-time pictures collected by each camera device to obtain a real-time picture of the whole activity place.
The embodiments of the present disclosure do not limit how the camera device group is deployed.
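Whichever deployment mode is chosen, one can check that the camera group actually covers every preset activity area. The sketch below does a crude discrete check by sampling grid points; the rectangle representation of areas and camera views, and the sampling step, are illustrative assumptions.

```python
def uncovered_points(preset_areas, camera_views, step=1.0):
    """Return sample points of the preset areas that no camera can see.

    preset_areas, camera_views: dicts of name -> (x0, y0, x1, y1) rectangles.
    Samples each area on a grid of spacing `step`; a point counts as covered
    when at least one camera rectangle contains it. An empty result suggests
    the group achieves full coverage (a discrete check only, not a proof).
    """
    missed = []
    for _, (ax0, ay0, ax1, ay1) in preset_areas.items():
        y = ay0
        while y <= ay1:
            x = ax0
            while x <= ax1:
                if not any(cx0 <= x <= cx1 and cy0 <= y <= cy1
                           for cx0, cy0, cx1, cy1 in camera_views.values()):
                    missed.append((x, y))
                x += step
            y += step
    return missed
```

For instance, splitting one area between two cameras (as with camera devices 3 and 4 above) passes the check as long as their views jointly span the area.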
S202: and under the condition that the image to be recognized comprises the image of the target user carrying the AR equipment, determining a target activity area where the target user is located based on the image to be recognized.
In this step, when the image to be recognized acquired by the server contains a target user carrying an AR device, the target activity area where the target user is located can be determined from the image. In a specific implementation, the server first acquires a prefabricated map generated for the activity place; the map consists of the position feature points of the activity place, each corresponding to a unique actual position point in the place. The server then matches the prefabricated map against the image to be recognized, and the target user's position in the map can be determined from the map's position feature points and the image features of the image to be recognized.
For example, the target user's position information may be determined as follows: extract the image features of the position where the target user stands in the image to be recognized and of several nearby positions; match each extracted image feature against the position feature points of the prefabricated map; determine from the matched feature points the target user's position in the map, that is, the user's actual position in the activity place; and finally determine the target activity area from the preset activity area containing that actual position.
In another embodiment, the target user's position information may instead be determined by a positioning device the user carries. For example, the target user may carry an intelligent terminal with a positioning function; the intelligent terminal may itself be a computer device capable of image recognition, or a device that exchanges information with the computer device over a network. After the user enters the activity place, the server determines the target user's position from the positioning information the intelligent terminal sends. The position may also be determined through a Bluetooth device the target user carries: in a specific implementation, the computer device connects its own Bluetooth device to the one carried by the target user and obtains the user's position information through it. The embodiments of the present disclosure do not limit the method used to determine the target user's position information.
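The map-matching route can be sketched as nearest-descriptor matching against the prefabricated map's position feature points. A real system would use robust feature matching (ratio tests, RANSAC, and the like); this pure-Python sketch only shows the skeleton, and the descriptor format and distance threshold are assumptions.

```python
def locate_by_features(query_descs, map_points, max_dist=0.5):
    """Estimate the user's map position by matching image feature
    descriptors against a prefabricated map.

    query_descs: list of descriptor vectors extracted around the user.
    map_points: list of (descriptor, (x, y)) pairs; each map feature point
    carries the unique actual position it corresponds to.
    Each query descriptor is matched to its nearest map descriptor by
    Euclidean distance (within max_dist); the matched positions are
    averaged into a single position estimate, or None if nothing matched.
    """
    matched = []
    for q in query_descs:
        best, best_d = None, max_dist
        for desc, pos in map_points:
            d = sum((a - b) ** 2 for a, b in zip(q, desc)) ** 0.5
            if d < best_d:
                best, best_d = pos, d
        if best is not None:
            matched.append(best)
    if not matched:
        return None
    n = len(matched)
    return (sum(p[0] for p in matched) / n, sum(p[1] for p in matched) / n)
```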
S203: and sending the AR special effect data matched with the target activity area to the AR equipment of the target user.
In specific implementation, after the target activity area where the target user is located is determined, the AR special effect data corresponding to the target activity area can be determined according to a pre-stored correspondence between preset activity areas and AR special effect data. In one embodiment, the effect image corresponding to the AR special effect data of the same target activity area may have multiple display modes, each display mode corresponding to different AR special effect data; when the attribute information of the target user differs, the display mode of the effect image, and hence the determined AR special effect data, also differ. For example, for children, the effect image corresponding to the AR special effect data may be displayed in a cartoon manner; for adults, it may be presented in a science-fiction manner. Therefore, after the target activity area where the target user is located is determined, the attribute information of the target user needs to be determined based on the acquired image to be recognized, and the corresponding AR special effect data is determined according to that attribute information. For example, taking the effect shown for the preset activity area A in fig. 3 as an example, the waterfall effect data corresponding to the preset activity area A may include cartoon-form waterfall effect data and flood-form waterfall effect data. When a child enters the preset activity area A with an AR device, the server determines from the acquired image to be recognized that the target user is located in the preset activity area A, acquires the attribute information of the target user from the image to be recognized, determines from that attribute information that the target user is a child, and on this basis determines that the waterfall effect data to be acquired is the cartoon-form waterfall effect data.
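The area-plus-attribute selection described in this example can be sketched as a simple lookup. The table contents and function name are illustrative assumptions, not part of the disclosure:

```python
# Illustrative only: select AR special effect data by target activity
# area and user attribute, e.g. a cartoon-form waterfall for a child
# in preset activity area A.
EFFECTS = {
    ("area_a", "child"): "cartoon_waterfall",
    ("area_a", "adult"): "flood_waterfall",
}

def select_effect(area: str, attribute: str) -> str:
    # Fall back to a default effect when no specific entry exists.
    return EFFECTS.get((area, attribute), "default_effect")
```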
Furthermore, the identification information of the AR device carried by the target user needs to be determined. According to this identification information, it can be ensured that the AR device it designates is indeed the AR device carried by the target user; the AR special effect data is then sent to the AR device corresponding to that identification information, so that the target user can watch, through the AR device, the effect image corresponding to the AR special effect data of the target activity area.
Regarding determining the identification information of the AR device carried by the target user: in one embodiment, face feature recognition may be performed on the acquired image to be recognized, and the identification information of the target user determined based on the obtained face feature information of the target user. For example, the identification information may include the target user's name, and the identification information of the AR device carried by the target user may then be determined according to that name and a pre-stored correspondence between target user names and AR device identification information.
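A minimal sketch of this lookup, assuming the pre-stored correspondence is a simple name-to-identifier table (the table and all names are hypothetical):

```python
# Illustrative only: map a user name recognized from face features to a
# pre-stored AR device identifier.
from typing import Optional

USER_TO_DEVICE = {"alice": "AR-DEV-001", "bob": "AR-DEV-002"}

def device_id_for_user(user_name: str) -> Optional[str]:
    # Return None when no device is registered for this user.
    return USER_TO_DEVICE.get(user_name)
```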
In another embodiment, after the image to be recognized is obtained, it is first determined whether an AR device appears in the image. When an AR device is determined to be present, image feature information corresponding to it is determined by performing image processing on the image to be recognized; for example, the image feature information may be the color, the device serial number, and the like of the AR device extracted from the image. Based on the determined image feature information, the identification information of the AR device can then be determined; for example, the extracted device serial number may be matched against pre-stored AR device identification information.
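The serial-number check can be illustrated as follows, assuming the pre-stored identifiers are kept in a set (an assumption; the disclosure does not specify the storage form):

```python
# Illustrative only: accept a serial number read from the image as the
# device's identification only if it matches a pre-stored identifier.
from typing import Optional

KNOWN_DEVICE_IDS = {"AR-DEV-001", "AR-DEV-002"}

def identify_device(extracted_serial: str) -> Optional[str]:
    return extracted_serial if extracted_serial in KNOWN_DEVICE_IDS else None
```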
In addition, after the identification information of the AR device is determined, it is also necessary to determine, according to that identification information, whether the corresponding AR device has already logged in to the server. If the AR device has not logged in, sending the AR special effect data would fail to reach a receiving AR device; possibly the target user no longer needs to view the effect image corresponding to the AR special effect data at this time and has already closed the AR device. In another embodiment, when the AR device is determined to be logged in, the determined AR special effect data can be sent to it directly, so that the user can watch, through the carried AR device, the effect image corresponding to the AR special effect data matched with the target activity area in which the user is located.
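The login check before dispatch might look like this minimal sketch; the function and data structures are illustrative assumptions, not part of the disclosure:

```python
# Illustrative only: dispatch effect data only to devices currently
# logged in to the server.
def send_effect(device_id, effect, logged_in_devices, outbox):
    if device_id in logged_in_devices:
        outbox.append((device_id, effect))  # queue the payload for sending
        return True
    return False  # device not logged in; nothing is sent
```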
According to the image display method provided by the embodiments of the present disclosure, based on the acquired image to be recognized that includes the target user carrying the AR device, the target activity area where the target user is located can be located quickly through image recognition, improving the efficiency of locating the target user. Determining the position of the target user by matching the prefabricated map with the image to be recognized simplifies the positioning steps, reduces positioning complexity, and improves positioning efficiency. The identification information of the target user can be determined accurately, the AR device carried by the target user can be determined accurately from that identification information, and the situation in which the effect image corresponding to the AR special effect data cannot be displayed to the corresponding target user is avoided. In addition, the AR device carried by the target user can be determined rapidly using the image feature information corresponding to the AR device, likewise avoiding the situation in which the effect image corresponding to the AR special effect data cannot be displayed to the corresponding target user.
As shown in fig. 4, according to the position change of the target user, the effect images corresponding to the AR special effect data corresponding to different preset activity areas are correspondingly displayed to the target user, which may specifically be implemented by using the following steps:
S401: and acquiring an image to be recognized.
In this step, the server acquires images to be recognized covering all the preset activity areas.
S402: and acquiring the prefabricated map under the condition that the image to be identified comprises the image of the target user carrying the augmented reality AR equipment.
In specific implementation, after the server identifies the acquired image to be identified and determines that the image to be identified comprises the target user carrying the AR equipment, the server acquires the prefabricated map generated according to the activity place.
S403: and matching the prefabricated map with the image to be identified, and determining a first target activity area where the target user is located.
In specific implementation, the image features extracted at the position of the target user in the image to be recognized, together with the image features of multiple nearby positions in the image, can be matched against the prefabricated map through position feature points, and the first target activity area where the target user is located is thereby determined.
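As a rough sketch of such matching, assuming each position in the prefabricated map carries a feature vector and nearest-neighbour scoring is used (the disclosure does not specify the matching algorithm, so this is purely illustrative):

```python
# Illustrative only: match query feature vectors against feature vectors
# stored in the prefabricated map and return the best-matching position.
def match_to_map(query_features, map_entries):
    # map_entries: list of (position, feature_vector) pairs
    best_pos, best_score = None, float("inf")
    for pos, feats in map_entries:
        # Squared Euclidean distance between feature vectors.
        score = sum((a - b) ** 2 for a, b in zip(query_features, feats))
        if score < best_score:
            best_pos, best_score = pos, score
    return best_pos
```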
S404: and determining attribute information of the target user based on the image to be recognized.
In the step, the attribute information of the target user is acquired by identifying the image to be identified.
S405: and determining first AR special effect data according to the attribute information of the target user and the first target activity area where the target user is located.
In one embodiment, the AR special effect data corresponding to the effect image that needs to be displayed in the target activity area may be determined according to the first target activity area where the target user is located; the display mode of that effect image may be determined based on the attribute information of the target user; and the first AR special effect data is then determined from the AR special effect data together with the display mode of its corresponding effect image.
S406: and carrying out face feature recognition on the acquired image to be recognized, and determining the identification information of the target user.
S407: and determining the carried identification information of the AR equipment based on the identification information of the target user.
In specific implementation, according to the identification information of the target user and the pre-stored correspondence between target user names and AR device identification information, the identification information of the AR device carried by the target user can be determined.
S408: and judging whether the corresponding AR equipment logs in or not according to the identification information of the AR equipment.
In this step, after the identification information of the carried AR device is determined, it is necessary to determine whether the AR device corresponding to that identification information is logged in. If so, step S409 is executed; if not, the AR device corresponding to the identification information determined at this time is not logged in, and if the first AR special effect data were still sent, no receiving end could be matched. In this case, the user may already have logged out because there is no longer a need to view the effect image corresponding to the AR special effect data, and the process ends.
S409: and sending the determined first AR special effect data to the AR device corresponding to the identification information.
And under the condition that the AR device corresponding to the identification information is determined to be logged in, the determined first AR special effect data is sent to that AR device; on this basis, the target user can watch, through the carried AR device, the effect image corresponding to the first AR special effect data of the target activity area.
S410: after the first AR special effect data is sent to the target user, determining, according to the acquired image to be recognized, that the target user is located in a second target activity area.
In this step, after the first AR special effect data is sent to the target user, the server determines that the target user has changed the position in the activity place and is located in the second target activity area based on the acquired image to be recognized.
S411: and displaying an effect image corresponding to the second AR special effect data corresponding to the second target activity area to the target user.
In specific implementation, when the target user is determined to be located in the second target activity area, the process returns to steps S405 to S409, after which the effect image corresponding to the second AR special effect data of the second target activity area can be displayed to the target user.
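The overall flow of steps S401 to S411 can be condensed into a minimal loop sketch, with the selection and sending functions standing in for steps S405 to S409 (illustrative only; all names are assumptions):

```python
# Illustrative only: whenever the recognized activity area changes,
# re-select and re-send the matching AR special effect data.
def track_and_update(area_stream, select_effect, send):
    last_area = None
    for area in area_stream:       # areas recognized from successive images
        if area != last_area:      # position change detected (step S410)
            send(select_effect(area))  # steps S405-S409 for the new area
            last_area = area
```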
According to the image display method provided by the embodiments of the present disclosure, based on the acquired image to be recognized, after the target user is determined to have moved from the first target activity area to the second target activity area, the effect image corresponding to the second AR special effect data of the second target activity area is correspondingly displayed to the target user through the carried AR device, improving the user's experience.
It will be understood by those of skill in the art that, in the above method of the present embodiment, the order in which the steps are written does not imply a strict order of execution and imposes no limitation on the implementation; the order of execution of the steps should be determined by their functions and possible inherent logic.
Based on the same inventive concept, an image display apparatus corresponding to the image display method is also provided in the embodiments of the present disclosure, and since the principle of the apparatus in the embodiments of the present disclosure for solving the problem is similar to the image display method described above in the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated details are not described again.
As shown in fig. 5, a schematic diagram of an image display apparatus provided for an embodiment of the present disclosure may include the following modules:
an obtaining module 501, configured to obtain an image to be identified;
a determining module 502, configured to determine, based on the image to be recognized, a target activity area where a target user is located when the image to be recognized includes an image of the target user carrying Augmented Reality (AR) equipment;
a sending module 503, configured to send AR special effect data matched with the target activity area to the AR device of the target user.
In a possible implementation manner, the determining module 502 is specifically configured to obtain a prefabricated map; matching the image to be recognized with a prefabricated map, and determining the position information of the target user based on the matching result; and determining a target activity area where the target user is located based on the position information of the target user and the position information of a preset activity area.
In a possible implementation manner, the sending module 503 is configured to determine identification information of the target user based on face feature information of the target user in the image to be recognized; determining pre-stored identification information of the AR equipment matched with the identification information of the target user based on the identification information of the target user; and sending AR special effect data matched with the target activity area to the AR equipment based on the identification information of the AR equipment.
In a possible implementation manner, the sending module 503 is configured to determine, based on the image to be identified, image feature information corresponding to the AR device; determining identification information of the AR equipment based on the image characteristic information corresponding to the AR equipment; and sending AR special effect data matched with the target activity area to the AR equipment based on the identification information of the AR equipment.
In a possible implementation manner, the sending module 503 is configured to determine whether the AR device logs in a target application based on the identification information of the AR device; and sending the AR special effect data matched with the target activity area to the AR equipment under the condition that the AR equipment logs in the target application.
In a possible embodiment, the apparatus further comprises:
a matching module 504 configured to determine the AR special effect data according to the following steps:
determining user attribute information of the target user; and determining AR special effect data matched with the user attribute information from multiple AR special effect data matched with the target activity area.
In a possible implementation manner, the obtaining module 501 is configured to obtain the image to be recognized, which is acquired by a group of image capturing devices whose shooting ranges cover a plurality of preset activity areas; the image pickup apparatus group includes at least one image pickup apparatus.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
An embodiment of the present disclosure further provides a computer device, as shown in fig. 6, which is a schematic structural diagram of the computer device provided in the embodiment of the present disclosure, and the computer device includes:
a processor 61 and a memory 62; the memory 62 stores machine-readable instructions executable by the processor 61, and the processor 61 is configured to execute the machine-readable instructions stored in the memory 62. When the machine-readable instructions are executed, the processor 61 performs the following steps: step S201: acquiring an image to be recognized; step S202: in the case that the image to be recognized includes an image of a target user carrying an Augmented Reality (AR) device, determining a target activity area where the target user is located based on the image to be recognized; and step S203: sending the AR special effect data matched with the target activity area to the AR device of the target user.
The memory 62 includes a memory 621 and an external memory 622; the memory 621, also referred to as an internal memory, temporarily stores operation data in the processor 61 and data exchanged with the external memory 622 such as a hard disk, and the processor 61 exchanges data with the external memory 622 via the memory 621.
The specific execution process of the instruction may refer to the steps of the image display method described in the embodiments of the present disclosure, and details are not repeated here.
The embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the image displaying method in the foregoing method embodiments are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the image display method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing program code, where the instructions included in the program code may be used to execute the steps of the image display method described in the above method embodiments, which may be specifically referred to in the above method embodiments and are not described herein again. The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a Software Development Kit (SDK).
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working process of the system and the apparatus described above may refer to the corresponding process in the foregoing method embodiment, and details are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are merely specific embodiments of the present disclosure, which are used for illustrating the technical solutions of the present disclosure and not for limiting the same, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: those skilled in the art can still make modifications or changes to the embodiments described in the foregoing embodiments, or make equivalent substitutions for some of the technical features, within the technical scope of the disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure, and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (9)

1. An image presentation method, comprising:
acquiring an image to be identified;
under the condition that the image to be recognized comprises an image of a target user carrying Augmented Reality (AR) equipment, determining a target activity area where the target user is located based on the image to be recognized;
sending AR special effect data matched with the target activity area to the AR equipment of the target user; wherein the sending the AR special effect data matched with the target activity area to the AR device of the target user includes:
determining image characteristic information corresponding to the AR equipment based on the image to be identified;
determining identification information of the AR equipment based on the image characteristic information corresponding to the AR equipment;
sending AR special effect data matched with the target activity area to the AR equipment based on the identification information of the AR equipment; the image feature information includes a color of the AR device and a device serial number.
2. The image presentation method according to claim 1, wherein the determining a target activity area in which the target user is located based on the image to be recognized comprises:
acquiring a prefabricated map;
matching the image to be recognized with a prefabricated map, and determining the position information of the target user based on the matching result;
and determining a target activity area where the target user is located based on the position information of the target user and the position information of a preset activity area.
3. The image presentation method of claim 1, wherein the sending the AR special effect data matching the target activity area to the AR device of the target user comprises:
determining identification information of the target user based on the face feature information of the target user in the image to be recognized;
determining pre-stored identification information of the AR equipment matched with the identification information of the target user based on the identification information of the target user;
and sending AR special effect data matched with the target activity area to the AR equipment based on the identification information of the AR equipment.
4. The image presentation method of claim 1, wherein the sending, to the AR device, AR special effect data matching the target activity area based on the identification information of the AR device comprises:
determining whether the AR equipment logs in a target application or not based on the identification information of the AR equipment;
and sending the AR special effect data matched with the target activity area to the AR equipment under the condition that the AR equipment logs in the target application.
5. The image presentation method of any one of claims 1 to 4, wherein the AR special effect data is determined according to the following steps:
determining user attribute information of the target user;
and determining AR special effect data matched with the user attribute information from multiple AR special effect data matched with the target activity area.
6. The image display method according to any one of claims 1 to 5, wherein the acquiring the image to be recognized includes:
acquiring the images to be recognized, which are acquired by a camera device group with a shooting range covering a plurality of preset activity areas; the image pickup apparatus group includes at least one image pickup apparatus.
7. An image display apparatus, comprising:
the acquisition module is used for acquiring an image to be identified;
the determining module is used for determining a target activity area where a target user is located based on the image to be identified under the condition that the image to be identified comprises an image of the target user carrying Augmented Reality (AR) equipment;
a sending module, configured to send AR special effect data matched with the target activity area to an AR device of the target user; the sending module is configured to determine image feature information corresponding to the AR device based on the image to be recognized when sending the AR special effect data matched with the target activity area to the AR device of the target user;
determining identification information of the AR equipment based on the image feature information corresponding to the AR equipment;
sending AR special effect data matched with the target activity area to the AR equipment based on the identification information of the AR equipment; the image feature information includes a color of the AR device and a device serial number.
8. A computer device, comprising: a processor, a memory storing machine-readable instructions executable by the processor, the processor for executing the machine-readable instructions stored in the memory, the processor performing the steps of the image presentation method as claimed in any one of claims 1 to 6 when the machine-readable instructions are executed by the processor.
9. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when executed by a computer device, performs the steps of the image presentation method according to any one of claims 1 to 6.
CN202011193676.9A 2020-10-30 2020-10-30 Image display method and device, computer equipment and storage medium Active CN112288881B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011193676.9A CN112288881B (en) 2020-10-30 2020-10-30 Image display method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112288881A CN112288881A (en) 2021-01-29
CN112288881B (en) 2023-01-20

Family

ID=74352705

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011193676.9A Active CN112288881B (en) 2020-10-30 2020-10-30 Image display method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112288881B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112927293A (en) * 2021-03-26 2021-06-08 深圳市慧鲤科技有限公司 AR scene display method and device, electronic equipment and storage medium
CN113470189A (en) * 2021-06-30 2021-10-01 北京市商汤科技开发有限公司 Special effect display method and device, electronic equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150228123A1 (en) * 2014-02-07 2015-08-13 Datangle, Inc. Hybrid Method to Identify AR Target Images in Augmented Reality Applications
CN109191188A (en) * 2018-08-17 2019-01-11 连云港伍江数码科技有限公司 Determination method, apparatus, computer equipment and the storage medium of target information
CN209821786U (en) * 2019-03-28 2019-12-20 重庆爱奇艺智能科技有限公司 Device, virtual reality equipment and system for presenting application scene
CN110276251B (en) * 2019-05-13 2023-07-25 联想(上海)信息技术有限公司 Image recognition method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN112288881A (en) 2021-01-29

Similar Documents

Publication Publication Date Title
CN107820593B (en) Virtual reality interaction method, device and system
CN111638796A (en) Virtual object display method and device, computer equipment and storage medium
CN106355153A (en) Virtual object display method, device and system based on augmented reality
CN111311756B (en) Augmented reality AR display method and related device
CN112288881B (en) Image display method and device, computer equipment and storage medium
CN112070906A (en) Augmented reality system and augmented reality data generation method and device
CN105571583B (en) User position positioning method and server
CN111124567B (en) Operation recording method and device for target application
CN111862205A (en) Visual positioning method, device, equipment and storage medium
CN111696215A (en) Image processing method, device and equipment
CN111652987A (en) Method and device for generating AR group photo image
EP4191513A1 (en) Image processing method and apparatus, device and storage medium
CN111639979A (en) Entertainment item recommendation method and device
CN111623782A (en) Navigation route display method and three-dimensional scene model generation method and device
CN113178006A (en) Navigation map generation method and device, computer equipment and storage medium
CN112882576A (en) AR interaction method and device, electronic equipment and storage medium
CN111815782A (en) Display method, device and equipment of AR scene content and computer storage medium
CN110349504A (en) A kind of museum guiding system based on AR
CN111899349B (en) Model presentation method and device, electronic equipment and computer storage medium
CN113178017A (en) AR data display method and device, electronic equipment and storage medium
CN110852132B (en) Two-dimensional code space position confirmation method and device
CN111640190A (en) AR effect presentation method and apparatus, electronic device and storage medium
CN111580679A (en) Space capsule display method and device, electronic equipment and storage medium
CN111640194A (en) AR scene image display control method and device, electronic equipment and storage medium
CN112817454A (en) Information display method and device, related equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant