CN112488087B - Image recognition method based on augmented reality, cloud platform server and medium - Google Patents


Info

Publication number
CN112488087B
CN112488087B
Authority
CN
China
Prior art keywords
virtual world, data, world data, target, ordered
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011632665.6A
Other languages
Chinese (zh)
Other versions
CN112488087A (en)
Inventor
许东俊
刘风华
Current Assignee
Shanghai Dewu Information Technology Co., Ltd.
Original Assignee
Shanghai Dewu Information Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shanghai Dewu Information Technology Co., Ltd.
Priority to CN202011632665.6A priority Critical patent/CN112488087B/en
Publication of CN112488087A publication Critical patent/CN112488087A/en
Application granted granted Critical
Publication of CN112488087B publication Critical patent/CN112488087B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the technical field of image data processing and virtual reality, and in particular to an augmented-reality-based image recognition method, a cloud platform server and a medium. First, a target real scene image sent by a first terminal device is acquired, and image recognition processing is performed on it to obtain target image recognition data. Then, corresponding target virtual world data is determined based on the target image recognition data, where the target virtual world data includes target social comment data generated when a second terminal device responds to a user's comment operation on a target object. Finally, target augmented reality data is generated based on the target virtual world data and the target real scene image and sent to the first terminal device, which performs display processing based on it. In this way, the method improves the convenience with which a user acquires information about an object of interest.

Description

Image recognition method based on augmented reality, cloud platform server and medium
Technical Field
The invention relates to the technical field of image data processing and virtual reality, and in particular to an augmented-reality-based image recognition method, a cloud platform server and a medium.
Background
AR (Augmented Reality) technology fuses the virtual world with the real world by computing the position and angle of an image in real time and superimposing corresponding images, videos and 3D models onto it. An AR client can perform real-time image recognition of the user's offline environment using picture recognition material stored locally in the client and, at the position of the recognized offline target in the real scene, display the corresponding data in an augmented manner according to a pre-configured display effect.
With the development of computer technology, social networking applications have also expanded widely. Social networking generally refers to communication and interaction over the Internet; for example, after user A publishes information on a social platform, user B may comment on it. With this pattern, the audience of a comment is generally small. To overcome this problem, solutions have been proposed that process (e.g., display) comments on the same object (identical objects, objects of the same class, etc.) collectively.
However, the inventors have found that the above way of collectively processing comments on the same object still leaves it inconvenient for a user to acquire information about an object of interest.
Disclosure of Invention
In view of the above, an object of the present invention is to provide an augmented-reality-based image recognition method, a cloud platform server and a medium that improve the convenience with which a user acquires information about an object of interest.
To achieve the above purpose, embodiments of the invention adopt the following technical solutions:
In a first aspect, the present invention provides an augmented-reality-based image recognition method applied to a cloud platform server, where the cloud platform server is connected to a first terminal device and a second terminal device. The method includes:
acquiring a target real scene image sent by the first terminal device, where the target real scene image is generated by the first terminal device performing an image acquisition operation on a target object;
performing image recognition processing on the target real scene image to obtain target image recognition data;
determining corresponding target virtual world data based on the target image recognition data, where the target virtual world data includes target social comment data generated when the second terminal device responds to a user's comment operation on the target object; and
generating target augmented reality data based on the target virtual world data and the target real scene image, and sending the target augmented reality data to the first terminal device, where the first terminal device performs display processing based on the target augmented reality data.
In a second aspect, the present invention provides a cloud platform server. The cloud platform server includes a processor and a memory that communicate with each other; the processor calls and runs the computer program stored in the memory to implement the method of the first aspect.
In a third aspect, the present invention provides a computer-readable storage medium in which instructions are stored; when executed, the instructions cause a computer to perform the method of the first aspect.
According to the augmented-reality-based image recognition method described above, after the target real scene image is obtained, the corresponding target virtual world data is determined based on the target image recognition data of that image, so the target real scene image can be augmented with the target virtual world data, including the target social comment data, to obtain the target augmented reality data. Because the displayed target augmented reality data includes the target social comment data, the user can conveniently acquire information about the object of interest (i.e., the target object to which the target social comment data corresponds). This improves the convenience with which the user acquires such information, so the method has high practical value.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
Fig. 1 is a block diagram of a cloud platform server according to an embodiment of the present invention.
Fig. 2 is a schematic flowchart of steps included in the augmented reality-based image recognition method according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides a cloud platform server. Wherein the cloud platform server may include a memory and a processor.
In detail, the memory and the processor are electrically connected, directly or indirectly, to enable data transmission or interaction; for example, they may be connected via one or more communication buses or signal lines. The memory stores at least one software functional module (a computer program), which may exist in the form of software or firmware. The processor is configured to execute the executable computer program stored in the memory, thereby implementing the augmented-reality-based image recognition method provided by the embodiment of the present invention (described later).
Alternatively, the memory may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), a System on Chip (SoC), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
Moreover, the structure shown in fig. 1 is only illustrative; the cloud platform server may include more or fewer components than shown in fig. 1, or have a different configuration, for example, a communication unit for information interaction with other devices.
With reference to fig. 2, an embodiment of the present invention further provides an image recognition method based on augmented reality, which is applicable to the cloud platform server. The method steps defined by the flow related to the augmented reality-based image recognition method can be implemented by the cloud platform server.
The specific process shown in FIG. 2 will be described in detail below.
Step S110, acquiring a target real scene image sent by the first terminal device.
In this embodiment, the cloud platform server is communicatively connected with a first terminal device, so that a target real scene image sent by the first terminal device can be acquired.
The target real scene image is generated by the first terminal device performing an image acquisition operation on a target object. That is, the first terminal device may capture a target object (e.g., a specific scene or person) in response to a user operation, obtain the corresponding target real scene image, and then send it to the cloud platform server.
Step S120, carrying out image recognition processing on the target real scene image to obtain target image recognition data.
In this embodiment, after the target real scene image is acquired in step S110, the cloud platform server may perform image recognition processing on it to obtain corresponding target image recognition data, such as which scene the target object in the target real scene image is (e.g., the Huangshan welcoming-guests pine) or which person it is (e.g., a certain celebrity).
Step S130, determining corresponding target virtual world data based on the target image recognition data.
In this embodiment, after obtaining the target image recognition data in step S120, the cloud platform server may determine the target virtual world data based on that data.
The target virtual world data includes target social comment data, which is generated when the second terminal device responds to a user's comment operation on the target object (the second terminal device is communicatively connected with the cloud platform server and can therefore send the target social comment data to it). For example, if the target object is the Huangshan welcoming-guests pine, the target social comment data may be other users' comments on that pine; if the target object is a certain celebrity, it may be other users' comments on that celebrity. That is, after the cloud platform server receives target social comment data on the target object from the second terminal device, it may bind that data to the target object, so that the data can later be determined from the target image recognition data corresponding to the target object.
Step S140, generating target augmented reality data based on the target virtual world data and the target real scene image, and sending the target augmented reality data to the first terminal device.
In this embodiment, after determining the target virtual world data in step S130, the cloud platform server may generate target augmented reality data based on the target virtual world data and the target real scene image, and then send the target augmented reality data to the first terminal device, so that the first terminal device can perform display processing based on it and the corresponding user can view the target social comment data.
Based on this method, the displayed target augmented reality data includes the target social comment data, so the user can conveniently acquire information about the object of interest (i.e., the target object to which the target social comment data corresponds). This improves the convenience of acquiring such information and gives the method high practical value.
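The server-side flow of steps S110 to S140 can be sketched as follows. This is an illustrative outline only; every function and field name here (`handle_real_scene_image`, `recognize`, `overlay`, and the dictionary keys) is an assumption of this sketch, not part of the patent text.

```python
# Hypothetical end-to-end sketch of steps S110-S140 on the cloud platform
# server. All identifiers are assumptions of this sketch.

def handle_real_scene_image(image_bytes, recognize, lookup_virtual_world_data):
    """Handle one target real scene image sent by the first terminal device."""
    # Step S120: image recognition processing yields target image
    # recognition data, e.g. a label naming the scene or person.
    recognition_data = recognize(image_bytes)
    # Step S130: determine the corresponding target virtual world data,
    # which includes target social comment data bound to the target object.
    virtual_world_data = lookup_virtual_world_data(recognition_data)
    # Step S140: fuse the virtual world data with the real scene image into
    # target augmented reality data for display on the first terminal device.
    return {"image": image_bytes, "overlay": virtual_world_data}

# Usage with stub recognition and lookup functions:
ar_data = handle_real_scene_image(
    b"...jpeg bytes...",
    recognize=lambda img: "welcoming-guests pine",
    lookup_virtual_world_data=lambda label: [
        {"object": label, "comment": "magnificent tree!"}
    ],
)
```

In a real deployment the two callables would wrap an image recognition model and a comment store; keeping them as parameters mirrors the patent's separation of steps S120 and S130.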
It should first be noted that the specific manner of determining the target virtual world data in step S130 is not limited and may be selected according to actual application requirements.
For example, in an alternative example, there may be multiple second terminal devices, so the target image recognition data may correspond to multiple pieces of virtual world data, i.e., multiple pieces of social comment data. Considering display requirements and conditions, the target virtual world data may be selected from the multiple pieces of virtual world data, i.e., the target social comment data may be selected from the multiple pieces of social comment data. Based on this, step S130 may include the following steps:
first, determining multiple pieces of corresponding virtual world data based on the target image recognition data, where the social comment data in each piece of virtual world data is generated when the corresponding second terminal device responds to a comment operation on the target object by its user;
second, sorting the multiple pieces of virtual world data from earliest to latest based on their generation time information to form an ordered set of virtual world data (that is, in the ordered set the pieces of virtual world data are arranged in an order determined by their generation time information, which may in turn be determined by the generation time of the social comment data each piece includes, so that virtual world data whose social comment data was generated later is placed later, and vice versa);
then, performing screening processing on the ordered set of virtual world data to obtain the target virtual world data (that is, part of the virtual world data is screened out of the ordered set to determine the target virtual world data, avoiding the cluttered display that would result from showing all of the virtual world data).
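The sub-steps above (determine candidates, sort by generation time, screen) can be sketched as follows. All field and function names (`order_virtual_world_data`, `generated_at`, etc.) are illustrative assumptions, and the screening policy is deliberately left pluggable since the patent describes several alternative screening manners:

```python
# Illustrative sketch of step S130's sub-steps: sort candidate virtual
# world data by generation time, then apply a screening policy.

def order_virtual_world_data(candidates):
    # Sort from earliest to latest generation time; the timestamp may be
    # that of the included social comment data.
    return sorted(candidates, key=lambda d: d["generated_at"])

def screen(ordered_set, keep):
    # `keep` is a pluggable screening policy over the ordered set.
    return keep(ordered_set)

candidates = [
    {"comment": "nice", "generated_at": 3},
    {"comment": "great", "generated_at": 1},
    {"comment": "wow", "generated_at": 2},
]
ordered = order_virtual_world_data(candidates)
# Trivial policy for illustration: keep only the most recent item.
target = screen(ordered, keep=lambda s: s[-1:])
```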
Optionally, in the above example, the specific manner of performing the screening processing on the ordered set of virtual world data is not limited and may be selected according to actual application requirements.
In a first example, the screening process may be performed based on the following steps:
a first step of treating, for each piece of virtual world data in the ordered set of virtual world data other than the last piece, that piece and the adjacent piece following it as two adjacent pieces of virtual world data (for example, if the ordered set of virtual world data is "virtual world data 1, virtual world data 2, virtual world data 3, virtual world data 4", then virtual world data 1 and 2 form one adjacent pair, virtual world data 2 and 3 a second, and virtual world data 3 and 4 a third, giving 3 adjacent pairs);
a second step of calculating similarity information between the social comment data in every two adjacent pieces of virtual world data, where the similarity information represents the similarity between the emotional color data of the two pieces of social comment data, and the emotional color data is obtained by performing semantic recognition on the social comment data (for example, if the emotional color data of the social comment data of virtual world data 1 and of virtual world data 2 are both positive, the corresponding similarity information is 1; if one is positive and the other neutral, as for virtual world data 2 and 3, it is 0.5; if one is positive and the other negative, as for virtual world data 3 and 4, it is 0; if both are neutral or both negative, it is 1; and if one is negative and the other neutral, it is 0.5);
a third step of determining whether each piece of similarity information is greater than similarity threshold information (the specific value of the similarity threshold information is not limited; following the foregoing example, it may be 0 in one alternative example and 0.5 in another);
a fourth step of taking each piece of similarity information greater than the similarity threshold information as target similarity information, obtaining at least one piece of target similarity information;
a fifth step of, for each piece of target similarity information, discarding the earlier of the two corresponding pieces of virtual world data and retaining the later one (that is, since the social comment data of the two adjacent pieces are similar, one of them can be discarded, and because later data tends to have higher reliability or currency, the earlier piece is the one discarded);
a sixth step of, for each piece of similarity information other than the target similarity information, retaining both corresponding pieces of virtual world data (that is, since the corresponding social comment data are not similar, both pieces need to be retained);
a seventh step of sorting the retained virtual world data from earliest to latest by generation time information to form an ordered representative set of virtual world data (that is, the ordered representative set can represent the original ordered set of virtual world data);
an eighth step of, when the ordered representative set includes multiple pieces of virtual world data, dividing it into multiple ordered representative subsets, each of which includes the same number of pieces of virtual world data (for example, the set may be divided evenly in order: "virtual world data 1, virtual world data 2, virtual world data 3, virtual world data 4, virtual world data 5, virtual world data 6" yields two ordered representative subsets, "virtual world data 1, 2, 3" and "virtual world data 4, 5, 6");
a ninth step of, for each ordered representative subset other than the last one, treating that subset and the adjacent subset following it as two adjacent ordered representative subsets;
a tenth step of, for every two adjacent ordered representative subsets, obtaining set similarity information between them from the comprehensive similarity information between the social comment data in the pieces of virtual world data they include (for example, first count the pieces of each kind of emotional color data in each subset, say 3 positive, 4 neutral and 3 negative in the former subset and 2 positive, 4 neutral and 4 negative in the latter; then compute a similarity for each emotional color, where, with each subset including 10 pieces of virtual world data, the similarity of the positive color data is (10 + 3)/[(3 - 2) + 1], that of the neutral color data (10 + 4)/[(4 - 4) + 1], and that of the negative color data (10 + 4)/[(4 - 3) + 1]; finally, average these similarities to obtain the comprehensive similarity information, which represents the set similarity between the two subsets);
an eleventh step of determining, based on each piece of set similarity information, whether the two corresponding adjacent ordered representative subsets are repeated ordered representative subsets (for example, every two adjacent subsets whose set similarity information is greater than set similarity threshold information may be treated as repeated ordered representative subsets, where the threshold may be configured according to the required precision and the calculation rule of the set similarity information);
a twelfth step of, for every two adjacent ordered representative subsets that are repeated, discarding the earlier subset and keeping the later one (that is, since the later subset can represent the earlier one and is more current, the earlier subset can be discarded);
a thirteenth step of retaining, for every two adjacent ordered representative subsets that are not repeated, both subsets (that is, since the later subset may not effectively represent the earlier one, both are retained);
and a fourteenth step of taking each piece of virtual world data included in the retained ordered representative subsets as target virtual world data corresponding to the target image recognition data.
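The adjacent-pair stage of this first example (roughly steps one through seven) can be sketched as follows, under the assumption that emotional color data takes the three values positive/neutral/negative with the pairwise similarities given above (1 for the same polarity, 0.5 for neutral against either polarity, 0 for positive against negative) and a similarity threshold of 0.5. The subset-level deduplication of steps eight through fourteen is omitted, and all identifiers are hypothetical:

```python
# Sketch (names assumed) of the adjacent-pair screening: for each pair of
# adjacent pieces of virtual world data whose social comments are similar
# in sentiment, drop the earlier piece and keep the later one.

SCORE = {"positive": 1.0, "neutral": 0.5, "negative": 0.0}

def sentiment_similarity(a, b):
    # Reproduces the example mapping: same polarity -> 1,
    # positive/neutral or negative/neutral -> 0.5, positive/negative -> 0.
    return 1.0 - abs(SCORE[a] - SCORE[b])

def screen_adjacent(ordered_set, threshold=0.5):
    kept = []
    for i, item in enumerate(ordered_set[:-1]):
        nxt = ordered_set[i + 1]
        if sentiment_similarity(item["sentiment"], nxt["sentiment"]) > threshold:
            continue  # similar to its successor: discard the earlier item
        kept.append(item)
    kept.append(ordered_set[-1])  # the last item has no successor
    return kept

data = [  # already ordered from earliest to latest
    {"comment": "love it", "sentiment": "positive"},
    {"comment": "amazing", "sentiment": "positive"},
    {"comment": "it's ok", "sentiment": "neutral"},
    {"comment": "too crowded", "sentiment": "negative"},
]
representative = screen_adjacent(data)
# "love it" is dropped in favor of the later, similar "amazing".
```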
In a second example, the screening process may be performed based on the following steps:
a first step of determining, for each piece of virtual world data in the ordered set of virtual world data, the emotional color data of its social comment data, where the emotional color data is obtained by performing semantic recognition on the social comment data;
a second step of clustering the pieces of virtual world data in the ordered set based on their emotional color data to obtain at least one virtual world data cluster (for example, virtual world data corresponding to positive color data forms a first cluster, to neutral color data a second cluster, and to negative color data a third cluster);
a third step of determining, for each virtual world data cluster, the latest piece of virtual world data based on the generation time information of each piece in the cluster (which may be the generation time information of its social comment data; that is, in the above example one latest piece is determined in each of the first, second and third clusters);
and a fourth step of taking each piece of virtual world data so determined as target virtual world data (for example, three pieces of target virtual world data are obtained on the basis of the above example).
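This second example reduces to grouping by sentiment and keeping the newest item per group. A minimal sketch, with all field names (`sentiment`, `generated_at`) assumed:

```python
# Sketch of the second screening example: cluster the ordered virtual
# world data by the emotional color of their social comment data, then
# keep only the most recently generated item of each cluster.
from collections import defaultdict

def screen_by_sentiment_cluster(ordered_set):
    clusters = defaultdict(list)
    for item in ordered_set:
        clusters[item["sentiment"]].append(item)
    # Latest item per cluster, judged by the comment's generation time.
    return [max(c, key=lambda d: d["generated_at"]) for c in clusters.values()]

data = [
    {"comment": "great", "sentiment": "positive", "generated_at": 1},
    {"comment": "fine", "sentiment": "neutral", "generated_at": 2},
    {"comment": "superb", "sentiment": "positive", "generated_at": 3},
]
targets = screen_by_sentiment_cluster(data)
# One target per cluster: the latest positive and the latest neutral comment.
```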
In a third example, the screening process may be performed based on the following steps:
a first step of treating, for each piece of virtual world data in the ordered set of virtual world data other than the last piece, that piece and the adjacent piece following it as two adjacent pieces of virtual world data (see the explanation of the foregoing example);
a second step of calculating similarity information between the social comment data in every two adjacent pieces of virtual world data, where the similarity information represents the similarity between the emotional color data of the two pieces of social comment data, and the emotional color data is obtained by performing semantic recognition on the social comment data (see the explanation of the foregoing example);
a third step of determining whether each piece of similarity information is greater than the similarity threshold information (see the explanation of the foregoing example);
a fourth step of taking each piece of similarity information greater than the similarity threshold information as target similarity information, obtaining at least one piece of target similarity information;
a fifth step of discarding the two pieces of virtual world data corresponding to each piece of similarity information other than the target similarity information, and retaining the two pieces corresponding to each piece of target similarity information (that is, where the social comment data of two adjacent pieces are dissimilar both can be discarded, and where they are similar both can be retained, so that the retained social comment data has high consistency and the user can easily grasp the mainstream tendency of the social comments);
a sixth step of sorting the retained virtual world data from earliest to latest by generation time information to form an ordered representative set of virtual world data (see the explanation of the foregoing example);
a seventh step of, when the ordered representative set includes multiple pieces of virtual world data, dividing it into multiple ordered representative subsets, each of which includes the same number of pieces of virtual world data (see the explanation of the foregoing example);
eighthly, regarding each ordered virtual world data representative subset, taking the ordered virtual world data representative subset and a subsequent ordered virtual world data representative subset adjacent to the ordered virtual world data representative subset as two adjacent ordered virtual world data representative subsets (which may be combined with the explanation of the foregoing example, and is not described in detail herein);
ninth, for each two adjacent ordered representative subsets of virtual world data, obtaining set similarity information between the two adjacent ordered representative subsets of virtual world data according to comprehensive similarity information between social comment data in the multiple pieces of virtual world data of the two adjacent ordered representative subsets of virtual world data (which may be combined with the explanation of the foregoing example, and is not described in detail herein);
tenth, determining whether two adjacent ordered representative subsets of virtual world data corresponding to the set similarity information belong to a repeated ordered representative subset based on each set similarity information (which may be combined with the explanation of the foregoing example, and is not described herein again);
eleventh, for every two adjacent ordered virtual world data representative subsets belonging to a repeated ordered representative subset, reserving both subsets (that is, since the two adjacent subsets belong to a repeated ordered representative subset and thus have high similarity, and since the virtual world data already shows a clear tendency after the screening in the previous steps, reserving the two similar subsets better exhibits that tendency);
twelfth, discarding every two adjacent ordered virtual world data representative subsets that do not belong to a repeated ordered representative subset, with one exception: if an ordered virtual world data representative subset belongs to a repeated ordered representative subset with its previous subset but not with its next subset, or belongs to a repeated ordered representative subset with its next subset but not with its previous subset, that subset is retained (that is, a subset that forms a repeated ordered representative subset with one of its two neighbors, even though not with the other, still carries the prevailing tendency and can be kept);
and thirteenth, taking the virtual world data included in each reserved ordered virtual world data representative subset as the target virtual world data corresponding to the target image identification data.
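The adjacent-pair screening and equal-size partitioning of the steps above can be sketched in Python. This is a minimal illustration, not the patented implementation: the `similarity` callable, the threshold value, and the per-item `comment` field are all assumed here, since the patent leaves their concrete forms open.

```python
def filter_similar_pairs(ordered_data, similarity, threshold):
    """Steps three to five: keep both items of every adjacent pair whose
    social comments are similar enough; discard items of dissimilar pairs."""
    keep = set()
    for i in range(len(ordered_data) - 1):
        if similarity(ordered_data[i]["comment"],
                      ordered_data[i + 1]["comment"]) > threshold:
            keep.update((i, i + 1))
    # step six: survivors stay in their original chronological order
    return [item for i, item in enumerate(ordered_data) if i in keep]


def partition(representative_set, subset_size):
    """Step seven: split the ordered representative set into equal-sized,
    consecutive subsets (any short remainder is dropped)."""
    return [representative_set[i:i + subset_size]
            for i in range(0, len(representative_set) - subset_size + 1,
                           subset_size)]
```

An item discarded by `filter_similar_pairs` can still survive through its other neighbor, which matches the "reserve both pieces of each similar pair" wording.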
In a fourth example, the screening process may be performed based on the following steps:
first, determining emotional color data of social comment data in the virtual world data for each piece of virtual world data in the ordered set of virtual world data, wherein the emotional color data is obtained based on semantic recognition of the social comment data (which may be combined with the explanation of the foregoing example, which is not described herein one by one);
secondly, based on the emotional color data, clustering a plurality of virtual world data included in the ordered set of virtual world data to obtain at least one virtual world data cluster (which may be combined with the explanation of the foregoing example, and is not described in detail herein);
thirdly, determining the number of pieces of virtual world data included in each virtual world data cluster, and taking the cluster with the largest number as the target virtual world data cluster (that is, the cluster including the most virtual world data can be taken as the target cluster, so that the user can conveniently grasp the mainstream tendency of the comments);
fourthly, each virtual world data included in the target virtual world data cluster is used as target virtual world data.
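A compact sketch of the fourth example's cluster-and-keep-largest screening. The `classify` callable standing in for the semantic sentiment recognizer is an assumption, and the patent does not prescribe a clustering algorithm; grouping by sentiment label is the simplest reading of "clustering based on the emotional color data".

```python
from collections import defaultdict


def largest_sentiment_cluster(ordered_data, classify):
    """Group virtual world data by the emotional color of its social
    comment, then keep the largest cluster (the mainstream tendency)."""
    clusters = defaultdict(list)
    for item in ordered_data:
        clusters[classify(item["comment"])].append(item)
    # the cluster holding the most items is the target cluster
    return max(clusters.values(), key=len)
```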
In a fifth example, in particular, the screening process may be performed based on the following steps:
first, determining emotional color data of social comment data in the virtual world data for each virtual world data in the ordered set of virtual world data, wherein the emotional color data is obtained based on semantic recognition of the social comment data (which may be combined with the explanation of the foregoing example, and is not described here any more);
secondly, performing assignment processing on the emotional color data based on a preset rule aiming at each emotional color data to obtain a data value corresponding to the emotional color data, wherein the emotional color data comprises positive color data, neutral color data and negative color data, the data value corresponding to the positive color data is a first numerical value (such as 1), the data value corresponding to the neutral color data is a second numerical value (such as 0), the data value corresponding to the negative color data is a third numerical value (such as-1), and the first numerical value, the second numerical value and the third numerical value are all different;
third, regarding each data value, using the data value and the generation time information of the virtual world data corresponding to the data value as a set of corresponding data to obtain multiple sets of corresponding data (for example, the first numerical value with generation time information 1, the first numerical value with generation time information 2, the first numerical value with generation time information 3, the second numerical value with generation time information 4, the third numerical value with generation time information 5, the first numerical value with generation time information 6, and the third numerical value with generation time information 7 yield 7 sets of corresponding data);
fourthly, performing curve fitting processing based on the multiple groups of corresponding data to obtain corresponding data value-generation time curves, and obtaining an average distance value based on an average value of distances between each group of corresponding data and the data value-generation time curves;
fifthly, judging whether the average distance value is smaller than a preset distance value (the preset distance value can be configured according to actual requirements);
sixthly, if the average distance value is smaller than the preset distance value (that is, the multiple groups of corresponding data are concentrated), taking each group of corresponding data on the data value-generation time curve as target corresponding data;
a seventh step of determining a first screening value (e.g., an average value of the average distance value and the minimum value) based on the average distance value and a minimum value of a distance between each set of corresponding data and the data value-generation time curve, and determining a second screening value (e.g., an average value of the average distance value and the maximum value) based on the average distance value and a maximum value of a distance between each set of corresponding data and the data value-generation time curve, if the average distance value is greater than or equal to the preset distance value (i.e., the sets of corresponding data are relatively scattered), wherein the first screening value is greater than the minimum value and less than the average distance value, and the second screening value is greater than the average distance value and less than the maximum value;
eighthly, determining whether the distance between each group of corresponding data and the data value-generation time curve is larger than the first screening value and smaller than the second screening value or not;
ninth, each group of corresponding data with the distance larger than the first screening value and smaller than the second screening value is determined as target corresponding data;
and step ten, determining the virtual world data corresponding to each target corresponding data as target virtual world data.
In a sixth example, in particular, the screening process may be performed based on the following steps:
first, determining emotional color data of social comment data in the virtual world data for each virtual world data in the ordered set of virtual world data, wherein the emotional color data is obtained based on semantic recognition of the social comment data (which may be combined with the explanation of the foregoing example, and is not described here any more);
secondly, performing assignment processing on each piece of emotional color data based on a preset rule to obtain a corresponding data value, where the emotional color data includes positive color data, neutral color data, and negative color data, the data value corresponding to the positive color data is a first value, the data value corresponding to the neutral color data is a second value, the data value corresponding to the negative color data is a third value, and the first value, the second value, and the third value are all different (which may be combined with the explanation of the foregoing example, and is not described herein one by one);
third, regarding each data value, the data value and the generation time information of the virtual world data corresponding to the data value are used as a set of corresponding data to obtain multiple sets of corresponding data (which may be combined with the explanation of the foregoing example, and is not described in detail herein);
fourthly, performing curve fitting processing based on the multiple groups of corresponding data to obtain a corresponding data value-generation time curve (which may be combined with the explanation of the foregoing example and is not described herein one by one), and performing data prediction processing based on the data value-generation time curve to obtain a corresponding target data value (for example, emotional color data corresponding to social comment data appearing at the next time or time period may be predicted based on the data value-generation time curve);
fifthly, selecting target virtual world data from the multiple pieces of virtual world data included in the ordered set of virtual world data based on the target data value, where the data value corresponding to the social comment data in the target virtual world data is the target data value (that is, if the target data value corresponds to the positive color data, the virtual world data whose social comment data corresponds to the positive color data in the ordered set is used as the target virtual world data; if the target data value corresponds to the neutral color data, the virtual world data whose social comment data corresponds to the neutral color data is used as the target virtual world data; and if the target data value corresponds to the negative color data, the virtual world data whose social comment data corresponds to the negative color data is used as the target virtual world data).
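The sixth example's prediction step might be sketched as follows, again assuming a linear fit in place of the unspecified curve. Snapping the extrapolated value to the nearest of +1/0/-1 mirrors the example assignment from the fifth example but is not mandated by the patent.

```python
def predict_next_value(points, next_time):
    """Fit value = a * time + b over (time, value) pairs, extrapolate to
    next_time, and snap to the nearest assigned value (+1/0/-1)."""
    n = len(points)
    st = sum(t for t, _ in points)
    sv = sum(v for _, v in points)
    stt = sum(t * t for t, _ in points)
    stv = sum(t * v for t, v in points)
    a = (n * stv - st * sv) / (n * stt - st * st)
    b = (sv - a * st) / n
    raw = a * next_time + b
    return min((1, 0, -1), key=lambda v: abs(v - raw))


def select_by_target_value(ordered_data, classify, target_value):
    """Step five: keep only virtual world data whose comment maps to the
    predicted target data value."""
    value_of = {"positive": 1, "neutral": 0, "negative": -1}
    return [item for item in ordered_data
            if value_of[classify(item["comment"])] == target_value]
```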
It is to be understood that, in the fourth example, the specific manner of determining the target virtual world data cluster is not limited, and may be selected according to the actual application requirements.
For example, in an alternative example, the target virtual world data cluster may be determined based on the following steps:
firstly, determining the quantity of virtual world data included in each virtual world data cluster, and determining the virtual world data cluster with the maximum quantity; secondly, if the number of the virtual world data clusters with the largest number is multiple, calculating average generation time information of multiple pieces of virtual world data included in each of the multiple virtual world data clusters, and determining the latest virtual world data cluster based on the average generation time information; then, the latest virtual world data cluster is set as a target virtual world data cluster (that is, the virtual world data cluster whose average generation time is the latest may be set as the target virtual world data cluster).
For another example, in another alternative example, the target virtual world data cluster may be determined based on the following steps:
firstly, determining the quantity of virtual world data included in each virtual world data cluster, and determining the virtual world data cluster with the maximum quantity; next, if there are a plurality of virtual world data clusters with the largest number, one of the virtual world data clusters having first virtual world data is used as a target virtual world data cluster, where the first virtual world data is one of all virtual world data included in the plurality of virtual world data clusters whose generation time information is the latest (that is, a virtual world data cluster in which the virtual world data whose generation time is the latest can be used as the target virtual world data cluster).
In the second aspect, it should be noted that, in step S140, a specific manner of generating the augmented reality data is not limited, and may be selected according to actual application requirements.
For example, in an alternative example, step S140 may include the steps of:
firstly, determining a target area in the target real scene image based on the obtained target position information; and secondly, fusing the image corresponding to the target area in the target reality scene image with the target virtual world data to generate target augmented reality data.
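The region-based fusion of step S140 might be sketched as below, treating the image as rows of pixels and the "fusion" as bundling the cropped region with the comment data. A real augmented reality pipeline would compose the overlay graphically; this hypothetical sketch only shows the data flow.

```python
def fuse_region_with_comments(scene_image, region, comments):
    """Crop the user-selected (x, y, w, h) region of the target real scene
    image and bundle it with the target social comment data to form the
    augmented reality payload."""
    x, y, w, h = region
    crop = [row[x:x + w] for row in scene_image[y:y + h]]
    return {"region_image": crop, "overlay_comments": comments}
```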
Optionally, in the above example, a specific manner of determining the target area is not limited, and may be selected according to an actual application requirement.
For example, in one alternative example, the target area may be determined based on the following steps:
firstly, obtaining target position information sent by the first terminal device, wherein the target position information is generated based on the position region selection operation of the first terminal device responding to the displayed target real scene image by a user; secondly, a target area is determined in the target real scene image based on the target position information (that is, the target area can be determined based on the display requirement of the user corresponding to the first terminal device, and the display requirement of the user on the target virtual world data is met).
Therefore, from the perspective of the user of the first terminal device, the user may operate the first terminal device to perform image acquisition to obtain the target real scene image, and then the first terminal device may display the target real scene image, so that the user may perform region selection operation on the target real scene image, thereby generating target position information. Finally, the user can see the target augmented reality data which is displayed by the first terminal device and formed on the basis of the target position information.
Based on the same inventive concept, embodiments of the present invention provide a computer-readable storage medium. In one embodiment, the computer-readable storage medium has stored therein instructions that, when executed, cause a computer to perform the augmented reality-based image recognition method; in another embodiment, the computer-readable storage medium has stored therein a computer program that, when executed, implements the augmented reality-based image recognition method.
In summary, according to the image recognition method based on augmented reality provided by the present invention, after the target real scene image is acquired, the corresponding target virtual world data is determined based on the target image recognition data of the target real scene image, so that the target real scene image can be augmented based on the target virtual world data including the target social comment data, and thus the target augmented reality data is obtained. In this way, in the process of displaying the target augmented reality data, the target social comment data is included, so that the user can conveniently acquire information of the attention object (namely, the target object corresponding to the target social comment data), and therefore the convenience of acquiring the information of the attention object by the user is improved, and the target augmented reality data has a high use value.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus and method embodiments described above are illustrative only. The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatuses, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, an electronic device, or a network device) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (3)

1. An image recognition method based on augmented reality is characterized in that the image recognition method is applied to a cloud platform server, the cloud platform server is connected with a plurality of first terminal devices and a plurality of second terminal devices, and the method comprises the following steps:
acquiring a target real scene image sent by the first terminal device, wherein the target real scene image is generated by carrying out image acquisition operation on a target object based on the first terminal device;
carrying out image recognition processing on the target real scene image to obtain target image recognition data; determining a plurality of corresponding virtual world data based on the target image identification data, wherein social comment data in each virtual world data are generated based on each second terminal device responding to comment operation of a corresponding user on the target object;
based on the generation time information of the plurality of virtual world data, sorting the plurality of virtual world data in order from earliest to latest to form an ordered set of virtual world data;
regarding each virtual world data except the last virtual world data in the ordered set of virtual world data, taking the virtual world data and the adjacent next virtual world data of the virtual world data as two adjacent virtual world data;
calculating similarity information between social comment data in every two adjacent pieces of virtual world data aiming at every two adjacent pieces of virtual world data, wherein the similarity information is used for representing the similarity between emotion color data of the two corresponding pieces of social comment data, and the emotion color data is obtained by performing semantic recognition on the social comment data;
determining whether each of the similarity information is greater than similarity threshold information;
taking each piece of similarity information larger than the similarity threshold information as target similarity information to obtain at least one piece of target similarity information;
for each piece of target similarity information, discarding the previous virtual world data and reserving the next virtual world data in the two pieces of virtual world data corresponding to the target similarity information;
for each piece of other similarity information except the target similarity information, reserving two pieces of virtual world data corresponding to the other pieces of similarity information;
based on the at least one piece of reserved virtual world data, sorting the at least one piece of virtual world data in order of generation time information from earliest to latest to form an ordered virtual world data representative set;
when the ordered virtual world data representative set comprises a plurality of pieces of virtual world data, dividing the ordered virtual world data representative set into a plurality of ordered virtual world data representative subsets, wherein each ordered virtual world data representative subset comprises a plurality of pieces of virtual world data, and the number of the pieces of virtual world data is the same;
for each ordered virtual world data representative subset, taking the ordered virtual world data representative subset and the subsequent ordered virtual world data representative subset adjacent to it as two adjacent ordered virtual world data representative subsets; for every two adjacent ordered virtual world data representative subsets, acquiring set similarity information between the two adjacent ordered virtual world data representative subsets according to comprehensive similarity information between social comment data in the multiple pieces of virtual world data of the two adjacent ordered virtual world data representative subsets;
determining whether two adjacent ordered representative subsets of virtual world data corresponding to the set similarity information belong to repeated ordered representative subsets or not based on the set similarity information; for each two adjacent ordered virtual world data representative subsets belonging to the repeated ordered representative subsets, discarding the previous ordered virtual world data representative subset in the two adjacent ordered virtual world data representative subsets, and keeping the next ordered virtual world data representative subset in the two adjacent ordered virtual world data representative subsets; for each two adjacent ordered representative subsets of the virtual world data, which do not belong to the repeated ordered representative subsets, keeping the two adjacent ordered representative subsets of the virtual world data;
taking the virtual world data included in each reserved ordered representative subset of the virtual world data as target virtual world data corresponding to the target image identification data, wherein the target virtual world data includes target social comment data;
and generating target augmented reality data based on the target virtual world data and the target real scene image, and sending the target augmented reality data to the first terminal equipment, wherein the first terminal equipment is used for performing display processing based on the target augmented reality data.
2. A cloud platform server, characterized in that the cloud platform server comprises a processor and a memory in communication with each other, wherein the processor invokes and runs a computer program stored in the memory to implement the method of claim 1.
3. A computer-readable storage medium having stored therein instructions that, when executed, cause a computer to perform the method of claim 1.
CN202011632665.6A 2020-12-31 2020-12-31 Image recognition method based on augmented reality, cloud platform server and medium Active CN112488087B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011632665.6A CN112488087B (en) 2020-12-31 2020-12-31 Image recognition method based on augmented reality, cloud platform server and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011632665.6A CN112488087B (en) 2020-12-31 2020-12-31 Image recognition method based on augmented reality, cloud platform server and medium

Publications (2)

Publication Number Publication Date
CN112488087A CN112488087A (en) 2021-03-12
CN112488087B true CN112488087B (en) 2021-08-17

Family

ID=74916023

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011632665.6A Active CN112488087B (en) 2020-12-31 2020-12-31 Image recognition method based on augmented reality, cloud platform server and medium

Country Status (1)

Country Link
CN (1) CN112488087B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116506680B (en) * 2023-06-26 2023-09-19 北京万物镜像数据服务有限公司 Comment data processing method and device for virtual space and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104462363A (en) * 2014-12-08 2015-03-25 百度在线网络技术(北京)有限公司 Aspect displaying method and device
CN107817897A (en) * 2017-10-30 2018-03-20 努比亚技术有限公司 A kind of information intelligent display methods and mobile terminal
CN111225287A (en) * 2019-11-27 2020-06-02 网易(杭州)网络有限公司 Bullet screen processing method and device, electronic equipment and storage medium


Also Published As

Publication number Publication date
CN112488087A (en) 2021-03-12

Similar Documents

Publication Publication Date Title
CN109922355B (en) Live virtual image broadcasting method, live virtual image broadcasting device and electronic equipment
CN108108821A (en) Model training method and device
EP2933780A1 (en) Reality augmenting method, client device and server
US20170068643A1 (en) Story albums
US20170061644A1 (en) Image analyzer, image analysis method, computer program product, and image analysis system
CN112488087B (en) Image recognition method based on augmented reality, cloud platform server and medium
US9734624B2 (en) Deep image data compression
TW201436552A (en) Method and apparatus for increasing frame rate of an image stream using at least one higher frame rate image stream
CN115240043A (en) Data processing method and device, electronic equipment and readable storage medium
US10002458B2 (en) Data plot processing
CN111080781A (en) Three-dimensional map display method and mobile terminal
CN114399595A (en) Automatic image processing method, system and terminal for three-dimensional panoramic digital exhibition hall
US9298744B2 (en) Method and apparatus for ordering images in an image set based on social interactions and viewer preferences
US20170331909A1 (en) System and method of monitoring and tracking online source content and/or determining content influencers
CN108159694B (en) Flexible body flutter simulation method, flexible body flutter simulation device and terminal equipment
CN106021325B (en) Friend recommendation method and device
CN108874269B (en) Target tracking method, device and system
CN107038687B (en) Method and device for generating rarefied image
EP3182332A1 (en) Systems and methods for hair segmentation
CN113099401B (en) Internet of things data transmission method and equipment
CN113438500B (en) Video processing method and device, electronic equipment and computer storage medium
CN110858879A (en) Video stream processing method, device and computer readable storage medium
CN113538642A (en) Virtual image generation method and device, electronic equipment and storage medium
EP3809314A1 (en) 3d object detection from calibrated 2d images background
CN111369612A (en) Three-dimensional point cloud image generation method and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210803

Address after: 200082 4th floor, building 23, No. 1142, Kongjiang Road, Yangpu District, Shanghai

Applicant after: Shanghai dewu Information Technology Co.,Ltd.

Address before: 510700 Guangzhou Zhiwu Internet Technology Co., Ltd., room 220-239, building 1, No. 9, Shenzhou Road, Huangpu District, Guangzhou, Guangdong

Applicant before: Guangzhou smart Internet Technology Co.,Ltd.

GR01 Patent grant