WO2019024853A1 - Image processing method, apparatus, and storage medium - Google Patents
Image processing method, apparatus, and storage medium
- Publication number
- WO2019024853A1 (PCT/CN2018/097860)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- real object
- social network
- image data
- feature
- real
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0277—Online advertisement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/5854—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using shape and object relationship
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/587—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7837—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
- G06F16/784—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content the detected or recognised objects being people
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/953—Querying, e.g. by the use of web search engines
- G06F16/9536—Search customisation based on social or collaborative filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/01—Social networking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T15/205—Image-based rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/94—Hardware or software architectures specially adapted for image or video understanding
- G06V10/95—Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/30—Scenes; Scene-specific elements in albums, collections or shared content, e.g. social network photos or video
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/04—Real-time or near real-time messaging, e.g. instant messaging [IM]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/52—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail for supporting social networking services
Definitions
- The present application relates to the field of image technologies, and in particular, to an image processing method, apparatus, and storage medium.
- Displaying the image of an object in the various scenarios of a social network is a basic function of a client, but the current manner of display is limited. Taking the case where the object is a user as an example, the related technology usually displays the user's image as a virtual character or a self-portrait avatar, which serves to identify the user in the social network. However, this approach is difficult to adapt to users' demand for personalization in social networks and has become a constraint on the diversity of social networks.
- The embodiments of the present application provide an image processing method, apparatus, and storage medium, which can solve the above technical problem and effectively expand the manner in which objects are presented in a social network.
- an image processing method including:
- rendering according to the obtained image data, and rendering the virtual object in the augmented reality model according to a position of the real object in the rendered image, to form the real object and the virtual object that are displayed together.
- an image processing apparatus including:
- An identification module configured to identify a feature of a real object in the environment from the obtained image data
- a querying module configured to query a social network with the feature of the real object, and determine that the real object has an attribute of the social network
- a model module configured to obtain an augmented reality model in the social network that is adapted to the real object
- a rendering module configured to render according to the obtained image data, and to render the virtual object in the augmented reality model according to a position of the real object in the rendered image, to form the real object and the virtual object that are displayed together.
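The four modules above can be sketched as a minimal, purely illustrative pipeline; every class, function, and value below is a hypothetical stand-in, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class ARModel:
    virtual_object: str  # e.g. a hat, glasses, or other dressing prop

class SocialNetwork:
    """Toy stand-in for the social network's feature and model databases."""
    def __init__(self):
        self.feature_db = {}  # feature -> attribute ("registered_user", ...)
        self.model_db = {}    # feature -> ARModel

    def query(self, feature):
        return self.feature_db.get(feature)      # query module's lookup

    def get_model(self, feature):
        return self.model_db.get(feature)        # model module's lookup

def process_frame(frame, feature, position, network):
    attribute = network.query(feature)           # query module
    if attribute is None:
        return frame                             # unknown object: plain render
    model = network.get_model(feature)           # model module
    # rendering module: draw the virtual object at the real object's position
    return f"{frame} + {model.virtual_object}@{position}"

net = SocialNetwork()
net.feature_db["face:alice"] = "registered_user"
net.model_db["face:alice"] = ARModel("rabbit_ears")
print(process_frame("frame_001", "face:alice", (120, 80), net))
```

A real implementation would replace the dictionary lookups with feature matching against the social network's databases, but the module boundaries stay the same.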
- the embodiment of the present application provides a storage medium, where an executable program is stored, and when the executable program is executed by a processor, the image processing method provided by the embodiment of the present application is implemented.
- an image processing apparatus including:
- a memory for storing an executable program; and
- a processor configured to implement the image processing method provided by the embodiment of the present application when executing the executable program stored in the memory.
- In any scenario of the social network, the real object belonging to the social network can be quickly identified from the image data, and the augmented reality model of that real object in the social network is merged into the corresponding scene, forming a real object and a virtual object that are displayed together; this provides a new way of presenting objects in the social network and achieves the effect of combining the virtual with the real.
- Because the augmented reality models of different real objects in the social network have diversified features, applying them to the rendering of image data realizes a differentiated display effect for different objects.
- FIG. 1-1 is a schematic diagram of an optional hardware structure of an image processing apparatus according to an embodiment of the present application;
- FIG. 1-2 is a schematic diagram of an optional functional structure of an image processing apparatus according to an embodiment of the present application;
- FIG. 2 is a schematic structural diagram of an optional system implemented as an AR device according to an embodiment of the present disclosure
- FIG. 3 is a schematic diagram of another optional structure of an image processing apparatus according to an embodiment of the present disclosure.
- FIG. 4 is a schematic flowchart of an optional implementation process of an image processing method according to an embodiment of the present disclosure
- FIG. 5 is a schematic diagram of another optional implementation process of an image processing method according to an embodiment of the present disclosure.
- FIG. 6 is a schematic diagram of facial feature points provided by an embodiment of the present application.
- FIG. 7 is a schematic diagram of an effect of jointly displaying a real object and a virtual object according to an embodiment of the present application.
- FIG. 8 is a schematic diagram of an effect of jointly displaying a real object and a virtual object according to an embodiment of the present application.
- FIG. 9 is a schematic diagram of an effect of jointly displaying a real object and a virtual object according to an embodiment of the present application.
- FIG. 10-1 and FIG. 10-2 are schematic diagrams showing effects of a cartoon character dressing and a custom network virtual character according to an embodiment of the present application;
- FIG. 11 is a schematic diagram of still another optional implementation process of an image processing method according to an embodiment of the present disclosure.
- AR: Augmented Reality.
- Augmented reality technology seamlessly integrates real-world information with information from the virtual world: information (visual, sound, taste, and so on) that is otherwise difficult to experience in the real world is simulated and superimposed by computer technology, so that virtual information is applied to the real world and perceived by the human senses, achieving a sensory experience that blends the virtual and the real.
- A typical implementation calculates the position and posture of a real object in a real image (i.e., a photo or video of a real object in the real world), and then adds an image including a virtual object (such as an image, a video, or a three-dimensional (3D) model) to the real image in three-dimensional space.
- For example, a virtual item can be added based on face positioning to achieve a face-dressing effect; or, when the two-dimensional code of a product is scanned, product information and/or the store and address where the item can be purchased can be displayed near the displayed two-dimensional code, and so on.
- Augmented reality can also realize real-time interaction according to the scene. For example, in a game, fighting actions can be controlled through the gloves or handheld sticks of the AR system; or, in an AR chess game, the pieces can be controlled through the gloves of the AR system, and so on.
- Client: a client installed in a device, or a third-party client in a device, used to support various applications based on social networks and to realize a variety of social functions, such as a video call function or a function for sending pictures.
- HMD: Head-Mounted Display.
- Social network: a network, based on a server deployed on a network (such as a wide area network or a local area network), that supports multiple users communicating with each other through a client (such as QQ or an enterprise IM).
- Image data: a representation of the intensity and spectrum (color) of each point of light from real objects in the environment; according to the light intensity and spectrum information, image information of the real world is converted into data information that is convenient to digitize and analyze.
- Augmented reality model: a digital scene for augmented reality constructed by the image processing apparatus through digital graphics technology, such as a personalized AR dressing in a social network, which may be a hat, glasses, a background image, and the like.
- Real object: the people and objects in real life included in the image data, including natural scenery such as rivers and mountains, human landscapes such as urban and architectural landscapes, and other types of objects.
- Virtual object: when the client renders the image data, a virtual object that does not exist in the environment where the image data was collected needs to be rendered, realizing the fusion of the real object and the virtual object and thereby improving the display effect or enhancing the amount of information; for example, when the real object is a person, the virtual object may be various props and virtual backgrounds for dressing up the person's image, or may be a personal business card.
- Rendering: using a rendering engine to output to the screen a visual image of the real object and the virtual object. For example, appropriate rendering may be performed on an image or video including the real object, such as adding virtual objects appropriate to the current social scene to the user's image or video to create special effects.
- FIG. 1-1 is an optional hardware structure diagram of an image processing apparatus according to an embodiment of the present application.
- The client may run on various devices such as desktop computers, notebook computers, and smartphones. The image processing apparatus 100 shown in FIG. 1-1 includes at least one processor 101, a memory 102, a display component 103, at least one communication interface 104, and a camera 105.
- the various components in image processing device 100 are coupled together by a bus system 106. It will be appreciated that the bus system 106 is used to implement connection communication between these components.
- The bus system 106 includes, in addition to the data bus, a power bus, a control bus, and a status signal bus. However, for clarity of description, the various buses are labeled as the bus system 106 in FIG. 1-1.
- The display component 103 may include a computer display, a mobile phone display screen, a tablet display screen, or the like, for display.
- the communication interface 104 may include an antenna system, Bluetooth, Wireless Fidelity, Near Field Communication (NFC) modules, and/or data lines, and the like.
- The camera 105 may be a standard camera, a telephoto camera, a wide-angle camera, a zoom camera, a digital light field camera, or another digital camera.
- The memory 102 may be a volatile memory or a non-volatile memory, and may also include both volatile and non-volatile memories.
- the memory 102 in the embodiment of the present application is used to store various types of configuration data to support the operation of the image processing apparatus 100.
- Examples of such configuration data include a program for operating on the image processing apparatus 100, such as the client 1021, and an operating system 1022 and a database 1023, wherein the program implementing the method of the embodiment of the present application may be included in the client 1021.
- Processor 101 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the image processing method may be completed by an integrated logic circuit of hardware in the processor 101 or an instruction in a form of software.
- the processor 101 described above may be a general purpose processor, a digital signal processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware component, or the like.
- the processor 101 can implement or execute the methods, steps, and logic blocks provided in the embodiments of the present application.
- a general purpose processor can be a microprocessor or any conventional processor or the like.
- the steps of the method provided by the embodiment of the present application may be directly implemented as a hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
- the software module may be located in a storage medium, and the storage medium is located in the memory 102.
- the processor 101 reads the information in the memory 102 and completes the image processing method provided by the embodiment of the present application.
- The functional structure of the image processing apparatus shown in FIG. 1-1 is now described, taking a software implementation as an example. FIG. 1-2 is an optional functional structure diagram of an image processing apparatus running a client according to an embodiment of the present application, in which the local client and the peer client are opposite concepts; each functional module shown in FIG. 1-2 is described below, and the implementation of these functional modules on the hardware can be understood with reference to FIG. 1-1.
- the identification module 210 is configured to identify features of the real object in the environment from the obtained image data.
- For example, the identification module 210 receives image data that the peer client collects from its environment and transmits over the social network, and identifies features of the real object located in the peer client's environment from the received image data; and/or, it collects image data from the local environment and identifies features of the real object located in the local client's environment from the collected image data.
- In some embodiments, the identification module 210 is specifically configured to: when communicating with a peer client in the social network, collect image data from the local client's environment for transmission to the peer client, and identify the features of the real object in the local client's environment from the collected image data; or, in response to a collection operation of the local client, collect image data from the local client's environment and identify the features of the real object in the local client's environment from the collected image data.
- In some embodiments, the identification module 210 is specifically configured to: before an augmented reality model adapted to the real object is obtained from the social network, determine that the identified features of the real object meet a condition recognizable by the social network, including at least one of the following: when image feature points are identified, the number of identified image feature points exceeds a feature-point quantity threshold; when a biometric feature is identified, the integrity of the identified biometric feature exceeds an integrity threshold.
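The recognizability condition above can be sketched as follows; the threshold values and function names are illustrative assumptions, not values from the patent:

```python
# Illustrative thresholds (assumptions; the patent does not specify values).
FEATURE_POINT_THRESHOLD = 68   # e.g. a common facial-landmark count
INTEGRITY_THRESHOLD = 0.9      # fraction of the biometric that is visible

def is_recognizable(num_feature_points=None, biometric_integrity=None):
    """Return True if at least one of the two conditions is met."""
    if num_feature_points is not None and num_feature_points > FEATURE_POINT_THRESHOLD:
        return True
    if biometric_integrity is not None and biometric_integrity > INTEGRITY_THRESHOLD:
        return True
    return False

# A face with many detected landmarks passes; a half-occluded face does not.
assert is_recognizable(num_feature_points=106)
assert not is_recognizable(biometric_integrity=0.5)
```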
- the query module 220 is configured to query the social network with the characteristics of the real object to determine whether the real object has attributes belonging to the social network.
- The attributes of the social network involved in the embodiments of the present application correspond to the functions carried by the social network, such as media functions (e.g., content aggregation), social functions, e-commerce, and payment; the members involved in implementing these functions are classified by type/function, for example including:
- the payment object attribute, indicating that the member is an account that receives a payment;
- the shared object attribute, also known as the shared item attribute, indicating that the member is an item shared in the social network, such as food, goods, and the like;
- the shared media information attribute, indicating that the member is media information shared in the social network, such as video, audio, mobile games, and various products that have no physical form.
- In some embodiments, the query module 220 is specifically configured to: query a feature database of the social network with the features of the real object; when the real object matches the features of a registered user of the social network, determine that the real object is a registered user of the social network, in which case the real object has the registered-user attribute of the social network; and when the real object matches the features of a shared object of the social network, determine that the real object is a shared object of the social network, in which case the real object has the shared-object attribute of the social network.
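A minimal sketch of this attribute lookup; the `match` parameter is a placeholder for real feature comparison, and all names are hypothetical:

```python
def determine_attribute(feature, user_features, shared_features,
                        match=lambda a, b: a == b):
    """Query the feature database: registered users first, then shared objects."""
    if any(match(feature, f) for f in user_features):
        return "registered_user"
    if any(match(feature, f) for f in shared_features):
        return "shared_object"
    return None  # the real object does not belong to the social network

assert determine_attribute("f1", ["f1"], []) == "registered_user"
assert determine_attribute("f2", ["f1"], ["f2"]) == "shared_object"
assert determine_attribute("f3", ["f1"], ["f2"]) is None
```

In practice `match` would be a similarity test on facial or image feature vectors rather than equality.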
- the model module 230 is configured to obtain an augmented reality model adapted to the real object in the model library of the social network.
- In some embodiments, the model module 230 is specifically configured to: when the real object is a registered user of the social network, obtain a virtual object preset by the registered user in the social network, where the virtual object includes at least one of the following: a virtual item, a virtual background, and a filter; and when the real object is a shared object in the social network, obtain a virtual object in the social network for the shared object, where the virtual object includes at least one of the following: an article about the shared object in the social network; an advertisement for the shared object in the social network.
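The two cases above amount to a simple mapping from the object's attribute to the kind of virtual object returned; the sketch below is illustrative, with hypothetical names and example props:

```python
def select_virtual_objects(attribute, preset_dressing=None, shared_info=None):
    """Pick virtual objects for the AR model based on the object's attribute."""
    if attribute == "registered_user":
        # e.g. virtual items, a virtual background, or filters the user preset
        return preset_dressing or []
    if attribute == "shared_object":
        # e.g. an article about, or an advertisement for, the shared object
        return shared_info or []
    return []  # no attribute: nothing to augment

assert select_virtual_objects("registered_user", ["hat", "glasses"]) == ["hat", "glasses"]
assert select_virtual_objects("shared_object", shared_info=["ad_banner"]) == ["ad_banner"]
```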
- In some embodiments, the model module 230 is specifically configured to: invoke an identification service of the server to identify the features of the real object from the obtained image data; or, open an image recognition thread and identify the obtained image data in the opened image recognition thread to obtain the features of the real object.
- The rendering module 240 is configured to perform rendering according to the obtained image data, and to render the virtual object in the augmented reality model according to the position of the real object in the rendered image, forming a real object and a virtual object that are jointly displayed.
- In some embodiments, the rendering module 240 is specifically configured to: detect a pose change of the real object in the image data; and render, at the position of the real object in the output image, the virtual object in the augmented reality model adapted to the detected pose, forming superimposed real and virtual objects.
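As a hedged illustration of pose-adaptive placement, the 2-D transform below stands in for the full transformation matrix a real implementation would use; the function name and anchoring scheme are assumptions:

```python
import math

def place_virtual_object(anchor_offset, object_position, object_angle_deg):
    """Return where to draw the virtual object so it follows the real
    object's detected position and rotation in this frame."""
    a = math.radians(object_angle_deg)
    ox, oy = anchor_offset      # where the prop sits relative to the object
    px, py = object_position    # detected position of the real object
    # rotate the anchor offset by the object's orientation, then translate
    return (px + ox * math.cos(a) - oy * math.sin(a),
            py + ox * math.sin(a) + oy * math.cos(a))

# a hat anchored 10 px above a face at (100, 100): upright, then tilted 90°
assert place_virtual_object((0, -10), (100, 100), 0) == (100.0, 90.0)
x, y = place_virtual_object((0, -10), (100, 100), 90)
assert round(x) == 110 and round(y) == 100
```

Re-running this per frame against the newly detected pose is what keeps the virtual object "attached" to the moving real object.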
- In some embodiments, the query module 220 is specifically configured to query the local cache or database for an augmented reality model adapted to the real object, and, when none is found, to query the social network for an augmented reality model adapted to the real object.
- FIG. 2 shows an optional structure of the image processing apparatus provided by the embodiment of the present application when implemented as an AR device, and FIG. 3 shows another optional structure of the image processing apparatus provided by the embodiment of the present application. Although the structures of the image processing apparatus are shown separately in FIG. 2 and FIG. 3, it can be understood that the structures shown in FIG. 2 and FIG. 3 can be used in combination to collect image data from the environment and render a composite display effect of the image data and the virtual object. The components involved in FIG. 2 and FIG. 3 are described below.
- The camera is configured to collect image data (an image or a video) of the environment including the real object, and to send the collected image or video to the image synthesizer to be synthesized with the virtual object of the augmented reality model.
- a scene generator is configured to acquire, according to the position information of the real object (for example, the head) in the image data, the virtual object corresponding to that position information in the augmented reality model, and to send the extracted virtual object to the image synthesizer.
- the scene generator is further configured to generate a virtual object according to the location information and send the virtual object to the display, where the virtual object is superimposed on the real object by the image synthesizer.
- the image synthesizer is configured to synthesize the acquired image or video of the real object and the virtual object, and render the composite image or the composite video, and the rendering result is periodically refreshed to the display.
- the display is configured to display the composite image or the composite video sent by the image synthesizer to form a common display effect of the real object and the virtual object of the augmented reality model.
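- The cooperation of the camera, scene generator, image synthesizer, and display described above can be sketched as follows; all function names and data shapes here are illustrative assumptions, not APIs defined by the embodiment:

```python
# Minimal sketch of the camera -> scene generator -> image synthesizer
# -> display pipeline. All names and values are toy stand-ins.
def camera_capture():
    return {"pixels": "frame", "face_pos": (120, 80)}   # toy image data

def scene_generator(image, ar_model):
    # Pick the virtual object matching the real object's position
    return {"sprite": ar_model["sprite"], "anchor": image["face_pos"]}

def image_synthesizer(image, virtual):
    # Superimpose the virtual object on the real image before display
    return (image["pixels"], virtual["sprite"], virtual["anchor"])

def display(composite):
    return "showing " + composite[1]

frame = camera_capture()
virtual = scene_generator(frame, {"sprite": "diving-glasses"})
shown = display(image_synthesizer(frame, virtual))
```

In a real device the synthesizer would periodically refresh its rendering result to the display, as the embodiment describes.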
- FIG. 4 is a schematic flowchart of an optional implementation of the image processing method provided by the embodiment of the present application, in which the image processing device obtains image data formed by collecting the environment including the real object and renders it together with the virtual object of the augmented reality model, involving the following steps:
- Step 501 Obtain image data including a real object.
- Obtaining the image data of the real object is the first step in realizing augmented reality. Only after the real-world image is input into the image processing device, synthesized with the virtual image extracted from the augmented reality model by the image processing device, and output to the above display component can the user see the final enhanced scene image.
- the image data of the real object can be collected by the above-mentioned camera.
- For example, a digital light field camera can acquire complete light field information when shooting a real object such as a person or a natural scene, so that while using the image processing apparatus the user can autofocus wherever the human eye looks; moreover, since the acquired light is the set of rays collected in the real light field, when it is synthesized with the virtual image the result seen through the glasses is indistinguishable from reality. Of course, it is also possible to receive image data collected and transmitted by other image processing devices.
- the image processing apparatus collects image data in a real environment through a camera. Since a real object exists in the real environment, the collected image data includes a real object. In another possible manner, the image data including the real object is acquired by another image processing device and then transmitted to the image processing device of the embodiment, and the image processing device receives the image data.
- Step 502 Detect location information of a real object.
- Virtual objects must be merged at the exact position in the real world. Therefore, the position of real objects in the image data is detected in real time, and the direction of real-object motion may even be tracked, in order to help the system decide which virtual object in the augmented reality model to display and where to display it, and to reconstruct the coordinate system according to the observer's field of view.
- For example, a video detection method recognizes predefined marks, objects, or reference points in a video image according to pattern recognition techniques, and then calculates a coordinate transformation matrix according to their offset and rotation angle.
- The coordinate transformation matrix is used to represent the position information of the real object; alternatively, the angle of the user's head rotation is measured by a gyroscope to determine the position information of the real object, so as to determine how to convert the coordinates and content of the virtual object in the field of view.
- the image processing apparatus may acquire a plurality of image data including the real object, and track the trajectory of the real object motion according to the position and posture change between the image data. To determine the location information of the real object in each image data.
- the position and posture change of each image data can be detected by a gyroscope, or a tracking algorithm can be used to track two adjacent image data.
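- As a sketch of how such a coordinate transformation matrix can be computed from tracked reference points, the following estimates a 2-D rigid transform (rotation plus translation) between two frames with a least-squares Procrustes/Kabsch fit; the function name and point sets are illustrative assumptions, not part of the embodiment:

```python
import numpy as np

def estimate_transform(src, dst):
    """Estimate a 2-D rigid transform (rotation R, translation t) that maps
    tracked reference points `src` (frame k) onto `dst` (frame k+1),
    via the least-squares Procrustes/Kabsch solution."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                           # best-fit rotation
    if np.linalg.det(R) < 0:                 # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s                      # translation
    return R, t

# Toy data: reference points rotated 90 degrees and shifted by (1, 2)
pts = np.array([[0., 0.], [1., 0.], [0., 1.]])
Rot = np.array([[0., -1.], [1., 0.]])
moved = pts @ Rot.T + np.array([1., 2.])
R, t = estimate_transform(pts, moved)
```

The recovered (R, t) plays the role of the coordinate transformation matrix above: it tells the renderer how the reference points moved between frames.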
- Step 503 Obtain a virtual object from the augmented reality model.
- In order to obtain the immersion of the AR device, the display must present a realistic image, simulated and displayed within the augmented reality scene. Therefore, the image processing apparatus obtains a virtual object from the augmented reality model.
- For example, the coordinate transformation matrix from the predefined mark to the mark in the current augmented reality scene is reconstructed, and the image processing device draws and renders the virtual object in the augmented reality model according to this matrix.
- Step 504 Combine the real object and the virtual object into a video or directly display according to the location information.
- For example, the image synthesizer of the image processing device first calculates the affine transformation of the virtual object coordinates to the camera plane according to the position information of the camera and the positioning mark of the real object, then draws the virtual object on the viewing plane according to the affine transformation matrix, and thereby merges the virtual object with the video or photo of the real object and displays it on the display, forming the effect of the real object and the virtual object being displayed together.
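- Applying such an affine transformation to the virtual object's anchor points can be sketched as follows; the 2x3 matrix and sprite corners are hypothetical values, not ones computed by the embodiment:

```python
import numpy as np

def to_camera_plane(points, affine):
    """Map virtual-object anchor points into the camera (viewing) plane
    with a 2x3 affine matrix [A | b]: p' = A @ p + b."""
    pts = np.asarray(points, float)
    A, b = affine[:, :2], affine[:, 2]
    return pts @ A.T + b

# Hypothetical affine: scale by 2 and translate by (10, 20)
affine = np.array([[2., 0., 10.],
                   [0., 2., 20.]])
corners = np.array([[0., 0.], [5., 0.], [5., 5.], [0., 5.]])  # sprite corners
projected = to_camera_plane(corners, affine)
```

The projected corners say where on the viewing plane to draw the virtual object before compositing it with the real-object frame.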
- For example, a virtual object is synthesized with the video or image of a real object and displayed on the call interface of the client, such as over the video or image of a caller.
- Superimposing virtual objects such as hats and glasses in real time greatly enhances the fun of video conversations; or, in the scene of scanning real objects offline with a social network client, the user's personal business card in the social network is displayed together with the image of the real object, enabling seamless access between offline social and online social networking.
- FIG. 5 is a schematic flowchart of another optional implementation of the image processing method provided by the embodiment of the present application. The steps shown are explained below.
- Step 601 The local client obtains image data.
- For example, the image data may be obtained by the user himself or herself by calling the camera, that is, collected from the environment to form image data in the process of the local client; or it may be sent by the peer client, that is, the peer client in the social network collects the environment and transmits the image data to the process of the local client, which then identifies the features of real objects in the environment from the received image data.
- Step 602 The local client identifies the feature of the real object from the obtained image data.
- the real objects may be natural scenes, human scenes, and living objects (including humans) in nature.
- The real object has a variety of feature types, for example image features, including feature points of the face, contour features of objects, texture features, and so on; and biometric features, including voiceprint features, iris features, fingerprint features, and the like.
- For example, the local client captures one or more facial images including the user's face by calling the camera of the host device, and performs face detection on the captured facial images.
- The identification of the feature points proceeds, for example, from recognition in the dimension of shape features: different facial organs are detected by their external contour features, and facial feature points of the different parts of the facial organs are recognized.
- In an embodiment, multiple frames of facial images may also be acquired, and each captured frame is recognized separately to obtain the positions of a plurality of facial feature points in each facial image. For example, the facial feature points include any one or more of eye feature points, nose feature points, lip feature points, eyebrow feature points, and face edge feature points.
- the multi-frame facial image may be continuously captured.
- the facial image may be a continuous multi-frame facial image in the captured video within a specified time period, for example, 1 second or 0.5 second;
- the face image can also be a multi-frame face image that is discretely distributed on the time axis in the captured video.
- Each facial feature point obtained is identified by a digital marker; for example, in FIG. 6, 1 to 20 represent the user's face edge feature points, 21 to 28 and 29 to 36 correspond to the user's left eyebrow feature points and right eyebrow feature points, 37 to 44 and 88 represent the user's left eye feature points, of which 88 is the left eye pupil feature point, and 89 is the right eye pupil feature point among the user's right eye feature points.
- 53 to 65 represent the user's nose feature points.
- 66 to 87 represent the user's lip feature points.
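- The numbered landmarks can be grouped by region for later anchoring of virtual objects; the index groups below follow the FIG. 6 numbering described above, while the function and dictionary names are illustrative:

```python
# Hypothetical index groups following the FIG. 6 numbering (1-based)
LANDMARK_GROUPS = {
    "face_edge": range(1, 21),                  # 1-20
    "left_eye":  list(range(37, 45)) + [88],    # 88 = left pupil
    "nose":      range(53, 66),                 # 53-65
    "lips":      range(66, 88),                 # 66-87
}

def group_landmarks(points):
    """Split a dict {index: (x, y)} of detected feature points into
    named facial regions for later rendering/anchoring."""
    return {name: [points[i] for i in idx if i in points]
            for name, idx in LANDMARK_GROUPS.items()}

points = {i: (float(i), 0.0) for i in range(1, 90)}   # dummy detections
regions = group_landmarks(points)
```

A renderer could then, for instance, anchor virtual glasses to the eye groups and a virtual mouth effect to the lip group.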
- The feature recognition of the real object is described below, taking facial feature recognition as an example; facial feature recognition techniques are generally divided, according to the different criteria and features adopted, into local-feature-based methods and whole-face-based methods.
- the local feature based method may utilize the local geometric features of the face, such as the relative position and relative distance of some facial organs (eyes, nose, mouth, etc.) to describe the face. Its characteristic components usually include Euclidean distance, curvature and angle between feature points, which can achieve an efficient description of the salient features of the face.
- For example, an integral projection method is used to locate the facial feature points, and a multi-dimensional facial feature vector, with the Euclidean distances between feature points as its feature components, is used for classification.
- The feature components mainly include: the vertical distance between the eyebrow and the center of the eye; several description data of the curvature of the eyebrow; the width and vertical position of the nose; and the position of the nostrils and the width of the face. Through the identification of such facial feature information, a 100% correct recognition rate was obtained in the identification process.
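- Building such a feature vector from Euclidean distances can be sketched as follows; the landmark names and coordinates are hypothetical, and normalizing by face width (to make the vector scale-invariant) is an added assumption, not part of the embodiment:

```python
import math

def euclidean(p, q):
    return math.dist(p, q)

def feature_vector(landmarks):
    """Build a small facial feature vector from hypothetical landmark
    positions: Euclidean distances used as feature components, e.g.
    eyebrow-to-eye-center vertical distance and nose width."""
    brow_eye = abs(landmarks["left_brow"][1] - landmarks["left_eye_center"][1])
    nose_w = euclidean(landmarks["nose_left"], landmarks["nose_right"])
    face_w = euclidean(landmarks["face_left"], landmarks["face_right"])
    # Normalize by face width so the vector is scale-invariant
    return [brow_eye / face_w, nose_w / face_w]

landmarks = {
    "left_brow": (30.0, 40.0), "left_eye_center": (32.0, 50.0),
    "nose_left": (45.0, 70.0), "nose_right": (55.0, 70.0),
    "face_left": (10.0, 60.0), "face_right": (90.0, 60.0),
}
vec = feature_vector(landmarks)
```

Such vectors can then be compared (for example with a Euclidean distance) to classify faces.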
- the local feature based method may also be an empirical description of the general characteristics of the facial features.
- facial images have some obvious basic features.
- facial regions usually include facial features such as eyes, nose and mouth, and their brightness is generally lower than that of the surrounding area; the eyes are roughly symmetrical, and the nose and mouth are distributed on the axis of symmetry.
- In the embodiment of the present application, the local-feature-based method is not limited to the above-described integral projection method and prior-rule method; other local-feature-based techniques may also be used.
- The whole-face-based method treats the face image as a whole and performs transformation processing on it to identify features; it considers the overall attributes of the face while preserving the topological relationships among the facial parts as well as the information of the parts themselves.
- For example, the subspace analysis method finds a linear or nonlinear spatial transformation according to a certain target and compresses the original high-dimensional data into a low-dimensional subspace, so that the distribution of data in this subspace is more compact and the computational complexity is reduced.
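- A minimal linear subspace-analysis sketch, using principal component analysis via SVD (the toy data and dimensions are assumptions; real systems would use face images rather than random vectors):

```python
import numpy as np

def pca_subspace(X, k):
    """Project samples X (n_samples x n_features) onto the k leading
    principal components -- a linear subspace-analysis sketch."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data gives the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T            # compact low-dimensional representation

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 64))       # 20 faces, 64-dim toy features
Z = pca_subspace(X, k=8)
```

Recognition can then operate on the 8-dimensional projections instead of the original 64-dimensional data, reducing computational complexity as the text describes.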
- Alternatively, a set of rectangular mesh nodes may be placed on the facial image; the features of each node are described by multi-scale wavelet features at the node, and the connection relationships between nodes are represented by geometric distances, thereby forming a face representation with a two-dimensional topological graph. In the face recognition process, recognition is performed based on the similarity between the nodes and connections of the two graphs.
- In the embodiment of the present application, the whole-face-based method is not limited to the subspace analysis method and the elastic graph matching method described above; neural-network-based methods may also be used.
- The feature recognition of the image data may be divided into the following two manners according to the different execution subjects performing the recognition:
- Manner 1 The local client invokes the identification service of the server, and sends image data to the server's identification service.
- the server identifies the feature of the real object from the obtained image data, and returns to the local client.
- The first manner is especially applicable when the computing resources of the local client are limited, and it can effectively reduce the computing resources consumed by the local client for feature recognition and the delay caused thereby.
- For example, the identification service of the server may be invoked; the feature of the real object is identified from the obtained image data by the server's identification service and returned to the local client.
- Manner 2 The image recognition thread is opened in the process of the local client, and the obtained image data is identified in the opened image recognition thread to obtain the feature of the real object.
- For example, the image recognition thread can be opened in the process of the client on the host device.
- The feature recognition operation is completed by the client's host device itself. Because during recognition the user may still be listening to music, playing a game, or running a video process, an image recognition thread may be opened within the client's process so as not to occupy the resources of other applications.
- The number of threads to open can be determined according to the computational complexity of the recognition (such as the frame rate of the video or the resolution of the photo). If the computational complexity is low, only a relatively small number of threads need to be opened; if it is high, multiple threads can be opened.
- the obtained image data is identified in the opened image recognition thread to obtain the feature of the real object, thereby ensuring that the feature information of the recognized image data is normally performed, and also avoids interrupting processes or threads of other applications.
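- Running recognition in a dedicated thread so other activities in the process are not blocked can be sketched as follows; `recognize` is a stand-in for the real feature-recognition routine, and the queue-based structure is an illustrative assumption:

```python
import threading, queue

def recognize(frame):
    """Stand-in for the real feature-recognition routine (assumption)."""
    return {"frame": frame, "feature_points": 120}

def recognition_worker(frames, results):
    # Runs inside a dedicated image-recognition thread so recognition
    # does not block music, games, or video in the same process.
    while True:
        frame = frames.get()
        if frame is None:          # sentinel: stop the worker
            break
        results.put(recognize(frame))

frames, results = queue.Queue(), queue.Queue()
t = threading.Thread(target=recognition_worker, args=(frames, results))
t.start()
frames.put("frame-0")
frames.put(None)
t.join()
out = results.get()
```

With higher computational complexity, several such worker threads could consume the same frame queue, matching the thread-count discussion above.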
- In an embodiment, before querying the social network, it is determined whether the identified feature of the real object satisfies a condition for recognizing the real object, the condition including at least one of the following: when image feature points are recognized, the number of identified image feature points exceeds a feature-point quantity threshold; when a biometric feature is identified, the integrity of the identified biometric feature exceeds an integrity threshold. If the condition is met, the subsequent steps are performed; otherwise, the process returns to step 601 until a feature satisfying the condition is obtained.
- In the process of collecting image data, any of the following may occur: 1) a dark environment; 2) the real object is in motion; 3) the camera is in motion; 4) the feature parts of the real object are occluded, for example, when the user photographs his or her own face, most of the face is blocked.
- In these cases the acquired feature information may be insufficient to complete the subsequent operations. Therefore, before the social network is queried with the feature of the real object, the quantity or integrity of the corresponding feature information is checked, which avoids consuming the computing resources of the social network on a query based on incomplete features.
- For example, suppose successful collection requires no fewer than 100 feature points for each of the eyes, nose, and mouth. If the light is too dark, or the user and the camera are in relative motion, or most of the user's face is occluded, then after duplicate and invalid feature points are removed from the captured image, the eyes, nose, and mouth may each fall below 100 feature points; the collection fails and needs to be repeated. Otherwise, the next step can be performed.
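- The per-organ sufficiency check from the example above can be sketched as a small predicate; the threshold of 100 comes from the example, while the function and dictionary names are illustrative:

```python
# Hypothetical per-organ threshold taken from the example above
MIN_POINTS = 100

def collection_succeeded(counts):
    """Return True when every organ has at least MIN_POINTS valid
    feature points after duplicate/invalid points are removed."""
    return all(counts.get(organ, 0) >= MIN_POINTS
               for organ in ("eyes", "nose", "mouth"))

ok = collection_succeeded({"eyes": 130, "nose": 115, "mouth": 102})
dark = collection_succeeded({"eyes": 40, "nose": 115, "mouth": 102})
```

Only when the check passes would the client proceed to query the social network, avoiding queries based on incomplete features.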
- Step 603 The local client queries the social network according to the characteristics of the real object to determine whether the real object belongs to the social network. If yes, step 604 is performed; otherwise, the process returns to step 601.
- In an embodiment, the feature database running on a server in the social network is searched for preset feature information matching the feature of the real object; if such information is stored, the real object is determined to belong to the social network; if not, it does not belong to the social network.
- For example, the user selects the "QQ-AR" function option to take an image of himself or another user. The QQ client collects the feature information of the user's face in the photo and checks, according to this feature information, whether the user's feature information exists in the social network. If the user has uploaded his or her image in advance, the preset feature information of the user's face is pre-stored in the social network, so the user's preset feature information can be found and it is determined that the user belongs to the social network; if the user has not uploaded his or her own image, it is determined that the user does not belong to the social network.
- In an embodiment, for different types of real objects, the corresponding features are used to query the social network through its feature databases, such as the feature database of registered users, or that of texture features of shared objects, graphic codes, and the like. According to the query result, the following two scenarios can be distinguished.
- Scenario 1 The type of the queried object is a registered user of the social network.
- the feature database of the social network is queried with the feature of the real object; when the real object matches the feature of the registered user of the social network, the real object is determined to be a registered user belonging to the social network.
- For example, when the user of the local client captures an image of himself or another user, the local client obtains image data about the person and queries the feature database in the network according to the features in the image data. If an image matching the user in the image data is pre-stored, it may be determined that the user is a registered user of the social network, and the ID of the registered user in the social network is obtained.
- That is, the image features of registered users are stored in the feature database in advance, while those of unregistered users are not. Therefore, whether a user is a registered user of the social network can be determined according to whether the features in the user's image data are stored in the feature database.
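- The lookup against the feature database can be sketched as follows; the in-memory dictionary, the tolerance-based matcher, and the ID format are all toy stand-ins for the social network's real feature database and matching service:

```python
# Toy in-memory stand-in for the social network's feature database
FEATURE_DB = {
    "user-42": [0.12, 0.87, 0.45],   # preset features of a registered user
}

def match(a, b, tol=0.05):
    return all(abs(x - y) <= tol for x, y in zip(a, b))

def lookup_registered_user(feature):
    """Return the social-network ID whose preset features match the
    identified feature, or None when the object is not registered."""
    for user_id, preset in FEATURE_DB.items():
        if match(feature, preset):
            return user_id
    return None

uid = lookup_registered_user([0.11, 0.88, 0.44])
stranger = lookup_registered_user([0.9, 0.1, 0.1])
```

A returned ID then lets the client fetch the user's augmented reality model, as the following steps describe; a `None` result corresponds to an unregistered object.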
- Scenario 2 The type of the queried object is the shared object in the social network.
- the feature database of the social network is queried with the feature of the real object; when the real object matches the feature of the shared object of the social network, the real object is determined to be the shared object of the social network.
- For example, the client acquires image data about the real object and obtains feature information about it, such as a product QR code or the silhouette of a scenic spot. If matching preset feature information is found according to the obtained features, the real object may be determined to be a shared object of the social network, the ID of the shared object in the social network is obtained, and related shared content in the social network can be queried based on the ID.
- A common application: when the user sees a product shared by another user on the social network but does not know where to buy it, he or she only needs to scan the product's QR code or appearance. After the scan is completed, the store can be jointly displayed in an AR manner on the image processing device screen or the HMD, together with the store's address information, where the address information may be an actual address or a network address, such as that of an e-commerce site, from which the purchase can be made.
- Step 604 The local client obtains an augmented reality model adapted to the real object from the social network.
- the virtual object in the augmented reality model preset by the registered user in the social network is obtained, and the virtual object may be used to implement a dressing effect, for example, including At least one of the following: virtual props, virtual backgrounds, and filters.
- The above filter may be a built-in filter or an external (plug-in) filter. Of course, the virtual object can also achieve an information display effect, such as displaying a user's business card in the social network or an index of shared information.
- For example, by identifying and matching the facial features of the user, the server of the social network finds an image matching the user's facial features in the social network, obtains the corresponding ID in the social network through the matched image, and finds the associated augmented reality model according to the ID as the adapted augmented reality model.
- The augmented reality model of a registered user of the social network may be a randomly assigned personal business card that at least displays the registered user in the network, and may also be personalized according to the user's settings.
- the virtual object when the real object is a shared object in the social network, obtaining a virtual object in the social network for the shared object, the virtual object includes at least one of: a social network for the shared object Article; an advertisement for a shared object in a social network.
- For example, the user can aim "QQ-AR" at an item or a scenic spot; an animation of the item or spot being scanned then appears on the screen, and when the animation ends the product or spot has been scanned successfully. Then, according to the package, shape, barcode, or QR code of the product, an article or advertisement associated with the item, or the store and address where it can be purchased, is found; or, based on information such as the features, shape, and location of the scenic spot, an associated article or advertisement is found.
- In an embodiment, a solution for buffering the augmented reality model in the cache of the local client is provided. For example, for the user of the local client, the social network computes potential friends, users of interest, or products, and pre-pushes the corresponding augmented reality models to the local client for caching, to speed up the rendering of virtual objects and avoid delay.
- When the local client queries for an augmented reality model, a prioritized order is used, involving the following two different query results:
- Method 1 The augmented reality model is stored in the cache of the client or the database of the host device.
- Before obtaining an augmented reality model adapted to the real object from the social network, the cache of the client or the database of the host device is first queried, using the ID of the real object in the social network, for an augmented reality model adapted to the real object.
- In this way, when the local client has already stored the corresponding augmented reality model, it is not necessary to request the social network each time, and the rendering speed of the virtual object in the augmented reality model can be increased to minimize delay.
- For example, after the user aims the camera at himself and captures an image or a video, the facial feature parameters of the user are obtained. The client queries the cache according to the feature parameters for a previously used augmented reality model, for example a personalized AR dress-up that was set; if found, the augmented reality model is obtained from the cache, thus improving the efficiency of acquiring the augmented reality model.
- Method 2 The augmented reality model is stored on the server of the social network.
- When the augmented reality model of the real object is found in neither the cache nor the database of the host device of the local client, the server of the social network is queried, by the ID of the real object, for the stored augmented reality model of the real object.
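- The two methods combine into a tiered lookup: cache first, then the host-device database, then the social-network server. The sketch below assumes toy stores and a stand-in for the network call; the names are illustrative:

```python
# Toy three-tier lookup: client cache, host-device database, then the
# social-network server (stores and names are illustrative assumptions).
cache, database = {}, {"obj-7": "business-card-model"}

def server_fetch(object_id):
    return "server-model-for-" + object_id     # stand-in for a network call

def get_ar_model(object_id):
    """Resolve an AR model with Method 1 first (cache, then database)
    and fall back to Method 2 (the social-network server)."""
    if object_id in cache:
        return cache[object_id]
    if object_id in database:
        cache[object_id] = database[object_id]  # promote for next time
        return database[object_id]
    model = server_fetch(object_id)
    cache[object_id] = model
    return model

local = get_ar_model("obj-7")       # served from the host-device database
remote = get_ar_model("obj-9")      # falls back to the server
```

Caching the server result locally is what lets later renders of the same object skip the network round trip, minimizing delay as described above.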
- Step 605 The local client performs rendering according to the obtained image data.
- Step 606 The local client renders the virtual object in the augmented reality model according to the position of the real object in the rendered image, and forms a real object and a virtual object that are commonly displayed.
- Method 1 Devices such as smartphones and computers.
- The client installed in a device such as a smartphone or computer acquires the augmented reality model, synthesizes it with the real object carried in the image data transmitted during instant communication, and displays the synthesized video or image on the smartphone screen or computer screen.
- Method 2 VR glasses, based on the display mode of a transmissive HMD using video synthesis technology.
- Real-world video or images are acquired by the camera, then the generated or acquired virtual object is synthesized with the real-world video or image and correspondingly rendered, and the result is displayed on the display through the HMD.
- For example, the user conducts a video chat with other users of the social network through the local client and receives image data from the peer client (carrying images of other users). The local client performs facial feature recognition 71 on the image data, identifies that the user is a registered user of the social network, and queries the social network for the augmented reality model predetermined by that user, an AR dress-up of diving glasses; in the rendering process, according to the relative position of the AR glasses dress-up and the user's eyes, the diving glasses 72 are rendered in front of the user's eyes.
- As another example, the local client performs video collection of the environment where the host device is located, including collecting image data of the face in the environment, performs facial feature recognition 81, and identifies the user of the local client as a registered user of the social network. It queries the social network for the predetermined augmented reality model for AR, including the water-wave background 83 and the diving glasses 82. According to the relative positions of the diving glasses 82, the virtual background 83 and the user's eyes, and the hierarchical relationship between the background 83 and the user, the virtual background 83 is placed on a layer beneath the user, preventing the background 83 from occluding the user.
- As another example, the user uses the scanning function of the local client to call the camera of the host device to scan the face of a newly met friend, that is, to collect image data of the face in the environment, and performs facial feature recognition 91. The newly met friend is identified as a registered user of the social network, and the predetermined augmented reality model, an AR dress-up, is queried. In the interface of the local client displaying the face, the personalized dress-up of the rabbit face 92 and the mouth-opening action 93 is rendered according to the face position; after synthesis, the user appears with the rabbit's ears on the head and the mouth open.
- In an embodiment, the local client detects a pose change of the real object in the image data, where the pose change may be a change in the relative position between the user and the client device, or a change in angle, for example a change in the side-view, top-view, or upward-view angle between the user and the client.
- Then, the virtual object adapted to the pose change in the augmented reality model is rendered, and the superimposed real object and virtual object are formed, ensuring seamless fusion of the real object and the virtual object.
- For example, when the local client detects from the scanned image data that the user's position has moved, it uses the AR software development kit (SDK) of a device such as an HMD or mobile phone to track and match the rendered real object. That is, as the real object moves, or the distance and angle between the local client and the real object change, the corresponding widgets and background of the augmented reality model also undergo corresponding rendering changes, thereby forming a better augmented reality effect.
- FIG. 11 is a schematic flowchart of another optional implementation of the image processing method according to the embodiment of the present application, involving the client, the face recognition server, and the social dressing server, and including the following steps:
- Step 801 The client performs an acquisition operation.
- For example, the client can acquire an image containing a face and perform a feature-extraction operation on the image to obtain the feature points contained in it.
- the user to be scanned here is referred to as user C.
- Step 802 The client determines whether there are enough feature points; if yes, step 803 is performed; otherwise, step 802 is repeated.
- The number of collected feature points is obtained, and it is determined whether it exceeds the feature-point quantity threshold. If it does, the number of feature points is sufficient and the face scan succeeds; if not, the number of feature points is insufficient and collection needs to continue.
- Step 803 The client detects whether there is a cache of the AR dressing in the local area. If yes, step 804 is performed; if not, step 805 is performed.
- Step 804 The client displays an AR picture or video.
- The AR picture or video is: a composite picture of the AR dress-up and the image taken by the user, or a composite video of the AR dress-up and the video taken by the user.
- After obtaining the AR dress-up, the client combines it with the image or video captured by the user to obtain an AR picture or video, achieving the effect of adding a dress-up to the user in the AR picture or video.
- Step 805 The client uploads a photo to the face recognition server.
- Face recognition is performed at the face recognition server, which carries out a matching operation against the images stored in the face recognition server based on the recognized result.
- Step 806 The face recognition server identifies that the matching is successful.
- Scenario 1 User C is the user who uses the client and has not set an AR dress-up.
- Scenario 2 User C is another user.
- Step 807 The face recognition server acquires a social network account.
- The social network account may be a QQ number, a WeChat ID, or another IM account.
- The social dressing server stores the personalized dress-up corresponding to each social network account. After the face recognition server recognizes a registered user, it obtains the registered user's social network account, so that the social dressing server can pull the personalized dress-up through that account.
- Step 808 The face recognition server sends a request to the social dressing server to pull the personalized dressing model.
- the request carries the acquired social network account.
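Steps 806 to 810 can be sketched as follows, with in-memory dicts standing in for the face recognition server's user database and the social dressing server's model store; every name and record here is an assumption for illustration.

```python
# Illustrative sketch of the server-side flow: match the face feature to a
# registered user, obtain the social network account, and pull the
# personalized dressing model keyed by that account.
registered_users = {"face-feature-abc": "qq:12345"}   # feature -> social account
dressing_store = {"qq:12345": {"model": "cat-ears"}}  # account -> dressing model

def pull_personalized_model(face_feature):
    account = registered_users.get(face_feature)  # steps 806-807: match, get account
    if account is None:
        return None                               # no registered user matched
    # steps 808-809: the request to the social dressing server carries the account
    return dressing_store.get(account)            # None if the user set no dressing
```

If the feature matches no registered user, or the matched user has not set an AR dressing, the lookup returns nothing and the operation ends, mirroring the description above.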
- Step 809 The social dressing server pulls the personalized dressing model.
- the face recognition server obtains the personalized dressing model from the social dressing server, and the corresponding personalized dressing model is then recommended to the client through the social dressing server.
- for example, the face recognition server obtains the AR dressing set by user C from the social dressing server, and the corresponding personalized dressing model is recommended to the client through the social dressing server. If the user has not set an AR dressing, the operation ends.
- Step 810 Send the personalized dressing model to the client.
- Step 811 The client loads the model using the local AR SDK.
- the client uses the AR SDK of a device such as an HMD or mobile phone to track and match the displayed content and graphics, so that the personalized dressing follows the user's motion and its rendering changes accordingly, forming a better augmented reality effect.
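A simplified sketch of how the dressing can follow the tracked face in step 811; the (x, y, scale, angle) pose tuple and the anchor offset are assumptions for illustration, not the API of any particular AR SDK.

```python
# Each frame, the AR SDK reports the face pose; the virtual item is re-placed
# from that pose so the dressing follows the user's motion. The pose format
# (x, y, scale, angle) and the anchor offset are illustrative assumptions.
def place_dressing(face_pose, anchor_offset=(0.0, -20.0)):
    x, y, scale, angle = face_pose
    ox, oy = anchor_offset
    # e.g. a hat anchored above the face center, scaled and rotated with it
    return (x + ox * scale, y + oy * scale, scale, angle)

# As the tracked face moves between frames, the overlay pose updates with it.
poses = [(100.0, 200.0, 1.0, 0.0), (110.0, 205.0, 2.0, 5.0)]
overlay = [place_dressing(p) for p in poses]
```

Re-evaluating the placement every frame from the latest tracked pose is what produces the "dressing follows the user's motion" effect described above.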
- Scene 1 Online Social - Instant Video Chat, AR Dress Up
- when the user of the local client uses instant messaging (including QQ, WeChat, etc.) to video chat with a peer user (such as friends and relatives), the user invokes the camera in the local client to capture video or images in real time, thereby obtaining video or image parameters of the real object. To highlight personalization and enliven the chat atmosphere, corresponding virtual objects, such as a personalized dressing and a virtual background, are added before the video or image is captured (or during the capture process).
- when the user of the peer client uses the camera to take a video or photo, that user can likewise dress up the captured video or picture, or directly transfer the captured video or photo to the local user, who then performs the dressing operation described above.
- the AR dressing described above can be replaced with other information about the user in the social network, such as a personal business card, including the user's account in the social network, a graphic code, and the like.
- Scenario 2 Online Social - Realizing AR Dress Up During Video Transmission
- either user may send a funny or good-looking video or photo they have taken to the other party. For example, when a local user (or the peer user) takes a photo of a meal, the instant messaging client identifies the features of the real object in the photo, matches a corresponding dressing according to the identified features, and then sends the photo with the matching dressing added to the peer user (or local user).
- instant messaging includes QQ, WeChat, etc.
- Scenario 3 Offline Social - Client scans other users
- a user of the local client, such as mobile QQ, clicks the "Scan" option on the client, selects "QQ-AR", and aims the camera at the face of the user to be scanned; an animation of the real object being scanned is then displayed on the screen.
- when the animation ends, the scan is successful: the feature of the real object in the environment has been identified from the collected image data, the corresponding ID is retrieved from the social network by querying with the feature, and the AR dressing preset by that user is pulled according to the queried ID.
- the client then instantly forms the dressing effect on the scanned person's face.
- Scene 4 Offline Social - Client scans the user himself
- a user of the local client, such as mobile QQ, uses the camera to aim at the user's own face through "QQ-AR"; an animation of the face being scanned appears on the screen, and when the animation finishes, the face has been scanned successfully.
- at least one personalized dressing can be selected at the bottom of the screen. After the user selects a personalized dressing he or she likes, the personalized dressing is applied on the screen.
- the personalized dressing can be a virtual item, a virtual background, a filter, and the like.
- the virtual item can be a hat, glasses, or another facial pendant.
- augmented reality models for different real objects in the social network have diversified forms as needed, such as AR-style dressings and social business cards, so that when they are applied to image data rendering, differentiated display effects are achieved for different objects.
- the client identifies the feature from the image data either locally or by invoking the server's recognition service, depending on the situation, which helps reduce delay and achieves synchronized display of the real object and the virtual object.
- the virtual object can thus be displayed on the client in time, avoiding the problem of the real object and the virtual object being displayed out of sync due to network delays.
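The local-or-server recognition choice described above can be sketched as follows; the capability flag and both callbacks are hypothetical names introduced only for illustration.

```python
# Hedged sketch: recognize on-device when possible (lower latency, keeps the
# real and virtual objects displayed in sync), otherwise invoke the server's
# recognition service. The flag and callbacks are illustrative assumptions.
def identify_features(image_data, device_can_recognize, local_recognizer, remote_service):
    if device_can_recognize:
        return local_recognizer(image_data)   # on-device image recognition thread
    return remote_service(image_data)         # server-side recognition service
```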
Claims (23)
- An image processing method, applied to an image processing apparatus, comprising: identifying a feature of a real object in an environment from obtained image data; querying a social network with the feature of the real object to determine that the real object has an attribute of the social network; obtaining an augmented reality model in the social network adapted to the real object; and rendering according to the obtained image data, and rendering a virtual object in the augmented reality model according to a position of the real object in the rendered image, to form the real object and the virtual object displayed together.
- The method according to claim 1, wherein identifying the feature of the real object in the environment from the obtained image data comprises: receiving image data collected from an environment and transmitted by a peer client in the social network, and identifying, from the received image data, a feature of a real object located in the peer client's environment; and/or collecting an environment to form image data, and identifying, from the collected image data, a feature of a real object located in the local client's environment.
- The method according to claim 2, wherein identifying the feature of the real object located in the local client's environment from the collected image data comprises: when communicating with a peer client in the social network, collecting the local client's environment to form image data for transmission to the peer client, and identifying the feature of the real object in the local client's environment from the collected image data; or, when responding to a collection operation of the local client, collecting the local client's environment to form image data, and identifying the feature of the real object in the local client's environment from the collected image data.
- The method according to claim 1, further comprising: before obtaining the augmented reality model in the social network adapted to the real object, determining that the identified feature of the real object satisfies a condition recognizable by the social network, the condition comprising at least one of: when image feature points are identified, the number of identified image feature points exceeds a feature point data amount threshold; when a biometric feature is identified, the completeness of the identified biometric feature exceeds a completeness threshold.
- The method according to claim 1, wherein querying the social network with the feature of the real object to determine that the real object has an attribute of the social network comprises: querying a feature database of the social network with the feature of the real object; when the real object matches a feature of a registered user of the social network, determining that the real object has a registered user attribute of the social network; and when the real object matches a feature of a shared object of the social network, determining that the real object has a shared object attribute of the social network.
- The method according to claim 1, wherein obtaining the augmented reality model in the social network adapted to the real object comprises: when the real object is a registered user of the social network, obtaining a virtual object preset by the registered user in the social network, the virtual object comprising at least one of: a virtual prop, a virtual background, and a filter; when the real object has a shared object attribute of the social network, obtaining a virtual object in the social network for the shared object, the virtual object comprising at least one of: an article in the social network about the shared object; an advertisement in the social network about the shared object.
- The method according to claim 1, wherein identifying the feature of the real object in the environment from the obtained image data comprises: invoking a recognition service of a server to identify the feature of the real object in the environment from the obtained image data; or starting an image recognition thread, and recognizing the obtained image data in the started image recognition thread to obtain the feature of the real object in the environment.
- The method according to claim 1, wherein rendering the virtual object in the augmented reality model according to the position of the real object in the rendered image comprises: detecting a pose change of the real object in the image data; and at the position of the real object in the output image, rendering and outputting a virtual object in the augmented reality model adapted to the pose change.
- The method according to claim 1, wherein obtaining the augmented reality model adapted to the real object from the social network comprises: querying, at the local client, for an augmented reality model adapted to the real object; and when none is found, querying the social network to obtain the augmented reality model adapted to the real object.
- An image processing apparatus, comprising: an identification module, configured to identify a feature of a real object in an environment from obtained image data; a query module, configured to query a social network with the feature of the real object to determine that the real object has an attribute of the social network; a model module, configured to obtain an augmented reality model in the social network adapted to the real object; and a rendering module, configured to render according to the obtained image data, and to render a virtual object in the augmented reality model according to a position of the real object in the rendered image, to form the real object and the virtual object displayed together.
- The apparatus according to claim 10, wherein the identification module is specifically configured to: receive image data collected from an environment and transmitted by a peer client in the social network, and identify, from the received image data, a feature of a real object located in the peer client's environment; and/or collect an environment to form image data, and identify, from the collected image data, a feature of a real object located in the local client's environment.
- The apparatus according to claim 11, wherein the identification module is specifically configured to: when communicating with a peer client in the social network, collect the local client's environment to form image data for transmission to the peer client, and identify the feature of the real object in the local client's environment from the collected image data; or, when responding to a collection operation of the local client, collect the local client's environment to form image data, and identify the feature of the real object in the local client's environment from the collected image data.
- The apparatus according to claim 10, wherein the identification module is further configured to: before the augmented reality model in the social network adapted to the real object is obtained, determine that the identified feature of the real object satisfies a condition recognizable by the social network, the condition comprising at least one of: when image feature points are identified, the number of identified image feature points exceeds a feature point data amount threshold; when a biometric feature is identified, the completeness of the identified biometric feature exceeds a completeness threshold.
- An image processing apparatus, comprising: a memory configured to store an executable program; and a processor configured to implement the following operations when executing the executable program stored in the memory: identifying a feature of a real object in an environment from obtained image data; querying a social network with the feature of the real object to determine that the real object has an attribute of the social network; obtaining an augmented reality model in the social network adapted to the real object; and rendering according to the obtained image data, and rendering a virtual object in the augmented reality model according to a position of the real object in the rendered image, to form the real object and the virtual object displayed together.
- The apparatus according to claim 14, wherein the processor is further configured to implement the following operations when executing the executable program: receiving image data collected from an environment and transmitted by a peer client in the social network, and identifying, from the received image data, a feature of a real object located in the peer client's environment; and/or collecting an environment to form image data, and identifying, from the collected image data, a feature of a real object located in the local client's environment.
- The apparatus according to claim 15, wherein the processor is further configured to implement the following operations when executing the executable program: when communicating with a peer client in the social network, collecting the local client's environment to form image data for transmission to the peer client, and identifying the feature of the real object in the local client's environment from the collected image data; or, when responding to a collection operation of the local client, collecting the local client's environment to form image data, and identifying the feature of the real object in the local client's environment from the collected image data.
- The apparatus according to claim 14, wherein the processor is further configured to implement the following operations when executing the executable program: before obtaining the augmented reality model in the social network adapted to the real object, determining that the identified feature of the real object satisfies a condition recognizable by the social network, the condition comprising at least one of: when image feature points are identified, the number of identified image feature points exceeds a feature point data amount threshold; when a biometric feature is identified, the completeness of the identified biometric feature exceeds a completeness threshold.
- The apparatus according to claim 14, wherein the processor is further configured to implement the following operations when executing the executable program: querying a feature database of the social network with the feature of the real object; when the real object matches a feature of a registered user of the social network, determining that the real object has a registered user attribute of the social network; and when the real object matches a feature of a shared object of the social network, determining that the real object has a shared object attribute of the social network.
- The apparatus according to claim 14, wherein the processor is further configured to implement the following operations when executing the executable program: when the real object is a registered user of the social network, obtaining a virtual object preset by the registered user in the social network, the virtual object comprising at least one of: a virtual prop, a virtual background, and a filter; when the real object has a shared object attribute of the social network, obtaining a virtual object in the social network for the shared object, the virtual object comprising at least one of: an article in the social network about the shared object; an advertisement in the social network about the shared object.
- The apparatus according to claim 14, wherein the processor is further configured to implement the following operations when executing the executable program: invoking a recognition service of a server to identify the feature of the real object in the environment from the obtained image data; or starting an image recognition thread, and recognizing the obtained image data in the started image recognition thread to obtain the feature of the real object in the environment.
- The apparatus according to claim 14, wherein the processor is further configured to implement the following operations when executing the executable program: detecting a pose change of the real object in the image data; and at the position of the real object in the output image, rendering and outputting a virtual object in the augmented reality model adapted to the pose change.
- The apparatus according to claim 14, wherein the processor is further configured to implement the following operations when executing the executable program: querying, at the local client, for an augmented reality model adapted to the real object; and when none is found, querying the social network to obtain the augmented reality model adapted to the real object.
- A storage medium storing an executable program which, when executed by a processor, implements the image processing method according to any one of claims 1 to 9.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020505230A JP7098120B2 (ja) | 2017-08-04 | 2018-08-01 | 画像処理方法、装置及記憶媒体 |
KR1020207004027A KR102292537B1 (ko) | 2017-08-04 | 2018-08-01 | 이미지 처리 방법 및 장치, 및 저장 매체 |
US16/780,891 US11182615B2 (en) | 2017-08-04 | 2020-02-03 | Method and apparatus, and storage medium for image data processing on real object and virtual object |
US17/478,860 US20220004765A1 (en) | 2017-08-04 | 2021-09-17 | Image processing method and apparatus, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710661746.0 | 2017-08-04 | ||
CN201710661746.0A CN108305317B (zh) | 2017-08-04 | 2017-08-04 | 一种图像处理方法、装置及存储介质 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/780,891 Continuation US11182615B2 (en) | 2017-08-04 | 2020-02-03 | Method and apparatus, and storage medium for image data processing on real object and virtual object |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019024853A1 true WO2019024853A1 (zh) | 2019-02-07 |
Family
ID=62872576
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/097860 WO2019024853A1 (zh) | 2017-08-04 | 2018-08-01 | 一种图像处理方法、装置及存储介质 |
Country Status (6)
Country | Link |
---|---|
US (2) | US11182615B2 (zh) |
JP (1) | JP7098120B2 (zh) |
KR (1) | KR102292537B1 (zh) |
CN (1) | CN108305317B (zh) |
TW (1) | TWI708152B (zh) |
WO (1) | WO2019024853A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112672185A (zh) * | 2020-12-18 | 2021-04-16 | 脸萌有限公司 | 基于增强现实的显示方法、装置、设备及存储介质 |
WO2021079829A1 (ja) * | 2019-10-25 | 2021-04-29 | Necソリューションイノベータ株式会社 | 表示装置、イベント支援システム、表示方法、及びイベント支援システムの生産方法 |
Families Citing this family (40)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI439960B (zh) | 2010-04-07 | 2014-06-01 | Apple Inc | 虛擬使用者編輯環境 |
KR101988319B1 (ko) * | 2013-09-09 | 2019-06-12 | 엘지전자 주식회사 | 이동 단말기 및 이의 제어 방법 |
US9912860B2 (en) | 2016-06-12 | 2018-03-06 | Apple Inc. | User interface for camera effects |
DK180859B1 (en) | 2017-06-04 | 2022-05-23 | Apple Inc | USER INTERFACE CAMERA EFFECTS |
CN108305317B (zh) | 2017-08-04 | 2020-03-17 | 腾讯科技(深圳)有限公司 | 一种图像处理方法、装置及存储介质 |
DK180078B1 (en) | 2018-05-07 | 2020-03-31 | Apple Inc. | USER INTERFACE FOR AVATAR CREATION |
US11722764B2 (en) | 2018-05-07 | 2023-08-08 | Apple Inc. | Creative camera |
US10375313B1 (en) * | 2018-05-07 | 2019-08-06 | Apple Inc. | Creative camera |
CN109165571B (zh) | 2018-08-03 | 2020-04-24 | 北京字节跳动网络技术有限公司 | 用于插入图像的方法和装置 |
CN109040824B (zh) * | 2018-08-28 | 2020-07-28 | 百度在线网络技术(北京)有限公司 | 视频处理方法、装置、电子设备和可读存储介质 |
US10573057B1 (en) * | 2018-09-05 | 2020-02-25 | Citrix Systems, Inc. | Two-part context-based rendering solution for high-fidelity augmented reality in virtualized environment |
DK201870623A1 (en) | 2018-09-11 | 2020-04-15 | Apple Inc. | USER INTERFACES FOR SIMULATED DEPTH EFFECTS |
WO2020056689A1 (zh) * | 2018-09-20 | 2020-03-26 | 太平洋未来科技(深圳)有限公司 | 一种ar成像方法、装置及电子设备 |
US10674072B1 (en) | 2019-05-06 | 2020-06-02 | Apple Inc. | User interfaces for capturing and managing visual media |
US11770601B2 (en) | 2019-05-06 | 2023-09-26 | Apple Inc. | User interfaces for capturing and managing visual media |
US11087430B2 (en) * | 2018-09-28 | 2021-08-10 | Apple Inc. | Customizable render pipelines using render graphs |
US11128792B2 (en) | 2018-09-28 | 2021-09-21 | Apple Inc. | Capturing and displaying images with multiple focal planes |
US11321857B2 (en) | 2018-09-28 | 2022-05-03 | Apple Inc. | Displaying and editing images with depth information |
CN109933788B (zh) * | 2019-02-14 | 2023-05-23 | 北京百度网讯科技有限公司 | 类型确定方法、装置、设备和介质 |
US11706521B2 (en) | 2019-05-06 | 2023-07-18 | Apple Inc. | User interfaces for capturing and managing visual media |
CN112102498A (zh) * | 2019-06-18 | 2020-12-18 | 明日基金知识产权控股有限公司 | 用于将应用虚拟地附接到动态对象并实现与动态对象的交互的系统和方法 |
JP7221156B2 (ja) | 2019-06-28 | 2023-02-13 | 富士フイルム株式会社 | 画像処理システム、画像処理方法及びプログラム |
CN110868635B (zh) * | 2019-12-04 | 2021-01-12 | 深圳追一科技有限公司 | 视频处理方法、装置、电子设备及存储介质 |
US11921998B2 (en) | 2020-05-11 | 2024-03-05 | Apple Inc. | Editing features of an avatar |
DK202070624A1 (en) | 2020-05-11 | 2022-01-04 | Apple Inc | User interfaces related to time |
CN111652107B (zh) * | 2020-05-28 | 2024-05-21 | 北京市商汤科技开发有限公司 | 对象计数方法及装置、电子设备和存储介质 |
US11054973B1 (en) | 2020-06-01 | 2021-07-06 | Apple Inc. | User interfaces for managing media |
CN111899350A (zh) * | 2020-07-31 | 2020-11-06 | 北京市商汤科技开发有限公司 | 增强现实ar图像的呈现方法及装置、电子设备、存储介质 |
US11212449B1 (en) | 2020-09-25 | 2021-12-28 | Apple Inc. | User interfaces for media capture and management |
CN112348889B (zh) * | 2020-10-23 | 2024-06-07 | 浙江商汤科技开发有限公司 | 视觉定位方法及相关装置、设备 |
US11449155B2 (en) * | 2021-01-11 | 2022-09-20 | Htc Corporation | Control method of immersive system |
KR102472115B1 (ko) * | 2021-02-04 | 2022-11-29 | (주)스마트큐브 | 다자간 온라인 업무 협업을 위한 증강현실 기반의 화상회의를 제공하기 위한 장치 및 이를 위한 방법 |
JP7427786B2 (ja) * | 2021-02-09 | 2024-02-05 | 北京字跳▲網▼絡技▲術▼有限公司 | 拡張現実に基づく表示方法、機器、記憶媒体及びプログラム製品 |
US11539876B2 (en) | 2021-04-30 | 2022-12-27 | Apple Inc. | User interfaces for altering visual media |
US11778339B2 (en) | 2021-04-30 | 2023-10-03 | Apple Inc. | User interfaces for altering visual media |
TWI768913B (zh) | 2021-05-20 | 2022-06-21 | 國立中正大學 | 眼睛中心定位方法及其定位系統 |
US11776190B2 (en) | 2021-06-04 | 2023-10-03 | Apple Inc. | Techniques for managing an avatar on a lock screen |
CN113479105A (zh) * | 2021-07-20 | 2021-10-08 | 钟求明 | 一种基于自动驾驶车辆的智能充电方法及智能充电站 |
TWI792693B (zh) * | 2021-11-18 | 2023-02-11 | 瑞昱半導體股份有限公司 | 用於進行人物重辨識的方法與裝置 |
CN116645525B (zh) * | 2023-07-27 | 2023-10-27 | 深圳市豆悦网络科技有限公司 | 一种游戏图像识别方法及处理系统 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102332095A (zh) * | 2011-10-28 | 2012-01-25 | 中国科学院计算技术研究所 | 一种人脸运动跟踪方法和系统以及一种增强现实方法 |
CN106295504A (zh) * | 2016-07-26 | 2017-01-04 | 车广为 | 人脸识别基础上的增强显示方法 |
GB2544885A (en) * | 2015-10-30 | 2017-05-31 | 2Mee Ltd | Communication system and method |
CN107004290A (zh) * | 2015-01-06 | 2017-08-01 | 索尼公司 | 效果生成装置、效果生成方法以及程序 |
CN108305317A (zh) * | 2017-08-04 | 2018-07-20 | 腾讯科技(深圳)有限公司 | 一种图像处理方法、装置及存储介质 |
Family Cites Families (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4367424B2 (ja) * | 2006-02-21 | 2009-11-18 | 沖電気工業株式会社 | 個人識別装置,個人識別方法 |
US10872322B2 (en) * | 2008-03-21 | 2020-12-22 | Dressbot, Inc. | System and method for collaborative shopping, business and entertainment |
JP5237394B2 (ja) * | 2009-01-30 | 2013-07-17 | 富士通フロンテック株式会社 | 認証装置、撮像装置、認証方法および認証プログラム |
US20100208033A1 (en) * | 2009-02-13 | 2010-08-19 | Microsoft Corporation | Personal Media Landscapes in Mixed Reality |
US20110052012A1 (en) * | 2009-03-31 | 2011-03-03 | Myspace Inc. | Security and Monetization Through Facial Recognition in Social Networking Websites |
WO2012139270A1 (en) * | 2011-04-11 | 2012-10-18 | Intel Corporation | Face recognition control and social networking |
KR101189043B1 (ko) * | 2011-04-27 | 2012-10-08 | 강준규 | 영상통화 서비스 및 그 제공방법, 이를 위한 영상통화서비스 제공서버 및 제공단말기 |
US8332424B2 (en) * | 2011-05-13 | 2012-12-11 | Google Inc. | Method and apparatus for enabling virtual tags |
CN102916986A (zh) * | 2011-08-01 | 2013-02-06 | 环达电脑(上海)有限公司 | 人脸辨识的搜寻系统及其方法 |
EP2759127A4 (en) | 2011-09-23 | 2014-10-15 | Tangome Inc | REINFORCEMENT OF A VIDEO CONFERENCE |
KR20130063876A (ko) * | 2011-12-07 | 2013-06-17 | (주)엘에이치에스지 | 클라우드 컴퓨팅을 이용한 증강 현실 시스템 및 생성 방법 |
US9258626B2 (en) * | 2012-01-20 | 2016-02-09 | Geun Sik Jo | Annotating an object in a video with virtual information on a mobile terminal |
CN103368816A (zh) * | 2012-03-29 | 2013-10-23 | 深圳市腾讯计算机系统有限公司 | 基于虚拟人物形象的即时通讯方法及系统 |
WO2014031899A1 (en) * | 2012-08-22 | 2014-02-27 | Goldrun Corporation | Augmented reality virtual content platform apparatuses, methods and systems |
US10027727B1 (en) * | 2012-11-21 | 2018-07-17 | Ozog Media, LLC | Facial recognition device, apparatus, and method |
JP5991536B2 (ja) * | 2013-02-01 | 2016-09-14 | パナソニックIpマネジメント株式会社 | メイクアップ支援装置、メイクアップ支援方法、およびメイクアップ支援プログラム |
CN103544724A (zh) * | 2013-05-27 | 2014-01-29 | 华夏动漫集团有限公司 | 一种利用增强现实与卡片识别技术在移动智能终端实现虚拟动漫角色的系统及方法 |
WO2015027196A1 (en) * | 2013-08-22 | 2015-02-26 | Bespoke, Inc. | Method and system to create custom products |
JP2015095031A (ja) * | 2013-11-11 | 2015-05-18 | アルパイン株式会社 | 情報処理装置およびコメント投稿方法 |
CN103679204A (zh) * | 2013-12-23 | 2014-03-26 | 上海安琪艾可网络科技有限公司 | 基于智能移动设备平台的图像识别与创作应用系统及方法 |
US10901765B2 (en) * | 2014-01-22 | 2021-01-26 | Nike, Inc. | Systems and methods of socially-driven product offerings |
KR20150126289A (ko) * | 2014-05-02 | 2015-11-11 | 한국전자통신연구원 | 증강현실 기반 소셜 네트워크 서비스 정보를 제공하는 내비게이션 장치와 메타데이터 처리장치 및 그 방법 |
US20160042557A1 (en) * | 2014-08-08 | 2016-02-11 | Asustek Computer Inc. | Method of applying virtual makeup, virtual makeup electronic system, and electronic device having virtual makeup electronic system |
US9798143B2 (en) * | 2014-08-11 | 2017-10-24 | Seiko Epson Corporation | Head mounted display, information system, control method for head mounted display, and computer program |
DE102014226625A1 (de) * | 2014-10-31 | 2016-05-04 | Cortado Ag | Verfahren zur Übertragung von Druckdaten, Server und mobiles Endgerät |
CN104504423B (zh) * | 2014-12-26 | 2018-02-13 | 山东泰宝防伪制品有限公司 | Ar增强现实立体防伪系统及其方法 |
US9754355B2 (en) * | 2015-01-09 | 2017-09-05 | Snap Inc. | Object recognition based photo filters |
EP3272078B1 (en) * | 2015-03-18 | 2022-01-19 | Snap Inc. | Geo-fence authorization provisioning |
CN104901873A (zh) * | 2015-06-29 | 2015-09-09 | 曾劲柏 | 一种基于场景和动作的网络社交系统 |
JP6589573B2 (ja) * | 2015-11-06 | 2019-10-16 | ブラザー工業株式会社 | 画像出力装置 |
US11237629B2 (en) * | 2016-02-06 | 2022-02-01 | Maximilian Ralph Peter von und zu Liechtenstein | Social networking technique for augmented reality |
CN106200917B (zh) * | 2016-06-28 | 2019-08-30 | Oppo广东移动通信有限公司 | 一种增强现实的内容显示方法、装置及移动终端 |
CN106200918B (zh) * | 2016-06-28 | 2019-10-01 | Oppo广东移动通信有限公司 | 一种基于ar的信息显示方法、装置和移动终端 |
US20180047200A1 (en) * | 2016-08-11 | 2018-02-15 | Jibjab Media Inc. | Combining user images and computer-generated illustrations to produce personalized animated digital avatars |
CN109952610B (zh) * | 2016-11-07 | 2021-01-08 | 斯纳普公司 | 图像修改器的选择性识别和排序 |
JP6753276B2 (ja) * | 2016-11-11 | 2020-09-09 | ソニー株式会社 | 情報処理装置、および情報処理方法、並びにプログラム |
US11635872B2 (en) * | 2016-11-22 | 2023-04-25 | Snap Inc. | Smart carousel of image modifiers |
CN106780757B (zh) * | 2016-12-02 | 2020-05-12 | 西北大学 | 一种增强现实的方法 |
US10636175B2 (en) * | 2016-12-22 | 2020-04-28 | Facebook, Inc. | Dynamic mask application |
CN106846237A (zh) * | 2017-02-28 | 2017-06-13 | 山西辰涵影视文化传媒有限公司 | 一种基于Unity3D的增强实现方法 |
GB2560031B (en) * | 2017-02-28 | 2020-05-27 | PQ Solutions Ltd | Binding data to a person's identity |
CN107820591A (zh) * | 2017-06-12 | 2018-03-20 | 美的集团股份有限公司 | 控制方法、控制器、智能镜子和计算机可读存储介质 |
US9980100B1 (en) * | 2017-08-31 | 2018-05-22 | Snap Inc. | Device location based on machine learning classifications |
KR101968723B1 (ko) * | 2017-10-18 | 2019-04-12 | 네이버 주식회사 | 카메라 이펙트를 제공하는 방법 및 시스템 |
US11488359B2 (en) * | 2019-08-28 | 2022-11-01 | Snap Inc. | Providing 3D data for messages in a messaging system |
-
2017
- 2017-08-04 CN CN201710661746.0A patent/CN108305317B/zh active Active
-
2018
- 2018-08-01 KR KR1020207004027A patent/KR102292537B1/ko active IP Right Grant
- 2018-08-01 WO PCT/CN2018/097860 patent/WO2019024853A1/zh active Application Filing
- 2018-08-01 TW TW107126766A patent/TWI708152B/zh active
- 2018-08-01 JP JP2020505230A patent/JP7098120B2/ja active Active
-
2020
- 2020-02-03 US US16/780,891 patent/US11182615B2/en active Active
-
2021
- 2021-09-17 US US17/478,860 patent/US20220004765A1/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102332095A (zh) * | 2011-10-28 | 2012-01-25 | 中国科学院计算技术研究所 | 一种人脸运动跟踪方法和系统以及一种增强现实方法 |
CN107004290A (zh) * | 2015-01-06 | 2017-08-01 | 索尼公司 | 效果生成装置、效果生成方法以及程序 |
GB2544885A (en) * | 2015-10-30 | 2017-05-31 | 2Mee Ltd | Communication system and method |
CN106295504A (zh) * | 2016-07-26 | 2017-01-04 | 车广为 | 人脸识别基础上的增强显示方法 |
CN108305317A (zh) * | 2017-08-04 | 2018-07-20 | 腾讯科技(深圳)有限公司 | 一种图像处理方法、装置及存储介质 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021079829A1 (ja) * | 2019-10-25 | 2021-04-29 | Necソリューションイノベータ株式会社 | 表示装置、イベント支援システム、表示方法、及びイベント支援システムの生産方法 |
CN112672185A (zh) * | 2020-12-18 | 2021-04-16 | 脸萌有限公司 | 基于增强现实的显示方法、装置、设备及存储介质 |
CN112672185B (zh) * | 2020-12-18 | 2023-07-07 | 脸萌有限公司 | 基于增强现实的显示方法、装置、设备及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
KR20200020960A (ko) | 2020-02-26 |
US11182615B2 (en) | 2021-11-23 |
TWI708152B (zh) | 2020-10-21 |
CN108305317B (zh) | 2020-03-17 |
JP2020529084A (ja) | 2020-10-01 |
KR102292537B1 (ko) | 2021-08-20 |
US20220004765A1 (en) | 2022-01-06 |
CN108305317A (zh) | 2018-07-20 |
TW201911082A (zh) | 2019-03-16 |
JP7098120B2 (ja) | 2022-07-11 |
US20200285851A1 (en) | 2020-09-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI708152B (zh) | 圖像處理方法、裝置及儲存介質 | |
US10789453B2 (en) | Face reenactment | |
US11670059B2 (en) | Controlling interactive fashion based on body gestures | |
US11734866B2 (en) | Controlling interactive fashion based on voice | |
US11816926B2 (en) | Interactive augmented reality content including facial synthesis | |
WO2023070021A1 (en) | Mirror-based augmented reality experience | |
US11900506B2 (en) | Controlling interactive fashion based on facial expressions | |
EP4315256A1 (en) | Facial synthesis in augmented reality content for third party applications | |
US20240096040A1 (en) | Real-time upper-body garment exchange | |
CN114779948B (zh) | 基于面部识别的动画人物即时交互控制方法、装置及设备 | |
WO2023121897A1 (en) | Real-time garment exchange | |
WO2023121896A1 (en) | Real-time motion and appearance transfer | |
US20240161242A1 (en) | Real-time try-on using body landmarks | |
US20230316665A1 (en) | Surface normals for pixel-aligned object | |
WO2023192426A1 (en) | Surface normals for pixel-aligned object | |
EP4396781A1 (en) | Controlling interactive fashion based on body gestures | |
CN117041670A (zh) | 图像处理方法及相关设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18841835 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2020505230 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 20207004027 Country of ref document: KR Kind code of ref document: A |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18841835 Country of ref document: EP Kind code of ref document: A1 |