CN111582965A - Processing method of augmented reality image - Google Patents

Processing method of augmented reality image

Info

Publication number
CN111582965A
CN111582965A (application CN201910123519.1A)
Authority
CN
China
Prior art keywords
image
electronic device
augmented reality
target
photo
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910123519.1A
Other languages
Chinese (zh)
Inventor
王天佑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Litchi Place Ltd
Original Assignee
Litchi Place Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Litchi Place Ltd filed Critical Litchi Place Ltd
Priority to CN201910123519.1A priority Critical patent/CN111582965A/en
Publication of CN111582965A publication Critical patent/CN111582965A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0641Shopping interfaces
    • G06Q30/0643Graphical representation of items or shoppers

Landscapes

  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a processing method of an augmented reality image, which is applied to an electronic device and comprises the following steps: taking a photo through a camera lens of the electronic device; identifying a target object in the photo by a recognition algorithm; executing a background-removal procedure on the photo through image editing software to generate an object image corresponding to the target object; converting the object image into an augmented reality image; displaying the augmented reality image on a display unit of the electronic device; and moving the electronic device to align the augmented reality image with a modification target in the environment, thereby simulating the appearance of the modification target after the target object is applied to it.

Description

Processing method of augmented reality image
Technical Field
The present invention relates to image processing methods, and particularly to a method for processing augmented reality images.
Background
With the prevalence of networks, a plurality of online shopping platforms appear in the market, so that consumers can shop at home conveniently.
However, the biggest problem with online shopping is that consumers cannot try items on or try them out before purchasing, so they often buy articles that do not suit them; this is also the main reason for the high return rate of online shopping.
The same problem exists with printed publications such as magazines and catalogues. When a consumer sees an article of interest in such a publication, the inability to try it on or try it out often causes the consumer to abandon the idea of searching for and purchasing the article.
In addition, even when a consumer sees a physical object in daily life, the purchase may be blocked because the object cannot be accessed (e.g., clothes worn by a stranger or tiles on a building) or is difficult to access (e.g., a watch or shoes displayed in a show window).
In view of the above problems, the market needs a technique that helps a consumer generate an augmented reality image of an article of interest in real time, so that the consumer can apply the article to a target to be modified and thereby decide whether to purchase it.
Disclosure of Invention
It is therefore an objective of the claimed invention to provide a method for processing augmented reality images, which is convenient for a user to generate an augmented reality image of a specific object and to simulate the appearance of the object using the generated augmented reality image.
In order to achieve the above object, the method for processing augmented reality images of the present invention is applied to an electronic device, and comprises the following steps:
a) taking a picture through a camera lens of the electronic device, wherein the picture at least comprises a target object;
b) the electronic device identifies the target object in the picture through an identification algorithm;
c) the electronic device executes a background-removal procedure on the photo through image editing software to generate an object image corresponding to the target object;
d) converting the object image into an augmented reality image;
e) displaying the augmented reality image on a display unit of the electronic device; and
f) the electronic device is moved to align the augmented reality image displayed by the display unit with a modification target in the environment, thereby simulating the appearance of the modification target after the target object is applied to it.
As mentioned above, the method further comprises the following steps:
d1) after the step d), editing the augmented reality image through the image editing software; wherein, the step e) is to display the edited augmented reality image.
As mentioned above, in the step d1) the image editing software performs an editing operation of enlarging, reducing, stretching, deforming, replacing layers, or inserting text in the augmented reality image.
As mentioned above, in the step b), the recognition algorithm analyzes the photo by artificial intelligence to recognize the target object from the photo.
As described above, the artificial intelligence analyzes features of the photo such as image content, color, and shape to distinguish the target object from the background information in the photo, and when executing the background-removal procedure the electronic device removes the background information from the photo and retains the target object to generate the object image.
As mentioned above, the method further comprises the following steps:
a0) before the step a), placing the target object against a target background; wherein the photo taken in the step a) includes the target object and the target background.
As mentioned above, the method further comprises the following steps:
g) after the step d1), determining whether the editing operation is completed;
h) repeating the step d1) if the editing operation is not yet completed; and
i) outputting an edited movie composed of a plurality of edited augmented reality images after the editing operation is completed.
As mentioned above, the image editing software is Photoshop software.
As mentioned above, the step d) converts the object image into a 2D augmented reality image.
As mentioned above, the electronic device is connected to a 3D database, and the step d) converts the object image into a 3D augmented reality image according to the three-dimensional information in the 3D database.
Compared with the related art, the invention achieves the technical effect that the electronic device automatically converts images of physical objects in the user's surroundings, or of objects shown on web pages, in magazines, or in catalogues, into augmented reality images, which helps the user directly apply the object to the target to be modified and preview the appearance of the target after the object is used.
Drawings
FIG. 1 is a block diagram of a first embodiment of an electronic device according to the present invention;
FIG. 2 is a first embodiment of an image processing flow chart according to the present invention;
FIG. 3A is a diagram illustrating a first operation of the AR image according to the present invention;
FIG. 3B is a diagram illustrating a second operation of the AR image according to the present invention;
FIG. 3C is a diagram illustrating a third operation of the AR image according to the present invention;
FIG. 3D is a diagram illustrating a fourth operation of the AR image according to the present invention;
FIG. 4 is a flowchart of a first embodiment of an image processing method according to the present invention.
Wherein, the reference numbers:
1 … electronic device;
11 … camera lens;
12 … a processing unit;
13 … storage unit;
131 … recognition algorithm;
132 … image editing software;
133 … 3D database;
14 … display unit;
15 … input unit;
2 … user;
3 … computer screen;
31 … target item;
4 … photo;
5 … AR images;
6 … editing the AR image;
7 … modifying the target;
S10-S24, S30-S44 … steps.
Detailed Description
A preferred embodiment of the present invention will be described in detail with reference to the accompanying drawings.
The invention discloses an Augmented Reality (AR) image processing method (hereinafter, referred to as a processing method for short in the specification), which can assist a user to quickly generate an AR image, so that the user can align the AR image with a modification target (a person, an object or a position) to simulate the modified appearance of the modification target in advance.
Fig. 1 is a block diagram of an electronic device according to a first embodiment of the present invention. The processing method of the present invention is mainly applied to the electronic device 1 shown in fig. 1, and specifically, a user can operate the electronic device 1 to immediately convert a target object into an AR image, and apply the AR image to the modification target, so as to view in advance the appearance of the modification target after the target object is actually applied. Therefore, the user can conveniently determine whether to buy and use the target object.
As shown in fig. 1, the electronic device 1 mainly includes a camera lens 11, a processing unit 12, a storage unit 13, a display unit 14 and an input unit 15, wherein the camera lens 11, the processing unit 12, the storage unit 13, the display unit 14 and the input unit 15 are electrically connected to each other through a bus respectively. For the convenience of operation, the electronic device 1 may be a handheld electronic device, such as a smart phone or a tablet computer, but is not limited thereto.
In one embodiment, the electronic device 1 mainly takes a photo through the camera lens 11 and processes the taken photo through the processing unit 12. In this embodiment, the storage unit 13 stores at least the recognition algorithm 131 and the image editing software 132. After obtaining the photo taken by the camera lens 11, the processing unit 12 may execute the recognition algorithm 131 to process the photo, thereby recognizing the image of the target object in the photo and further converting that image into the AR image.
Specifically, the user can operate the electronic device 1 to aim the camera lens 11 at a target object of interest (e.g., a physical object, or content shown in a magazine or on a web page) and take a photo, wherein the taken photo includes at least the target object and background information other than the target object.
Then, after the photo is taken, the processing unit 12 can recognize the portion belonging to the target object in the photo through the recognition algorithm 131, and perform a background-removal procedure on the photo through the image editing software 132 to generate an object image corresponding to the target object. In this embodiment, the object image generated by the background-removal procedure includes only the image of the target object in the photo and does not include the background information.
Then, the processing unit 12 may further perform a conversion process on the object image to convert the object image into an AR image. In this embodiment, since the object image only includes the target object, the AR image generated by the conversion processing is the AR image of the target object.
For example, if the user takes a photo in a store through the camera lens 11 of the electronic device 1, and the photo includes the target object (e.g., a watch) and background information (e.g., the watch box, table, and cabinet), the electronic device 1 can recognize the image of the watch in the photo through the recognition algorithm 131, remove the background information of the watch box, table, and cabinet through the image editing software 132 to complete the background-removal procedure, and generate the object image. The electronic device 1 may then perform a conversion process to convert the object image into an AR image of the watch. After the AR image of the watch is generated, the user can move the electronic device 1 to apply the AR image to his or her own wrist and view how the watch would look when worn.
In one embodiment, the image editing software 132 may be, for example but not limited to, the Photoshop editing software developed by Adobe. The image editing software 132 not only executes the background-removal procedure, but can also perform the editing operations required by the user on the AR image after the AR image is generated.
Specifically, after the AR image is generated, the processing unit 12 may automatically perform the related editing operations on the AR image through the image editing software 132, or may accept an external operation from the user through the input unit 15 so that the user manually performs the required editing operations on the AR image. In this embodiment, the editing operation may be, for example, zooming in, zooming out, stretching, deforming, replacing layers, or inserting text, but is not limited thereto. In the invention, the user can edit the AR image according to the characteristics of the modification target to which it will be applied (shape, size, position, angle, and so on), so that after editing the AR image can be applied directly to the modification target.
The electronic device 1 can display the AR image through the display unit 14, so that the user can move the electronic device 1 to align the AR image displayed on the display unit 14 with a modification target in the environment (e.g., a person or a pet, or a wall, window, or floor) and simulate the appearance of the modification target after the object represented by the AR image is actually applied.
In an embodiment, the display unit 14 and the input unit 15 may be separate elements (for example, the display unit 14 is a screen, and the input unit 15 is a keyboard or a touch pad), or may be integrated into a single element (for example, a touch screen), without limitation.
Fig. 2 is a flowchart illustrating an image processing method according to a first embodiment of the present invention. Fig. 2 discloses various steps of the processing method of the present invention, which are mainly implemented by the electronic device 1 shown in fig. 1.
As shown in fig. 2, to execute the processing method of the present invention, the user operates the electronic device 1 to aim the camera lens 11 of the electronic device 1 at a target object of interest (step S10) and takes a photo (step S12), wherein the photo includes at least an image of the target object.
In one embodiment, the target object may be an object that is not directly accessible to the user (e.g., an object published in a magazine or displayed on a computer screen). In this embodiment, the user can directly operate the electronic device 1 to take a picture including the target object, and in the subsequent step, the processing unit 12 is used to extract the image of the target object from the whole picture.
In another embodiment, the target object may be a physical object (e.g., clothing, jewelry, materials, etc.) that is directly accessible to the user. In this embodiment, the user can first place the target object against a target background (e.g., a white backdrop) and then take the photo with the electronic device 1. In this case, the photo taken by the electronic device 1 includes only the images of the target object and the target background. In this way, the processing unit 12 can use fewer resources to complete the recognition and background-removal procedures in the subsequent steps, improving processing performance and avoiding wasted resources.
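For illustration only, when the target background is a plain, near-white backdrop as in this embodiment, the background can often be separated with a simple colour threshold instead of a full recognition pass. The following Python/OpenCV sketch shows that idea; the file names and the threshold value are assumed example values, not anything specified by the invention.

```python
import cv2
import numpy as np

# Sketch: strip a plain (near-white) target background with a simple threshold.
# "photo_on_white.jpg" and the threshold of 240 are assumed example values.
photo = cv2.imread("photo_on_white.jpg")            # BGR photo of the object on a white backdrop
gray = cv2.cvtColor(photo, cv2.COLOR_BGR2GRAY)

# Pixels brighter than the threshold are treated as backdrop; the rest is the object.
_, foreground_mask = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY_INV)

# Clean up small speckles in the mask with a morphological opening.
kernel = np.ones((5, 5), np.uint8)
foreground_mask = cv2.morphologyEx(foreground_mask, cv2.MORPH_OPEN, kernel)

# Build an RGBA "object image": the mask becomes the alpha channel.
object_rgba = cv2.cvtColor(photo, cv2.COLOR_BGR2BGRA)
object_rgba[:, :, 3] = foreground_mask
cv2.imwrite("object_image.png", object_rgba)        # PNG keeps the transparency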
After step S12, the processing unit 12 of the electronic device 1 executes the recognition algorithm 131 to recognize the target object from the photo by the recognition algorithm 131 (step S14).
It is noted that the recognition algorithm 131 may be an artificial intelligence (AI) algorithm, which analyzes the photo with a pre-trained AI model to recognize the target object in the photo.
Specifically, the provider of the recognition algorithm 131 may pre-train the artificial intelligence so that it recognizes the appearance (including image, color, shape, and other recognizable features) of a plurality of target objects (e.g., clothes, pants, shoes, watches, ornaments, lights, ceiling fans, tiles, curtains, etc.). Accordingly, in step S14, the artificial intelligence analyzes features of the obtained photo such as image content, color, and shape to distinguish the target object from the background information other than the target object in the photo.
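For illustration only, the following Python sketch shows how an off-the-shelf, pre-trained object detector could play the role of such a pre-trained recognizer and return a bounding box for the most confident detection. The torchvision model, the COCO label vocabulary, and the example file name are assumptions; the invention does not specify which model or training data the recognition algorithm 131 uses, and a production recognizer would be trained on the object categories of interest (watches, ties, tiles, curtains, and so on).

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Sketch: use an off-the-shelf, pre-trained detector as a stand-in recognizer.
# "photo_of_watch.jpg" is an assumed example file; older torchvision versions
# take pretrained=True instead of weights="DEFAULT".
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

photo = Image.open("photo_of_watch.jpg").convert("RGB")
with torch.no_grad():
    prediction = model([to_tensor(photo)])[0]

if len(prediction["scores"]) == 0:
    raise RuntimeError("no object detected in the photo")

# Keep the highest-scoring detection as the "target object" box (x1, y1, x2, y2).
best = int(torch.argmax(prediction["scores"]))
target_box = [int(v) for v in prediction["boxes"][best]]
target_label = int(prediction["labels"][best])
print("target box:", target_box, "label id:", target_label)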
After the step S14, the processing unit 12 can virtually divide the photo into the target object to be preserved and the background information that is not to be preserved. The processing unit 12 can therefore further execute the image editing software 132 and perform the background-removal procedure on the photo through the image editing software 132 to generate the object image corresponding to the target object (step S16).
Specifically, in the step S16, the processing unit 12 uses the image editing software 132 to delete the portion of the photo identified as background information by the recognition algorithm 131, retains the portion identified as the target object, and generates the object image from the image of the target object, thereby achieving the background removal.
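For illustration only, one conventional way to approximate this background-removal step in code is GrabCut seeded with the bounding box produced by the recognition stage. The sketch below does not model the behaviour of any particular image editing software; the box coordinates and file names are assumed example values.

```python
import cv2
import numpy as np

# Sketch: cut the recognized target out of the photo with GrabCut,
# keeping the target object and discarding the background information.
photo = cv2.imread("photo_of_watch.jpg")            # assumed example file
x1, y1, x2, y2 = 120, 80, 420, 360                  # assumed box from the recognizer

mask = np.zeros(photo.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)
rect = (x1, y1, x2 - x1, y2 - y1)                   # GrabCut expects (x, y, w, h)

cv2.grabCut(photo, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked as definite or probable foreground form the object image.
foreground = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
object_rgba = cv2.cvtColor(photo, cv2.COLOR_BGR2BGRA)
object_rgba[:, :, 3] = foreground
cv2.imwrite("object_image.png", object_rgba)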
Next, the processing unit 12 further performs a conversion process on the generated object image to convert the object image into an AR image (step S18), and displays the AR image through the display unit 14 of the electronic device 1 (S22).
In this embodiment, the AR image refers to an image that can coexist with the image of the real surroundings; that is, when the electronic device 1 starts the camera lens 11 and loads the AR image, the display unit 14 displays the real-time image captured by the camera lens 11 and simultaneously displays the loaded AR image. In this way, the user can operate the electronic device 1, turn on the camera lens 11, and move the electronic device 1 so that the AR image displayed by the display unit 14 is aligned with a modification target in the environment (step S24), thereby simulating the modification target actually applying the target object.
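For illustration only, the following desktop sketch reproduces this "AR image over the live camera image" behaviour by alpha-blending the RGBA object image onto each camera frame; the user aligns the overlay with the modification target by moving the camera itself. The window name, file name, and fixed placement are assumptions, and a handheld implementation would use the platform's own camera/AR APIs instead.

```python
import cv2
import numpy as np

# Sketch: overlay the RGBA AR image on the live camera feed so the user can
# move the device/camera until the image lines up with the modification target.
# Assumes the cut-out is smaller than the camera frame.
ar_image = cv2.imread("object_image.png", cv2.IMREAD_UNCHANGED)   # BGRA cut-out
capture = cv2.VideoCapture(0)                                     # default camera

def blend(frame, overlay, x, y):
    """Alpha-blend a BGRA overlay onto a BGR frame at position (x, y)."""
    h, w = overlay.shape[:2]
    roi = frame[y:y + h, x:x + w]
    alpha = overlay[:, :, 3:4].astype(np.float32) / 255.0
    roi[:] = (alpha * overlay[:, :, :3] + (1.0 - alpha) * roi).astype(np.uint8)

while True:
    ok, frame = capture.read()
    if not ok:
        break
    # Center the AR image; the user aligns it by moving the camera itself.
    x = (frame.shape[1] - ar_image.shape[1]) // 2
    y = (frame.shape[0] - ar_image.shape[0]) // 2
    blend(frame, ar_image, x, y)
    cv2.imshow("AR preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):                         # press q to quit
        break

capture.release()
cv2.destroyAllWindows()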
It should be noted that the AR image generated in step S18 may not satisfy the user's requirements (e.g., it may be too small or deformed) because of the direction, angle, and distance at which the photo was taken. Therefore, after the step S18, the processing unit 12 may further edit the generated AR image through the image editing software 132 (step S20). In this embodiment, the processing unit 12 displays the edited AR image on the display unit 14 in step S22, which helps the user apply the AR image displayed on the display unit 14 to the modification target more appropriately.
In one embodiment, the processing unit 12 may automatically perform a related editing operation on the AR image through the image editing software 132 (for example, automatically correct a skewed image), or receive an external operation from a user through the input unit 15, so that the user manually operates the image editing software 132 to perform a desired editing operation on the AR image. In the present embodiment, the image editing software 132 may be, for example, Photoshop software, and the editing operation may be, for example, zooming in, zooming out, stretching, deforming, layer replacing, or inserting text, but is not limited thereto.
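For illustration only, the zoom, stretch, rotate, and text-insertion edits mentioned here can be expressed as a few array operations on the RGBA AR image. The sketch below does not model any particular editing software; the sizes, angle, and caption are arbitrary example values.

```python
import cv2

# Sketch of typical editing operations on the RGBA AR image.
ar_image = cv2.imread("object_image.png", cv2.IMREAD_UNCHANGED)   # BGRA cut-out

# Zoom in / out: uniform scaling (here 1.5x as an arbitrary example).
scaled = cv2.resize(ar_image, None, fx=1.5, fy=1.5, interpolation=cv2.INTER_LINEAR)

# Stretch / deform: non-uniform scaling (wider than tall).
stretched = cv2.resize(ar_image, None, fx=1.4, fy=1.0, interpolation=cv2.INTER_LINEAR)

# Rotate: 15 degrees about the image centre, keeping the alpha channel.
h, w = ar_image.shape[:2]
rotation = cv2.getRotationMatrix2D((w / 2, h / 2), 15, 1.0)
rotated = cv2.warpAffine(ar_image, rotation, (w, h))

# Insert text: draw an example caption onto the image.
captioned = scaled.copy()
cv2.putText(captioned, "example caption", (10, 30),
            cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255, 255, 255, 255), 2)

cv2.imwrite("edited_ar_image.png", captioned)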
Please refer to fig. 3A to 3D, which are schematic diagrams illustrating a first operation to a fourth operation of the AR image according to the present invention.
As shown in fig. 3A, an image of a target object 31 is displayed on the computer screen 3, and in the embodiment of fig. 3A, the target object 31 is a tie, for example. The user 2 holds the electronic device 1 and directs the camera lens 11 of the electronic device 1 to the computer screen 3 to take a picture 4, and the picture 4 at least includes an image of the target object 31.
Next, as shown in fig. 3B, after the electronic device 1 obtains the photo 4, the processing unit 12 analyzes the photo 4 by the recognition algorithm 131 to distinguish the target object 31 and the background information in the photo 4. Also, the processing unit 12 executes a background removing procedure to delete the background information in the photo 4 through the image editing software 132, thereby retaining the image of the target object 31 and generating an object image corresponding to the target object 31. Then, the processing unit 12 performs a conversion operation on the object image to generate an AR image 5 corresponding to the target object 31.
In an embodiment, the processing unit 12 mainly converts the generated object image into a 2D AR image, but is not limited thereto. As shown in fig. 1, the storage unit 13 may further store a 3D database 133, wherein the 3D database 133 records three-dimensional information of a plurality of target objects. In another embodiment, the processing unit 12 can further convert the generated object image into a 3D AR image according to the three-dimensional information in the 3D database 133, so as to facilitate subsequent use by the user.
Fig. 1 illustrates an example of storing the 3D database 133 in the storage unit 13 of the electronic device 1. In other embodiments, the electronic device 1 may also be connected to the internet through a wireless communication unit (not shown), and when the object image is to be converted into the 3D AR image, the electronic device is connected to the remote 3D database through the internet to access the three-dimensional information in the 3D database, without limitation.
Next, as shown in fig. 3C, the user may edit the generated AR image 5 through the input unit 15 to generate an edited AR image 6. Specifically, the user may operate the image editing software 132 through the input unit 15 (e.g., a touch screen) of the electronic device 1 to perform editing actions such as enlarging, reducing, rotating, stretching, deforming, replacing layers, or inserting text on the AR image 5.
Next, as shown in fig. 3D, the electronic device 1 displays the edited AR image 6 on the display unit 14. The user can operate the electronic device 1 and align the edited AR image 6 displayed by the electronic device 1 with the modification target 7 in the environment (in this embodiment, a second user is taken as an example). In this way, the display unit 14 of the electronic device 1 directly shows the appearance of the target object (the tie) actually applied to the modification target (the second user). The user can thus judge from the edited AR image 6 whether the tie suits the second user and thereby decide whether to purchase it.
As another example, the user may operate the electronic device 1 to take a picture of a watch shown in a cabinet and generate an AR image of the watch by the processing unit 12. The user can aim the AR image of the watch at the wrist to simulate the appearance of the watch on the user, thereby judging whether to buy the watch.
For another example, the user may operate the electronic device 1 to take a picture of a ceramic tile published in the catalog and generate an AR image of the ceramic tile by the processing unit 12. The user can aim at the AR image of the ceramic tile to the position of the floor to be paved so as to simulate the appearance of the floor paved with the ceramic tile, thereby judging whether the ceramic tile needs to be purchased.
For another example, the user may operate the electronic device 1 to take a picture of a curtain in a neighborhood home and generate an AR image of the curtain by the processing unit 12. The user can aim at the AR image of the curtain to the position of the window in the house so as to simulate the appearance of the window in the house after the curtain is arranged, and therefore whether the curtain needs to be purchased or not is judged.
In other embodiments, the electronic device 1 may also be, for example, an endoscope apparatus in a hospital. In this embodiment, the physician can take a picture of a normal region, an abnormal region, or cancer cells inside the patient's body with the endoscope lens, and generate a corresponding AR image after the recognition and background-removal procedures. The physician can then use the AR image shown on the display for see-through positioning of the normal region, abnormal region, or cancer cells, so that the patient can more easily understand his or her own physical condition.
In addition to generating an AR image for the user to apply to the modification target, the processing method provided by the invention also allows the user to edit the AR image and thereby produce an AR movie.
Please refer to fig. 4, which is a flowchart illustrating an image processing method according to a first embodiment of the present invention.
FIG. 4 discloses an image processing flow similar to that of FIG. 2. Specifically, the user can operate the electronic device 1 to aim the camera lens 11 at the target object (step S30) and take a photo including at least the target object (step S32). Then, the processing unit 12 recognizes the target object in the photo through the recognition algorithm 131 (step S34) and performs the background-removal procedure on the photo through the image editing software 132 to generate an object image corresponding to the target object (step S36). The processing unit 12 then performs a conversion process on the generated object image to convert it into an AR image (step S38).
Then, the processing unit 12 further executes the image editing software 132; the electronic device 1 may operate the image editing software 132 automatically, or the user may operate it manually, to edit the generated AR image (step S40).
In the embodiment of fig. 4, after the step S40, the processing unit 12 determines whether the editing operation of the electronic device 1 or the user is completed (step S42), and returns to the step S40 if the editing operation is not yet completed, so as to continue editing the AR image through the image editing software 132.
In this embodiment, the electronic device 1 or the user may perform multiple editing operations on the AR image through the image editing software 132 to generate multiple continuous AR images and form an AR movie from them. If the processing unit 12 determines in step S42 that the editing operation is completed, the processing unit 12 may output an edited movie composed of the plurality of edited AR images (step S44). In an embodiment, the processing unit 12 may determine that the editing operation is completed when the user triggers a save button or stops editing for a set period of time, but is not limited thereto.
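For illustration only, the sketch below shows how a sequence of edited AR images could be written out as a movie file with OpenCV's VideoWriter. The frame rate, duration, and the simple per-frame rotation used as the "edit" are example choices, not part of the claimed method.

```python
import cv2

# Sketch: compose an "AR movie" from a sequence of edited AR images.
ar_image = cv2.imread("object_image.png", cv2.IMREAD_UNCHANGED)   # BGRA cut-out
h, w = ar_image.shape[:2]

fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = cv2.VideoWriter("ar_movie.mp4", fourcc, 24.0, (w, h))    # 24 fps, example values

for frame_index in range(96):                                     # 4 seconds of footage
    # Example edit per frame: rotate the AR image a little more each time.
    rotation = cv2.getRotationMatrix2D((w / 2, h / 2), frame_index * 2.0, 1.0)
    edited = cv2.warpAffine(ar_image, rotation, (w, h))
    # VideoWriter expects 3-channel BGR frames, so drop the alpha channel here.
    writer.write(cv2.cvtColor(edited, cv2.COLOR_BGRA2BGR))

writer.release()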
Through the steps shown in fig. 4, the user can use a target object of interest as creative material, capture the image of the target object through the camera lens 11 of the electronic device 1, and create a 2D or 3D AR movie based on that image through the processing unit 12, the recognition algorithm 131, the image editing software 132, and the 3D database 133. The quality and speed of AR movie creation can thereby be effectively improved.
Through the technical scheme of the invention, the user can easily and in real time convert articles of interest seen in daily surroundings into AR images, and then apply them to a modification target or create a corresponding AR movie based on them, which is quite convenient.
The above description is only a preferred embodiment of the present invention and is not intended to limit the scope of the present invention; equivalent variations made using the teachings of the present invention all fall within the scope of the present invention.

Claims (10)

1. A processing method of augmented reality images is applied to an electronic device, and is characterized by comprising the following steps:
a) taking a picture through a camera lens of the electronic device, wherein the picture at least comprises a target object;
b) the electronic device identifies the target object in the picture through an identification algorithm;
c) the electronic device executes a background-removal procedure on the photo through image editing software to generate an object image corresponding to the target object;
d) converting the object image into an augmented reality image;
e) displaying the augmented reality image on a display unit of the electronic device; and
f) the electronic device is moved to align the augmented reality image displayed by the display unit with a modification target in the environment, thereby simulating the appearance of the modification target after the target object is applied to it.
2. The method of claim 1, further comprising the steps of:
d1) after the step d), editing the augmented reality image through the image editing software;
wherein, the step e) is to display the edited augmented reality image.
3. The method of claim 2, wherein in the step d1) the image editing software performs an editing operation of enlarging, reducing, stretching, deforming, replacing layers, or inserting text in the augmented reality image.
4. The method as claimed in claim 2, wherein the recognition algorithm analyzes the photo by artificial intelligence to recognize the target object from the photo in the step b).
5. The method as claimed in claim 4, wherein the artificial intelligence analyzes features of the photo such as image content, color, and shape to distinguish the target object from the background information in the photo, and wherein the electronic device, when performing the background-removal procedure, removes the background information from the photo and retains the target object to generate the object image.
6. The method of claim 1, further comprising the steps of:
a0) before the step a), placing the target object against a target background;
wherein the picture taken in the step a) includes the target object and the target background.
7. The method of claim 2, further comprising the steps of:
g) after the step d1), determining whether the editing operation is completed;
h) repeating the step d1) if the editing operation is not yet completed; and
i) outputting an edited movie composed of a plurality of edited augmented reality images after the editing operation is completed.
8. The method of claim 2, wherein the image editing software is Photoshop software.
9. The method as claimed in claim 1, wherein the step d) converts the object image into a 2D augmented reality image.
10. The method as claimed in claim 1, wherein the electronic device is connected to a 3D database, and the step d) converts the object image into a 3D augmented reality image according to the three-dimensional information in the 3D database.
CN201910123519.1A 2019-02-18 2019-02-18 Processing method of augmented reality image Pending CN111582965A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910123519.1A CN111582965A (en) 2019-02-18 2019-02-18 Processing method of augmented reality image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910123519.1A CN111582965A (en) 2019-02-18 2019-02-18 Processing method of augmented reality image

Publications (1)

Publication Number Publication Date
CN111582965A (en) 2020-08-25

Family

ID=72124350

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910123519.1A Pending CN111582965A (en) 2019-02-18 2019-02-18 Processing method of augmented reality image

Country Status (1)

Country Link
CN (1) CN111582965A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011060005A (en) * 2009-09-10 2011-03-24 Hitachi Solutions Ltd Online shopping virtual try-on system
US20130307851A1 (en) * 2010-12-03 2013-11-21 Rafael Hernández Stark Method for virtually trying on footwear
CN103597519A (en) * 2011-02-17 2014-02-19 麦特尔有限公司 Computer implemented methods and systems for generating virtual body models for garment fit visualization
US20140149264A1 (en) * 2011-06-14 2014-05-29 Hemanth Kumar Satyanarayana Method and system for virtual collaborative shopping
TWM540332U (en) * 2016-12-22 2017-04-21 Kang Siang Technology Co Ltd Augmented reality shopping system
US20180033202A1 (en) * 2016-07-29 2018-02-01 OnePersonalization Limited Method and system for virtual shoes fitting
TW201821949A (en) * 2016-11-30 2018-06-16 香港商阿里巴巴集團服務有限公司 Augmented-reality-based offline interaction method and apparatus
CN109214876A (en) * 2017-06-29 2019-01-15 深圳市掌网科技股份有限公司 A kind of fitting method and system based on augmented reality

Similar Documents

Publication Publication Date Title
JP6055160B1 (en) Cosmetic information providing system, cosmetic information providing apparatus, cosmetic information providing method, and program
CN111787242B (en) Method and apparatus for virtual fitting
US10789699B2 (en) Capturing color information from a physical environment
JP2021008126A (en) Generation of 3d-printed custom-made wearing material
CN105447047B (en) It establishes template database of taking pictures, the method and device for recommendation information of taking pictures is provided
WO2017005014A1 (en) Method and device for searching matched commodities
CN105373929B (en) Method and device for providing photographing recommendation information
CN110021061A (en) Collocation model building method, dress ornament recommended method, device, medium and terminal
JP2020502662A (en) Intelligent automatic cropping of images
CN106203286A (en) The content acquisition method of a kind of augmented reality, device and mobile terminal
JP6656572B1 (en) Information processing apparatus, display control method, and display control program
CN110928411A (en) AR-based interaction method and device, storage medium and electronic equipment
CN109074680A (en) Realtime graphic and signal processing method and system in augmented reality based on communication
US20160042233A1 (en) Method and system for facilitating evaluation of visual appeal of two or more objects
CN114187392B (en) Virtual even image generation method and device and electronic equipment
CN110267079B (en) Method and device for replacing human face in video to be played
CN110084675A (en) Commodity selling method, the network terminal and the device with store function on a kind of line
WO2018059258A1 (en) Implementation method and apparatus for providing palm decoration virtual image using augmented reality technology
CN116452745A (en) Hand modeling, hand model processing method, device and medium
CN108896035B (en) Method and equipment for realizing navigation through image information and navigation robot
CN108010038B (en) Live-broadcast dress decorating method and device based on self-adaptive threshold segmentation
CN110597397A (en) Augmented reality implementation method, mobile terminal and storage medium
CN111582965A (en) Processing method of augmented reality image
CN110177216A (en) Image processing method, device, mobile terminal and storage medium
CN112102018A (en) Intelligent fitting mirror implementation method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200825