CN108932088B - Virtual object collection method and portable electronic device - Google Patents


Info

Publication number
CN108932088B
CN108932088B · CN201710361691.1A
Authority
CN
China
Prior art keywords
virtual
processing unit
acquired
electronic device
portable electronic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710361691.1A
Other languages
Chinese (zh)
Other versions
CN108932088A (en)
Inventor
黄明月
洪荣昭
邱敏祈
戴凯欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201710361691.1A priority Critical patent/CN108932088B/en
Publication of CN108932088A publication Critical patent/CN108932088A/en
Application granted granted Critical
Publication of CN108932088B publication Critical patent/CN108932088B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K17/00Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G06K17/0022Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisions for transferring data to distant stations, e.g. from a sensing device

Abstract

A virtual object collection method implemented by a portable electronic device comprises the following steps: (A) displaying one virtual element to be collected among a plurality of virtual elements. (B) When a collection instruction input by a user and representing collection of the virtual element to be collected is received, adding the virtual element to be collected to a collected element list. (C) When a response request input by a user and related to at least one target virtual element in the collected element list is received, generating and displaying a plurality of different object options, wherein one target object option is related to a target virtual object. (D) When a response selection input by a user is received, judging whether the response selection is related to the target object option. (E) When the response selection is determined to be related to the target object option, adding the target virtual object to a collected object list.

Description

Virtual object collection method and portable electronic device
Technical Field
The present invention relates to a virtual object collecting method, and more particularly, to a virtual object collecting method using a portable electronic device. The invention also relates to a portable electronic device for executing the virtual object collection method.
Background
In today's technologically advanced society, the channels for acquiring knowledge are no longer limited to reading books, and how to absorb new knowledge while being entertained, so as to achieve the goal of learning through play, is a topic worth studying.
Disclosure of Invention
The invention aims to provide a virtual object collection method which can assist learning and is interesting.
Therefore, the virtual object collection method of the present invention is implemented by a portable electronic device, the portable electronic device comprising a display unit, an input unit and a processing unit, the method comprising the steps of:
(A) the processing unit of the portable electronic device displays one virtual element to be acquired of the plurality of virtual elements through the display unit.
(B) When the processing unit displays the virtual element to be acquired through the display unit and receives an acquisition instruction which is input by a user and represents acquisition of the virtual element to be acquired through the input unit, the processing unit adds the virtual element to be acquired into an acquired element list.
(C) When the processing unit receives a response request input by a user and related to at least one target virtual element in the collected element list and corresponding to one target virtual object of a plurality of virtual objects through the input unit, the processing unit generates a plurality of different object options according to the response request and displays the object options through the display unit, wherein each of the object options is related to one of the virtual objects, and one of the object options is related to the target virtual object.
(D) When the processing unit receives, through the input unit, a response selection which is input by a user and is related to one of the object options, the processing unit judges whether the response selection is related to the target object option.
(E) When the processing unit determines that the response selection is related to the target object option, the processing unit adds the target virtual object to a collected object list.
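The option-and-answer logic of steps (C) through (E) can be illustrated with a minimal Python sketch. This is not part of the patent disclosure; every name here is hypothetical, and the patent does not specify how the distractor options are chosen, so random sampling is an assumption:

```python
import random

def build_options(target_object, all_objects, n_options=3):
    """Step (C) sketch: generate n distinct object options,
    exactly one of which (the target object option) is the target."""
    distractors = [o for o in all_objects if o != target_object]
    options = random.sample(distractors, n_options - 1) + [target_object]
    random.shuffle(options)
    return options

def answer_and_collect(selection, target_object, collected_objects):
    """Steps (D)-(E) sketch: the target virtual object is added to the
    collected object list only when the response selection matches it."""
    if selection == target_object:
        collected_objects.append(target_object)
        return True   # response selection related to the target object option
    return False      # any other option was selected
```

Under this sketch, a correct selection appends the target object to the collected object list and a wrong one leaves the list unchanged, mirroring steps (E) and the error branch.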
In some embodiments of the virtual object collection method of the present invention, the portable electronic device further comprises an image capturing unit for continuously generating an original image, and the method further comprises, before step (A):
(F) the processing unit judges whether the current original image conforms to the one of a plurality of image templates that corresponds to the virtual element to be acquired.
When the processing unit judges that the current original image conforms to one of the image templates corresponding to the virtual element to be acquired, the processing unit generates an augmented reality image according to the current original image and the virtual element to be acquired, and displays the augmented reality image through the display unit.
In some embodiments of the virtual object collection method of the present invention, the portable electronic device further comprises a positioning unit for continuously generating a piece of position data representing a position of the portable electronic device, and the method further comprises before step (F):
(G) the processing unit judges, according to the current position data, whether the portable electronic device is located at the one of a plurality of predetermined locations that corresponds to the virtual element to be acquired.
In step (F), when the processing unit judges according to the current position data that the portable electronic device is located at the predetermined location corresponding to the virtual element to be acquired, the processing unit judges whether the current original image conforms to one of the image templates corresponding to the virtual element to be acquired.
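The location gate of step (G) can be sketched as a proximity test. The patent does not define what "located at" a predetermined location means, so the 50-metre radius, the great-circle distance, and all names below are assumptions for illustration only:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def matching_location(pos, locations, radius_m=50.0):
    """Step (G) sketch: return the first predetermined location within
    radius_m of the device position, or None if the device is at none."""
    for loc in locations:
        if haversine_m(pos[0], pos[1], loc["lat"], loc["lon"]) <= radius_m:
            return loc
    return None
```

Only when `matching_location` returns a location would the sketch proceed to the image-template check of step (F); the element bound to that location becomes the virtual element to be acquired.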
In some embodiments of the virtual object collection method of the present invention, in step (B), the collection instruction is associated with the one of a plurality of virtual collection tools that corresponds to the virtual element to be collected.
The invention also provides a portable electronic device capable of executing the virtual object collection method.
The invention discloses a portable electronic device which comprises a display unit, an input unit and a processing unit. The processing unit is electrically connected with the display unit and the input unit. The processing unit displays one virtual element to be acquired of the plurality of virtual elements through the display unit. When the processing unit displays the virtual element to be acquired through the display unit and receives an acquisition instruction which is input by a user and represents acquisition of the virtual element to be acquired through the input unit, the processing unit adds the virtual element to be acquired into an acquired element list. When the processing unit receives a response request input by a user and related to at least one target virtual element in the collected element list and corresponding to one target virtual object of a plurality of virtual objects through the input unit, the processing unit generates a plurality of different object options according to the response request and displays the object options through the display unit, wherein each of the object options is related to one of the virtual objects, and one of the object options is related to the target virtual object. When the processing unit receives a response selection which is input by a user and is related to one of the object options through the input unit, the processing unit judges whether the response selection is related to the target object option. When the processing unit determines that the response selection is related to the target object option, the processing unit adds the target virtual object to a collected object list.
In some embodiments of the portable electronic device of the present invention, the portable electronic device further comprises an image capturing unit for continuously generating an original image. The processing unit first judges whether the current original image conforms to the one of a plurality of image templates that corresponds to the virtual element to be acquired. When the processing unit judges that the current original image conforms to that image template, the processing unit generates an augmented reality image according to the current original image and the virtual element to be acquired, and displays the virtual element to be acquired by displaying the augmented reality image through the display unit.
In some embodiments of the portable electronic device of the present invention, the portable electronic device further comprises a positioning unit for continuously generating position data representing the position of the portable electronic device. The processing unit judges, according to the current position data, whether the portable electronic device is located at the one of a plurality of predetermined locations that corresponds to the virtual element to be acquired, and when the processing unit judges that the portable electronic device is located at that predetermined location, the processing unit then judges whether the current original image conforms to one of the image templates corresponding to the virtual element to be acquired.
In some embodiments of the portable electronic device of the present invention, the collection command is associated with the one of a plurality of virtual collection tools that corresponds to the virtual element to be collected.
The beneficial effects of the invention are that the user can collect the virtual elements through the portable electronic device, answer the object options to collect the virtual objects, and gain enjoyment and a sense of achievement in the process.
Drawings
FIG. 1 is a block diagram of a portable electronic device according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating an acquisition process of a virtual object collection method performed by the embodiment;
FIG. 3 is a schematic diagram illustrating an augmented reality image showing a virtual element to be acquired according to the embodiment;
FIG. 4 is a flowchart illustrating an answering flow of the virtual object collection method;
FIG. 5 is a schematic diagram illustrating an embodiment showing a list of collected elements and a plurality of object options; and
FIG. 6 is a diagram illustrating an example of a collected object list and a virtual object included therein.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and examples.
Referring to fig. 1, an embodiment of the portable electronic device 1 of the present invention may be, for example, a smart phone or a tablet computer, but is not limited thereto. The portable electronic device 1 is installed with a virtual object collection application program and includes a display unit 11, an input unit 12, an image capturing unit 13, a positioning unit 14, a storage unit 15, and a processing unit 16 electrically connected to the display unit 11, the input unit 12, the image capturing unit 13, the positioning unit 14, and the storage unit 15.
The storage unit 15 stores a plurality of predetermined locations D1, a plurality of virtual objects D2, a plurality of virtual elements D3, a plurality of image templates D4, a collected element list D5, and a collected object list D6. The virtual objects D2 represent, for example, a plurality of different animal species; for instance, two of the virtual objects D2 represent a five-color bird and a red-bellied squirrel, respectively, but are not limited thereto. The virtual elements D3 respectively correspond to the predetermined locations D1 and the image templates D4, and each image template D4 is related to an environmental scene of its predetermined location D1. Each of the virtual elements D3 corresponds to one of the virtual objects D2 and represents a trace left by that virtual object D2. For example, three of the virtual elements D3 represent a five-color bird feather, five-color bird droppings, and a red-bellied squirrel footprint, but are not limited thereto. Specifically, in the present embodiment, the virtual objects D2 and the virtual elements D3 are both in the form of images (as shown in fig. 3 and 6), drawn according to the actual appearance and features of the animals or traces they represent. In other embodiments, the virtual objects D2 may also represent a plurality of different plant species, and the virtual elements D3 may be related to the leaves, fruits, flowers, and so on of the various plants, but are not limited to this embodiment.
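The relationships among the stored data D1-D6 described above can be made concrete with a small, purely illustrative data-model sketch (the class and field names are hypothetical; the patent only names the stored lists):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class VirtualObject:
    """D2: e.g. an animal species such as the five-color bird."""
    name: str

@dataclass(frozen=True)
class VirtualElement:
    """D3: a trace left by exactly one virtual object."""
    name: str
    obj: VirtualObject   # each element corresponds to one virtual object (D2)
    location: str        # its predetermined location (D1)
    template_id: str     # the image template of that location's scenery (D4)

@dataclass
class Storage:
    """Sketch of the storage unit 15's two user-facing lists."""
    collected_elements: list = field(default_factory=list)  # D5
    collected_objects: list = field(default_factory=list)   # D6
```

The key structural point from the description survives in the sketch: an element always carries a reference to its single owning object, while one object may own several elements (feather and droppings both pointing at the five-color bird).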
The image capturing unit 13 is used for continuously generating an original image. The positioning unit 14 is used for continuously generating a piece of position data representing the position of the portable electronic device 1.
When the portable electronic device 1 is operated by the user to open the virtual object collection application, the portable electronic device 1 then executes a virtual object collection method. The virtual object collecting method comprises a collecting process and a response process. Referring to fig. 2, the acquisition process will be described in detail.
First, in step S1, the processing unit 16 displays map information via the display unit 11. The map information is, for example, in the form of a planar map similar to a Google map, and indicates the location of the portable electronic device 1 and the locations of the predetermined locations D1. The user can travel to one of the predetermined locations D1 by following the map information displayed on the display unit 11. Next, step S2 is executed.
In step S2, the processing unit 16 determines whether the portable electronic device 1 is located at one of the predetermined locations D1 according to the current position data generated by the positioning unit 14. For convenience, when the portable electronic device 1 is located at one of the predetermined locations D1, the virtual element D3 corresponding to the predetermined location D1 where the portable electronic device 1 is currently located is defined as a virtual element to be collected. If the determination result of the processing unit 16 is yes, step S3 is executed. If the determination result of the processing unit 16 is no, step S2 is executed again.
In step S3, the processing unit 16 controls the image capturing unit 13 to generate the original image and displays the original image via the display unit 11. The user can point the lens (not shown) of the image capturing unit 13 toward the surrounding environment, much like a naturalist searching for traces of animals and plants. Next, step S4 is executed.
In step S4, the processing unit 16 determines whether the current original image matches one of the image templates D4 corresponding to the virtual element to be collected; in other words, the processing unit 16 determines whether the current original image and that image template D4 depict the same scene. If the determination result of the processing unit 16 is yes, step S5 is executed. If the determination result of the processing unit 16 is no, step S4 is executed again.
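The patent describes step S4 only at the level of "conforms to" a template. As a purely illustrative sketch, assuming frames and templates have been reduced to feature vectors by some unspecified extractor, a threshold test over cosine similarity could stand in for the matching:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def matches_template(frame_vec, template_vecs, threshold=0.9):
    """Step S4 sketch: the frame 'conforms to' the element's templates
    when its similarity to any of them clears the threshold."""
    return any(cosine(frame_vec, t) >= threshold for t in template_vecs)
```

The threshold value and the feature representation are both assumptions; a real implementation would more likely use a dedicated image-matching pipeline, but the control flow (loop in S4 until a template matches, then proceed to S5) is the same.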
In step S5, the processing unit 16 displays the virtual element to be collected via the display unit 11. Specifically, the processing unit 16 generates an augmented reality image according to the current original image and the virtual element to be collected, and then displays the augmented reality image through the display unit 11, as shown in fig. 3. Next, step S6 is executed.
In step S6, the processing unit 16 determines whether a collection command, input by the user via the input unit 12 and representing collection of the virtual element to be collected, is received. In this embodiment, the collection instruction is associated with the one of a plurality of virtual collection tools that corresponds to the virtual element to be collected. For example, the user must first select a virtual tweezers tool to collect the virtual element representing the "five-color bird feather", a virtual shovel tool to collect the virtual element representing the "five-color bird droppings", and a virtual camera tool to collect the virtual element representing the "red-bellied squirrel footprint", but it is not limited thereto. If the processing unit 16 determines that the collection instruction is received, step S7 is executed. If the processing unit 16 determines that the collection instruction is not received, step S6 is executed again.
In step S7, the processing unit 16 adds the to-be-collected virtual element to the collected element list D5. The processing unit 16 can also display the collected element list D5, for example, in the form shown in fig. 5, for the user to browse the collected virtual element D3.
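The tool-matching rule of steps S6-S7 can be sketched as follows. The element-to-tool pairs come from the examples in the embodiment (tweezers for the feather, shovel for the droppings, camera for the footprint); the function and dictionary names are hypothetical:

```python
# Hypothetical mapping taken from the embodiment's examples.
REQUIRED_TOOL = {
    "five-color bird feather": "tweezers",
    "five-color bird droppings": "shovel",
    "red-bellied squirrel footprint": "camera",
}

def try_collect(element, tool, collected_elements):
    """Steps S6-S7 sketch: the element is added to the collected element
    list (D5) only when the user has selected its matching virtual tool."""
    if REQUIRED_TOOL.get(element) != tool:
        return False  # wrong tool: stay in step S6
    collected_elements.append(element)  # step S7
    return True
```

This captures the loop structure of the flowchart: a mismatched tool leaves the list untouched and the flow back in S6, while a matching tool advances to S7 and records the element.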
The above-mentioned steps S1 to S7 are the collecting process of the virtual object collecting method. Next, referring to fig. 4, a detailed description is given below of a response flow of the virtual object collection method.
First, in step S8, the processing unit 16 determines whether a response request input by the user is received via the input unit 12. The response request is associated with one or more virtual elements D3 in the collected element list D5. Referring to fig. 5, more specifically, the response request is generated by the user selecting one or more virtual elements D3 in the collected element list D5 through the input unit 12. For convenience of illustration, each virtual element D3 selected by the user in the collected element list D5 is defined as a target virtual element. It is further noted that, in the present embodiment, the one or more target virtual elements must correspond to a single one of the virtual objects D2. That is, the response request is generated whenever the user selects only one target virtual element; if the user selects a plurality of target virtual elements, the response request is generated only when all of the target virtual elements correspond to the same virtual object D2. For example, if the two target virtual elements selected by the user represent the "five-color bird feather" and the "five-color bird droppings", respectively, the response request will be generated, because both target virtual elements correspond to the virtual object D2 representing the "five-color bird". As another example, if the two target virtual elements selected by the user represent the "five-color bird feather" and the "red-bellied squirrel footprint", respectively, the response request will not be generated, because the two target virtual elements correspond to two different virtual objects D2, representing the "five-color bird" and the "red-bellied squirrel", respectively. If the processing unit 16 determines that the response request is received, step S9 is executed. If the processing unit 16 determines that the response request is not received, step S8 is executed again.
In step S9, for convenience of illustration, the virtual object D2 corresponding to the one or more target virtual elements is defined as a target virtual object. As shown in fig. 5, the processing unit 16 generates a plurality of different object options 100 according to the response request, and displays the object options 100 through the display unit 11. Each of the object options 100 is associated with one of the virtual objects D2, and a target object option among the object options 100 is associated with the target virtual object. In the present embodiment, the object options 100 are displayed in the form of text blocks, but in other embodiments they may also be displayed, for example, in the form of animated images, and are not limited to the present embodiment. Next, step S10 is executed.
In step S10, the processing unit 16 determines whether a response selection, input by the user via the input unit 12 and associated with one of the object options 100, is received. If the determination result of the processing unit 16 is yes, step S11 is executed. If the determination result of the processing unit 16 is no, step S10 is executed again.
In step S11, the processing unit 16 determines whether the response selection is related to the target object option. For example, assume that in step S8 the user selects two target virtual elements D3 representing the "five-color bird feather" and the "five-color bird droppings", respectively, to generate the response request, and that in step S9 the processing unit 16 generates three object options 100 representing the "five-color bird", the "red-bellied squirrel", and the "red-crowned chicken", respectively, according to the response request. Here, the object option 100 representing the "five-color bird" is the target object option, so the determination result of the processing unit 16 in this step is yes only if the response selection input by the user in step S10 is related to the object option 100 representing the "five-color bird". If the determination result of the processing unit 16 is yes, step S12 is executed. If the determination result of the processing unit 16 is no, step S13 is executed.
In step S12, the processing unit 16 displays a correct answer message via the display unit 11 and adds the target virtual object D2 to the collected object list D6. Referring to fig. 6, it should be noted that the processing unit 16 not only can display the collected virtual object D2 in the collected object list D6 through the display unit 11, but also can display an introduction message related to the collected virtual object D2.
In step S13, the processing unit 16 displays a wrong answer message via the display unit 11.
The aforementioned steps S8 to S13 are the response flow of the virtual object collection method.
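The single-object rule of step S8 (a response request is generated only when every selected element maps to the same virtual object) can be sketched compactly. This is an illustration only; the function name and the element-to-object mapping argument are hypothetical:

```python
def response_request_target(selected_elements, element_to_object):
    """Step S8 sketch: return the shared target virtual object when every
    selected target virtual element corresponds to the same virtual object,
    or None when the selection spans different objects (no request is
    generated in that case)."""
    objects = {element_to_object[e] for e in selected_elements}
    if len(objects) == 1:
        return objects.pop()
    return None
```

Using the embodiment's examples, selecting the feather and the droppings yields the five-color bird as target (so the request is generated), while mixing the feather with the squirrel footprint yields no target.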
It should be noted that, in other embodiments, after the processing unit 16 determines in step S11 that the user has input the correct response selection, a certain amount of virtual currency can be awarded to the user, and the virtual currency can be used, for example, to exchange for more virtual collection tools, but it is not limited thereto.
In addition, the virtual object collection method is not limited to outdoor applications. For example, in another embodiment, the virtual object collection method can also be applied to a museum or an art gallery. In that embodiment, the portable electronic device 1 neither displays map information nor determines whether its own location is at any predetermined location D1. Instead, a plurality of two-dimensional bar codes are arranged along a visiting route of the museum, and when a user finds any one of the two-dimensional bar codes during the visit, the user can scan it with the image capturing unit 13 of the portable electronic device 1 to acquire the virtual element D3 corresponding to that two-dimensional bar code. In such an embodiment, the virtual objects D2 can be, for example, historical sites, relics, sculptures, paintings, and the like, and the virtual elements D3 can be puzzles, image data, voice data, or background presentations related to those objects, but are not limited thereto.
In summary, by executing the virtual object collection method, the present embodiment lets the user enjoy collecting the virtual elements D3, answering the object options 100, and collecting the virtual objects D2. In addition, in the present embodiment, the virtual objects D2 and the virtual elements D3 are based on various animals in nature, so the user must infer from the image of each virtual element D3 which virtual object D2 it corresponds to in order to collect the various virtual objects D2. Further, when browsing the collected object list D6, the user can not only view the collected virtual objects D2 and gain a sense of achievement, but can also read the introduction messages related to the collected virtual objects D2, thereby increasing knowledge. Moreover, the present embodiment can be applied to indoor venues such as museums and art galleries, and to individual visitors as well as group visitors, further enhancing the interest of a visit and achieving the effect of learning through play. Therefore, the object of the present invention can indeed be achieved.
The above description is only an example of the present invention, and the scope of the present invention should not be limited thereby, and all the simple equivalent changes and modifications made according to the claims and the description of the present invention are included in the scope of the present invention.

Claims (6)

1. A virtual object collection method, implemented by a portable electronic device comprising a display unit, an input unit and a processing unit, characterized in that the method comprises the following steps:
(A) the processing unit of the portable electronic device displays one virtual element to be acquired of the plurality of virtual elements through the display unit;
(B) when the processing unit displays the virtual element to be acquired through the display unit and receives an acquisition instruction which is input by a user and represents acquisition of the virtual element to be acquired through the input unit, the processing unit adds the virtual element to be acquired into an acquired element list;
(C) when the processing unit receives a response request which is input by a user and is related to at least one target virtual element in the acquired element list and the at least one target virtual element corresponds to one target virtual object of a plurality of virtual objects through the input unit, the processing unit generates a plurality of different object options according to the response request and displays the object options through the display unit, wherein each object option is related to one of the virtual objects, and one target object option of the object options is related to the target virtual object;
(D) when the processing unit receives a response selection which is input by a user and is related to one of the object options through the input unit, the processing unit judges whether the response selection is related to the target object option or not; and
(E) when the processing unit judges that the answer selection is related to the target object option, the processing unit adds the target virtual object into a collected object list;
the portable electronic device further comprises an image capturing unit for continuously generating an original image, and the method further comprises, before step (A):
(F) the processing unit judges whether the current original image conforms to the one of a plurality of image templates that corresponds to the virtual element to be acquired; and
when the processing unit judges that the current original image conforms to one of the image templates corresponding to the virtual element to be acquired, the processing unit generates an augmented reality image according to the current original image and the virtual element to be acquired, and displays the augmented reality image through the display unit.
2. The virtual object collection method of claim 1, wherein: the portable electronic device further comprises a positioning unit for continuously generating a position data representing the position of the portable electronic device, wherein the method further comprises, before the step (F):
(G) the processing unit judges, according to the current position data, whether the portable electronic device is located at the one of a plurality of predetermined locations that corresponds to the virtual element to be acquired; and
in step (F), when the processing unit judges according to the current position data that the portable electronic device is located at the predetermined location corresponding to the virtual element to be acquired, the processing unit judges whether the current original image conforms to one of the image templates corresponding to the virtual element to be acquired.
3. The virtual object collection method of claim 1, wherein in step (B) the acquisition instruction is related to one of a plurality of virtual acquisition tools that corresponds to the virtual element to be acquired.
4. A portable electronic device, characterized in that the portable electronic device comprises:
a display unit;
an input unit; and
a processing unit electrically connected to the display unit and the input unit;
the processing unit displays, through the display unit, one to-be-acquired virtual element among a plurality of virtual elements;
when the processing unit is displaying the virtual element to be acquired through the display unit and receives, through the input unit, an acquisition instruction input by the user that represents acquisition of the virtual element to be acquired, the processing unit adds the virtual element to be acquired to an acquired-element list;
when the processing unit receives, through the input unit, an answer request input by the user that is related to at least one target virtual element in the acquired-element list, the at least one target virtual element corresponding to one target virtual object among a plurality of virtual objects, the processing unit generates a plurality of different object options according to the answer request and displays the object options through the display unit, wherein each object option is related to one of the virtual objects, and one target object option among the object options is related to the target virtual object;
when the processing unit receives, through the input unit, an answer selection input by the user that is related to one of the object options, the processing unit determines whether the answer selection is related to the target object option;
when the processing unit determines that the answer selection is related to the target object option, the processing unit adds the target virtual object to a collected-object list;
the portable electronic device further comprises an image capturing unit for continuously generating an original image;
the processing unit determines whether the current original image conforms to one of a plurality of image templates that corresponds to the virtual element to be acquired, and when the processing unit determines that the current original image conforms to one of the image templates corresponding to the virtual element to be acquired, the processing unit generates an augmented reality image according to the current original image and the virtual element to be acquired and displays the virtual element to be acquired by displaying the augmented reality image through the display unit.
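The claims leave open how an original image "conforms to" a stored image template. One plausible criterion, shown purely as an illustration (the patent does not name a matching algorithm), is normalized cross-correlation between the image and each template over equal-size grayscale pixel arrays:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length pixel sequences,
    in [-1, 1]; returns 0.0 for flat (zero-variance) inputs."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    if da == 0 or db == 0:
        return 0.0
    return num / (da * db)

def conforming_template(image, templates, threshold=0.9):
    """Return the name of the first template the current original image
    conforms to (correlation >= threshold), or None if there is none."""
    for name, tpl in templates.items():
        if len(tpl) == len(image) and ncc(image, tpl) >= threshold:
            return name
    return None
```

A hit would trigger compositing the virtual element over the current original image to form the augmented reality image; a miss leaves only the raw camera frame displayed.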
5. The portable electronic device of claim 4, wherein the portable electronic device further comprises a positioning unit for continuously generating position data representing the position of the portable electronic device;
the processing unit determines, according to the current position data, whether the portable electronic device is located at one of a plurality of preset places that corresponds to the virtual element to be acquired, and when the processing unit determines, according to the current position data, that the portable electronic device is located at one of the preset places corresponding to the virtual element to be acquired, the processing unit determines whether the current original image conforms to one of the image templates corresponding to the virtual element to be acquired.
6. The portable electronic device of claim 4, wherein the acquisition instruction is related to one of a plurality of virtual acquisition tools that corresponds to the virtual element to be acquired.
CN201710361691.1A 2017-05-22 2017-05-22 Virtual object collection method and portable electronic device Active CN108932088B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710361691.1A CN108932088B (en) 2017-05-22 2017-05-22 Virtual object collection method and portable electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710361691.1A CN108932088B (en) 2017-05-22 2017-05-22 Virtual object collection method and portable electronic device

Publications (2)

Publication Number Publication Date
CN108932088A CN108932088A (en) 2018-12-04
CN108932088B true CN108932088B (en) 2020-07-14

Family

ID=64450640

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710361691.1A Active CN108932088B (en) 2017-05-22 2017-05-22 Virtual object collection method and portable electronic device

Country Status (1)

Country Link
CN (1) CN108932088B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111957030A (en) * 2019-05-20 2020-11-20 黄明月 Augmented reality game system and method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103002410A (en) * 2012-11-21 2013-03-27 北京百度网讯科技有限公司 Augmented reality method and system for mobile terminals and mobile terminals
CN103902032A (en) * 2012-12-27 2014-07-02 鸿富锦精密工业(深圳)有限公司 Display device, portable device and virtual situation control method
CN104461318A (en) * 2013-12-10 2015-03-25 苏州梦想人软件科技有限公司 Touch read method and system based on augmented reality technology
CN104575215A (en) * 2013-10-09 2015-04-29 郑夙芬 3D nursing situational simulation digital learning system
CN105844991A (en) * 2016-05-30 2016-08-10 湖南亿谷科技发展股份有限公司 Method and device for processing answer data
TWM528001U (en) * 2016-05-24 2016-09-01 Nat Taichung University Science & Technology Augmented reality environment learning device integrated with concept map
TWM537702U (en) * 2016-11-19 2017-03-01 Ping-Yuan Tsai Augmented reality learning and reference system and architecture thereof
CN106648322A (en) * 2016-12-21 2017-05-10 广州市动景计算机科技有限公司 Method of triggering interactive operation with virtual object and device and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9098948B2 (en) * 2010-10-22 2015-08-04 Telefonaktiebolaget L M Ericsson (Publ) Image matching apparatus and image matching method
US20130339118A1 (en) * 2012-06-14 2013-12-19 Gbl Systems Corporation Bulk purchasing by ad hoc consumer groups

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A context-aware progressive inquiry-based; Hui-Chun Chu, et al.; 2016 5th IIAI International Congress on Advanced Applied Informatics; 2016-09-01; pp. 353-356 *
Research on the Application of Augmented Reality Technology in Museums; Li Wenxia, et al.; Computer Knowledge and Application; 2014-01-31; Vol. 10, No. 1; pp. 160-162, 184 *

Also Published As

Publication number Publication date
CN108932088A (en) 2018-12-04

Similar Documents

Publication Publication Date Title
CN104461318B (en) Reading method based on augmented reality and system
KR20240038163A (en) Body pose estimation
CN109688451B (en) Method and system for providing camera effect
CN109815776B (en) Action prompting method and device, storage medium and electronic device
KR101992424B1 (en) Apparatus for making artificial intelligence character for augmented reality and service system using the same
US11657085B1 (en) Optical devices and apparatuses for capturing, structuring, and using interlinked multi-directional still pictures and/or multi-directional motion pictures
CN103562957B (en) Information provider unit, information providing method and information providing system
CN108109161B (en) Video data real-time processing method and device based on self-adaptive threshold segmentation
CN111651047B (en) Virtual object display method and device, electronic equipment and storage medium
CN109035415B (en) Virtual model processing method, device, equipment and computer readable storage medium
CN111359201A (en) Jigsaw puzzle type game method, system and equipment
CN106464773A (en) Augmented reality apparatus and method
CN114332374A (en) Virtual display method, equipment and storage medium
TW201814590A (en) Mobile electronic device and server
CN112230765A (en) AR display method, AR display device, and computer-readable storage medium
CN104615639B (en) A kind of method and apparatus for providing the presentation information of picture
TWI640952B (en) Virtual object collection method and portable electronic device
JP7315321B2 (en) Generation device, generation method and generation program
CN108932088B (en) Virtual object collection method and portable electronic device
CN114092670A (en) Virtual reality display method, equipment and storage medium
CN110036356B (en) Image processing in VR systems
CN112702643B (en) Barrage information display method and device and mobile terminal
KR20130095488A (en) Apparatus and method for providing three-dimensional model in the smart device
CN114647303A (en) Interaction method, device and computer program product
CN111640190A (en) AR effect presentation method and apparatus, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant