CN113329218A - Augmented reality combining method, device and equipment for underwater shooting and storage medium - Google Patents

Augmented reality combining method, device and equipment for underwater shooting and storage medium

Info

Publication number
CN113329218A
CN113329218A
Authority
CN
China
Prior art keywords
video
augmented reality
underwater
additional information
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110591919.2A
Other languages
Chinese (zh)
Inventor
唐俊平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Finyuan Innovation Technology Co ltd
Original Assignee
Qingdao Finyuan Innovation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Finyuan Innovation Technology Co ltd filed Critical Qingdao Finyuan Innovation Technology Co ltd
Priority to CN202110591919.2A priority Critical patent/CN113329218A/en
Publication of CN113329218A publication Critical patent/CN113329218A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/20 Image signal generators
    • H04N 13/261 Image signal generators with monoscopic-to-stereoscopic image conversion

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the invention discloses an augmented reality combining method and apparatus for underwater shooting, an electronic device, and a storage medium. The method includes: acquiring, in real time, video shot by an underwater robot; computing target identification points in the shot video; acquiring additional information for the target identification points; and performing augmented reality processing on the shot video according to the additional information. In the method, the video signal is processed a second time after the underwater robot's video is received: objects and scenes appearing in the video are actively identified, and interaction is then added to specific objects or scenes in augmented reality form. This solves the prior-art problem that an underwater robot can only transmit conventional audio and video, and achieves richer information transmission and enhanced interactivity.

Description

Augmented reality combining method, device and equipment for underwater shooting and storage medium
Technical Field
Embodiments of the invention relate to augmented reality technology, and in particular to an augmented reality combining method, apparatus, and device for underwater shooting, and a storage medium.
Background
Existing robot shooting schemes are based on conventional audio and video transmission: in the software a user interacts with, only the picture transmitted by the robot's camera can be seen. Such schemes merely display information; the source data is sent to the user directly without secondary processing, so the experience they offer is limited.
Disclosure of Invention
The invention provides an augmented reality combining method, apparatus, device, and storage medium for underwater shooting, to achieve richer information transmission and enhanced interactivity.
In a first aspect, an embodiment of the present invention provides an augmented reality combining method for underwater shooting, including:
acquiring a shooting video of the underwater robot in real time;
calculating a target identification point in the shot video;
acquiring additional information of the target identification point;
and carrying out augmented reality processing on the shot video according to the additional information.
Optionally, before acquiring the shooting video of the underwater robot in real time, the method further includes:
and establishing communication connection with the underwater robot.
Optionally, after acquiring the shooting video of the underwater robot in real time, the method further includes:
and decoding the video format of the shot video to obtain a processed video.
Optionally, after the video format decoding is performed on the shot video to obtain the processed video, the method further includes:
and sampling objects and scenes in the processed video to generate sampling data.
Optionally, the calculating the target recognition point in the captured video includes:
matching and identifying objects and scenes appearing in the shot video;
and recording the position coordinates of the object and the scene.
Optionally, the acquiring additional information of the target identification point includes:
acquiring the type of the target identification point;
and matching additional information in a database according to the type of the target identification point.
Optionally, the performing augmented reality processing on the shot video according to the additional information includes:
processing the additional information according to a user preset rule;
combining the processed additional information with the captured video.
In a second aspect, an embodiment of the present invention further provides an augmented reality combining apparatus for underwater shooting, where the apparatus includes:
the data acquisition module is used for acquiring a shooting video of the underwater robot in real time;
the data matching module is used for calculating a target identification point in the shot video;
the data query module is used for acquiring additional information of the target identification point;
and the data processing module is used for carrying out augmented reality processing on the shot video according to the additional information.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the augmented reality combining method for underwater shooting described in any one of the above.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, the computer program including program instructions which, when executed by a processor, implement the augmented reality combining method for underwater shooting described in any one of the above.
An embodiment of the invention discloses an augmented reality combining method and apparatus for underwater shooting, an electronic device, and a storage medium. The method includes: acquiring, in real time, video shot by an underwater robot; computing target identification points in the shot video; acquiring additional information for the target identification points; and performing augmented reality processing on the shot video according to the additional information. In the method, the video signal is processed a second time after the underwater robot's video is received: objects and scenes appearing in the video are actively identified, and interaction is then added to specific objects or scenes in augmented reality form. This solves the prior-art problem that an underwater robot can only transmit conventional audio and video, and achieves richer information transmission and enhanced interactivity.
Drawings
Fig. 1 is a flowchart of a method for combining augmented reality for underwater photography according to an embodiment of the present invention;
fig. 2 is a flowchart of a method of combining augmented reality for underwater photography according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of an augmented reality combining device for underwater shooting according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an apparatus according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. A process may be terminated when its operations are completed, but may have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc.
Furthermore, the terms "first," "second," and the like may be used herein to describe various orientations, actions, steps, elements, or the like, but the orientations, actions, steps, or elements are not limited by these terms. These terms are only used to distinguish one direction, action, step or element from another direction, action, step or element. For example, a first module may be termed a second module, and, similarly, a second module may be termed a first module, without departing from the scope of the present application. The first module and the second module are both modules, but they are not the same module. The terms "first", "second", etc. are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Example one
Fig. 1 is a flowchart of an augmented reality combining method for underwater shooting according to a first embodiment of the present invention. The method is applicable to cases where video shot by an underwater robot is given augmented reality processing. Specifically, the method includes:
and step 100, acquiring a shooting video of the underwater robot in real time.
In this embodiment, the underwater robot, also called an unmanned remotely operated vehicle (ROV), is a robot that performs work in extreme underwater conditions. Because the underwater environment is hazardous and the depth humans can dive to is limited, underwater robots have become important tools for developing the ocean, and underwater photographing robots are widely used in shooting ocean documentaries. The underwater robot is connected to a control terminal over a communication link for data transmission; in this embodiment, the control terminal may be a smart device such as a mobile phone, tablet, or computer. The control terminal acquires the robot's shot video in real time; the video contains various underwater objects and scenes, such as fish, coral, and shipwrecks.
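As a hedged illustration (not part of the patent text), the real-time acquisition at the control terminal can be sketched as a "latest frame only" buffer: frames arriving from the robot's stream replace older ones, so the augmented reality processing always works on the most recent picture rather than falling behind the live feed. The stream source here is simulated; a real client would wrap, e.g., an RTSP feed from the ROV.

```python
from collections import deque

class LatestFrameBuffer:
    """Keeps only the most recent frame from the robot's live stream,
    so AR processing never lags the feed. Sketch only; the frame dicts
    and the stream source are hypothetical stand-ins."""

    def __init__(self):
        self._buf = deque(maxlen=1)  # deque(maxlen=1) drops stale frames automatically

    def push(self, frame):
        self._buf.append(frame)

    def latest(self):
        return self._buf[-1] if self._buf else None

buf = LatestFrameBuffer()
for i in range(5):              # simulate five frames arriving from the ROV
    buf.push({"frame_id": i})
print(buf.latest())             # only the newest frame survives
```

A bounded `deque` is used instead of an unbounded queue precisely because, for a live view, a dropped stale frame is preferable to growing latency.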
And step 110, calculating a target identification point in the shot video.
In this embodiment, a target identification point is the identification result for one of the objects or scenes in the shot video. Specifically, objects and scenes in the shot video are identified by pattern matching; for example, a photographed fish is accurately identified and judged to be a shark by AI image recognition. AI image recognition works from the principal features of an image: every image has its own features — the letter A has a peak, P has a loop, and the center of Y has an acute angle. Eye-movement studies of image recognition show that the gaze always concentrates on an image's principal features, i.e., the places where the contour's curvature is greatest or its direction changes abruptly, since these carry the most information, and that the eye's scan path moves from one such feature to the next in turn. In the image recognition process, the perception mechanism must therefore discard redundant input and extract the key information.
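The pattern-matching step above can be sketched in miniature (this is an assumption-laden toy, not the patent's actual recogniser — a real system would use a trained model or, e.g., OpenCV's `cv2.matchTemplate`): slide a small template over a 2-D grayscale grid, score each position by sum of absolute differences, and return the coordinates of the best match, which is exactly the "record the position coordinates" part of the step.

```python
def match_template(image, template):
    """Naive pattern matching: exhaustive sum-of-absolute-differences search
    of `template` over `image` (both lists of lists of intensities).
    Returns the (row, col) of the best-matching top-left corner."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    best_score, best_pos = float("inf"), None
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            score = sum(abs(image[r + i][c + j] - template[i][j])
                        for i in range(h) for j in range(w))
            if score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# A 4x4 "frame" containing the 2x2 pattern at position (1, 1):
image = [[0, 0, 0, 0],
         [0, 9, 8, 0],
         [0, 7, 9, 0],
         [0, 0, 0, 0]]
template = [[9, 8],
            [7, 9]]
print(match_template(image, template))  # → (1, 1)
```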
And step 120, acquiring additional information of the target identification point.
In this embodiment, the additional information includes data such as a description, dimensions, and background information of the object or scene, which the user can select as needed. For example, if the final recognition result of a target identification point is a shark, relevant existing data about sharks is obtained through data matching, e.g.: "Sharks belong to the subclass Elasmobranchii of the cartilaginous fishes (class Chondrichthyes) in the subphylum Vertebrata. They are marine, fast-swimming medium and large fishes, with a small number of species entering fresh water. The shark endoskeleton is composed entirely of cartilage, which may be calcified but contains no true bone tissue; the exoskeleton is undeveloped or degenerated, and the body is usually covered with dermal denticles (placoid scales). The teeth are diverse and hard, membrane bones are entirely absent, and the braincase is seamless. The upper jaw is formed by the palatoquadrate cartilage and the lower jaw by Meckel's cartilage."
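The data-matching lookup described here (and the claims' "matching additional information in a database according to the type of the target identification point") can be sketched with Python's standard-library `sqlite3`. The table schema, the `kind` key, and the sample rows are all hypothetical; they only illustrate keying stored descriptions by the recognised type.

```python
import sqlite3

# Hypothetical in-memory store of additional information, keyed by the
# recognised target type. A real system would persist and update this database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE info (kind TEXT PRIMARY KEY, description TEXT)")
conn.executemany("INSERT INTO info VALUES (?, ?)", [
    ("shark", "Cartilaginous fish of the class Chondrichthyes."),
    ("coral", "Marine invertebrates of the class Anthozoa."),
])

def lookup_additional_info(kind):
    """Return the stored description for a recognised type, or None
    if the database has no entry (e.g. an unrecognised object)."""
    row = conn.execute(
        "SELECT description FROM info WHERE kind = ?", (kind,)
    ).fetchone()
    return row[0] if row else None

print(lookup_additional_info("shark"))
```

When the lookup misses, a real implementation could fall back to the online matching the patent mentions and write the result back for the next query.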
And step 130, performing augmented reality processing on the shot video according to the additional information.
In this embodiment, the acquired additional information and the shot video are combined through augmented reality processing. Augmented Reality (AR) is a technology that integrates real-world information with virtual-world content. On the basis of computer and related technologies, it simulates entity information that would otherwise be difficult to experience within the real world, overlays that virtual content onto the real world so that it can be perceived by the human senses, and thereby delivers a sensory experience beyond reality. Once the real environment and virtual objects are superimposed, they coexist in the same picture and the same space in real time. Augmented reality can thus present real-world content effectively while also promoting the display of virtual content, the two complementing and overlaying each other. In visual augmented reality, the user wears a head-mounted display so that the real world and computer graphics are superimposed and the real world is seen surrounded by the graphics. Augmented reality draws on technologies such as multimedia, three-dimensional modeling, and scene fusion, and the information it provides differs markedly from what humans can otherwise perceive.
For example, if the additional information includes a background introduction to sharks, that introduction is generated around the shark in the video through augmented reality; the user learns the background information about the shark immediately while watching, instead of having to look it up on a website separately. In other embodiments, the user's own picture can also be combined with the video — for example, the user riding on the shark — which adds interest and improves the user experience.
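One concrete detail of "generating the introduction around the shark" is caption placement: the overlay must sit next to the detected object without leaving the frame. A hedged sketch (the bounding-box convention `(x, y, w, h)` in pixels and the margins are assumptions, not from the patent):

```python
def label_anchor(bbox, frame_w, frame_h, label_w, label_h, margin=4):
    """Place a caption just above a detected object's bounding box,
    clamped so the caption stays inside the frame. `bbox` is (x, y, w, h)
    with the origin at the top-left corner, as is conventional for images."""
    x, y, w, h = bbox
    lx = min(max(x, 0), frame_w - label_w)   # clamp horizontally into the frame
    ly = y - label_h - margin                # try just above the box
    if ly < 0:                               # no room above: put it below instead
        ly = y + h + margin
    return lx, ly

# Shark detected near the top edge of a 640x480 frame: the 120x16 caption
# cannot fit above the box, so it is placed below it.
print(label_anchor(bbox=(100, 10, 50, 30), frame_w=640, frame_h=480,
                   label_w=120, label_h=16))  # → (100, 44)
```

With a library such as OpenCV, the returned anchor would feed a text-drawing call on each frame; the placement logic itself is library-independent.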
An embodiment of the invention discloses an augmented reality combining method for underwater shooting, including: acquiring, in real time, video shot by an underwater robot; computing target identification points in the shot video; acquiring additional information for the target identification points; and performing augmented reality processing on the shot video according to the additional information. In the method, the video signal is processed a second time after the underwater robot's video is received: objects and scenes appearing in the video are actively identified, and interaction is then added to specific objects or scenes in augmented reality form. This solves the prior-art problem that an underwater robot can only transmit conventional audio and video, and achieves richer information transmission and enhanced interactivity.
Example two
Fig. 2 is a flowchart of an augmented reality combining method for underwater shooting according to a second embodiment of the present invention. The method is applicable to cases where video shot by an underwater robot is given augmented reality processing. Specifically, the method includes:
and 200, establishing communication connection with the underwater robot.
In this embodiment, the control terminal and the underwater robot communicate in real time over a wired or wireless connection. The control terminal first enters processing mode; once the communication connection with the underwater robot is established, the transmitted video can be given augmented reality processing in real time. This ensures that what the user watches is the processed video, improving user satisfaction.
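The patent does not specify how the connection is established; as a hedged sketch only, a control terminal might retry the link a few times, since underwater tether or wireless links can fail transiently. The `open_link` callable here is a hypothetical stand-in for whatever transport the real system uses.

```python
import time

def connect_with_retry(open_link, attempts=3, delay_s=1.0):
    """Establish the control-terminal <-> robot link, retrying on
    transient failures. `open_link` is a hypothetical callable that
    returns a connection object or raises OSError."""
    last_err = None
    for _ in range(attempts):
        try:
            return open_link()
        except OSError as err:
            last_err = err
            time.sleep(delay_s)
    raise ConnectionError("could not reach underwater robot") from last_err

# Demo with a fake link that fails once, then succeeds:
state = {"tries": 0}
def fake_link():
    state["tries"] += 1
    if state["tries"] < 2:
        raise OSError("link down")
    return "connected"

print(connect_with_retry(fake_link, delay_s=0))  # → connected
```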
And step 210, acquiring a shooting video of the underwater robot in real time.
In this embodiment, the underwater robot, also called an unmanned remotely operated vehicle (ROV), is a robot that performs work in extreme underwater conditions. Because the underwater environment is hazardous and the depth humans can dive to is limited, underwater robots have become important tools for developing the ocean, and underwater photographing robots are widely used in shooting ocean documentaries. The underwater robot is connected to a control terminal over a communication link for data transmission; in this embodiment, the control terminal may be a smart device such as a mobile phone, tablet, or computer. The control terminal acquires the robot's shot video in real time; the video contains various underwater objects and scenes, such as fish, coral, and shipwrecks.
And step 220, decoding the shot video in a video format to obtain a processed video.
In this embodiment, the shot video is decoded from its video format, which includes but is not limited to MPEG, AVI, nAVI, WMV, and MOV. Decoding the shot video yields the underlying video data and improves the identification accuracy for target points.
Step 230, sampling objects and scenes appearing in the processed video to generate sampling data.
In this embodiment, within each time period of the shot video, the objects and scenes appearing in the video content are sampled to generate sampling data, which includes the appearance, color, and other attributes of the various objects and scenes.
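Sampling "within each time period" rather than analysing every frame can be sketched as choosing one frame per period (the 2-second period here is an assumed illustration, not a value from the patent):

```python
def sample_indices(total_frames, fps, period_s=2.0):
    """Pick one frame index from each sampling period, so that object/scene
    sampling runs on a small subset of frames instead of the full stream."""
    step = max(1, int(fps * period_s))   # frames per period, never below 1
    return list(range(0, total_frames, step))

# A 10-second clip at 30 fps sampled every 2 s → five frames to analyse:
print(sample_indices(total_frames=300, fps=30))  # → [0, 60, 120, 180, 240]
```

Each selected frame would then be passed to the identification step, and its attributes (appearance, dominant color, and so on) recorded as the sampling data.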
And 240, calculating a target identification point in the shot video.
In this embodiment, step 240 includes: matching and identifying objects and scenes appearing in the shot video; and recording the position coordinates of the object and the scene.
Specifically, pattern matching is used to identify objects and scenes appearing in the video — typically via AI recognition technology — and the identification result is stored together with the coordinates of the corresponding identification point. Identification matching can query a preset database or match against a website in real time, with results written back to the database to facilitate the next identification.
And step 250, acquiring additional information of the target identification point.
In this embodiment, step 250 includes: acquiring the type of the target identification point; and matching additional information in a database according to the type of the target identification point.
Specifically, using the types of the previously identified objects and scenes, matching additional information is actively acquired from a database or other sources. The dimensions of objects and scenes are computed in real time by the control terminal, and measurement data can also be uploaded after real-time measurement by the robot's sensors. The additional information includes data such as a description, dimensions, and background information of the object or scene, which the user can select as needed. For example, if the final recognition result of a target identification point is a shark, relevant existing data about sharks is obtained through data matching, e.g.: "Sharks belong to the subclass Elasmobranchii of the cartilaginous fishes (class Chondrichthyes) in the subphylum Vertebrata. They are marine, fast-swimming medium and large fishes, with a small number of species entering fresh water. The shark endoskeleton is composed entirely of cartilage, which may be calcified but contains no true bone tissue; the exoskeleton is undeveloped or degenerated, and the body is usually covered with dermal denticles (placoid scales). The teeth are diverse and hard, membrane bones are entirely absent, and the braincase is seamless. The upper jaw is formed by the palatoquadrate cartilage and the lower jaw by Meckel's cartilage."
And step 260, performing augmented reality processing on the shot video according to the additional information.
In this embodiment, step 260 includes: processing the additional information according to a user preset rule; combining the processed additional information with the captured video.
Specifically, through the preset rules the user can, by their own selections, add more behaviors and interaction events to the additional information displayed in augmented reality. The acquired additional information is combined with the computed target identification points and displayed using information such as spatial position coordinates and angles. The additional information and the shot video are then combined through augmented reality processing. Augmented Reality (AR) is a technology that integrates real-world information with virtual-world content. On the basis of computer and related technologies, it simulates entity information that would otherwise be difficult to experience within the real world, overlays that virtual content onto the real world so that it can be perceived by the human senses, and thereby delivers a sensory experience beyond reality. Once the real environment and virtual objects are superimposed, they coexist in the same picture and the same space in real time. Augmented reality can thus present real-world content effectively while also promoting the display of virtual content, the two complementing and overlaying each other. In visual augmented reality, the user wears a head-mounted display so that the real world and computer graphics are superimposed and the real world is seen surrounded by the graphics.
Augmented reality draws on technologies such as multimedia, three-dimensional modeling, and scene fusion, and the information it provides differs markedly from what humans can otherwise perceive. For example, if the additional information includes a background introduction to sharks, that introduction is generated around the shark in the video through augmented reality; the user learns the background information about the shark immediately while watching, instead of having to look it up on a website separately. In other embodiments, the user's own picture can also be combined with the video — for example, the user riding on the shark — which adds interest and improves the user experience.
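The "processing the additional information according to a user preset rule" step can be sketched as a simple filter: the user's preset rule selects which fields to display and caps the caption length before it is combined with the video. The rule structure and field names below are hypothetical illustrations, not part of the patent.

```python
# Hypothetical user preset rule: which fields of the additional information
# to show, and a maximum caption length before truncation.
PRESET_RULES = {"fields": ["name", "size"], "max_chars": 40}

def apply_preset_rules(info, rules=PRESET_RULES):
    """Reduce a full additional-information record to the caption text the
    user asked for: keep only the selected fields, then truncate."""
    lines = [f"{k}: {info[k]}" for k in rules["fields"] if k in info]
    text = " | ".join(lines)
    return text[:rules["max_chars"]]

# The "habitat" field is dropped because the user's rule did not select it:
print(apply_preset_rules({"name": "shark", "size": "3.2 m", "habitat": "reef"}))
```

The returned caption would then be rendered at the identification point's coordinates, as described above.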
An embodiment of the invention discloses an augmented reality combining method for underwater shooting, including: acquiring, in real time, video shot by an underwater robot; computing target identification points in the shot video; acquiring additional information for the target identification points; and performing augmented reality processing on the shot video according to the additional information. In the method, the video signal is processed a second time after the underwater robot's video is received: objects and scenes appearing in the video are actively identified, and interaction is then added to specific objects or scenes in augmented reality form. This solves the prior-art problem that an underwater robot can only transmit conventional audio and video, and achieves richer information transmission and enhanced interactivity.
Example three
The augmented reality combining apparatus for underwater shooting provided by this embodiment can execute the augmented reality combining method for underwater shooting provided by any embodiment of the invention, and has the functional modules and beneficial effects corresponding to the executed method. Fig. 3 is a schematic structural diagram of an augmented reality combining apparatus 300 for underwater shooting in an embodiment of the present invention. Referring to fig. 3, the apparatus 300 may specifically include:
the data acquisition module 310 is used for acquiring a shooting video of the underwater robot in real time;
a data matching module 320, configured to calculate a target identification point in the captured video;
a data query module 330, configured to obtain additional information of the target identification point;
and the data processing module 340 is configured to perform augmented reality processing on the captured video according to the additional information.
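The four modules above form a pipeline: acquisition feeds matching, matching feeds the query, and processing combines everything. As a hedged sketch (the callable-based wiring is an illustration, not the patent's architecture), the chain might look like:

```python
class ARCombiner:
    """Sketch of the four modules chained as one per-frame pipeline.
    The function-valued dependencies are hypothetical stand-ins for the
    data acquisition, matching, query, and processing modules."""

    def __init__(self, acquire, match, query, render):
        self.acquire = acquire   # data acquisition module
        self.match = match       # data matching module
        self.query = query       # data query module
        self.render = render     # data processing module

    def step(self):
        frame = self.acquire()                    # get the latest frame
        points = self.match(frame)                # target identification points
        infos = [self.query(p) for p in points]   # additional info per point
        return self.render(frame, points, infos)  # AR-combined output

# Wire the pipeline with trivial stand-in modules:
combiner = ARCombiner(
    acquire=lambda: "frame",
    match=lambda f: [(10, 20)],
    query=lambda p: {"name": "shark"},
    render=lambda f, pts, infos: (f, list(zip(pts, infos))),
)
print(combiner.step())
```

Keeping the modules as injected callables mirrors the apparatus's module decomposition and makes each stage testable in isolation.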
Further, before acquiring the shooting video of the underwater robot in real time, the method further includes:
and establishing communication connection with the underwater robot.
Further, after acquiring the shooting video of the underwater robot in real time, the method further comprises:
and decoding the video format of the shot video to obtain a processed video.
Further, after the video format decoding is performed on the shot video to obtain the processed video, the method further includes:
and sampling objects and scenes in the processed video to generate sampling data.
Further, the calculating the target recognition point in the captured video includes:
matching and identifying objects and scenes appearing in the shot video;
and recording the position coordinates of the object and the scene.
Further, the acquiring additional information of the target identification point includes:
acquiring the type of the target identification point;
and matching additional information in a database according to the type of the target identification point.
Further, the performing augmented reality processing on the shot video according to the additional information includes:
processing the additional information according to a user preset rule;
combining the processed additional information with the captured video.
An embodiment of the invention discloses an augmented reality combining apparatus for underwater shooting, including: a data acquisition module for acquiring, in real time, video shot by an underwater robot; a data matching module for computing target identification points in the shot video; a data query module for acquiring additional information for the target identification points; and a data processing module for performing augmented reality processing on the shot video according to the additional information. With this apparatus, the video signal is processed a second time after the underwater robot's video is received: objects and scenes appearing in the video are actively identified, and interaction is then added to specific objects or scenes in augmented reality form. This solves the prior-art problem that an underwater robot can only transmit conventional audio and video, and achieves richer information transmission and enhanced interactivity.
Example four
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in Fig. 4, the electronic device includes a memory 410 and a processor 420; the number of processors 420 in the device may be one or more, and one processor 420 is taken as an example in Fig. 4. The memory 410 and the processor 420 in the device may be connected by a bus or in other ways; a bus connection is taken as an example in Fig. 4.
As a computer-readable storage medium, the memory 410 can store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the augmented reality combining method for underwater shooting in the embodiments of the present invention (for example, the data acquisition module 310, the data matching module 320, the data query module 330, and the data processing module 340 in the augmented reality combining device 300 for underwater shooting). By running the software programs, instructions, and modules stored in the memory 410, the processor 420 executes the various functional applications and data processing of the device, thereby implementing the augmented reality combining method for underwater shooting described above.
The processor 420 is configured to run the computer program stored in the memory 410 to implement the following steps:
acquiring a shooting video of the underwater robot in real time;
calculating a target identification point in the shot video;
acquiring additional information of the target identification point;
and carrying out augmented reality processing on the shot video according to the additional information.
In one embodiment, the computer program of the electronic device provided in the embodiment of the present invention is not limited to the method operations described above, and may also perform related operations in the augmented reality combining method for underwater photography provided in any embodiment of the present invention.
The memory 410 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application program required by a function, and the data storage area may store data created according to the use of the terminal, and the like. Further, the memory 410 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. In some examples, the memory 410 may further include memory located remotely from the processor 420, which may be connected to the device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The embodiment of the invention discloses an augmented reality combined electronic device for underwater shooting, which is used for executing the following method: acquiring the shot video of the underwater robot in real time; calculating target identification points in the shot video; acquiring additional information of the target identification points; and performing augmented reality processing on the shot video according to the additional information. In the augmented reality combining method for underwater shooting provided by the embodiment of the invention, the video signal is processed a second time after the video information of the underwater robot is received: objects or scenes that may appear in the video are actively identified, and interaction is then added to those specific objects or scenes in an augmented reality manner. This solves the prior-art problem that an underwater robot can only transmit conventional audio and video, and achieves richer information transmission and enhanced interactivity.
EXAMPLE five
An embodiment of the present invention further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform an augmented reality combining method for underwater photography, the method including:
acquiring a shooting video of the underwater robot in real time;
calculating a target identification point in the shot video;
acquiring additional information of the target identification point;
and carrying out augmented reality processing on the shot video according to the additional information.
Of course, the computer-executable instructions contained in the storage medium provided by the embodiment of the present invention are not limited to the method operations described above, and may also perform related operations in the augmented reality combining method for underwater photography provided by any embodiment of the present invention.
The computer-readable storage media of embodiments of the invention may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The embodiment of the invention discloses an augmented reality combined storage medium for underwater shooting, which is used for executing the following method: acquiring the shot video of the underwater robot in real time; calculating target identification points in the shot video; acquiring additional information of the target identification points; and performing augmented reality processing on the shot video according to the additional information. In the augmented reality combining method for underwater shooting provided by the embodiment of the invention, the video signal is processed a second time after the video information of the underwater robot is received: objects or scenes that may appear in the video are actively identified, and interaction is then added to those specific objects or scenes in an augmented reality manner. This solves the prior-art problem that an underwater robot can only transmit conventional audio and video, and achieves richer information transmission and enhanced interactivity.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. An augmented reality combining method for underwater photography, comprising:
acquiring a shooting video of the underwater robot in real time;
calculating a target identification point in the shot video;
acquiring additional information of the target identification point;
and carrying out augmented reality processing on the shot video according to the additional information.
2. The augmented reality combining method for underwater photography according to claim 1, wherein the acquiring the shot video of the underwater robot in real time further comprises:
and establishing communication connection with the underwater robot.
3. The augmented reality combining method for underwater photography according to claim 1, wherein the acquiring the shot video of the underwater robot in real time further comprises:
and decoding the video format of the shot video to obtain a processed video.
4. The augmented reality combining method for underwater photography according to claim 3, wherein the decoding the video format of the shot video to obtain the processed video further comprises:
and sampling objects and scenes in the processed video to generate sampling data.
5. The method of claim 1, wherein the calculating the target recognition point in the captured video comprises:
matching and identifying objects and scenes appearing in the shot video;
and recording the position coordinates of the object and the scene.
6. The method of claim 1, wherein the obtaining additional information about the target recognition point comprises:
acquiring the type of the target identification point;
and matching additional information in a database according to the type of the target identification point.
7. The method of claim 1, wherein the augmented reality processing of the captured video according to the additional information comprises:
processing the additional information according to a user preset rule;
combining the processed additional information with the captured video.
8. An augmented reality combining device for underwater photography, comprising:
the data acquisition module is used for acquiring a shooting video of the underwater robot in real time;
the data matching module is used for calculating a target identification point in the shot video;
the data query module is used for acquiring additional information of the target identification point;
and the data processing module is used for carrying out augmented reality processing on the shot video according to the additional information.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the augmented reality combining method of underwater photography of any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, the computer program comprising program instructions, characterized in that the program instructions, when executed by a processor, implement the augmented reality combining method of underwater photography according to any one of claims 1 to 7.
CN202110591919.2A 2021-05-28 2021-05-28 Augmented reality combining method, device and equipment for underwater shooting and storage medium Pending CN113329218A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110591919.2A CN113329218A (en) 2021-05-28 2021-05-28 Augmented reality combining method, device and equipment for underwater shooting and storage medium


Publications (1)

Publication Number Publication Date
CN113329218A true CN113329218A (en) 2021-08-31

Family

ID=77422164

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110591919.2A Pending CN113329218A (en) 2021-05-28 2021-05-28 Augmented reality combining method, device and equipment for underwater shooting and storage medium

Country Status (1)

Country Link
CN (1) CN113329218A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150193970A1 (en) * 2012-08-01 2015-07-09 Chengdu Idealsee Technology Co., Ltd. Video playing method and system based on augmented reality technology and mobile terminal
CN108038459A (en) * 2017-12-20 2018-05-15 深圳先进技术研究院 A kind of detection recognition method of aquatic organism, terminal device and storage medium
CN109089038A (en) * 2018-08-06 2018-12-25 百度在线网络技术(北京)有限公司 Augmented reality image pickup method, device, electronic equipment and storage medium
CN111818265A (en) * 2020-07-16 2020-10-23 北京字节跳动网络技术有限公司 Interaction method and device based on augmented reality model, electronic equipment and medium
CN112348969A (en) * 2020-11-06 2021-02-09 北京市商汤科技开发有限公司 Display method and device in augmented reality scene, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
JP6515813B2 (en) INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
CN109840500B (en) Three-dimensional human body posture information detection method and device
CN110866977B (en) Augmented reality processing method, device, system, storage medium and electronic equipment
KR101328759B1 (en) Augmented reality method and devices using a real time automatic tracking of marker-free textured planar geometrical objects in a video stream
CN108594999B (en) Control method and device for panoramic image display system
US20190228263A1 (en) Training assistance using synthetic images
WO2017169369A1 (en) Information processing device, information processing method, program
CN110336973B (en) Information processing method and device, electronic device and medium
CN111273772B (en) Augmented reality interaction method and device based on slam mapping method
US11551407B1 (en) System and method to convert two-dimensional video into three-dimensional extended reality content
CN111833457A (en) Image processing method, apparatus and storage medium
CN112165629B (en) Intelligent live broadcast method, wearable device and intelligent live broadcast system
CN112562056A (en) Control method, device, medium and equipment for virtual light in virtual studio
CN117218246A (en) Training method and device for image generation model, electronic equipment and storage medium
CN114694136A (en) Article display method, device, equipment and medium
CN114358112A (en) Video fusion method, computer program product, client and storage medium
CN116917949A (en) Modeling objects from monocular camera output
CN113010009B (en) Object sharing method and device
CN113329218A (en) Augmented reality combining method, device and equipment for underwater shooting and storage medium
CN111368667B (en) Data acquisition method, electronic equipment and storage medium
CN116309005A (en) Virtual reloading method and device, electronic equipment and readable medium
CN115994944A (en) Three-dimensional key point prediction method, training method and related equipment
CN115482285A (en) Image alignment method, device, equipment and storage medium
CN112598803A (en) Scenic spot AR group photo method
CN112183271A (en) Image processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210831
