CN117424996A - Image sharing method and device, electronic equipment and readable storage medium

Image sharing method and device, electronic equipment and readable storage medium

Info

Publication number: CN117424996A
Application number: CN202311249550.2A
Authority: CN (China)
Prior art keywords: point cloud, image, image data, node, sharing
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 马文良, 周奕, 黎清顾, 陈彦宇
Current assignee: Gree Electric Appliances Inc of Zhuhai; Zhuhai Lianyun Technology Co Ltd
Original assignee: Gree Electric Appliances Inc of Zhuhai; Zhuhai Lianyun Technology Co Ltd
Application filed by Gree Electric Appliances Inc of Zhuhai and Zhuhai Lianyun Technology Co Ltd
Priority and filing date: 2023-09-25
Publication date: 2024-01-19

Classifications

    • H04N 13/106: Stereoscopic video systems; multi-view video systems; processing, recording or transmission of stereoscopic or multi-view image signals; processing image signals
    • H04L 67/02: Network arrangements or protocols for supporting network services or applications; protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04L 67/06: Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • H04N 13/261: Image signal generators with monoscopic-to-stereoscopic image conversion


Abstract

The embodiment of the invention provides an image sharing method and device, an electronic device, and a readable storage medium. The method is applied to a web page end, and the web page end is communicatively connected with a robot operating system through a communication bridge; the robot operating system comprises system nodes, the system nodes comprise at least a camera node and an image processing node, and the camera node is connected with a depth camera. The method comprises: receiving, through the communication bridge, original image data sent by the image processing node, the original image data being acquired by the camera node and then published to the image processing node; converting the original image data into a point cloud image; and sharing the point cloud image with a target sharing object. The embodiment of the invention reduces the burden on the robot operating system and thereby improves the performance of the robot operating system.

Description

Image sharing method and device, electronic equipment and readable storage medium
Technical Field
The embodiment of the invention relates to the technical field of Internet, in particular to an image sharing method, an image sharing device, electronic equipment and a computer readable storage medium.
Background
ROS (Robot Operating System), an open-source software platform, is widely used for the development and control of robotic systems. A ROS system may include several system nodes through which image processing can be performed, and the results can then be visualized by means of tools such as rviz (a visualization tool in ROS).
However, visualizing images with rviz does not allow cross-platform or remote access, so users cannot easily view and analyze the images; moreover, processing the images inside the ROS system increases the burden on the ROS system and thereby affects its overall performance.
Disclosure of Invention
Embodiments of the present invention provide an image sharing method, apparatus, electronic device, and computer-readable storage medium, so as to solve the problem that processing images in the ROS system increases the burden on the ROS system and thereby affects its overall performance.
The embodiment of the invention discloses an image sharing method applied to a web page end, wherein the web page end is communicatively connected with a robot operating system through a communication bridge, the robot operating system comprises system nodes, the system nodes comprise at least a camera node and an image processing node, and the camera node is connected with a depth camera; the method comprises the following steps:
Receiving original image data sent by the image processing node through the communication bridge; the original image data is obtained by the camera node and then distributed to the image processing node;
converting the original image data into a point cloud image;
and sharing the point cloud image to a target sharing object.
Optionally, the original image data includes original RGB image data and original depth image data, and the converting the original image data into the point cloud image includes:
traversing each pixel in the original depth image data and the original RGB image data to obtain a three-dimensional coordinate and an RGB color value corresponding to each pixel;
creating a point cloud geometry and a point cloud material, and adding the three-dimensional coordinates and RGB color values to the point cloud geometry to obtain a target point cloud geometry;
combining the target point cloud geometry and the point cloud material into a point cloud object;
and rendering the point cloud object to obtain a point cloud image.
Optionally, the camera node generates a first subscription link for the original image data and publishes the first subscription link to the image processing node, and the image processing node is used for acquiring the original image data through the first subscription link.
Optionally, the image processing node is configured to convert the original image data into original image data in a specified format.
Optionally, the receiving the raw image data sent by the image processing node through the communication bridge includes:
acquiring, through the communication bridge, a second subscription link generated by the image processing node;
and acquiring the original image data through the second subscription link.
Optionally, the sharing of the point cloud image to the target sharing object includes:
determining a customized display mode for the point cloud image;
and displaying the point cloud image for the target sharing object according to the customized display mode.
Optionally, the point cloud image is generated by a three-dimensional image creation tool; the three-dimensional image creation tool includes at least three.js.
The embodiment of the invention also discloses an image sharing device applied to a web page end, wherein the web page end is communicatively connected with a robot operating system through a communication bridge, the robot operating system comprises system nodes, the system nodes comprise at least a camera node and an image processing node, and the camera node is connected with a depth camera; the device comprises:
A transmitting module, configured to receive original image data transmitted by the image processing node through the communication bridge; the original image data is obtained by the camera node and then distributed to the image processing node;
the conversion module is used for converting the original image data into a point cloud image;
and the sharing module is used for sharing the point cloud image to a target sharing object.
Optionally, the conversion module is used for
Traversing each pixel in the original depth image data and the original RGB image data to obtain a three-dimensional coordinate and an RGB color value corresponding to each pixel;
creating a point cloud geometry and a point cloud material, and adding the three-dimensional coordinates and RGB color values to the point cloud geometry to obtain a target point cloud geometry;
combining the target point cloud geometry and the point cloud material into a point cloud object;
and rendering the point cloud object to obtain a point cloud image.
Optionally, the camera node generates a first subscription link for the original image data and publishes the first subscription link to the image processing node, and the image processing node is used for acquiring the original image data through the first subscription link.
Optionally, the image processing node is configured to convert the original image data into original image data in a specified format.
Optionally, the sending module is configured to:
acquiring, through the communication bridge, a second subscription link generated by the image processing node;
and acquiring the original image data through the second subscription link.
Optionally, the sharing module is configured to:
determining a customized display mode for the point cloud image;
and displaying the point cloud image for the target shared object according to the customized display mode.
As an optional embodiment of the invention, the point cloud image is generated by a three-dimensional image creation tool; the three-dimensional image creation tool includes at least three.js.
The embodiment of the invention also discloses electronic equipment, which comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus;
the memory is used for storing a computer program;
the processor is configured to implement the method according to the embodiment of the present invention when executing the program stored in the memory.
The embodiment of the invention also discloses a computer program product which is stored in a storage medium and is executed by at least one processor to realize the method according to the embodiment of the invention.
Embodiments of the present invention also disclose a computer-readable storage medium having instructions stored thereon, which when executed by one or more processors, cause the processors to perform the method according to the embodiments of the present invention.
The embodiment of the invention has the following advantages:
in the embodiment of the invention, the method is applied to a web page end, and the web page end is communicatively connected with a robot operating system through a communication bridge. The robot operating system may include a plurality of system nodes, the system nodes include at least a camera node and an image processing node, and the camera node is connected with a sensor. Specifically, the camera node acquires the original image data captured by the sensor and publishes it to the image processing node; the web page end then receives the original image data sent by the image processing node through the communication bridge, converts the original image data into a point cloud image, and shares the point cloud image with a target sharing object. Because the robot operating system transmits the original image data to the web page end, the original image data is fused at the web page end to generate the point cloud image and shared with the target sharing object, and no point cloud conversion needs to be performed in the robot operating system; this reduces the burden on the robot operating system and improves its performance. In addition, since the image conversion that produces the point cloud image is carried out at the web page end, the user needs to install less robot-operating-system software and configure less of the system environment, and can remotely access the point cloud image over the network through the web page end instead of being limited to a robot or computer running the robot operating system. The approach is cross-platform, requires no software installation, and offers real-time capability, data security, and collaborative sharing, providing the user with a more convenient and efficient way to share images from the robot operating system.
Drawings
FIG. 1 is a flow chart of steps of a method for sharing an image provided in an embodiment of the present invention;
fig. 2 is a schematic view of an application scenario of a robot operating system according to an embodiment of the present invention;
FIG. 3 is a flowchart of fusing image data into a point cloud image provided in an embodiment of the present invention;
fig. 4 is a block diagram showing the structure of an image sharing apparatus provided in an embodiment of the present invention;
fig. 5 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
Referring to fig. 1, a flowchart of the steps of an image sharing method provided in an embodiment of the present invention is shown. The method is applied to a web page end, the web page end is communicatively connected with a robot operating system through a communication bridge, the robot operating system includes system nodes, the system nodes include at least a camera node and an image processing node, and the camera node is connected with a sensor. The method may specifically include the following steps:
step 101, receiving original image data sent by the image processing node through the communication bridge; the original image data is obtained by the camera node and then distributed to the image processing node.
The web page end may also be called the Web end. Specifically, the Web end refers to the client of an application program or website that is developed and operated based on Web technology, that is, a user interface that runs in a Web browser; a user can access and use the application program or website through the browser. Through the Web end, the user can interact with the application program or website directly in the browser without any additional installation or download. In addition, Web-end development is cross-platform: the user can access the Web end through different browsers (Chrome, Firefox, Safari) on different operating systems (for example Windows, Mac, Linux). The Web end therefore offers broad accessibility and convenience, and the user only needs a suitable browser and a stable network connection. In the embodiment of the invention, images can be shared through the Web end. As a specific example, the images may include point cloud images; when a point cloud image is displayed in the Web end, rendering of and interaction with the point cloud image are implemented by means of Web technologies and libraries such as HTML, CSS, JavaScript, three.js, ros2d.js, and ros3d.js, thereby realizing the sharing of the point cloud image.
In embodiments of the present invention, the robot operating system may include a ROS (Robot Operating System) system, which, as an open-source software platform, is widely used for the development and control of robotic systems. In a ROS system, point cloud images are an important kind of sensing data for describing the three-dimensional structure of the environment, and the ROS system contains sensors such as a depth camera or a laser radar, where the depth camera may be an RGB-D camera or another type of depth sensor. Raw image data, such as RGB image data and depth image data, can be obtained through these sensors in the ROS system. The communication bridge may include ROS Bridge, a software tool for connecting a ROS system with other communication protocols or platforms; ROS Bridge uses the WebSocket protocol to convert data in the ROS system into a transmittable format, so that a non-ROS system (for example, the Web end) can exchange data with the ROS system through the WebSocket.
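For illustration only, the sketch below shows what the underlying rosbridge JSON protocol exchange could look like from a browser over a raw WebSocket, assuming a rosbridge server is listening on its default port; the host address and topic name are assumptions and must be adapted to the actual deployment.

```javascript
// Minimal sketch of the rosbridge JSON protocol over a raw WebSocket (assumed host and topic).
const socket = new WebSocket('ws://robot-host:9090'); // 9090 is the default rosbridge port

socket.onopen = () => {
  // Ask rosbridge to forward messages published on an assumed compressed-image topic.
  socket.send(JSON.stringify({
    op: 'subscribe',
    topic: '/camera/rgb/image_raw/compressed', // assumed topic name
    type: 'sensor_msgs/CompressedImage'
  }));
};

socket.onmessage = (event) => {
  const packet = JSON.parse(event.data);
  if (packet.op === 'publish') {
    // packet.msg is the ROS message serialized as JSON; for sensor_msgs/CompressedImage,
    // packet.msg.data carries the base64-encoded image bytes.
    console.log('Received a frame on', packet.topic);
  }
};
```

In practice the Web end would normally use roslib.js as described below rather than the raw protocol, but the JSON structure above is what travels over the WebSocket in either case.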
As a specific example of the present invention, referring to fig. 2, an application scenario of a robot operating system provided by an embodiment of the present invention is shown; the robot operating system is a ROS system, and the ROS system may include a ROS master node, a camera node, and an image processing node. The ROS master node is responsible for maintaining the system nodes in the ROS system, specifically including the names, topics, services, and parameters of the system nodes. The system nodes other than the ROS master node can register with the ROS master node and discover the existence and availability of other nodes through the ROS master node. In addition, the ROS master node facilitates communication between the system nodes: a system node can look up the addresses of other system nodes through the ROS master node and, with the ROS master node as an intermediary, perform message publishing and subscribing, thereby realizing data exchange between the system nodes, such as the exchange of image data. The camera node can be connected with the depth camera, so that the camera node can acquire the original image data of the depth camera; the image processing node and the Web end can be communicatively connected through ROS Bridge, and the original image data acquired from the camera node can be published to the Web end through ROS Bridge.
Specifically, in the embodiment of the invention, ROS Bridge is configured and connected to the ROS system. ROS Bridge is a WebSocket-based communication bridge used to realize communication between the ROS system and the Web end; the rosbridge_suite software package needs to be installed and configured in the ROS system to ensure that the ROS Bridge node is running. At the Web end, the connection to the ROS Bridge node is established over the WebSocket protocol: roslib.js is used at the Web end to establish the connection and communicate with ROS Bridge, so the Web end can subscribe to and receive messages from the ROS system through the WebSocket protocol, and in particular subscribes to the image processing node that publishes the original image data in PNG format.
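As a concrete sketch of the connection and subscription just described, the snippet below uses roslib.js at the Web end. The WebSocket address and topic names are assumptions and must match the topics actually published by the image processing node; handleRgbFrame and handleDepthFrame are hypothetical callbacks standing in for the decoding logic described later.

```javascript
// Assumes roslib.js is available (e.g. via a <script> tag or the 'roslib' npm package).
const ros = new ROSLIB.Ros({ url: 'ws://robot-host:9090' }); // assumed rosbridge address

ros.on('connection', () => console.log('Connected to ROS Bridge'));
ros.on('error', (err) => console.error('ROS Bridge error:', err));
ros.on('close', () => console.log('Connection to ROS Bridge closed'));

// Subscribe to the PNG-encoded RGB and depth streams republished by the image processing node.
const rgbTopic = new ROSLIB.Topic({
  ros,
  name: '/web/rgb/compressed',            // assumed topic name
  messageType: 'sensor_msgs/CompressedImage'
});
const depthTopic = new ROSLIB.Topic({
  ros,
  name: '/web/depth/compressed',          // assumed topic name
  messageType: 'sensor_msgs/CompressedImage'
});

rgbTopic.subscribe((msg) => handleRgbFrame(msg.data));     // msg.data: base64 PNG payload
depthTopic.subscribe((msg) => handleDepthFrame(msg.data)); // msg.data: base64 PNG payload
```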
Step 102, converting the original image data into a point cloud image.
Step 103, sharing the point cloud image to a target sharing object.
Here, a point cloud image (Point Cloud) is a graphical representation used to describe a set of points in three-dimensional space. A point cloud image is composed of a large number of discrete points, each of which represents a position in three-dimensional space. A point cloud image is generally visualized using the coordinates and colors of the points in three-dimensional space, and different information can be conveyed through the size, color, and shape of the points. Through the point cloud image, a user can observe and analyze the shape, structure, and characteristics of objects in three-dimensional space, and thereby better understand and process three-dimensional data. The target sharing object may be a user or a terminal device such as a mobile phone.
In the embodiment of the invention, two publishing nodes (image processing nodes) can be created in the ROS system. The image_transport tool of the ROS system is used to convert the original RGB image data and the original depth image data in sensor_msgs/Image format into the specified format, and the converted original RGB image data, the converted original depth image data, and other original image data are published to the Web end. After the Web end obtains the original image data, it converts the original image data into a point cloud image at the Web end and then shares the point cloud image with a user or a mobile phone, so that the user can perform control operations such as rotation, scaling, and translation on the point cloud image, which increases interactivity and improves the user experience. In addition, the user can add other visual elements and effects to the point cloud image as required, such as color mapping and shading effects, to make the point cloud image more vivid and attractive.
As an alternative embodiment of the present invention, after the image processing node obtains the original image data from the camera node, it may convert the original image data into original image data in a specified format. The specified format may at least include the PNG format; PNG is a lossless image file format and is therefore well suited to image compression and network transmission. Of course, in practical applications the original image data may be converted into other formats according to the specific application scenario and requirements, which is not limited in the embodiment of the present invention.
The image sharing method is applied to a web page end, and the web page end is communicatively connected with a robot operating system through a communication bridge. The robot operating system may include a plurality of system nodes, the system nodes include at least a camera node and an image processing node, and the camera node is connected with a sensor. Specifically, the camera node acquires the original image data captured by the sensor and publishes it to the image processing node; the web page end then receives the original image data sent by the image processing node through the communication bridge, converts the original image data into a point cloud image, and shares the point cloud image with a target sharing object. Because the robot operating system transmits the original image data to the web page end, the original image data is fused at the web page end to generate the point cloud image and shared with the target sharing object, and no point cloud conversion needs to be performed in the robot operating system; this reduces the burden on the robot operating system and improves its performance. In addition, since the image conversion that produces the point cloud image is carried out at the web page end, the user needs to install less robot-operating-system software and configure less of the system environment, and can remotely access the point cloud image over the network through the web page end instead of being limited to a robot or computer running the robot operating system. The approach is cross-platform, requires no software installation, and offers real-time capability, data security, and collaborative sharing, providing the user with a more convenient and efficient way to share images from the robot operating system.
As an optional embodiment of the present invention, the original image data includes original RGB image data and original depth image data, and the converting the original image data into the point cloud image in step 102 may include:
traversing each pixel in the original depth image data and the original RGB image data to obtain a three-dimensional coordinate and an RGB color value corresponding to each pixel;
creating a point cloud geometry and a point cloud material, and adding the three-dimensional coordinates and RGB color values to the point cloud geometry to obtain a target point cloud geometry;
combining the target point cloud geometry and the point cloud material into a point cloud object;
and rendering the point cloud object to obtain a point cloud image.
As an alternative embodiment of the present invention, the point cloud image may be generated by a three-dimensional image creation tool, wherein the three-dimensional image creation tool includes at least three.js.
Illustratively, the Image object is a built-in JavaScript object for loading and operating on images; it provides methods and attributes for handling image resources. First, the Web end of the embodiment of the invention may use the Image object in JavaScript to load the original depth image data and the original RGB image data obtained from the image processing node, use nested loops to traverse the original depth image data and the original RGB image data, obtain the RGB color value of each pixel from the original RGB image data, and obtain the three-dimensional coordinate of each pixel (including the height information) from the original depth image data. The RGB color value and the three-dimensional coordinate may be stored together as one object in a point cloud data container, or stored separately in two different arrays. In particular, two empty arrays may be created, one for storing the three-dimensional coordinates of the points and the other for storing the RGB color values of the points. Then, a point cloud geometry is created using the BufferGeometry class of three.js, with the previously stored three-dimensional coordinates and RGB color values set as attributes of the point cloud geometry; a point cloud material is created using the PointsMaterial class of three.js, in which attributes such as the size and color of the points can be set; and the point cloud geometry and the point cloud material are combined into a point cloud object using the Points class of three.js. Finally, the point cloud object is added to the three.js scene so that the corresponding point cloud image is displayed when the point cloud object is rendered.
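The sketch below illustrates the construction step just described with the three.js classes named above (BufferGeometry, PointsMaterial, Points). The pinhole back-projection and the camera intrinsics fx, fy, cx, and cy are assumptions added for illustration; the embodiment does not prescribe a particular projection model, and depth is assumed to already be expressed in metres.

```javascript
import * as THREE from 'three';

// rgba:  Uint8ClampedArray of RGBA bytes from the decoded RGB image (4 bytes per pixel)
// depth: array of per-pixel depth values in metres, same width and height as the RGB image
// fx, fy, cx, cy: assumed pinhole camera intrinsics of the depth camera
function buildPointCloud(rgba, depth, width, height, fx, fy, cx, cy) {
  const positions = [];
  const colors = [];

  for (let v = 0; v < height; v++) {
    for (let u = 0; u < width; u++) {
      const i = v * width + u;
      const z = depth[i];
      if (!z) continue;                              // skip pixels with no depth reading
      positions.push(((u - cx) * z) / fx,            // back-project the pixel to 3-D
                     ((v - cy) * z) / fy,
                     z);
      colors.push(rgba[4 * i] / 255, rgba[4 * i + 1] / 255, rgba[4 * i + 2] / 255);
    }
  }

  // Point cloud geometry with position and color attributes.
  const geometry = new THREE.BufferGeometry();
  geometry.setAttribute('position', new THREE.Float32BufferAttribute(positions, 3));
  geometry.setAttribute('color', new THREE.Float32BufferAttribute(colors, 3));

  // Point cloud material; point size and per-vertex colors can be configured here.
  const material = new THREE.PointsMaterial({ size: 0.01, vertexColors: true });

  return new THREE.Points(geometry, material);       // add the returned object to the scene
}
```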
As an optional embodiment of the present invention, the step 101 of receiving raw image data sent by the image processing node through the communication bridge may include:
acquiring, through the communication bridge, a second subscription link generated by the image processing node;
and acquiring the original image data through the second subscription link.
Wherein the subscription link may be a URL (Uniform Resource Locator).
In the embodiment of the invention, after the camera node in the robot operating system acquires the original image data captured by the depth camera, it can generate a first subscription link for the original image data, for example a sensor_msgs/Image topic, and publish the first subscription link to the image processing node; the image processing node can then acquire the original image data through the first subscription link.
The Web end can acquire, through ROS Bridge, the second subscription link generated by the image processing node, for example a sensor_msgs/CompressedImage topic; the Web end can then acquire the original image data through the second subscription link and fuse the original image data to obtain the point cloud image.
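The following sketch shows one possible way to turn the base64 PNG payload carried by a sensor_msgs/CompressedImage message into a flat pixel array at the Web end, using the built-in Image object and an off-screen canvas; decodePng is a hypothetical helper name.

```javascript
// Decode a base64-encoded PNG payload into a flat RGBA byte array (4 bytes per pixel).
function decodePng(base64Data) {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => {
      const canvas = document.createElement('canvas');
      canvas.width = img.width;
      canvas.height = img.height;
      const ctx = canvas.getContext('2d');
      ctx.drawImage(img, 0, 0);
      const { data } = ctx.getImageData(0, 0, img.width, img.height);
      resolve({ data, width: img.width, height: img.height });
    };
    img.onerror = reject;
    img.src = 'data:image/png;base64,' + base64Data;  // payload taken from msg.data
  });
}
```

Note that decoding through a canvas yields 8-bit channels; if the depth stream is encoded as 16-bit PNG, a dedicated PNG decoding library may be needed to preserve the full depth precision.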
As an optional embodiment of the present invention, the step 103 of sharing the point cloud image to a target sharing object includes:
determining a customized display mode for the point cloud image;
and displaying the point cloud image for the target shared object according to the customized display mode.
In the embodiment of the invention, interaction and visualization effects can be added to the point cloud image by means of a three-dimensional image creation tool such as three.js. Specifically, a user can set a customized display mode for the point cloud image according to the user's own requirements; for example, with the three.js library, the user can perform control operations such as rotation, scaling, and translation on the point cloud image, so that the point cloud image is displayed in a customized manner according to the user's requirements, which increases interactivity and improves the user experience. In addition, the user can add other visual elements and effects as required, such as color mapping and shadow effects, as part of the customized display mode of the point cloud image, making the point cloud image more vivid and attractive.
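A minimal sketch of the interactive display described above is given below, assuming the point cloud object built earlier is available as pointCloud; OrbitControls from the three.js examples provides the rotation, scaling (zoom), and translation (pan) interactions mentioned.

```javascript
import * as THREE from 'three';
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.01, 100);
camera.position.set(0, 0, 2);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Mouse / touch rotation, zoom (scaling) and pan (translation) of the view.
const controls = new OrbitControls(camera, renderer.domElement);

scene.add(pointCloud); // the THREE.Points object produced from the fused image data

(function animate() {
  requestAnimationFrame(animate);
  controls.update();
  renderer.render(scene, camera);
})();
```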
In order to provide those skilled in the art with a better understanding of the embodiments of the present invention, a specific example is described below. Referring to fig. 3, a flow of fusing image data into a point cloud image according to an embodiment of the present invention may include:
Step 301, acquiring original RGB image data and original depth image data by a depth camera in an ROS system;
step 302, the ROS node of the depth camera driver (the camera node) publishes the original RGB image data and the original depth image data;
step 303, generating original RGB image data and original depth image data in PNG format;
step 304, configuring an ROS Bridge of the ROS system;
step 305, configuring the WebSocket connection of the Web end to the ROS system through roslib.js, and subscribing to the ROS node (image processing node) that publishes the original RGB image data and the original depth image data in PNG format;
step 306, combining the original RGB image data and the original depth image data into a point cloud image and visualizing it using three.js.
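Tying the steps of fig. 3 together, the sketch below shows one possible Web-end glue layer for steps 304 to 306, reusing the hypothetical helpers from the earlier sketches (decodePng, buildPointCloud, and the roslib.js callbacks); topic names, camera intrinsics, and helper names are all assumptions.

```javascript
// One possible glue layer: fuse the latest RGB and depth frames into a point cloud.
let latestRgb = null;
let latestDepth = null;
let currentCloud = null;

async function handleRgbFrame(base64)   { latestRgb = await decodePng(base64);   tryFuse(); }
async function handleDepthFrame(base64) { latestDepth = await decodePng(base64); tryFuse(); }

function tryFuse() {
  if (!latestRgb || !latestDepth) return;            // wait until both frames have arrived
  const { width, height } = latestDepth;
  // toDepthMetres is a hypothetical helper converting decoded depth bytes to metres;
  // the intrinsics below are placeholders for the depth camera's calibrated values.
  const cloud = buildPointCloud(latestRgb.data, toDepthMetres(latestDepth.data),
                                width, height, 525, 525, width / 2, height / 2);
  if (currentCloud) scene.remove(currentCloud);      // replace the previous frame's cloud
  scene.add(cloud);
  currentCloud = cloud;
}
```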
In summary, applying the embodiments of the present invention addresses at least the following points:
1. Cross-platform properties: the point cloud image can be displayed at the Web end on different operating systems and devices and is not limited to a robot or computer running the ROS system, so the point cloud image can be viewed and browsed interactively on a smartphone, a tablet computer, or any device with a Web browser;
2. Web-end rendering and interaction: conventional visualization of ROS point cloud images typically relies on the visualization tools of the ROS system, such as rviz, which run locally and use the resources of the robot. In the embodiment of the invention, by contrast, the point cloud image is displayed at the Web end, and rendering of and interaction with the point cloud image are implemented by means of Web technologies and libraries such as HTML, CSS, JavaScript, three.js, ros2d.js, and ros3d.js.
3. Data format conversion and synthesis: point cloud visualization in a conventional ROS system typically receives and displays point cloud data directly from the ROS system. In the embodiment of the invention, displaying the point cloud image through the Web end requires converting and synthesizing the RGB image data and the depth image data from the ROS system to generate the point cloud image. This involves algorithms and techniques for parsing, registering, and fusing the image data.
The implementation of the invention has at least the following beneficial effects:
1. performance improvement: the process of fusing the point cloud images is carried out at the Web end, so that the pressure of the ROS system is reduced, and the performance of a robot using the ROS system is improved.
2. Sharing and collaboration: by displaying the point cloud image on the Web end, the point cloud image can be more conveniently shared with other people, and cooperation and communication are promoted. Other users can access the point cloud image through the shared subscription link without installing an ROS system or a specific software package, so that wider data sharing and team cooperation are realized.
3. Remote access: the Web end displays the point cloud image, so that remote access and data sharing become more convenient. The user can be connected to a remote server or terminal equipment through the Internet, and access and browse the point cloud image through the interface of the Web end, so that the ROS system is not required to be directly contacted, and the pressure of the ROS system is reduced.
4. Viewing the point cloud image does not require installing the ROS system: in general, ROS visualization requires installing the ROS environment and related software packages, whereas the embodiment of the invention can display point cloud images at the Web end without installing ROS-related software, which reduces the installation and configuration burden on users and allows more users to easily access and browse the point cloud images.
5. Highly customizable: presenting the point cloud image at the Web end allows highly customized development. Using Web development technology, the appearance, interaction, and functions of the point cloud image can be customized, so that the point cloud image is displayed and interacted with according to the customized display mode set by the user. In addition, components such as annotations, markers, and toolbars can be added to the point cloud image to meet the requirements of specific application scenarios and further enhance the user's interactive experience.
It should be noted that, for simplicity of description, the method embodiments are shown as a series of acts, but it should be understood by those skilled in the art that the embodiments are not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts are not necessarily required by the embodiments of the invention.
Referring to fig. 4, a block diagram of an image sharing device provided in an embodiment of the present invention is shown. The device is applied to a web page end, the web page end is communicatively connected with a robot operating system through a communication bridge, the robot operating system includes system nodes, the system nodes include at least a camera node and an image processing node, and the camera node is connected with a depth camera. The device may specifically include the following modules:
a transmitting module 401, configured to receive raw image data sent by the image processing node through the communication bridge; the original image data is obtained by the camera node and then distributed to the image processing node;
a conversion module 402, configured to convert the raw image data into a point cloud image;
and the sharing module 403 is configured to share the point cloud image to a target sharing object.
As an alternative embodiment of the present invention, the conversion module 402 is configured to
Traversing each pixel in the original depth image data and the original RGB image data to obtain a three-dimensional coordinate and an RGB color value corresponding to each pixel;
creating a point cloud geometry and a point cloud material, and adding the three-dimensional coordinates and RGB color values to the point cloud geometry to obtain a target point cloud geometry;
combining the target point cloud geometry and the point cloud material into a point cloud object;
and rendering the point cloud object to obtain a point cloud image.
As an optional embodiment of the invention, the camera node generates a first subscription link for the original image data and publishes the first subscription link to the image processing node, and the image processing node is configured to obtain the original image data through the first subscription link.
As an alternative embodiment of the present invention, the image processing node is configured to convert the raw image data into raw image data in a specified format.
As an optional embodiment of the present invention, the sending module 401 is configured to:
acquiring, through the communication bridge, a second subscription link generated by the image processing node;
and acquiring the original image data through the second subscription link.
As an alternative embodiment of the present invention, the sharing module 403 is configured to:
determining a customized display mode for the point cloud image;
and displaying the point cloud image for the target shared object according to the customized display mode.
As an optional embodiment of the invention, the point cloud image is generated by a three-dimensional image creation tool; the three-dimensional image creation tool includes at least three.js.
For the device embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference is made to the description of the method embodiments for relevant points.
In addition, the embodiment of the invention also provides an electronic device, including a processor, a memory, and a computer program stored on the memory and executable on the processor; when the computer program is executed by the processor, the processes of the above image sharing method embodiment are implemented and the same technical effects can be achieved, which are not repeated here to avoid repetition.
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, realizes the processes of the image sharing method embodiment, and can achieve the same technical effects, and in order to avoid repetition, the description is omitted here. Wherein the computer readable storage medium is selected from Read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), magnetic disk or optical disk.
The embodiment of the present invention further provides a computer program product, which is stored in a storage medium, and the program product is executed by at least one processor to implement the respective processes of the above-mentioned image sharing method embodiment, and achieve the same technical effects, so that repetition is avoided, and a detailed description is omitted herein.
Fig. 5 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention.
The electronic device 500 includes, but is not limited to: radio frequency unit 501, network module 502, audio output unit 503, input unit 504, sensor 505, display unit 506, user input unit 507, interface unit 508, memory 509, processor 510, and power source 511. It will be appreciated by those skilled in the art that the electronic device structure shown in fig. 5 is not limiting of the electronic device and that the electronic device may include more or fewer components than shown, or may combine certain components, or a different arrangement of components. In the embodiment of the invention, the electronic equipment comprises, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer and the like.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 may be used to receive and send information or signals during a call, specifically, receive downlink data from a base station, and then process the downlink data with the processor 510; and, the uplink data is transmitted to the base station. Typically, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 may also communicate with networks and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user through the network module 502, such as helping the user to send and receive e-mail, browse web pages, access streaming media, and the like.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502 or stored in the memory 509 into an audio signal and output as sound. Also, the audio output unit 503 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the electronic device 500. The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used for receiving an audio or video signal. The input unit 504 may include a graphics processor (Graphics Processing Unit, GPU) 5041 and a microphone 5042, the graphics processor 5041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 506. The image frames processed by the graphics processor 5041 may be stored in the memory 509 (or other storage medium) or transmitted via the radio frequency unit 501 or the network module 502. Microphone 5042 may receive sound and may be capable of processing such sound into audio data. The processed audio data may be converted into a format output that can be transmitted to the mobile communication base station via the radio frequency unit 501 in case of a phone call mode.
The electronic device 500 also includes at least one sensor 505, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 5061 and/or the backlight when the electronic device 500 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and direction when stationary, and can be used for recognizing the gesture of the electronic equipment (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; the sensor 505 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described herein.
The display unit 506 is used to display information input by a user or information provided to the user. The display unit 506 may include a display panel 5061, and the display panel 5061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 507 is operable to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. The touch panel 5071, also referred to as a touch screen, can collect touch operations performed by the user on or near it (for example, operations of the user on or near the touch panel 5071 using any suitable object or accessory such as a finger or a stylus). The touch panel 5071 may include two parts, a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 5071, the user input unit 507 may include other input devices 5072. In particular, the other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5071 may be overlaid on the display panel 5061, and when the touch panel 5071 detects a touch operation thereon or thereabout, the touch operation is transmitted to the processor 510 to determine a type of touch event, and then the processor 510 provides a corresponding visual output on the display panel 5061 according to the type of touch event. Although in fig. 5, the touch panel 5071 and the display panel 5061 are two independent components for implementing the input and output functions of the electronic device, in some embodiments, the touch panel 5071 and the display panel 5061 may be integrated to implement the input and output functions of the electronic device, which is not limited herein.
The interface unit 508 is an interface for connecting an external device to the electronic apparatus 500. For example, the external devices may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 500 or may be used to transmit data between the electronic apparatus 500 and an external device.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, phonebook, etc.) created according to the use of the handset, etc. In addition, the memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 510 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 509, and calling data stored in the memory 509, thereby performing overall monitoring of the electronic device. Processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 510.
The electronic device 500 may also include a power supply 511 (e.g., a battery) for powering the various components, and preferably the power supply 511 may be logically connected to the processor 510 via a power management system that performs functions such as managing charging, discharging, and power consumption.
In addition, the electronic device 500 includes some functional modules, which are not shown, and will not be described herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform; it may of course also be implemented by means of hardware, but in many cases the former is the preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied, essentially or in the part contributing to the prior art, in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the method according to the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present invention and the scope of the claims, which are to be protected by the present invention.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a usb disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk, etc.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (10)

1. The image sharing method is characterized by being applied to a webpage end, wherein the webpage end is in communication connection with a robot operating system through a communication bridge, the robot operating system comprises a system node, the system node at least comprises a camera node and an image processing node, and the camera node is connected with a depth camera, and the method comprises the following steps:
receiving original image data sent by the image processing node through the communication bridge; the original image data is obtained by the camera node and then distributed to the image processing node;
converting the original image data into a point cloud image;
and sharing the point cloud image to a target sharing object.
2. The method of claim 1, wherein the raw image data comprises raw RGB image data and raw depth image data, the converting the raw image data into a point cloud image comprising:
traversing each pixel in the original depth image data and the original RGB image data to obtain a three-dimensional coordinate and an RGB color value corresponding to each pixel;
creating a point cloud geometry and a point cloud material, and adding the three-dimensional coordinates and RGB color values to the point cloud geometry to obtain a target point cloud geometry;
combining the target point cloud geometry and the point cloud material into a point cloud object;
and rendering the point cloud object to obtain a point cloud image.
3. The method of claim 1, wherein the camera node generates a first subscription link for the raw image data and publishes to the image processing node, the image processing node to obtain the raw image data via the first subscription link.
4. A method according to claim 3, wherein the image processing node is adapted to convert the raw image data into raw image data of a specified format.
5. The method according to claim 3 or 4, wherein said receiving raw image data sent by said image processing node via said communication bridge comprises:
acquiring, through the communication bridge, a second subscription link generated by the image processing node;
and acquiring the original image data through the second subscription link.
6. The method of claim 1, wherein the sharing the point cloud image to a target shared object comprises:
determining a customized display mode for the point cloud image;
And displaying the point cloud image for the target shared object according to the customized display mode.
7. The method of claim 1, wherein the point cloud image is generated by a three-dimensional image creation tool; the three-dimensional image creation tool includes at least three.js.
8. An image sharing device, applied to a web page end, wherein the web page end is communicatively connected with a robot operating system through a communication bridge, the robot operating system comprises system nodes, the system nodes comprise at least a camera node and an image processing node, and the camera node is connected with a depth camera, the device comprising:
a transmitting module, configured to receive original image data transmitted by the image processing node through the communication bridge; the original image data is obtained by the camera node and then distributed to the image processing node;
the conversion module is used for converting the original image data into a point cloud image;
and the sharing module is used for sharing the point cloud image to a target sharing object.
9. An electronic device comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other via the communication bus;
The memory is used for storing a computer program;
the processor is configured to implement the method according to any one of claims 1-7 when executing a program stored on a memory.
10. A computer-readable storage medium having instructions stored thereon, which when executed by one or more processors, cause the processors to perform the method of any of claims 1-7.
CN202311249550.2A (filed 2023-09-25, priority date 2023-09-25): Image sharing method and device, electronic equipment and readable storage medium. Status: Pending. Publication: CN117424996A.

Priority Applications (1)

CN202311249550.2A, priority date 2023-09-25, filing date 2023-09-25: Image sharing method and device, electronic equipment and readable storage medium

Publications (1)

CN117424996A, published 2024-01-19

Family

ID=89525486

Country Status (1)

CN: CN117424996A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination