CN113590878A - Method and device for planning path on video picture and terminal equipment

Info

Publication number
CN113590878A
Authority
CN
China
Prior art keywords
target
coordinate
image
path
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110857070.9A
Other languages
Chinese (zh)
Other versions
CN113590878B (en)
Inventor
郑佳栋
劳健斌
王小惠
水慧丽
张永吉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yibin Zhongxing Technology Intelligent System Co ltd
Zhongxing Micro Technology Co ltd
Vimicro Corp
Original Assignee
Yibin Zhongxing Technology Intelligent System Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yibin Zhongxing Technology Intelligent System Co ltd filed Critical Yibin Zhongxing Technology Intelligent System Co ltd
Priority to CN202110857070.9A
Publication of CN113590878A
Application granted
Publication of CN113590878B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7844Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using original textual content or text extracted from visual content or transcript of audio data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/787Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location

Abstract

Embodiments of the present disclosure provide a method, an apparatus, and a terminal device for planning a path on a video picture. One embodiment of the method comprises: acquiring a target image to be processed, start point coordinates, and end point coordinates, wherein the target image is a camera picture image; generating target start point coordinates and target end point coordinates based on the start point coordinates and the end point coordinates; generating a target path coordinate sequence; generating an image path coordinate sequence based on the target path coordinate sequence; and sending the target image and the image path coordinate sequence to a target device with a display function. Because the target path coordinate sequence is generated from the start point and end point coordinates in the target image, converted into an image path coordinate sequence, and displayed directly on the target device together with the target image, a path can be planned directly in the camera picture image to be processed, which improves path planning efficiency.

Description

Method and device for planning path on video picture and terminal equipment
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a method, an apparatus, and a terminal device for planning a path on a video picture.
Background
The shortest path problem is a classical algorithmic problem in graph theory that aims to find the shortest path between two nodes in a graph (composed of nodes and edges). After long-term research, solutions to the shortest path problem within a graph are relatively mature. In practical applications, however, the problem is often not simply computing the shortest path between two nodes of a graph; instead, the shortest path between two specified points must be determined in real time while tracking the changing state of a video or image.
However, when planning the shortest path between two specified points in a video, the following technical problems often arise:
First, in the prior art, path planning is performed only in a two-dimensional/three-dimensional map model: a start point and an end point are set, and driving, cycling, and walking routes are planned automatically. The start point and the end point cannot be set directly on a real-time video picture and the corresponding routes calculated from them; that is, an optimal path cannot be planned on a real-time video picture, and the route track cannot be displayed as an overlay on the video picture layer.
Second, directly marking the start point and the end point in the video only yields node positions in the video/image, which cannot meet the application requirements of shortest-path planning in the real world.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a method, an apparatus, and a terminal device for planning a path on a video screen to solve one or more of the technical problems mentioned in the above background.
In a first aspect, some embodiments of the present disclosure provide a method of planning a path on a video screen, the method comprising: acquiring a target image to be processed, a start point coordinate and an end point coordinate, wherein the target image is a camera picture image, and the start point coordinate and the end point coordinate are pixel coordinates in the target image; generating a target starting point coordinate and a target ending point coordinate based on the starting point coordinate and the ending point coordinate, wherein the target starting point coordinate and the target ending point coordinate are earth coordinates in a physical space; generating a target path coordinate sequence based on the target starting point coordinate and the target ending point coordinate; generating an image path coordinate sequence based on the target path coordinate sequence; and sending the target image and the image path coordinate sequence to target equipment with a display function, wherein the target equipment displays the target image and the image path coordinate sequence.
In some embodiments, said generating target start point coordinates and target end point coordinates based on said start point coordinates and said end point coordinates comprises:
generating target starting point coordinates based on the starting point coordinates using the following equation:
s \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = M \left( R \begin{bmatrix} X_1 \\ Y_1 \\ 0 \end{bmatrix} + t \right)

wherein (u_1, v_1) are the start point coordinates, the subscript 1 denoting the start point; (X_1, Y_1) are the target start point coordinates; R is a rotation matrix; t is a translation matrix; R and t are predetermined matrices; s is a predetermined camera depth value; and M is a predetermined camera parameter matrix;
generating target end point coordinates based on the end point coordinates using the following equation:
s \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix} = M \left( R \begin{bmatrix} X_2 \\ Y_2 \\ 0 \end{bmatrix} + t \right)

wherein (u_2, v_2) are the end point coordinates, the subscript 2 denoting the end point; (X_2, Y_2) are the target end point coordinates; R is a rotation matrix; t is a translation matrix; R and t are predetermined matrices; s is a predetermined camera depth value; and M is a predetermined camera parameter matrix.
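As a worked inversion of the relation above (a sketch under the assumption that these formulas follow the standard pinhole camera model with the target points lying on the ground plane, i.e., third world coordinate equal to zero), the earth coordinates can be recovered from the pixel coordinates as

\begin{bmatrix} X_k \\ Y_k \\ 0 \end{bmatrix} = R^{-1}\left( s\, M^{-1} \begin{bmatrix} u_k \\ v_k \\ 1 \end{bmatrix} - t \right), \qquad k \in \{1, 2\},

where s is chosen as the depth value that places the recovered point on the ground plane.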
In some embodiments, said converting the target path coordinates to image path coordinates comprises:
the target path coordinates are converted to image path coordinates using the following equation:
s \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = M \left( R \begin{bmatrix} X_i \\ Y_i \\ 0 \end{bmatrix} + t \right)

wherein (u_i, v_i) are the image path coordinates, the subscript i denoting the i-th point of the target path; (X_i, Y_i) are the target path coordinates; R is a rotation matrix; t is a translation matrix; R and t are predetermined matrices; s is a predetermined camera depth value; and M is a predetermined camera parameter matrix.
In a second aspect, some embodiments of the present disclosure provide an apparatus for planning a path on a video picture, the apparatus comprising: a receiving unit configured to acquire a target image to be processed, start point coordinates, and end point coordinates, wherein the target image is a camera picture image and the start point coordinates and the end point coordinates are pixel coordinates in the target image; a first generating unit configured to generate target start point coordinates and target end point coordinates based on the start point coordinates and the end point coordinates, wherein the target start point coordinates and the target end point coordinates are earth coordinates in physical space; a second generating unit configured to generate a target path coordinate sequence based on the target start point coordinates and the target end point coordinates; a third generating unit configured to generate an image path coordinate sequence based on the target path coordinate sequence; and a display unit configured to send the target image and the image path coordinate sequence to a target device having a display function, wherein the target device displays the target image and the image path coordinate sequence.
In a third aspect, some embodiments of the present disclosure provide a terminal device, including: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement a method as in any one of the first aspects.
In a fourth aspect, some embodiments of the disclosure provide a computer readable storage medium having a computer program stored thereon, wherein the program when executed by a processor implements the method as in any one of the first aspect.
The above embodiments of the present disclosure have the following advantages. With the method for planning a path on a video picture, a target path coordinate sequence can be generated from the start point coordinates and end point coordinates in the target image, converted into an image path coordinate sequence, and displayed directly on the target device together with the target image, so that a path can be planned directly in the camera picture image to be processed and path planning efficiency is improved. Specifically, the inventors found that the reason an optimal path currently cannot be planned on real-time video, with the route track displayed as an overlay on the video layer, is as follows: in the prior art, path planning is performed only in a two-dimensional/three-dimensional map model, and the start point and end point cannot be set directly on a real-time video picture. Based on this, some embodiments of the present disclosure first acquire a target image to be processed, start point coordinates, and end point coordinates, where the target image is a camera picture image captured in real time and the start point coordinates and end point coordinates are pixel coordinates in the target image. Second, target start point coordinates and target end point coordinates are generated based on the start point coordinates and the end point coordinates; these are earth coordinates in physical space. Third, a target path coordinate sequence is generated based on the target start point coordinates and target end point coordinates, so that the start point and end point coordinates in the image are converted into earth coordinates in physical space and a target path coordinate sequence in earth space is obtained. Then, an image path coordinate sequence is generated based on the target path coordinate sequence; that is, the shortest path in earth space is translated back into the shortest path in the image coordinate system. Finally, the target image and the image path coordinate sequence are sent to a target device with a display function, and the target device displays them, superimposing the route track on the video layer. This solves the problem of directly determining and displaying the shortest path in a video and improves the user experience.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is an architectural diagram of an exemplary system in which some embodiments of the present disclosure may be applied;
Fig. 2 is a flow diagram of some embodiments of a method of planning a path on a video picture according to the present disclosure;
Fig. 3 is a schematic diagram of some embodiments of an apparatus for planning a path on a video picture according to the present disclosure;
Fig. 4 is a schematic block diagram of a terminal device suitable for implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that the modifiers "a", "an", and "the" in this disclosure are illustrative rather than limiting; those skilled in the art will understand that they mean "one or more" unless the context clearly indicates otherwise.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the method of planning a path on a video screen of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as a data processing application, an information generation application, a path planning application, and the like.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various terminal devices having a display screen, including but not limited to smart phones, tablet computers, laptop computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they may be installed in the terminal devices listed above and may be implemented as multiple pieces of software or software modules (for example, to provide the initial information sequence and the set of candidate information) or as a single piece of software or software module; this is not specifically limited herein.
The server 105 may be a server that provides various services, such as a server that stores target images to be processed, start point coordinates, end point coordinates, and the like, which are input by the terminal apparatuses 101, 102, 103. The server may process the received target image to be processed, the start point coordinates, and the end point coordinates, and feed back a processing result (e.g., a target path coordinate sequence) to the terminal device.
It should be noted that the method for planning a path on a video screen provided by the embodiment of the present disclosure may be executed by the server 105, or may be executed by the terminal device.
It should be noted that the server 105 may also store the target image to be processed, the start point coordinates, and the end point coordinates locally, and may extract them directly from local storage and process them to obtain the image path coordinate sequence; in this case, the exemplary system architecture 100 may not include the terminal devices 101, 102, 103 and the network 104.
It should be noted that the terminal apparatuses 101, 102, and 103 may also have a path planning application installed thereon, and in this case, the processing method may also be executed by the terminal apparatuses 101, 102, and 103. At this point, the exemplary system architecture 100 may also not include the server 105 and the network 104.
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules (for example, for providing a service of planning a path on a video screen), or may be implemented as a single software or software module. And is not particularly limited herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of some embodiments of a method of planning a path on a video screen in accordance with the present disclosure is shown. The method for planning the path on the video picture comprises the following steps:
Step 201, acquiring a target image to be processed, a start point coordinate and an end point coordinate.
In some embodiments, an executing body (e.g., a server shown in fig. 1) of the method of planning a path on a video screen acquires a target image to be processed, start point coordinates, and end point coordinates. The target image is a camera picture image, and the starting point coordinate and the ending point coordinate are pixel coordinates in the target image. Specifically, the target image may be a frame image captured in real time from a video played by the camera. The start point coordinates and the end point coordinates may be coordinates in an image coordinate system.
Step 202, generating a target starting point coordinate and a target ending point coordinate based on the starting point coordinate and the ending point coordinate.
In some embodiments, the execution subject generates the target start point coordinates and the target end point coordinates based on the start point coordinates and the end point coordinates. Specifically, following a ray-tracing approach, the angular offsets of the start point and the end point within the camera view are computed from their pixel coordinates; a three-dimensional ray is then cast and intersected with an earth model using the camera's position, height, attitude, and angular offset, and the earth coordinates of the intersection point are determined. In particular, the target start point coordinates and the target end point coordinates may be coordinates in an earth coordinate system.
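To illustrate the ray-casting idea just described, the sketch below intersects the viewing ray through a pixel with a flat ground plane (z = 0) standing in for the earth model. It is a minimal, assumption-laden sketch rather than the patent's implementation: the world-to-camera convention x_cam = R·x_world + t, the flat-plane earth model, and the function name pixel_to_ground are illustrative choices; M, R, and t correspond to the camera parameter, rotation, and translation matrices defined below.

```python
import numpy as np

def pixel_to_ground(u, v, M, R, t):
    """Cast a ray through pixel (u, v) and intersect it with the ground plane z = 0.

    M: 3x3 camera parameter (intrinsic) matrix, R: 3x3 rotation (world -> camera),
    t: length-3 translation vector (world -> camera). Returns (X, Y) on the ground.
    """
    d_cam = np.linalg.inv(M) @ np.array([u, v, 1.0])   # ray direction in camera frame
    d_world = R.T @ d_cam                              # ray direction in world frame
    c_world = -R.T @ np.asarray(t, dtype=float)        # camera centre in world frame
    lam = -c_world[2] / d_world[2]                     # solve c_z + lam * d_z = 0
    hit = c_world + lam * d_world                      # intersection with the earth plane
    return hit[0], hit[1]
```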
Specifically, the target start point coordinates may be generated based on the start point coordinates using the following equation:
s \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = M \left( R \begin{bmatrix} X_1 \\ Y_1 \\ 0 \end{bmatrix} + t \right)

wherein (u_1, v_1) are the start point coordinates, the subscript 1 denoting the start point; (X_1, Y_1) are the target start point coordinates; R is a rotation matrix; t is a translation matrix; R and t are predetermined matrices; s is a predetermined camera depth value; and M is a predetermined camera parameter matrix. Specifically, R, t, s, and M are all camera-related predetermined parameters determined by the specific camera that captured the video.
Specifically, the target end point coordinates may be generated based on the end point coordinates using the following formula:
s \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix} = M \left( R \begin{bmatrix} X_2 \\ Y_2 \\ 0 \end{bmatrix} + t \right)

wherein (u_2, v_2) are the end point coordinates, the subscript 2 denoting the end point; (X_2, Y_2) are the target end point coordinates; R is a rotation matrix; t is a translation matrix; R and t are predetermined matrices; s is a predetermined camera depth value; and M is a predetermined camera parameter matrix. Specifically, R, t, s, and M are all camera-related predetermined parameters determined by the specific camera that captured the video.
Specifically, the target start point and the target end point may also be generated using a coordinate transformation between image space and physical space that takes the camera distortion effect into account.
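Where the distortion effect mentioned above matters, one possible pre-processing step (an assumption for illustration, not the patent's stated method) is to undistort the selected pixel with OpenCV's lens model before applying the image-to-ground conversion; dist_coeffs stands for hypothetical calibration coefficients of the camera.

```python
import cv2
import numpy as np

def undistort_pixel(u, v, M, dist_coeffs):
    """Map a raw pixel through the lens-distortion model back to an ideal pixel."""
    pts = np.array([[[float(u), float(v)]]], dtype=np.float64)
    # Passing P=M re-projects the normalized result back into pixel coordinates.
    undist = cv2.undistortPoints(pts, M, dist_coeffs, P=M)
    return float(undist[0, 0, 0]), float(undist[0, 0, 1])
```

The corrected pixel can then be fed into the ray-casting or formula-based conversion described above.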
Step 203, generating a target path coordinate sequence based on the target start point coordinates and the target end point coordinates.
In some embodiments, the execution subject generates the target path coordinate sequence based on the target start point coordinates and the target end point coordinates.
Optionally, for each pixel point in the target image, the target pixel coordinates of that pixel point are determined, so as to obtain a target pixel set. The target start point coordinates are then deleted from the target pixel set, so that the target pixel set contains the pixels of the target image other than the target start point.
Optionally, a first coordinate sequence is generated, initially as an empty set, and the target start point coordinates are placed into it. The target pixel set is taken as a second coordinate sequence. A first coordinate is then selected from the first coordinate sequence, and the following step is executed:
the method comprises the following steps: and determining candidate first coordinates according to the first coordinate and the second coordinate sequence. And the candidate first coordinate is the coordinate with the shortest distance from the first coordinate, and the candidate first coordinate is the coordinate in the second coordinate sequence. And deleting the candidate first coordinate from the second coordinate sequence. And putting the candidate first coordinate into the first coordinate sequence. And determining the first coordinate sequence as a target path coordinate sequence in response to the target end point coordinate being placed in the first coordinate sequence. Specifically, the candidate first coordinate may be determined by a dixotera algorithm, or may be determined by a verloede algorithm. Specifically, the candidate first coordinate may be a coordinate that meets a driving route requirement, and the candidate first coordinate may also be a coordinate that meets a riding route requirement. The candidate first coordinates may also be coordinates that satisfy the pedestrian-link requirement.
Step 204, generating an image path coordinate sequence based on the target path coordinate sequence.
In some embodiments, the execution subject generates the image path coordinate sequence based on the target path coordinate sequence. Optionally, for each target path coordinate in the target path coordinate sequence, the target path coordinate is converted into an image path coordinate to obtain an image path coordinate sequence. For each target path coordinate in the target path coordinate sequence, converting the target path coordinate to an image path coordinate using:
s \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = M \left( R \begin{bmatrix} X_i \\ Y_i \\ 0 \end{bmatrix} + t \right)

wherein (u_i, v_i) are the image path coordinates, the subscript i denoting the i-th point of the target path; (X_i, Y_i) are the target path coordinates; R is a rotation matrix; t is a translation matrix; R and t are predetermined matrices; s is a predetermined camera depth value; and M is a predetermined camera parameter matrix. In particular, R, t, s, and M are camera-related predetermined parameters determined by the specific camera that captured the video.
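As a sketch of this conversion under the same pinhole-model assumption as the formula above (target points on the ground plane, world-to-camera extrinsics R and t, camera parameter matrix M; the function name is illustrative), each target path coordinate can be projected back into the image as follows.

```python
import numpy as np

def ground_to_pixel(X, Y, M, R, t):
    """Project an earth coordinate (X, Y) on the ground plane into image pixel coordinates."""
    p_cam = R @ np.array([X, Y, 0.0]) + np.asarray(t, dtype=float)   # world -> camera
    uvw = M @ p_cam                                                   # camera -> homogeneous pixel
    return uvw[0] / uvw[2], uvw[1] / uvw[2]                           # divide out the depth value s
```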
The optional content in steps 202 to 204 above, namely generating the image path coordinate sequence from a start point and an end point defined directly in the video, is an inventive point of the embodiments of the present disclosure and solves the second technical problem mentioned in the background. The factors that prevent meeting real-world shortest-path application requirements are often as follows: in the prior art, path planning is performed only in a two-dimensional/three-dimensional map model, which cannot support path planning during video playback. If these factors are addressed, the applicability of path planning can be improved. To achieve this effect, the present disclosure introduces a conversion between the image coordinate system and the earth coordinate system for path planning. First, the start point coordinates and end point coordinates in the image coordinate system are converted into target start point coordinates and target end point coordinates in the earth coordinate system. Then, the target path coordinate sequence, i.e., the shortest path in the earth coordinate system, is determined from the target start point and target end point coordinates. Finally, the target path coordinate sequence is converted into the image path coordinate sequence, which is the shortest path in the image coordinate system and can subsequently be overlaid directly on the video for display. By converting coordinates between the image coordinate system and the earth coordinate system, the shortest path can be determined directly in an image captured in real time from the camera video, which supports path planning during video playback, raises the application level of path planning, and solves the second technical problem.
Step 205, the target image and the image path coordinate sequence are sent to a target device with a display function.
In some embodiments, the execution subject sends the target image and the image path coordinate sequence to a target device with a display function, and the target device displays the target image and the image path coordinate sequence. Optionally, the target device displays, as an overlay layer on the target image, the lines corresponding to the image path coordinates in the image path coordinate sequence. Specifically, the layer overlay may superimpose the image path coordinate sequence according to the brightness of the target image to form an overlaid presentation effect, or the image path coordinate sequence and the target image may be merged into one layer for display by intersection, union, or erasure operations.
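One possible realisation of this overlay (a minimal sketch, not the patent's stated implementation) rasterises the image path coordinate sequence as a polyline on a copy of the target image and blends the two layers with a constant weight; the colour, thickness, and blend weight are illustrative choices, and OpenCV is assumed for the drawing primitives.

```python
import cv2
import numpy as np

def overlay_path(target_image, image_path_coords, color=(0, 0, 255), alpha=0.6):
    """Draw the image path coordinate sequence on the target image as an overlay layer."""
    layer = target_image.copy()
    pts = np.array(image_path_coords, dtype=np.int32).reshape(-1, 1, 2)
    cv2.polylines(layer, [pts], isClosed=False, color=color, thickness=3)
    # Constant-weight blend of the route layer over the original frame.
    return cv2.addWeighted(layer, alpha, target_image, 1.0 - alpha, 0.0)
```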
The embodiment presented in fig. 2 has the following beneficial effects: a target image to be processed, start point coordinates, and end point coordinates are acquired, where the target image is a camera picture image; target start point coordinates and target end point coordinates are generated based on the start point coordinates and the end point coordinates; a target path coordinate sequence is generated; an image path coordinate sequence is generated based on the target path coordinate sequence; and the target image and the image path coordinate sequence are sent to a target device with a display function. Because the target path coordinate sequence is generated from the start point and end point coordinates in the target image, converted into an image path coordinate sequence, and displayed directly on the target device together with the target image, a path can be planned directly in the camera picture image to be processed, which improves path planning efficiency.
With further reference to fig. 3, as an implementation of the method shown in the above figures, the present disclosure provides some embodiments of an apparatus for planning a path on a video picture. These apparatus embodiments correspond to the method embodiments described above with reference to fig. 2, and the apparatus may be applied to various terminal devices.
As shown in fig. 3, the apparatus 300 for planning an optimal path on a video screen of some embodiments includes: a receiving unit 301, a first generating unit 302, a second generating unit 303, a third generating unit 304, and a display unit 305. The receiving unit 301 is configured to acquire a target image to be processed, a start point coordinate, and an end point coordinate. The target image is a camera picture image, and the starting point coordinate and the ending point coordinate are pixel coordinates in the target image. A first generating unit 302 configured to generate a target start point coordinate and a target end point coordinate based on the start point coordinate and the end point coordinate. And the target starting point coordinate and the target end point coordinate are earth coordinates in a physical space. A second generating unit 303 configured to generate a target path coordinate sequence based on the target start point coordinates and the target end point coordinates. A third generating unit 304 configured to generate an image path coordinate sequence based on the target path coordinate sequence. A display unit 305 configured to transmit the target image and the image path coordinate sequence to a target device having a display function. Wherein the target device displays the target image and the image path coordinate sequence.
It will be understood that the units described in the apparatus 300 correspond to the various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 300 and the units included therein, and are not described herein again.
Referring now to FIG. 4, shown is a block diagram of a computer system 400 suitable for use in implementing a terminal device of an embodiment of the present disclosure. The terminal device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 4, the computer system 400 includes a Central Processing Unit (CPU) 401 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage section 406 into a Random Access Memory (RAM) 403. The RAM 403 also stores various programs and data necessary for the operation of the system 400. The CPU 401, ROM 402, and RAM 403 are connected to each other via a bus 404. An Input/Output (I/O) interface 405 is also connected to the bus 404.
The following components are connected to the I/O interface 405: a storage section 406 including a hard disk and the like; and a communication section 407 including a Network interface card such as a LAN (Local Area Network) card, a modem, or the like. The communication section 407 performs communication processing via a network such as the internet. A drive 408 is also connected to the I/O interface 405 as needed. A removable medium 409 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted as necessary on the drive 408, so that a computer program read out therefrom is mounted as necessary in the storage section 406.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 407 and/or installed from the removable medium 409. The above-described functions defined in the method of the present disclosure are performed when the computer program is executed by a Central Processing Unit (CPU) 401. It should be noted that the computer readable medium in the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the C language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description covers only the preferred embodiments of the present disclosure and the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the present disclosure is not limited to technical solutions formed by the specific combination of the above features, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the inventive concept described above, for example technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.

Claims (8)

1. A method of planning a path on a video picture, comprising:
acquiring a target image to be processed, a start point coordinate and an end point coordinate, wherein the target image is a real-time video image of a camera, and the start point coordinate and the end point coordinate are pixel coordinates in the target image;
generating a target starting point coordinate and a target end point coordinate based on the starting point coordinate and the end point coordinate, wherein the target starting point coordinate and the target end point coordinate are earth coordinates in a physical space;
generating a target path coordinate sequence based on the target starting point coordinates and the target ending point coordinates;
generating an image path coordinate sequence based on the target path coordinate sequence;
and sending the target image and the image path coordinate sequence to target equipment with a display function, wherein the target equipment displays the target image and the image path coordinate sequence.
2. The method of claim 1, wherein generating target start point coordinates and target end point coordinates based on the start point coordinates and the end point coordinates comprises:
generating a target starting point coordinate based on the starting point coordinate;
and generating target end point coordinates based on the end point coordinates.
3. The method of claim 2, further comprising, prior to generating a target path coordinate sequence based on the target start point coordinates and the target end point coordinates:
for each pixel point in the target image, determining a target pixel coordinate of the pixel point to obtain a target pixel set;
deleting the target start point coordinates from the target set of pixels.
4. The method of claim 3, wherein the generating an image path coordinate sequence based on the target path coordinate sequence comprises:
and for each target path coordinate in the target path coordinate sequence, converting the target path coordinate into an image path coordinate to obtain the image path coordinate sequence.
5. The method of claim 4, wherein the target device displays the target image and the sequence of image path coordinates, comprising:
and the target equipment displays lines corresponding to all image path coordinates in the image path coordinate sequence in the target image.
6. An apparatus for planning an optimal path on a video picture, comprising:
the device comprises a receiving unit, a processing unit and a processing unit, wherein the receiving unit is configured to acquire a target image to be processed, a starting point coordinate and an ending point coordinate, the target image is a camera picture image, and the starting point coordinate and the ending point coordinate are pixel coordinates in the target image;
a first generating unit configured to generate a target start point coordinate and a target end point coordinate based on the start point coordinate and the end point coordinate, wherein the target start point coordinate and the target end point coordinate are earth coordinates in a physical space;
a second generation unit configured to generate a target path coordinate sequence based on the target start point coordinates and the target end point coordinates;
a third generating unit configured to generate an image path coordinate sequence based on the target path coordinate sequence;
a display unit configured to transmit the target image and the image path coordinate sequence to a target device having a display function, wherein the target device displays the target image and the image path coordinate sequence.
7. A terminal device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
8. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-6.
CN202110857070.9A 2021-07-28 2021-07-28 Method, device and terminal equipment for planning path on video picture Active CN113590878B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110857070.9A CN113590878B (en) 2021-07-28 2021-07-28 Method, device and terminal equipment for planning path on video picture

Publications (2)

Publication Number Publication Date
CN113590878A true CN113590878A (en) 2021-11-02
CN113590878B CN113590878B (en) 2023-11-17

Family

ID=78251030

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110857070.9A Active CN113590878B (en) 2021-07-28 2021-07-28 Method, device and terminal equipment for planning path on video picture

Country Status (1)

Country Link
CN (1) CN113590878B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130162838A1 (en) * 2011-12-22 2013-06-27 Pelco, Inc. Transformation between Image and Map Coordinates
CN104081433A (en) * 2011-12-22 2014-10-01 派尔高公司 Transformation between image and map coordinates
CN106303417A (en) * 2016-08-12 2017-01-04 长沙冰眼电子科技有限公司 Enhancing overall view monitoring method for unmanned platform
CN110309236A (en) * 2018-02-28 2019-10-08 深圳市萌蛋互动网络有限公司 The method, apparatus, computer equipment and storage medium of pathfinding in map
CN110547867A (en) * 2018-05-31 2019-12-10 上海联影医疗科技有限公司 control method, device, equipment, storage medium and system of mechanical arm
CN111014995A (en) * 2018-10-09 2020-04-17 中冶赛迪工程技术股份有限公司 Robot welding method and system for nonstandard unstructured operation environment
CN110458884A (en) * 2019-08-16 2019-11-15 北京茵沃汽车科技有限公司 Method, apparatus, the medium of vehicle operation state trajectory line are generated in panorama sketch
CN110524580A (en) * 2019-09-16 2019-12-03 西安中科光电精密工程有限公司 A kind of welding robot visual component and its measurement method
CN112585659A (en) * 2020-11-27 2021-03-30 华为技术有限公司 Navigation method, device and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
郝亮亮; 林海华: "Path planning for automatic generation of three-dimensional animation" (面向三维动画自动生成的路径规划), Computer Systems & Applications (计算机系统应用) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114792319A (en) * 2022-06-23 2022-07-26 国网浙江省电力有限公司电力科学研究院 Transformer substation inspection method and system based on transformer substation image
CN114792319B (en) * 2022-06-23 2022-09-20 国网浙江省电力有限公司电力科学研究院 Transformer substation inspection method and system based on transformer substation image
CN116341161A (en) * 2023-05-26 2023-06-27 广州一链通互联网科技有限公司 Digital twinning-based cross-border logistics transportation line simulation method and system
CN116341161B (en) * 2023-05-26 2023-08-15 广州一链通互联网科技有限公司 Digital twinning-based cross-border logistics transportation line simulation method and system

Also Published As

Publication number Publication date
CN113590878B (en) 2023-11-17

Similar Documents

Publication Publication Date Title
CN110058685B (en) Virtual object display method and device, electronic equipment and computer-readable storage medium
US10699431B2 (en) Method and apparatus for generating image generative model
CN110517214B (en) Method and apparatus for generating image
WO2018133692A1 (en) Method for achieving augmented reality, computer device and storage medium
CN112073748B (en) Panoramic video processing method and device and storage medium
CN110866977B (en) Augmented reality processing method, device, system, storage medium and electronic equipment
CN112929582A (en) Special effect display method, device, equipment and medium
CN113590878B (en) Method, device and terminal equipment for planning path on video picture
CN108985421B (en) Method for generating and identifying coded information
CN109754464B (en) Method and apparatus for generating information
CN112927273A (en) Three-dimensional video processing method, equipment and storage medium
WO2023207963A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN111882634A (en) Image rendering method, device and equipment and storage medium
CN110675465A (en) Method and apparatus for generating image
WO2023207379A1 (en) Image processing method and apparatus, device and storage medium
CN111783662B (en) Attitude estimation method, estimation model training method, device, medium and equipment
CN110111241A (en) Method and apparatus for generating dynamic image
CN114445269A (en) Image special effect processing method, device, equipment and medium
CN111818265B (en) Interaction method and device based on augmented reality model, electronic equipment and medium
CN110288523B (en) Image generation method and device
CN109816791B (en) Method and apparatus for generating information
CN111915532B (en) Image tracking method and device, electronic equipment and computer readable medium
CN111866548A (en) Marking method applied to medical video
CN111833459B (en) Image processing method and device, electronic equipment and storage medium
US11651529B2 (en) Image processing method, apparatus, electronic device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220823

Address after: Room 819, Enterprise Service Center, No. 17, Section 3, West Section of Changjiang North Road, Yibin Lingang Economic and Technological Development Zone, Yibin City, Sichuan Province, 644005

Applicant after: Yibin Zhongxing Technology Intelligent System Co.,Ltd.

Applicant after: Vimicro Co.,Ltd.

Address before: 644000 building 7-8, d2-b, Lingang District scientific research center, Yibin City, Sichuan Province

Applicant before: Yibin Zhongxing Technology Intelligent System Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20230328

Address after: 644000 building 7-8, d2-b, Lingang District scientific research center, Yibin City, Sichuan Province

Applicant after: Yibin Zhongxing Technology Intelligent System Co.,Ltd.

Applicant after: Vimicro Co.,Ltd.

Applicant after: Zhongxing Micro Technology Co.,Ltd.

Address before: Room 819, Enterprise Service Center, No. 17, Section 3, West Section of Changjiang North Road, Yibin Lingang Economic and Technological Development Zone, Yibin City, Sichuan Province, 644005

Applicant before: Yibin Zhongxing Technology Intelligent System Co.,Ltd.

Applicant before: Vimicro Co.,Ltd.

GR01 Patent grant