CN109272538B - Picture transmission method and device - Google Patents

Picture transmission method and device

Info

Publication number
CN109272538B
CN109272538B
Authority
CN
China
Prior art keywords
picture
pixel point
point
condition
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710581355.8A
Other languages
Chinese (zh)
Other versions
CN109272538A (en)
Inventor
庞英明
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201710581355.8A
Publication of CN109272538A
Application granted
Publication of CN109272538B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/06Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]

Abstract

The invention discloses a picture transmission method and device in the technical field of cloud computing, relating to cloud server and cloud processing technologies. The method comprises the following steps: receiving a first instruction, wherein the first instruction instructs that a picture of a target object be uploaded to a server; in response to the first instruction, acquiring at least two pictures obtained by continuously capturing the target object with an image acquisition device; when the image acquisition device is determined to be in a still state based on the at least two pictures, detecting whether a first picture among them has texture features, wherein the texture features include features describing the target object; and uploading the first picture to the server when the first picture has the texture features. The invention solves the technical problem in the related art that transmitting captured pictures consumes excessive network traffic.

Description

Picture transmission method and device
Technical Field
The invention relates to the Internet field, and in particular to a picture transmission method and device.
Background
In the related art, the scheme for transmitting a picture to a server works as follows: after the camera of a mobile terminal is opened, it is triggered to focus; when focusing completes, frame pictures are taken from the video stream and continuously transmitted to the cloud for recognition (specifically, recognition of the photographed object), and transmission stops once the cloud returns a successful recognition result. Alternatively, the user actively takes a photo and then uploads it, but this introduces operation costs.
The technical scheme has the following problems:
(1) The camera cannot complete focusing due to device stability or the device itself; for example, when photographing an object inside a moving car, no focusing callback is received.
(2) Picture quality cannot be guaranteed; even after focusing completes, the captured picture may not allow the cloud to recognize the photographed object.
(3) Pictures are continuously sent to the background server, consuming a large amount of user traffic and placing excessive load on the background.
(4) Requiring the user to actively trigger shooting gives a poor experience and a high operation cost.
For the technical problem in the related art that transmitting captured pictures consumes a large amount of traffic, no effective solution has been proposed so far.
Disclosure of Invention
The embodiments of the present invention provide a picture transmission method and a picture transmission device, to at least solve the technical problem in the related art that transmitting captured pictures consumes excessive traffic.
According to one aspect of the embodiments of the present invention, a picture transmission method is provided, comprising: receiving a first instruction, wherein the first instruction instructs that a picture of a target object be uploaded to a server; in response to the first instruction, acquiring at least two pictures obtained by continuously capturing the target object with an image acquisition device; when the image acquisition device is determined to be in a still state based on the at least two pictures, detecting whether a first picture among them has texture features, wherein the texture features include features describing the target object; and uploading the first picture to the server when the first picture has the texture features.
According to another aspect of the embodiments of the present invention, a picture transmission device is also provided, comprising: a receiving unit, for receiving a first instruction that instructs a picture of a target object to be uploaded to a server; a response unit, for acquiring, in response to the first instruction, at least two pictures obtained by continuously capturing the target object with an image acquisition device; a detection unit, for detecting, when the image acquisition device is determined to be in a still state based on the at least two pictures, whether a first picture among them has texture features, wherein the texture features include features describing the target object; and an uploading unit, for uploading the first picture to the server when the first picture has the texture features.
In the embodiments of the present invention, when a first instruction is received, at least two pictures obtained by continuously capturing the target object with an image acquisition device are acquired; when the image acquisition device is determined to be in a still state based on the at least two pictures, whether a first picture among them has texture features is detected, the texture features including features describing the target object; and the first picture is uploaded to the server when it has the texture features. Since only one picture needs to be uploaded, the technical problem in the related art that transmitting captured pictures consumes excessive traffic is solved, achieving the technical effect of reducing the traffic consumed by picture transmission.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a schematic diagram of a hardware environment of a picture transmission method according to an embodiment of the present invention;
fig. 2 is a flow chart of an alternative picture transmission method according to an embodiment of the present invention;
FIG. 3 is a flow chart of an alternative method for transmitting pictures according to an embodiment of the present invention;
FIG. 4 is a flow chart of an alternative method for transmitting pictures according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating an alternative pixel point of a picture according to an embodiment of the invention;
FIG. 6 is a schematic diagram of an alternative picture transmission apparatus according to an embodiment of the present invention; and
fig. 7 is a block diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solutions of the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some terms appearing in the description of the embodiments of the present invention are explained as follows:
OpenCV (Open Source Computer Vision Library): an open-source computer vision library.
Optical flow method: a method that uses the temporal variation of pixels in an image sequence and the correlation between adjacent frames to find the correspondence between the previous frame and the current frame, and thereby computes the motion of objects between adjacent frames.
Example 1
According to an embodiment of the present invention, a method embodiment of a method for transmitting a picture is provided.
Alternatively, in this embodiment, the above picture transmission method may be applied to a hardware environment formed by the server 102 and the terminal 104 as shown in fig. 1. As shown in fig. 1, the server 102 is connected to the terminal 104 via a network, which includes but is not limited to a wide area network, a metropolitan area network, or a local area network; the terminal 104 includes but is not limited to a PC, a mobile phone, a tablet computer, and the like. The picture transmission method according to the embodiment of the present invention may be executed by the terminal 104 alone, or jointly by the server 102 and the terminal 104. The terminal 104 may execute the method through a client installed on it.
When the picture transmission method of the embodiment of the invention is executed by the terminal alone, the program code corresponding to the method of the application can be executed on the terminal directly.
When the picture transmission method of the embodiment of the invention is executed jointly by the server and the terminal, the terminal executes the program code corresponding to the method of the application and feeds the obtained picture to the server for recognition.
Fig. 2 is a flowchart of an optional picture transmission method according to an embodiment of the present invention, and as shown in fig. 2, the method may include the following steps:
step S202, a first instruction is received, and the first instruction is used for instructing to upload the picture of the target object to the server.
The first instruction here includes two triggering modes: the first is triggering on a terminal executing the method of the application (such as terminal automatic triggering, user operation triggering and the like), and the second is triggering on other devices in communication connection with the terminal (such as triggering on a server performing picture recognition, triggering on other intelligent devices and the like).
The server is used to recognize the target object from the uploaded picture; its purposes include but are not limited to: monitoring the target object, and rendering the recognized target object in augmented reality (AR).
And step S204, responding to the first instruction, and acquiring at least two pictures acquired by continuously acquiring the target object by the image acquisition equipment.
Under the trigger of the first instruction, the terminal acquires an image of the target object through the image acquisition device, and the image acquisition device can be an acquisition module (such as a camera) on the terminal and can also be a device in communication connection with the terminal.
The at least two pictures are pictures adjacent in acquisition time when the target object is subjected to image acquisition.
Step S206, under the condition that the image acquisition equipment is determined to be in a static state based on the at least two pictures, whether a first picture in the at least two pictures has texture features or not is detected, and the texture features are used for the server to identify the target object.
Determining that the image capturing device is in a stationary state based on the at least two pictures is mainly based on image features in the two pictures.
Texture features include texture in the ordinary sense on an object's surface, i.e. grooves that make the surface appear uneven, as well as color patterns or designs drawn on the surface that visually convey a sense of unevenness. Texture features are global features of a picture; examples include Local Binary Patterns (LBP), uniform LBP, rotation-invariant LBP, Local Ternary Patterns (LTP), CLBP features, and variants that incorporate local gradient information.
The texture feature of the first picture is a global feature of the first picture, covering all features within the captured region (possibly including the target object). When the first picture has enough local features, it is considered to have the global feature.
Step S208, uploading the first picture to a server under the condition that the first picture has the texture feature, so that the server can identify the target object in the first picture.
In the embodiment of the invention, whether the acquisition device is still is judged directly from the continuously captured pictures. This avoids the problems that, due to device stability or the device itself, the camera cannot complete focusing (for example, no focusing callback is received when photographing an object inside a moving car, and some phone models cannot focus or never receive the focusing callback), and that captured frame pictures consequently cannot be obtained.
The first picture is uploaded only when it has the global feature, which improves the cloud server's recognition rate for the photographed object (the target object); since only one picture needs to be uploaded, the traffic consumed by picture transmission is reduced and the background load pressure is lowered.
Through steps S202 to S208, when the first instruction is received, at least two pictures obtained by continuously capturing the target object with the image acquisition device are acquired; when the image acquisition device is determined to be in a still state based on the at least two pictures, whether a first picture among them has texture features is detected, the texture features including features describing the target object; and the first picture is uploaded to the server when it has the texture features. Since only one picture needs to be uploaded, the technical problem in the related art that transmitting captured pictures consumes excessive traffic is solved, and the technical effect of reducing the traffic consumed by picture transmission is achieved.
The method of the present application may be directly run on the terminal or run on a client integrated into the terminal.
The following describes an embodiment of the present application in detail with reference to steps S202 to S208:
in the technical solution provided in step S202, the terminal receives the first instruction, and uploads the picture of the target object to the server according to the instruction of the first instruction.
In the technical solution provided in step S204, at least two pictures obtained by continuously capturing the target object with the image acquisition device are acquired in response to the first instruction, and whether the image acquisition device is in a still state is determined based on them. The following takes two pictures as an example:
Step S2042: detect whether there is an acquisition point (used to identify a moving object) that moves within the common acquisition region of the first picture and the second picture (i.e. the region where the acquisition region of the first picture overlaps or intersects that of the second picture). Determining whether a moving acquisition point exists is in effect determining whether a moving object exists; the second picture is the picture among the at least two pictures whose acquisition time is adjacent to that of the first picture.
Optionally, in step S2042, detecting whether a moving acquisition point exists in the first and second pictures includes: determining the motion speed of each target acquisition point (i.e. dividing the change in position by the time interval) from the position of its corresponding pixel point in the first picture, the position of its corresponding pixel point in the second picture, and the time interval between the acquisition times of the two pictures, where there are at least two target acquisition points; when the motion speeds of two acquisition points differ, it is determined that a moving acquisition point exists; when the motion speeds of the acquisition points are all the same, it is determined that no moving acquisition point exists.
Specifically, the idea is to check whether the speed of a background acquisition point and the speed of any object's acquisition point (both among the target acquisition points) are consistent. Ideally, only two acquisition points (one on the background and one on an object) are needed to judge whether an object moves relative to the device.
Alternatively, the target acquisition point is a collection of acquisition points, wherein the acquisition points are determined according to a predetermined rule (e.g., equal spacing between acquisition points).
In this way, when detecting whether a moving acquisition point exists in the first and second pictures, the speeds of every two acquisition points in the set can be compared: if the motion speeds of two acquisition points differ, a moving acquisition point exists; if the motion speeds of all acquisition points are the same, no moving acquisition point exists.
Preferably, the target acquisition point is a set of acquisition points of two types: the first type corresponds to pixel points where texture features are located (a moving object generally has a distinct contour and hence distinct texture), and the second type corresponds to pixel points without texture features (i.e. acquisition points on the background, which may be chosen with larger spacing). This reduces the number of acquisition points and makes moving objects, or moving acquisition points, easier to identify.
In that case, when detecting whether a moving acquisition point exists in the first and second pictures, an acquisition point of one type is compared in turn with acquisition points of the other type: if the motion speeds of the two differ, a moving acquisition point exists; if the motion speeds are the same for all comparisons, no moving acquisition point exists.
In step S2044, in the case where it is detected that there is no acquisition point where motion occurs in the first picture and the second picture, it is determined that the image acquisition apparatus is in a still state (the still state here is a relatively still state).
Step S2046, in the case where it is detected that there is an acquisition point where motion occurs in the first picture and the second picture, determines that the image acquisition apparatus is not in a still state.
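The speed comparison in steps S2042 to S2046 can be sketched in a few lines of Python. This is a minimal illustration rather than the patent's implementation: the function names, the Euclidean speed formula, and the tolerance are assumptions.

```python
def acquisition_point_speed(pos_a, pos_b, dt):
    """Speed of one acquisition point: the displacement of its pixel position
    between the two pictures divided by the capture-time interval dt."""
    dx = pos_b[0] - pos_a[0]
    dy = pos_b[1] - pos_a[1]
    return ((dx * dx + dy * dy) ** 0.5) / dt

def camera_is_still(points_a, points_b, dt, tol=1e-6):
    """Compare the speed of every tracked acquisition point; any mismatch
    means a moving acquisition point exists, so the device is not
    (relatively) still (steps S2044/S2046)."""
    speeds = [acquisition_point_speed(a, b, dt)
              for a, b in zip(points_a, points_b)]
    return all(abs(s - speeds[0]) <= tol for s in speeds)
```

For example, if a background point and an object point both shift by one pixel over the same interval, the device counts as still; if the object point shifts further, a moving acquisition point is reported.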
It should be noted that the above uses two pictures as the example of determining whether the image acquisition device is still. For three pictures, the method may first be applied to the two earlier pictures (the first and second pictures) and then to the two later ones (the second and third pictures); the device is considered still only if both detections report a still state. For four or more pictures, the detection proceeds in the same way as for three, and is not repeated here.
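The extension to three or more pictures reduces to chaining the two-picture check over adjacent pairs. A sketch, with a hypothetical `pair_is_still` callback standing in for the two-picture comparison:

```python
def sequence_is_still(pictures, pair_is_still):
    # The device counts as still only if every adjacent pair of pictures
    # independently passes the two-picture stillness check.
    return all(pair_is_still(pictures[i], pictures[i + 1])
               for i in range(len(pictures) - 1))
```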
In the technical solution provided in step S206, in a case that it is determined that the image capturing apparatus is in a still state based on the at least two pictures, it is detected whether a first picture of the at least two pictures has a texture feature, where the texture feature includes a feature for describing a target object. Specifically, the detecting whether the first picture of the at least two pictures has texture features comprises the following sub-steps:
step S2062 is to identify a feature pixel in the first picture, where the feature pixel is a pixel used for describing a local feature in the first picture (e.g., a feature from accessed Segment Test). The FAST feature points are a set of points representing the image grammatical features, the calculation speed of feature values (feature values) of the FAST feature points is FAST and is many times faster than that of other known feature point detection algorithms, and the FAST feature points can be used for real-time scenes of computer vision application. The following description will be made by taking FAST characteristic points as examples:
Optionally, identifying the feature pixel points in the first picture may be implemented as follows: obtain the feature value of each pixel point in a first area of the first picture, where the first area is centered on a first pixel point and consists of the pixel points whose distance from the first pixel point is at most N pixels; detect whether the first pixel point satisfies the following condition: among the pixel points at a distance of N pixels from the first pixel point, more than a second threshold of second pixel points exist whose gray values differ from that of the first pixel point by more than a third threshold in absolute value; if the first pixel point satisfies the condition, detect whether its feature value is not less than the feature values of all pixel points in the first area; and if its feature value is not less than those of all pixel points in the first area, set the first pixel point as a feature pixel point.
When obtaining the feature value of each pixel point in the first area, the following operation may be performed on each pixel point (treated as the current pixel point while the operation runs): detect whether the current pixel point satisfies the following condition: among the pixel points at a distance of N pixels from the current pixel point, more than the second threshold of third pixel points exist whose gray values differ from that of the current pixel point by more than the third threshold in absolute value; if the current pixel point satisfies the condition, set its feature value to the sum of the absolute differences between its gray value and the gray values of all the third pixel points; otherwise, set its feature value to 0.
Step S2064, determining that the first picture has a texture feature when the number of all the feature pixel points in the identified first picture reaches the first threshold.
Step S2066, determining that the first picture has no texture feature when the number of all the feature pixel points in the identified first picture is smaller than the first threshold.
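The feature-pixel identification and threshold decision of steps S2062 to S2066 can be sketched as follows, assuming an 8-bit grayscale image stored as a 2-D list. The circle offsets, the default thresholds, and the function names are illustrative assumptions; the patent fixes neither N nor the threshold values.

```python
# Bresenham circle of radius 3 used by FAST: 16 (dx, dy) offsets around the centre.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def feature_value(img, y, x, count_thresh, diff_thresh):
    """Feature value of one pixel: the sum of the absolute gray differences
    to circle pixels that differ by more than diff_thresh, or 0 when too
    few circle pixels differ (the condition of step S2062)."""
    diffs = [abs(img[y + dy][x + dx] - img[y][x]) for dx, dy in CIRCLE]
    big = [d for d in diffs if d > diff_thresh]
    return sum(big) if len(big) > count_thresh else 0

def has_texture(img, first_thresh, count_thresh=8, diff_thresh=20):
    """Count feature pixels and compare against the first threshold
    (steps S2064/S2066)."""
    h, w = len(img), len(img[0])
    n = 0
    for y in range(3, h - 3):
        for x in range(3, w - 3):
            v = feature_value(img, y, x, count_thresh, diff_thresh)
            if v > 0:
                # Keep the pixel only if no pixel in its surrounding area
                # has a larger feature value (the local-maximum test).
                if all(v >= feature_value(img, y + j, x + i,
                                          count_thresh, diff_thresh)
                       for j in range(-3, 4) for i in range(-3, 4)
                       if 3 <= y + j < h - 3 and 3 <= x + i < w - 3):
                    n += 1
    return n >= first_thresh
```

On a flat image no pixel passes the circle test, so the picture is judged to have no texture feature; a single bright dot on a dark background yields one feature pixel.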
In the technical solution provided in step S208, when the first picture has a texture feature, the first picture is uploaded to the server, so that the server can identify the target object in the first picture.
Optionally, in the AR scenario, after the first picture is uploaded to the server, the server recognizes the target object from the texture features of the first picture and renders a three-dimensional model of the target object in the augmented reality scene.
Optionally, in the monitoring scenario, after the first picture is uploaded to the server, the server recognizes the target object from the texture features of the first picture and marks the target object in the display interface (for example, a marker pattern is always shown around the target object).
Optionally, after the first picture is uploaded to the server, if the server fails to recognize the target object from the texture features (since the picture's texture is clear, a failed recognition indicates that the target object is not in the acquisition region), prompt information is generated to prompt adjustment of the direction of the image acquisition device's acquisition window, so that the captured picture contains the target object.
Trial tests show that this picture transmission scheme achieves the following effects:
(1) Focusing no longer depends on the lens, which solves the focusing compatibility problem of some phone models;
(2) The problem that scanning cannot be completed while the user is in a state of absolute motion is solved;
(3) Excessive picture transmission to the background is avoided, reducing user traffic consumption and background load cost.
As an alternative embodiment, the following details an embodiment of the present application using an AR scenario as an example.
Scenario introduction: in an AR game, a picture of a figurine (i.e. a target object such as a superhero, a minion, an animal, etc.) needs to be uploaded at the start of the game, so that a game character modeled on the figurine can be rendered in the AR game. The method of the present application then executes the flow shown in fig. 3:
Step S301: the camera is opened; the mobile phone internally receives the incoming video stream, and the first frame picture is obtained and stored;
Step S302: the second frame picture is acquired, and a moving object (corresponding to a moving acquisition point) is detected by the optical flow method.
The optical flow method uses the temporal variation of pixels in an image sequence (continuous images) and the correlation between adjacent frames to find the correspondence between the previous frame and the current frame, thereby computing the motion of objects between adjacent frames. In general, optical flow arises from the movement of foreground objects in the scene, the motion of the camera, or both.
The premise assumptions of the optical flow method are: (1) the luminance between adjacent frames is constant; (2) the capture times of adjacent frames are continuous, or the motion of objects between adjacent frames is small; (3) spatial consistency holds, i.e. pixel points of the same sub-image have the same motion.
It should be noted that the motion field is the motion of objects in the three-dimensional real world, while the optical flow field is the projection of the motion field onto the two-dimensional image plane.
Each pixel point in the image is assigned a velocity vector, forming a motion vector field. At a given moment, points in the image correspond one-to-one with points on the three-dimensional object, and this correspondence can be obtained by projection. The image can then be analyzed dynamically from the velocity vector of each pixel point. If there is no moving object in the image, the optical flow vector varies continuously over the whole image region. When a moving object exists, the target and the background move relative to each other; the velocity vectors of the moving object differ from those of the background, so the position of the moving object can be computed. If the user's mobile phone is in a still state, it is presumed that the user is scanning the picture, and the phone and the target object are then relatively still.
Specifically, the method can be realized by the following steps:
Step S401, a continuous sequence of picture frames is processed.
Step S402, for each picture frame sequence, a target detection method is used to detect foreground moving targets that may appear; for example, an area with uniform speed (namely an acquisition point area) is treated as background, and an area whose speed changes is treated as a moving target.
In step S403, if a moving object appears in a frame, representative key feature points are found (they may be generated randomly, or corner points may be used as feature points).
Step S404, for any two picture frames (picture frames in which the moving target appears), the optimal position in the current frame of a key feature point appearing in the previous frame (i.e. the same collection point) is found, so as to obtain the position coordinates of the moving target in the current frame. The moving speed of the key feature point is determined according to its position coordinates in the previous picture frame, its position coordinates in the current picture frame, and the shooting time interval.
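The speed computation of step S404 can be sketched as follows (a minimal illustration; the function name and the use of Euclidean pixel distance are assumptions made for this sketch):

```python
def point_speed(prev_pos, curr_pos, dt):
    """Moving speed of a key feature point, computed from its position
    coordinates in the previous and current picture frames and the
    shooting time interval dt (seconds); result is in pixels per second."""
    dx = curr_pos[0] - prev_pos[0]
    dy = curr_pos[1] - prev_pos[1]
    return (dx * dx + dy * dy) ** 0.5 / dt
```

Comparing the speeds obtained for different key feature points then distinguishes the uniformly moving background from a moving target.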
Step S303, the current frame picture is acquired at this time, and FAST feature point detection in OpenCV is invoked. The feature point detection steps are as follows (see fig. 5 for the feature points):
In step S3031, a circle with a radius of 3 centered on pixel p (corresponding to the first pixel point) contains 16 pixel points (p1, p2, ..., p16).
Step S3032, a threshold (corresponding to a third threshold) is defined, and the pixel differences (differences between gray values) between p1, p9 and the center p are calculated. If both absolute values are smaller than the threshold, p cannot be a feature point; otherwise, p is taken as a candidate point and the next step is executed.
Step S3033, if p is a candidate point, the pixel differences between p1, p5, p9, p13 and the center p are calculated. If at least 3 of their absolute values exceed the threshold, the next step is executed; otherwise, p cannot be a feature point.
Step S3034, if p is still a candidate point, the pixel differences between all 16 points p1 to p16 and the center p are calculated. If at least 9 of them (corresponding to a second threshold) exceed the threshold, p is a feature point; otherwise, p cannot be a feature point.
Step S3035, non-maximum suppression is performed on the image: the FAST score value (namely the feature value) of each feature point is calculated as the s value, i.e. the sum of the absolute differences between the 16 circle points and the center. For a neighborhood centered on feature point p (such as 3x3 or 5x5), if multiple feature points exist, their s values are compared: p is retained only if it is the maximum response among all feature points in the neighborhood, and is suppressed otherwise. If there is only one feature point in the neighborhood, it is retained.
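The segment test and non-maximum suppression of steps S3031 to S3035 can be sketched in pure Python as follows. This is a minimal illustration, not OpenCV's implementation; the circle offsets follow the standard Bresenham circle of radius 3, and the function names and the choice of summing all 16 absolute differences as the score are assumptions based on the description above:

```python
# Offsets of the 16 circle pixels p1..p16 (Bresenham circle of radius 3,
# starting at the top and proceeding clockwise).
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def fast_score(img, x, y, t):
    """FAST score (the s value) of pixel (x, y): the sum of the absolute
    gray differences between the 16 circle pixels and the center, or 0 if
    the segment tests of steps S3032-S3034 reject the pixel."""
    c = img[y][x]
    diffs = [abs(img[y + dy][x + dx] - c) for dx, dy in CIRCLE]
    if diffs[0] <= t and diffs[8] <= t:               # S3032: p1, p9 quick test
        return 0
    if sum(diffs[i] > t for i in (0, 4, 8, 12)) < 3:  # S3033: p1, p5, p9, p13
        return 0
    if sum(d > t for d in diffs) < 9:                 # S3034: at least 9 of 16
        return 0
    return sum(diffs)                                 # the s value

def detect_fast(img, t):
    """Detect FAST feature points with 3x3 non-maximum suppression (S3035)."""
    h, w = len(img), len(img[0])
    score = {(x, y): fast_score(img, x, y, t)
             for y in range(3, h - 3) for x in range(3, w - 3)}
    keypoints = []
    for (x, y), s in score.items():
        if s == 0:
            continue
        neighbours = [score.get((x + dx, y + dy), 0)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (dx, dy) != (0, 0)]
        if s >= max(neighbours):  # maximum response within the neighborhood
            keypoints.append((x, y))
    return keypoints
```

On a uniform image no pixel passes the segment test, while an isolated bright pixel on a flat background is detected as a single feature point.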
In step S304, after the number of feature points is obtained, a threshold M (corresponding to the first threshold) is set, and it is determined whether the number of feature points is smaller than M. If so, step S305 is performed; otherwise, step S306 is performed.
Step S305, if the number of feature points is less than M, the picture is confirmed to be blurred and not clear enough (considered to have no texture features); the picture is discarded, and the next picture continues to be identified.
Step S306, if the number of feature points is not less than M, the picture is determined to be clear enough (to have texture features) and to satisfy the cloud recognition condition; the picture is then uploaded.
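The decision of steps S304 to S306 amounts to a threshold test over successive frames: frames whose feature-point count falls below M are discarded as blurred, and the first sufficiently textured frame is selected for upload. A sketch under that reading (function and parameter names are illustrative):

```python
def select_frame_to_upload(feature_counts, m):
    """Return the index of the first frame whose FAST feature-point count
    reaches the threshold M (step S306); frames with fewer points are
    considered blurred and skipped (step S305). Returns None if no frame
    qualifies."""
    for i, count in enumerate(feature_counts):
        if count >= m:
            return i
    return None
```

Only the selected frame is transmitted, which is what keeps the number of uploaded picture frames small.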
After receiving the picture, the server identifies the doll in the image and renders a game character in the AR game using the doll as a model, and the user starts the game with that game character.
In the embodiment of the application, there is no need to rely on the mobile phone's focusing callback to decide whether to transmit a picture. Focusing has compatibility problems: some mobile phones cannot return focusing-success information, so the picture can never be uploaded. Instead, a motion estimation algorithm (the optical flow method) is used to judge whether the mobile phone is currently in a motion state; if it is in a static state, the user is considered to be aiming the mobile phone at the object to be identified. Feature point detection (FAST feature points) then judges whether the picture is clear enough, and the picture is transmitted to the background if it is clear enough to meet the picture recognition requirement. This process removes the dependence on mobile phone focusing. Meanwhile, the number of transmitted picture frames can be reduced, the load pressure on the background is reduced, and the recognition rate of the background is improved after the feature points are judged.
The invention also provides a preferred embodiment, which is detailed below by taking a monitoring scene as an example:
Scene description: in some scenes, a designated object (such as machines and production lines in industrial operation) is monitored. In this case, an image acquisition device can be arranged around the object, and the image acquisition device acquires pictures and uploads them to the server.
Specifically, in this scenario, the executed method is the same as the method executed in the AR scenario, and is not described herein again.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combinations of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art will appreciate that the embodiments described in this specification are preferred embodiments, and that the acts and modules involved are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 2
According to the embodiment of the invention, the invention also provides a picture transmission device for implementing the picture transmission method. Fig. 6 is a schematic diagram of an alternative picture transmission apparatus according to an embodiment of the present invention, and as shown in fig. 6, the apparatus may include: a receiving unit 62, a response unit 64, a detection unit 66 and an upload unit 68.
The receiving unit 62 is configured to receive a first instruction, where the first instruction is used to instruct to upload a picture of a target object to a server.
The first instruction here includes two triggering modes: the first is triggering on a terminal executing the method of the application (such as terminal automatic triggering, user operation triggering and the like), and the second is triggering on other devices in communication connection with the terminal (such as triggering on a server performing picture recognition, triggering on other intelligent devices and the like).
The server is used for identifying the target object according to the uploaded picture. The purposes of the server include, but are not limited to: monitoring the target object, and rendering the identified target object in augmented reality (AR).
And the response unit 64 is configured to, in response to the first instruction, acquire at least two pictures obtained by continuously acquiring the target object by the image acquisition device.
Under the trigger of the first instruction, the terminal acquires an image of the target object through the image acquisition device, and the image acquisition device can be an acquisition module (such as a camera) on the terminal and can also be a device in communication connection with the terminal.
The at least two pictures are pictures adjacent in acquisition time when the target object is subjected to image acquisition.
A detecting unit 66, configured to detect whether a first picture of the at least two pictures has a texture feature in a case that it is determined that the image capturing device is in a stationary state based on the at least two pictures, where the texture feature includes a feature for describing the target object.
Determining that the image capturing device is in a stationary state based on the at least two pictures is mainly based on image features in the two pictures.
Texture features include texture in the general sense of an object's surface, that is, the surface of the object exhibits uneven grooves; they also include color patterns or designs drawn on the surface that visually give a sense of unevenness. Texture features belong to the global features of a picture; examples include Local Binary Patterns (LBP), uniform LBP, rotation-invariant LBP, Local Ternary Patterns (LTP), CLBP features, and LBP with added local gradient information.
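As one concrete example of the texture descriptors listed above, the basic 8-bit Local Binary Pattern (LBP) of a pixel compares each of its 8 neighbours with the center. The sketch below is a minimal pure-Python illustration; the neighbour ordering and the >= comparison convention are assumptions of this sketch, not fixed by the text:

```python
def lbp_code(img, x, y):
    """Basic 8-bit LBP code of pixel (x, y): bit i is set when the i-th
    neighbour's gray value is >= the center's gray value."""
    c = img[y][x]
    offsets = [(-1, -1), (0, -1), (1, -1), (1, 0),
               (1, 1), (0, 1), (-1, 1), (-1, 0)]
    code = 0
    for bit, (dx, dy) in enumerate(offsets):
        if img[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code
```

A histogram of such codes over the whole picture yields a global texture descriptor of the kind the paragraph refers to.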
The texture features of the first picture are global features of the first picture, including all features within the captured region (possibly including the target object). In the case where the first picture has sufficient local features, the first picture is considered to have global features.
An uploading unit 68, configured to upload the first picture to the server if the first picture has a texture feature.
In the embodiment of the invention, whether the acquisition device is stationary is judged directly from the continuously acquired pictures. This avoids problems caused by device stability or the device itself, such as the camera failing to complete focusing (for example, when shooting objects from a moving car the focusing callback is never received, or some phone models never return the focusing callback due to model-specific issues) and the captured frame pictures therefore never being obtained.
The first picture is uploaded under the condition that the first picture has global characteristics, the recognition rate of the cloud server to a shot object (target object) can be improved, only one picture needs to be uploaded, the flow consumption of picture transmission can be reduced, and the load pressure of a background is reduced.
It should be noted that the receiving unit 62 in this embodiment may be configured to execute step S202 in embodiment 1 of this application, the responding unit 64 in this embodiment may be configured to execute step S204 in embodiment 1 of this application, the detecting unit 66 in this embodiment may be configured to execute step S206 in embodiment 1 of this application, and the uploading unit 68 in this embodiment may be configured to execute step S208 in embodiment 1 of this application.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of embodiment 1 described above. It should be noted that the modules described above as a part of the apparatus may operate in a hardware environment as shown in fig. 1, and may be implemented by software or hardware.
Through the module, when a first instruction is received, at least two pictures obtained by continuously acquiring a target object by image acquisition equipment are obtained; under the condition that the image acquisition equipment is determined to be in a static state based on the at least two pictures, detecting whether a first picture in the at least two pictures has texture features, wherein the texture features comprise features for describing a target object; under the condition that the first picture has the texture characteristics, the first picture is uploaded to the server, and only one picture needs to be uploaded, so that the technical problem that more flow needs to be consumed for transmitting and acquiring the pictures in the related technology can be solved, and the technical effect of reducing the flow consumed for transmitting and acquiring the pictures is achieved.
In the above-described embodiment, in order to determine whether the image acquisition device is in a stationary state, the detection unit includes: a detection module, configured to detect whether a moving acquisition point exists in the same acquisition area of the first picture and a second picture, where the second picture is a picture of the at least two pictures whose acquisition time is adjacent to that of the first picture; a third determining module, configured to determine that the image acquisition device is in a stationary state when no moving acquisition point is detected in the first picture and the second picture; and a fourth determining module, configured to determine that the image acquisition device is not in a stationary state when a moving acquisition point is detected in the first picture and the second picture.
Optionally, when detecting whether moving acquisition points exist in the first picture and the second picture, the detection module determines the motion speed of a target acquisition point according to the position information of the pixel point corresponding to the target acquisition point in the first picture, the position information of the pixel point corresponding to the target acquisition point in the second picture, and the time interval between the acquisition times of the first picture and the second picture, where the target acquisition points include at least two acquisition points; when the motion speeds of two acquisition points differ, it is determined that a moving acquisition point exists; when the motion speeds of the acquisition points are the same, it is determined that no moving acquisition point exists.
In the above embodiment, to detect whether a first picture of the at least two pictures has a texture feature, the detecting unit includes: the identification module is used for identifying characteristic pixel points in the first picture, wherein the characteristic pixel points are pixel points used for describing local characteristics in the first picture; the first determining module is used for determining that the first picture has texture features under the condition that the number of all the feature pixel points in the identified first picture reaches a first threshold value; and the second determining module is used for determining that the first picture does not have the texture feature under the condition that the number of all the characteristic pixel points in the identified first picture is smaller than the first threshold value.
Optionally, the identification module comprises: the obtaining submodule is used for obtaining the characteristic value of each pixel point in a first area in the first picture, wherein the first area takes the first pixel point as the center, and the first area is an area formed by pixel points which are less than or equal to N pixel points away from the first pixel point; the first detection submodule is used for detecting whether the first pixel point meets the following conditions: the number of second pixel points which are larger than a second threshold value exist in the pixel points which are away from the first pixel points by N pixel points, and the absolute value of the difference value between the gray value of the second pixel point and the gray value of the first pixel point is larger than a third threshold value; the second detection submodule is used for detecting whether the characteristic value of the first pixel point is not less than the characteristic values of all the pixel points in the first area or not under the condition that the first pixel point meets the condition; and the setting submodule is used for setting the first pixel point as the characteristic pixel point under the condition that the characteristic value of the first pixel point is not less than the characteristic values of all the pixel points in the first area.
The above-mentioned obtaining submodule is further configured to: executing the following operations on each pixel point, wherein each pixel point is regarded as a current pixel point when the following operations are executed: detecting whether the current pixel point meets the following conditions: the number of third pixel points which are larger than the second threshold exist in pixel points which are away from the current pixel point by N pixel points, and the absolute value of the difference value between the gray value of the third pixel point and the gray value of the current pixel point is larger than the third threshold; setting the sum of absolute values of differences between the gray value of the current pixel and the gray values of all third pixels as the characteristic value of the current pixel under the condition that the current pixel meets the condition; and under the condition that the current pixel point does not meet the condition, setting the characteristic value of the current pixel point to be 0.
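The feature-value computation performed by the obtaining submodule can be sketched as follows, taking as input the gray values of the pixels exactly N pixels away from the current pixel (the function and parameter names are assumptions for illustration):

```python
def pixel_feature_value(center, ring, second_threshold, third_threshold):
    """Feature value of the current pixel point. `ring` holds the gray
    values of the pixels that are N pixels away from it. The 'third pixel
    points' are those whose absolute gray difference from the center
    exceeds the third threshold; if their number exceeds the second
    threshold, the feature value is the sum of those absolute differences,
    otherwise it is 0."""
    third_diffs = [abs(v - center) for v in ring
                   if abs(v - center) > third_threshold]
    return sum(third_diffs) if len(third_diffs) > second_threshold else 0
```

This mirrors the condition above: a pixel receives a non-zero feature value only when enough ring pixels differ sharply from it in gray value.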
In an embodiment of the application, after the uploading unit uploads the first picture to the server, the server identifies the target object according to a texture feature of the first picture, and renders a three-dimensional model of the target object in the augmented reality scene.
Through the embodiment of the application, whether a picture is transmitted is decided without relying on the mobile phone's focusing adjustment. Focusing has compatibility problems: some mobile phones cannot return focusing-success information, so the picture can never be uploaded. Instead, a motion estimation algorithm (the optical flow method) is used to judge whether the mobile phone is currently in a motion state; if it is in a static state, the user is considered to be aiming the mobile phone at the object to be identified. Feature point detection (FAST feature points) then judges whether the picture is clear enough, and the picture is transmitted to the background if it is clear enough to meet the picture recognition requirement. This process removes the dependence on mobile phone focusing. Meanwhile, the number of transmitted picture frames can be reduced, the load pressure on the background is reduced, and the background recognition rate is improved after the feature points are judged.
It should be noted here that the modules described above are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the disclosure of embodiment 1 described above. It should be noted that the modules described above as part of the apparatus may run in a hardware environment as shown in fig. 1, may be implemented by software, and may also be implemented by hardware, where the hardware environment includes a network environment.
Example 3
According to an embodiment of the present invention, a server or a terminal (i.e., an electronic device of the present application) for implementing the above-described picture transmission method is also provided.
Fig. 7 is a block diagram of a terminal according to an embodiment of the present invention, and as shown in fig. 7, the terminal may include: one or more processors 701 (only one shown in fig. 7), a memory 703, and a transmission means 705 (such as the sending means in the above embodiments), as shown in fig. 7, the terminal may further include an input-output device 707.
The memory 703 may be used to store software programs and modules, such as program instructions/modules corresponding to the image transmission method and apparatus in the embodiments of the present invention, and the processor 701 executes various functional applications and data processing by running the software programs and modules stored in the memory 703, that is, implements the image transmission method described above. The memory 703 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory. In some examples, the memory 703 may further include memory located remotely from the processor 701, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 705 is used for receiving or transmitting data via a network, and may also be used for data transmission between a processor and a memory. Examples of the network may include a wired network and a wireless network. In one example, the transmission device 705 includes a Network adapter (NIC) that can be connected to a router via a Network cable and other Network devices to communicate with the internet or a local area Network. In one example, the transmission device 705 is a Radio Frequency (RF) module used to communicate with the internet in a wireless manner.
Among other things, the memory 703 is used to store application programs.
The processor 701 may invoke the application program stored in the memory 703 via the transmission means 705 to perform the following steps: receiving a first instruction, wherein the first instruction is used for instructing to upload a picture of a target object to a server; responding to the first instruction, and acquiring at least two pictures obtained by continuously acquiring the target object by the image acquisition equipment; under the condition that the image acquisition equipment is determined to be in a static state based on the at least two pictures, detecting whether a first picture in the at least two pictures has texture features, wherein the texture features comprise features for describing a target object; and uploading the first picture to a server under the condition that the first picture has the texture features.
The processor 701 is further configured to perform the following steps: identifying characteristic pixel points in the first picture, wherein the characteristic pixel points are pixel points used for describing local characteristics in the first picture; determining that the first picture has texture features under the condition that the number of all feature pixel points in the identified first picture reaches a first threshold value; and under the condition that the number of all the characteristic pixel points in the identified first picture is smaller than a first threshold value, determining that the first picture does not have texture characteristics.
By adopting the embodiment of the invention, when the first instruction is received, at least two pictures obtained by continuously acquiring the target object by the image acquisition equipment are obtained; under the condition that the image acquisition equipment is determined to be in a static state based on the at least two pictures, detecting whether a first picture in the at least two pictures has texture features, wherein the texture features comprise features for describing a target object; under the condition that the first picture has the texture characteristics, the first picture is uploaded to the server, and only one picture needs to be uploaded, so that the technical problem that more flow needs to be consumed for transmitting and acquiring the pictures in the related technology can be solved, and the technical effect of reducing the flow consumed for transmitting and acquiring the pictures is achieved.
Optionally, the specific examples in this embodiment may refer to the examples described in embodiment 1 and embodiment 2, and this embodiment is not described herein again.
It should be understood by those skilled in the art that the structure shown in fig. 7 is only an illustration, and the terminal may be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palm computer, and a Mobile Internet Device (MID), PAD, etc. Fig. 7 is a diagram illustrating a structure of the electronic device. For example, the terminal may also include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in FIG. 7, or have a different configuration than shown in FIG. 7.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, read-Only memories (ROMs), random Access Memories (RAMs), magnetic or optical disks, and the like.
Example 4
The embodiment of the invention also provides a storage medium. Alternatively, in this embodiment, the storage medium may be a program code for executing a picture transmission method.
Optionally, in this embodiment, the storage medium may be located on at least one of a plurality of network devices in a network shown in the above embodiment.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
s11, receiving a first instruction, wherein the first instruction is used for instructing to upload a picture of a target object to a server;
s12, responding to the first instruction, and acquiring at least two pictures obtained by continuously acquiring the target object by the image acquisition equipment;
s13, detecting whether a first picture in the at least two pictures has texture features under the condition that the image acquisition equipment is determined to be in a static state based on the at least two pictures, wherein the texture features comprise features for describing a target object;
and S14, uploading the first picture to a server under the condition that the first picture has the texture feature.
Optionally, the storage medium is further arranged to store program code for performing the steps of:
s21, identifying characteristic pixel points in the first picture, wherein the characteristic pixel points are pixel points used for describing local characteristics in the first picture;
s22, determining that the first picture has texture features under the condition that the number of all feature pixel points in the identified first picture reaches a first threshold value;
and S23, determining that the first picture does not have texture features under the condition that the number of all the feature pixel points in the identified first picture is smaller than a first threshold value.
Optionally, the specific examples in this embodiment may refer to the examples described in embodiment 1 and embodiment 2, and this embodiment is not described herein again.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis, and reference may be made to the related description of other embodiments for parts that are not described in detail in a certain embodiment.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be an indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A method for transmitting pictures, comprising:
receiving a first instruction, wherein the first instruction is used for instructing that a picture of a target object be uploaded to a server;
in response to the first instruction, acquiring at least two pictures obtained by an image acquisition device continuously capturing the target object;
when it is determined, based on the at least two pictures, that the image acquisition device is in a stationary state, detecting whether a first picture of the at least two pictures has texture features, the detecting comprising: identifying feature pixel points in the first picture, wherein the feature pixel points are pixel points used for describing local features in the first picture; determining that the first picture has texture features when the number of all identified feature pixel points in the first picture reaches a first threshold; and determining that the first picture does not have texture features when the number of all identified feature pixel points in the first picture is less than the first threshold, wherein the texture features are used by the server to identify the target object;
wherein identifying the feature pixel points in the first picture comprises: acquiring a feature value of each pixel point in a first region in the first picture, wherein the first region is centered on a first pixel point and is formed by the pixel points at a distance of N pixel points or less from the first pixel point; detecting whether the first pixel point satisfies the following condition: among the pixel points at a distance of N pixel points from the first pixel point, there exist second pixel points whose number is greater than a second threshold, wherein the absolute value of the difference between the gray value of each second pixel point and the gray value of the first pixel point is greater than a third threshold; when the first pixel point satisfies the condition, detecting whether the feature value of the first pixel point is not less than the feature values of all pixel points in the first region; and when the feature value of the first pixel point is not less than the feature values of all pixel points in the first region, setting the first pixel point as a feature pixel point;
and uploading the first picture to the server when the first picture has the texture features.
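For illustration, the feature-pixel test recited above can be sketched as follows. The Chebyshev-ring sampling pattern, the default threshold values, and all function names are assumptions not stated in the claims; the scheme closely resembles FAST corner detection followed by non-maximum suppression.

```python
import numpy as np

def ring_offsets(n):
    # Offsets of pixels exactly N away (a Chebyshev ring). The claim only
    # says "pixels at a distance of N pixel points", so this sampling
    # pattern is an assumption (FAST uses a Bresenham circle instead).
    return [(dy, dx)
            for dy in range(-n, n + 1)
            for dx in range(-n, n + 1)
            if max(abs(dy), abs(dx)) == n]

def feature_value(gray, y, x, n, count_thresh, diff_thresh):
    # Claim-2 style feature value: if more than count_thresh ring pixels
    # differ from the centre by more than diff_thresh in gray value, the
    # value is the sum of those absolute differences; otherwise it is 0.
    h, w = gray.shape
    diffs = [abs(int(gray[y + dy, x + dx]) - int(gray[y, x]))
             for dy, dx in ring_offsets(n)
             if 0 <= y + dy < h and 0 <= x + dx < w]
    strong = [d for d in diffs if d > diff_thresh]
    return sum(strong) if len(strong) > count_thresh else 0

def is_feature_pixel(gray, y, x, n, count_thresh, diff_thresh):
    # A pixel is kept as a feature pixel only if its feature value is
    # nonzero and not smaller than that of any pixel in the (2N+1)x(2N+1)
    # first region centred on it (non-maximum suppression).
    fv = feature_value(gray, y, x, n, count_thresh, diff_thresh)
    if fv == 0:
        return False
    h, w = gray.shape
    for yy in range(max(0, y - n), min(h, y + n + 1)):
        for xx in range(max(0, x - n), min(w, x + n + 1)):
            if feature_value(gray, yy, xx, n, count_thresh, diff_thresh) > fv:
                return False
    return True

def has_texture_features(gray, n=3, count_thresh=9, diff_thresh=20,
                         first_thresh=50):
    # Claim-1 decision: the picture "has texture features" when the total
    # number of feature pixels reaches the first threshold.
    h, w = gray.shape
    count = sum(is_feature_pixel(gray, y, x, n, count_thresh, diff_thresh)
                for y in range(h) for x in range(w))
    return count >= first_thresh
```

A single bright dot on a flat background yields exactly one feature pixel under this sketch, while a flat picture yields none, matching the claimed distinction between textured and textureless pictures.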
2. The method of claim 1, wherein acquiring the feature value of each pixel point in the first region in the first picture comprises:
performing the following operations on each pixel point, each pixel point being regarded as the current pixel point while the operations are performed on it:
detecting whether the current pixel point satisfies the following condition: among the pixel points at a distance of N pixel points from the current pixel point, there exist third pixel points whose number is greater than the second threshold, wherein the absolute value of the difference between the gray value of each third pixel point and the gray value of the current pixel point is greater than the third threshold;
when the current pixel point satisfies the condition, setting the sum of the absolute values of the differences between the gray value of the current pixel point and the gray values of the third pixel points as the feature value of the current pixel point;
and when the current pixel point does not satisfy the condition, setting the feature value of the current pixel point to 0.
3. The method of claim 1, wherein determining whether the image acquisition device is in a stationary state based on the at least two pictures comprises:
detecting whether a moving acquisition point exists in the same acquisition area of the first picture and a second picture, wherein the second picture is a picture of the at least two pictures whose acquisition time is adjacent to that of the first picture;
when it is detected that no moving acquisition point exists in the first picture and the second picture, determining that the image acquisition device is in a stationary state;
and when it is detected that a moving acquisition point exists in the first picture and the second picture, determining that the image acquisition device is not in a stationary state.
4. The method of claim 3, wherein detecting whether a moving acquisition point exists in the first picture and the second picture comprises:
determining the motion speed of each target acquisition point according to the position information of the pixel point corresponding to the target acquisition point in the first picture, the position information of the pixel point corresponding to the target acquisition point in the second picture, and the time interval between the acquisition time of the first picture and the acquisition time of the second picture, wherein the target acquisition points comprise at least two acquisition points;
when the motion speeds of the two acquisition points are different, determining that a moving acquisition point exists;
and when the motion speeds of the two acquisition points are the same, determining that no moving acquisition point exists.
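The speed comparison recited in claims 3 and 4 can be sketched as follows. The point correspondences between the two pictures are assumed to be given (the claims do not say how they are obtained, e.g. by feature matching), and the tolerance `eps` and function names are illustrative assumptions.

```python
import math

def point_speeds(pts1, pts2, dt):
    # Speed of each tracked acquisition point between the two pictures,
    # which were captured dt seconds apart. pts1[i] and pts2[i] are the
    # (x, y) pixel positions of the same acquisition point in each picture.
    return [math.hypot(x2 - x1, y2 - y1) / dt
            for (x1, y1), (x2, y2) in zip(pts1, pts2)]

def camera_is_still(pts1, pts2, dt, eps=1e-6):
    # Claims 3-4 logic: if all tracked points move at the same speed
    # (within eps, an assumed numerical tolerance), no "moving acquisition
    # point" exists and the device is deemed stationary; differing speeds
    # imply that some acquisition point itself moved.
    speeds = point_speeds(pts1, pts2, dt)
    return max(speeds) - min(speeds) <= eps
```

The rationale is that a pure camera translation shifts every static scene point at the same apparent speed, so a speed mismatch between two points indicates genuine object motion rather than camera motion.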
5. The method of claim 1, further comprising, after uploading the first picture to the server:
identifying, by the server, the target object according to the texture features of the first picture, and rendering a three-dimensional model of the target object in an augmented reality scene.
6. A picture transmission apparatus, comprising:
a receiving unit, configured to receive a first instruction, wherein the first instruction is used for instructing that a picture of a target object be uploaded to a server;
a response unit, configured to acquire, in response to the first instruction, at least two pictures obtained by an image acquisition device continuously capturing the target object;
a detection unit, configured to detect, when it is determined based on the at least two pictures that the image acquisition device is in a stationary state, whether a first picture of the at least two pictures has texture features, wherein the texture features are used by the server to identify the target object;
and an uploading unit, configured to upload the first picture to the server when the first picture has the texture features;
wherein the detection unit comprises: an identification module, configured to identify feature pixel points in the first picture, wherein the feature pixel points are pixel points used for describing local features in the first picture; a first determining module, configured to determine that the first picture has texture features when the number of all identified feature pixel points in the first picture reaches a first threshold; and a second determining module, configured to determine that the first picture does not have texture features when the number of all identified feature pixel points in the first picture is less than the first threshold;
and wherein the identification module comprises: an acquisition submodule, configured to acquire a feature value of each pixel point in a first region in the first picture, wherein the first region is centered on a first pixel point and is formed by the pixel points at a distance of N pixel points or less from the first pixel point; a first detection submodule, configured to detect whether the first pixel point satisfies the following condition: among the pixel points at a distance of N pixel points from the first pixel point, there exist second pixel points whose number is greater than a second threshold, wherein the absolute value of the difference between the gray value of each second pixel point and the gray value of the first pixel point is greater than a third threshold; a second detection submodule, configured to detect, when the first pixel point satisfies the condition, whether the feature value of the first pixel point is not less than the feature values of all pixel points in the first region; and a setting submodule, configured to set the first pixel point as a feature pixel point when the feature value of the first pixel point is not less than the feature values of all pixel points in the first region.
7. The apparatus of claim 6, wherein the acquisition submodule is further configured to:
perform the following operations on each pixel point, each pixel point being regarded as the current pixel point while the operations are performed on it:
detecting whether the current pixel point satisfies the following condition: among the pixel points at a distance of N pixel points from the current pixel point, there exist third pixel points whose number is greater than the second threshold, wherein the absolute value of the difference between the gray value of each third pixel point and the gray value of the current pixel point is greater than the third threshold;
when the current pixel point satisfies the condition, setting the sum of the absolute values of the differences between the gray value of the current pixel point and the gray values of the third pixel points as the feature value of the current pixel point;
and when the current pixel point does not satisfy the condition, setting the feature value of the current pixel point to 0.
8. The apparatus of claim 6, wherein the detection unit further comprises:
a detection module, configured to detect whether a moving acquisition point exists in the same acquisition area of the first picture and a second picture, wherein the second picture is a picture of the at least two pictures whose acquisition time is adjacent to that of the first picture;
a third determining module, configured to determine that the image acquisition device is in a stationary state when it is detected that no moving acquisition point exists in the first picture and the second picture;
and a fourth determining module, configured to determine that the image acquisition device is not in a stationary state when it is detected that a moving acquisition point exists in the first picture and the second picture.
9. A storage medium comprising a stored program, wherein the program, when executed, performs the method of any one of claims 1 to 5.
10. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor performs the method of any one of claims 1 to 5 by means of the computer program.
CN201710581355.8A 2017-07-17 2017-07-17 Picture transmission method and device Active CN109272538B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710581355.8A CN109272538B (en) 2017-07-17 2017-07-17 Picture transmission method and device


Publications (2)

Publication Number Publication Date
CN109272538A CN109272538A (en) 2019-01-25
CN109272538B true CN109272538B (en) 2023-04-07

Family

ID=65152376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710581355.8A Active CN109272538B (en) 2017-07-17 2017-07-17 Picture transmission method and device

Country Status (1)

Country Link
CN (1) CN109272538B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110766718B (en) * 2019-09-09 2024-04-16 北京美院帮网络科技有限公司 Picture acquisition method, device and system

Citations (4)

Publication number Priority date Publication date Assignee Title
CN102196292A (en) * 2011-06-24 2011-09-21 清华大学 Human-computer-interaction-based video depth map sequence generation method and system
CN102306275A (en) * 2011-06-29 2012-01-04 西安电子科技大学 Method for extracting video texture characteristics based on fuzzy concept lattice
CN103065347A (en) * 2011-10-24 2013-04-24 中国科学院软件研究所 Video dyeing method based on Gabor feature space
CN105469383A (en) * 2014-12-30 2016-04-06 北京大学深圳研究生院 Wireless capsule endoscopy redundant image screening method based on multi-feature fusion

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
JP4044069B2 (en) * 2004-04-23 2008-02-06 株式会社ソニー・コンピュータエンタテインメント Texture processing apparatus, texture processing method, and image processing apparatus
JP5159844B2 (en) * 2010-09-03 2013-03-13 株式会社東芝 Image processing device
TW201241794A (en) * 2011-04-08 2012-10-16 Hon Hai Prec Ind Co Ltd System and method for detecting damages of image capturing device




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant