CN103824308A - Image processing method in wireless multimedia sensor network - Google Patents
Image processing method in wireless multimedia sensor network
- Publication number
- CN103824308A, CN201410048417.5A
- Authority
- CN
- China
- Prior art keywords
- region of interest
- single target
- image processing
- sensor network
- processing method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The invention discloses an image processing method in a wireless multimedia sensor network. The method comprises the following steps: A, reading at least two history frames; B, determining the single-target regions of interest of the history frames; C, predicting the single-target region of interest of the current frame from the single-target regions of interest of the history frames; and D, encoding the current frame on the basis of its single-target region of interest. By processing the history-frame images, the method predicts the extent of the region of interest of the current frame and then compresses and encodes the image according to that region. It therefore keeps energy consumption low while substantially improving the compression ratio and ensuring reliable image-reconstruction quality, and is particularly suited to wireless multimedia sensor networks that are constrained in energy, computation and storage. The image processing method can be widely applied in the field of image processing.
Description
Technical field
The present invention relates to the field of image processing, and in particular to an image processing method in a wireless multimedia sensor network.
Background art
Research on motion/change region detection addresses the problem of accurately locating target regions of interest in image sequences. Over recent decades a large body of in-depth work has been devoted to change-region detection in image sequences, and many new methods continue to emerge alongside the traditional ones. Haritaoglu et al. build a statistical model for each pixel in the scene from its minimum and maximum intensity values and its maximum temporal difference, and update the background periodically. VSAM developed a hybrid algorithm that combines adaptive background subtraction with three-frame differencing and can quickly and effectively detect moving targets against the background. Giachetti used optical-flow computation to detect vehicles ahead, and optical-flow analysis can likewise detect vehicles behind. Some improved region-of-interest JPEG coding algorithms locate the region of interest with difference-detection schemes, for example by determining the extent of the region of interest from the difference of DC coefficients. Under abrupt changes of the scene, however, the gray values of the picture are liable to change over large areas, which leads to many false detections.
The JPEG compression algorithm was the first international digital image compression standard established for still images and remains the most widely used image compression standard. At present there is some research on improving the JPEG algorithm for wireless multimedia sensor networks. Feng et al. designed a video sensor node platform using JPEG, differential JPEG and conditional replenishment compression modes, reaching 10 frames per second at an image resolution of 640×480. Mammeri et al. studied pruning of the 8×8 DCT coefficient matrix in JPEG: during the DCT transform only the coefficients in a square region or a triangular region of the 8×8 matrix are processed; they analysed the energy consumption and image quality of encoding after pruning and proposed global and local methods for selecting the size of the coefficient region.
Most existing change-region detection techniques target PCs; because of their high demands on energy and storage they can rarely be applied directly in a wireless multimedia sensor network. A wireless multimedia sensor network is constrained in energy, computation and storage, so image data in particular must be compressed and encoded before the multimedia data are transmitted, reducing the transmitted data volume to save energy. However, lossy compression distorts the image: the higher the compression ratio, the worse the reconstruction quality, while higher coding complexity consumes more computational energy and raises the total energy consumption.
Summary of the invention
To solve the technical problems described above, the object of the invention is to provide an image processing method in a wireless multimedia sensor network that achieves low energy consumption, a high compression ratio and reliable quality.
The technical solution adopted by the present invention is an image processing method in a wireless multimedia sensor network, comprising the following steps:
A. reading at least two history frames;
B. determining the single-target region of interest of each history frame;
C. predicting the single-target region of interest of the current frame from the single-target regions of interest of the history frames;
D. encoding the current frame on the basis of its single-target region of interest.
Further, step B comprises the following sub-steps:
B1. binarizing the history frames;
B2. determining the connected target regions of the binarized history frames;
B3. determining the single-target region of interest from the connected target regions.
Further, step C is specifically:
calculating the movement trend of the single-target region of interest from the single-target regions of interest of the history frames, and predicting the single-target region of interest of the current frame according to the movement trend.
Further, step D comprises the following sub-steps:
D1. dividing the current frame into 8×8 image blocks;
D2. applying the DCT transform to each image block;
D3. quantizing the result and applying the Zigzag scan;
D4. entropy-coding the result to generate the compressed image.
Further, in sub-step D2, a cropping coefficient is set for the single-target region of interest of the current frame during the DCT transform, retaining the low-frequency DCT coefficients in the upper-left corner; for the part outside the single-target region of interest of the current frame the AC coefficients are cropped, i.e. the AC part of the DCT coefficients is set to zero.
Further, the DCT transform uses a square-based cropping method.
Further, the DCT transform uses a triangle-based cropping method.
The beneficial effects of the invention are as follows: by processing the history-frame images, the invention predicts the extent of the region of interest of the current frame and then compresses and encodes the image according to that region of interest. It keeps energy consumption low while greatly improving the compression ratio and guaranteeing reliable image-reconstruction quality, which makes it especially suitable for wireless multimedia sensor networks that are constrained in energy, computation and storage.
Brief description of the drawings
Fig. 1 is a flow chart of the main steps of the invention;
Fig. 2 is history frame 1 after binarization;
Fig. 3 is history frame 2 after binarization;
Fig. 4 is a diagram of 4-connectivity between runs;
Fig. 5 is a diagram of 8-connectivity between runs;
Fig. 6 is schematic diagram a of the connectivity between a run and the previous row's data;
Fig. 7 is schematic diagram b of the connectivity between a run and the previous row's data;
Fig. 8 is schematic diagram c of the connectivity between a run and the previous row's data;
Fig. 9 is history frame 1 with its single-target region of interest determined;
Fig. 10 is history frame 2 with its single-target region of interest determined;
Fig. 11 is the predicted extent of the region of interest;
Fig. 12 is a block diagram of the JPEG encoding process;
Fig. 13 is schematic diagram 1 of the DCT coefficients retained by the S-DCT cropping coefficient;
Fig. 14 is schematic diagram 2 of the DCT coefficients retained by the S-DCT cropping coefficient;
Fig. 15 is schematic diagram 1 of the DCT coefficients retained by the T-DCT cropping coefficient;
Fig. 16 is schematic diagram 2 of the DCT coefficients retained by the T-DCT cropping coefficient.
Detailed description of the embodiments
The specific embodiments of the present invention are described further below with reference to the accompanying drawings.
With reference to Fig. 1, an image processing method in a wireless multimedia sensor network comprises the following steps:
A. reading at least two history frames;
B. determining the single-target region of interest of each history frame;
C. predicting the single-target region of interest of the current frame from the single-target regions of interest of the history frames;
D. encoding the current frame on the basis of its single-target region of interest.
The present invention targets scenes with abrupt changes and predicts the extent of the region of interest from history-frame pictures that have already been processed. The embodiment proceeds as follows: first, two adjacent frames are selected from the library of already-processed images and binarized. Then a run-based connected component labelling algorithm computes the areas of the connected regions in the binary images, and the extent of the single-target region of interest is determined from the sizes of the connected regions. Next, the region-of-interest extent of the current frame is inferred from the single-target regions of interest determined for the two frames. Finally, an improved region-of-interest-based JPEG method compresses and encodes the current frame.
As a further preferred embodiment, step B comprises the following sub-steps:
B1. Binarize the history frames.
The embodiment uses test pictures of a parking-lot scene. Two already-processed frames adjacent to the current frame are taken from the library of processed images: for example, if frame 0130 is the frame to be processed, frames 0128 and 0129 are extracted from the history frames. Here the frame immediately preceding the current frame (frame 0129) is called history frame 1, and the second-to-last processed frame (frame 0128) is called history frame 2. To make it easy to compute the connected regions of the region of interest in the history-frame images and their areas, binarization is applied: after change detection, the regions of interest of history frame 1 and history frame 2 are assigned the value 1 and the background regions the value 0. The binarized images of history frame 2 (frame 0128) and history frame 1 (frame 0129) are shown in Fig. 3 and Fig. 2 respectively. Apart from a few noise points the vehicle body is concentrated in the lower-right part of the picture and is moving towards the upper left; although the displacement is small, the movement trend can still be seen by comparing Fig. 3 and Fig. 2.
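For illustration only, the binarization in sub-step B1 could be realised as in the following Python sketch, which assumes a simple background-difference change detector with a fixed threshold; the patent does not prescribe a particular change-detection method, so the reference background, the threshold value and the function name are assumptions of this sketch.

```python
import numpy as np

def binarize_change(frame: np.ndarray, background: np.ndarray, thresh: int = 30) -> np.ndarray:
    """Return a binary mask: 1 where the frame differs from the reference background, 0 elsewhere."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > thresh).astype(np.uint8)
```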
B2. Determine the connected target regions of the binarized history frames.
The run-based connected component labelling algorithm is applied to the binarized history frames to compute the areas of the connected regions of the binary image. In the run-length approach the run is the unit of processing, and the connected regions are obtained from the runs produced by scanning the image row by row. A conventional run-length code uses three elements (start coordinate, end coordinate, run length). Here an identification number has to be passed between the run data structure and the target data structure, so the identification number is added to the run data structure as an extra element, and the current scan-row value is recorded so that the extreme values of the vertical coordinate can be tracked. The new run data structure is therefore defined as RLE(code, start, end, length, Y).
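As a minimal Python sketch, the RLE(code, start, end, length, Y) structure and the row-by-row extraction of runs could look as follows; the field names mirror the definition above, while the scan_row helper and its provisional label counter are assumptions of this illustration.

```python
from dataclasses import dataclass

@dataclass
class RLE:
    """Run element as defined in the text: RLE(code, start, end, length, Y)."""
    code: int    # identification number (provisional label)
    start: int   # first column of the run
    end: int     # last column of the run
    length: int  # number of pixels in the run
    y: int       # row index of the run

def scan_row(row, y, next_code):
    """Extract the runs of 1-pixels in one image row, assigning provisional codes."""
    runs, x = [], 0
    while x < len(row):
        if row[x] == 1:
            start = x
            while x < len(row) and row[x] == 1:
                x += 1
            runs.append(RLE(next_code, start, x - 1, x - start, y))
            next_code += 1
        else:
            x += 1
    return runs, next_code
```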
In a binary image, 4-connectivity means that, taking the current pixel (x, y) as the centre, the four neighbouring pixels above, below, left and right, (x, y−1), (x, y+1), (x−1, y) and (x+1, y), have the same value; 8-connectivity means that the eight pixels above, below, left, right and on the two diagonals, (x, y−1), (x, y+1), (x−1, y), (x+1, y), (x−1, y−1), (x−1, y+1), (x+1, y−1) and (x+1, y+1), have the same value. Two runs are 4-connected or 8-connected when at least one pixel in the current run has a 4-connected or 8-connected relation with a pixel in the other run. In Figs. 4 and 5 the white squares represent the background and the black squares represent the runs; Fig. 4 and Fig. 5 thus illustrate 4-connectivity and 8-connectivity between runs respectively.
Given two runs RLE1 and RLE2, the 4-connectivity and 8-connectivity judgment rules are shown in formula (1) and formula (2) respectively:

(RLE1.start ≥ RLE2.end) ∪ (RLE1.end ≤ RLE2.start)    (1)

(RLE1.start ≥ RLE2.end + 1) ∪ (RLE1.end ≤ RLE2.start − 1)    (2)
This embodiment uses 8-connectivity.
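The sketch below implements the run-connectivity tests used here. The translated text leaves the exact reading of rules (1) and (2) ambiguous, so this code assumes the usual column-overlap interpretation: two runs on adjacent rows are 4-connected when their column ranges overlap, and 8-connected when they overlap or touch diagonally.

```python
def runs_4_connected(r1: RLE, r2: RLE) -> bool:
    """Column-overlap reading of rule (1): direct overlap is required for 4-connectivity."""
    return r1.start <= r2.end and r1.end >= r2.start

def runs_8_connected(r1: RLE, r2: RLE) -> bool:
    """Column-overlap reading of rule (2): ranges may also touch diagonally for 8-connectivity."""
    return r1.start <= r2.end + 1 and r1.end >= r2.start - 1
```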
The area of a connected region of the binary image is the total number of pixels in the region. The target data structure is therefore defined as T(root, area, x_min, x_max, y_min, y_max), where root is the root label of the connected region, a value that may need to be corrected repeatedly during the algorithm; area is the size of the connected region, i.e. the number of connected pixels; and x_min, x_max, y_min and y_max are the minimum and maximum values of the horizontal and vertical coordinates, used to determine the boundary of the connected region. The steps of the algorithm that determines the connected-region areas are as follows:
Step 1: Initialization. Create the run data structure list and the target data structure list and initialize both to empty.
Step 2: Scan the image row by row. Each time a run RLEa is scanned, fill it into the run data structure list and then analyse its connectivity with the data of the previous row according to connectivity rule (2). Three situations may occur, as shown in Figs. 6 to 8, where the hatched area represents the run currently being examined and the black areas represent independent connected components that have already been detected and are stored in the target data structure list.
1) No connected run. With reference to Fig. 6, schematic diagram a of the connectivity between a run and the previous row's data: there is no run connected with RLEa, so this run is regarded as a newly appearing target. A new node Ti is created in the target data structure list, and the identification number code of RLEa is passed to the root of Ti so that the root label can be propagated when the connectivity of the next row is checked. The area of Ti is set to the length of RLEa (at this point the size is the number of pixels in the run), x_min and x_max are set to the start and end of RLEa respectively, and y_min and y_max are both assigned the value Y.
2) Connected with one run. With reference to Fig. 7, schematic diagram b of the connectivity between a run and the previous row's data: the run RLEa is part of some existing target. Suppose the run connected with it is RLEb. The target data structure corresponding to the code of RLEb is then looked up, i.e. the node Ti whose root equals that code value; the relevant information of RLEa is added to Ti, the fields of Ti are updated, and the root value is passed back to the code of RLEa.
3) Connected with several runs. With reference to Fig. 8, schematic diagram c of the connectivity between a run and the previous row's data: when RLEa finds several runs RLEc1, RLEc2, …, RLEcn connected with it, all of these runs are traversed to determine the smallest identification number code_min, which is passed to the code of RLEa; at the same time the target data structures corresponding to the codes of RLEc1, RLEc2, …, RLEcn are found and their root values are all changed to code_min.
Step 3: Merge the target data structures, combining all nodes whose root values are equal.
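Putting Steps 1 to 3 together, the following sketch labels the runs of a binary image under 8-connectivity, reusing the RLE, scan_row and runs_8_connected helpers sketched above; the Target dataclass mirrors T(root, area, x_min, x_max, y_min, y_max), while the dictionary bookkeeping and the root-chasing helper are implementation choices of this illustration rather than anything mandated by the text.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Target:
    """Target structure T(root, area, x_min, x_max, y_min, y_max) from the text."""
    root: int
    area: int
    x_min: int
    x_max: int
    y_min: int
    y_max: int

def find_root(targets, code):
    # Follow root pointers until a self-rooted target is reached.
    while targets[code].root != code:
        code = targets[code].root
    return code

def label_runs(binary: np.ndarray):
    """Run-based connected component labelling (Steps 1-3) with 8-connectivity."""
    targets = {}                       # Step 1: target list, here a dict keyed by code
    prev_runs, next_code = [], 1
    for y in range(binary.shape[0]):   # Step 2: scan the image row by row
        cur_runs, next_code = scan_row(binary[y], y, next_code)
        for run in cur_runs:
            linked = [r for r in prev_runs if runs_8_connected(run, r)]
            if not linked:
                # Case 1: no connected run above -- a newly appearing target.
                targets[run.code] = Target(run.code, run.length, run.start, run.end, y, y)
            else:
                # Cases 2 and 3: attach the run to the smallest root among the
                # connected targets and redirect the other roots to it.
                roots = {find_root(targets, r.code) for r in linked}
                root = min(roots)
                run.code = root
                t = targets[root]
                t.area += run.length
                t.x_min, t.x_max = min(t.x_min, run.start), max(t.x_max, run.end)
                t.y_max = max(t.y_max, y)
                for other in roots - {root}:
                    targets[other].root = root
        prev_runs = cur_runs
    # Step 3: merge every target into its final root.
    merged = {}
    for code, t in targets.items():
        root = find_root(targets, code)
        m = merged.setdefault(root, Target(root, 0, t.x_min, t.x_max, t.y_min, t.y_max))
        m.area += t.area
        m.x_min, m.x_max = min(m.x_min, t.x_min), max(m.x_max, t.x_max)
        m.y_min, m.y_max = min(m.y_min, t.y_min), max(m.y_max, t.y_max)
    return list(merged.values())
```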
B3. Determine the single-target region of interest from the connected target regions.
After the connected target regions have been merged, the target information structure T(root, area, x_min, x_max, y_min, y_max) with the largest area value is looked up in the target data structure list. The root value identifies the connected region, area gives its number of pixels, and x_min, x_max, y_min and y_max give the coordinate extremes of the region, so the single-target region of interest determined in this way is contained in the rectangle defined by these four extreme values. Figs. 9 and 10 show, for history frame 1 and history frame 2 respectively, the result of selecting the connected region with the largest area after the run-based connected component labelling algorithm has been applied. In this way the noise points in the binary image are removed and the vehicle body is retained as the single-target region of interest.
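Sub-step B3 then reduces to picking the labelled component with the largest area and reporting its bounding rectangle, as in this small sketch (the variable names in the usage comment are hypothetical):

```python
def single_target_roi(targets):
    """Return (x_min, x_max, y_min, y_max) of the largest connected component."""
    best = max(targets, key=lambda t: t.area)
    return best.x_min, best.x_max, best.y_min, best.y_max

# Example usage, chaining the earlier sketches:
# roi1 = single_target_roi(label_runs(binarize_change(history_frame_1, background)))
# roi2 = single_target_roi(label_runs(binarize_change(history_frame_2, background)))
```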
As a further preferred embodiment, step C is specifically:
calculating the movement trend of the single-target region of interest from the single-target regions of interest of the history frames, and predicting the single-target region of interest of the current frame according to the movement trend.
The target information structures of history frame 1 and history frame 2 are determined by the method above, and their single-target regions of interest are obtained. Suppose the four coordinate extremes of history frame 2 (frame 0128) are x_min2, x_max2, y_min2, y_max2 and those of history frame 1 (frame 0129) are x_min1, x_max1, y_min1, y_max1. The differences of the corresponding coordinate extremes are computed, and for each axis the difference with the larger absolute value is selected as the predictor:

x_min = x_min1 − x_min2, x_max = x_max1 − x_max2, y_min = y_min1 − y_min2, y_max = y_max1 − y_max2.

If |x_min| > |x_max| then x = x_min, otherwise x = x_max; likewise, if |y_min| > |y_max| then y = y_min, otherwise y = y_max, where x and y are the predictors of the x coordinate and the y coordinate respectively.
The four coordinate extremes x_min1, x_max1, y_min1, y_max1 are given by the target structure information of history frame 1, and the predictors x and y are determined from the coordinate extremes of history frames 1 and 2. Adding the corresponding predictors to the target structure information of history frame 1 yields the predicted extent of the region of interest:

pre_x_min = x_min1 + x, pre_x_max = x_max1 + x, pre_y_min = y_min1 + y, pre_y_max = y_max1 + y.

The single-target region-of-interest binary image of the current frame (frame 0130) predicted from the coordinate extremes of history frames 0128 and 0129 is shown in Fig. 11; the upper, lower, left and right boundaries of the region of interest in Fig. 11 are determined jointly by history frame 1 and the predictors.
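The prediction formulas above translate directly into code; the sketch below assumes each region of interest is passed as the tuple (x_min, x_max, y_min, y_max) returned by single_target_roi.

```python
def predict_roi(roi1, roi2):
    """Predict the current frame's ROI from history frame 1 (roi1, the previous frame)
    and history frame 2 (roi2, the frame before it), following the formulas above."""
    x_min1, x_max1, y_min1, y_max1 = roi1
    x_min2, x_max2, y_min2, y_max2 = roi2
    dx_min, dx_max = x_min1 - x_min2, x_max1 - x_max2
    dy_min, dy_max = y_min1 - y_min2, y_max1 - y_max2
    x = dx_min if abs(dx_min) > abs(dx_max) else dx_max   # predictor for the x coordinate
    y = dy_min if abs(dy_min) > abs(dy_max) else dy_max   # predictor for the y coordinate
    return (x_min1 + x, x_max1 + x, y_min1 + y, y_max1 + y)
```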
With reference to Fig. 12, as a further preferred embodiment, step D comprises the following sub-steps:
D1. Divide the current frame into 8×8 image blocks;
D2. Apply the DCT transform to each image block;
D3. Quantize the result and apply the Zigzag scan;
D4. Entropy-code the result to generate the compressed image.
As a further preferred embodiment, in sub-step D2 a cropping coefficient is set for the single-target region of interest of the current frame during the DCT transform, retaining the low-frequency DCT coefficients in the upper-left corner; for the part outside the single-target region of interest of the current frame the AC (alternating current) coefficients are cropped, i.e. the AC part of the DCT coefficients is set to zero. In each 8×8 image block of the background region only the DC (direct current) coefficient is retained. Because the DCT coefficients have been cropped in this way, quantization and Zigzag scanning can be omitted for the background and entropy coding can be applied directly, so step D3 can be skipped for the background part.
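The sketch below illustrates sub-steps D1 and D2 together with the cropping just described: blocks that overlap the predicted region of interest keep only the coefficients selected by a cropping mask, while background blocks keep only the DC coefficient. The SciPy DCT call, the block-overlap test and the assumption that the image dimensions are multiples of 8 are choices of this illustration; quantization, Zigzag scanning and entropy coding (sub-steps D3 and D4) are omitted.

```python
import numpy as np
from scipy.fft import dctn

def crop_coefficients(frame: np.ndarray, roi, mask: np.ndarray) -> np.ndarray:
    """frame: 8-bit grayscale image whose sides are multiples of 8,
    roi: predicted (x_min, x_max, y_min, y_max), mask: 8x8 boolean array of
    DCT coefficients to keep inside the region of interest."""
    x_min, x_max, y_min, y_max = roi
    h, w = frame.shape
    out = np.zeros((h, w))
    for by in range(0, h, 8):              # D1: 8x8 blocks (by = row offset, bx = column offset)
        for bx in range(0, w, 8):
            block = frame[by:by + 8, bx:bx + 8].astype(float) - 128.0  # level shift
            coeff = dctn(block, norm='ortho')                          # D2: 2-D DCT
            in_roi = not (bx + 7 < x_min or bx > x_max or by + 7 < y_min or by > y_max)
            if in_roi:
                coeff = coeff * mask       # keep only the selected low-frequency coefficients
            else:
                dc = coeff[0, 0]
                coeff = np.zeros((8, 8))   # background: drop all AC coefficients,
                coeff[0, 0] = dc           # retain only the DC coefficient
            out[by:by + 8, bx:bx + 8] = coeff
    return out
```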
As a further preferred embodiment, the DCT transform uses a square-based cropping method.
After the DCT transform the signal energy is unevenly distributed over the DCT domain: the DC coefficient and a few low- and mid-frequency AC coefficients carry most of the signal energy. Many high-frequency AC coefficients can therefore be discarded without excessive information loss. Exploiting this property reduces the number of elementary operations in every stage of the compression process and thus reduces the energy consumption of each node as far as possible.
S-DCT is an improved algorithm that introduces coefficient cropping into the DCT stage of JPEG and is a square-based cropping method. The improvement reduces the number of operations in each compression stage, lowering energy consumption while preserving the picture quality at the receiving end. In S-DCT only the upper-left square of DCT coefficients of the k×k block is retained, the cropping coefficient being w, where k is the block size, as shown in Fig. 13 (w = 3) and Fig. 14 (w = 5).
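A square S-DCT cropping mask suitable for the crop_coefficients sketch above could look like this; representing the retained coefficients as a boolean mask is an assumption of the illustration.

```python
import numpy as np

def square_mask(w: int, k: int = 8) -> np.ndarray:
    """S-DCT: keep the w x w square of low-frequency coefficients in the
    top-left corner of a k x k DCT block (Fig. 13 shows w = 3, Fig. 14 w = 5)."""
    mask = np.zeros((k, k), dtype=bool)
    mask[:w, :w] = True
    return mask
```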
As a further preferred embodiment, the DCT transform uses a triangle-based cropping method.
Building on the S-DCT cropping method, a triangle-based cropping method, T-DCT, is proposed. The method again crops the block of coefficients, but instead of retaining the upper-left square it processes, within a given k×k block of DCT coefficients, only the upper-left triangular part whose right-angle sides have length w, as shown in Fig. 15 (w = 3) and Fig. 16 (w = 5). The remaining coefficients are not considered and do not take part in the computations of the later steps, which reduces the energy consumption of the source node as far as possible. In T-DCT, therefore, only the DCT coefficients in the upper-left corner are retained, the cropping coefficient being w, where k is the block size.
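The corresponding triangular T-DCT mask keeps only the coefficients in the upper-left triangle whose right-angle sides have length w; again the boolean-mask form is an assumption of this sketch.

```python
import numpy as np

def triangle_mask(w: int, k: int = 8) -> np.ndarray:
    """T-DCT: keep the top-left triangle of coefficients whose right-angle
    sides have length w (Fig. 15 shows w = 3, Fig. 16 w = 5)."""
    mask = np.zeros((k, k), dtype=bool)
    for i in range(w):
        mask[i, :w - i] = True
    return mask
```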
In summary, the embodiment adopts a method of predicting the single-target region of interest from history frames, which can effectively lock onto the extent of the region of interest under abrupt changes of the scene, together with a DCT coefficient cropping technique that sets the AC coefficients outside the region of interest to zero and so reduces the energy consumption of the compression process. In a resource-constrained wireless multimedia sensor network with limited computing capability, the method therefore greatly reduces the processing and transmission energy consumption of the sensor nodes and suits the limited computation and storage capacity of such networks.
The above describes preferred implementations of the invention, but the invention is not limited to the described embodiments. Persons of ordinary skill in the art may make various equivalent variations or substitutions without departing from the spirit of the invention, and such equivalent variations or substitutions are all included within the scope defined by the claims of this application.
Claims (7)
1. An image processing method in a wireless multimedia sensor network, characterized by comprising the following steps:
A. reading at least two history frames;
B. determining the single-target region of interest of each history frame;
C. predicting the single-target region of interest of the current frame from the single-target regions of interest of the history frames;
D. encoding the current frame on the basis of its single-target region of interest.
2. The image processing method in a wireless multimedia sensor network according to claim 1, characterized in that step B comprises the following sub-steps:
B1. binarizing the history frames;
B2. determining the connected target regions of the binarized history frames;
B3. determining the single-target region of interest from the connected target regions.
3. The image processing method in a wireless multimedia sensor network according to claim 1 or 2, characterized in that step C is specifically:
calculating the movement trend of the single-target region of interest from the single-target regions of interest of the history frames, and predicting the single-target region of interest of the current frame according to the movement trend.
4. The image processing method in a wireless multimedia sensor network according to claim 1, characterized in that step D comprises the following sub-steps:
D1. dividing the current frame into 8×8 image blocks;
D2. applying the DCT transform to each image block;
D3. quantizing the result and applying the Zigzag scan;
D4. entropy-coding the result to generate the compressed image.
5. The image processing method in a wireless multimedia sensor network according to claim 4, characterized in that in sub-step D2 a cropping coefficient is set for the single-target region of interest of the current frame during the DCT transform, retaining the low-frequency DCT coefficients in the upper-left corner, and for the part outside the single-target region of interest of the current frame the AC coefficients are cropped, the AC part of the DCT coefficients being set to zero.
6. The image processing method in a wireless multimedia sensor network according to claim 4, characterized in that the DCT transform uses a square-based cropping method.
7. The image processing method in a wireless multimedia sensor network according to claim 4, characterized in that the DCT transform uses a triangle-based cropping method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410048417.5A CN103824308A (en) | 2014-02-11 | 2014-02-11 | Image processing method in wireless multimedia sensor network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410048417.5A CN103824308A (en) | 2014-02-11 | 2014-02-11 | Image processing method in wireless multimedia sensor network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103824308A true CN103824308A (en) | 2014-05-28 |
Family
ID=50759349
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410048417.5A Pending CN103824308A (en) | 2014-02-11 | 2014-02-11 | Image processing method in wireless multimedia sensor network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103824308A (en) |
- 2014-02-11 CN CN201410048417.5A patent/CN103824308A/en active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110069762A1 (en) * | 2008-05-29 | 2011-03-24 | Olympus Corporation | Image processing apparatus, electronic device, image processing method, and storage medium storing image processing program |
CN101697593A (en) * | 2009-09-08 | 2010-04-21 | 武汉大学 | Time domain prediction-based saliency extraction method |
CN102710939A (en) * | 2012-05-14 | 2012-10-03 | 南京邮电大学 | Implementation method of wireless image sensor system |
Non-Patent Citations (2)
Title |
---|
Zhang Huan et al.: "JPEG coding algorithm based on region-of-interest detection in WMSN", Application Research of Computers * |
Fan Na: "Visual attention model and its application in target detection", China Master's Theses Full-text Database, Information Science and Technology * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108021856A (en) * | 2016-10-31 | 2018-05-11 | 比亚迪股份有限公司 | Light for vehicle recognition methods, device and vehicle |
CN108021856B (en) * | 2016-10-31 | 2020-09-15 | 比亚迪股份有限公司 | Vehicle tail lamp identification method and device and vehicle |
CN114556188A (en) * | 2019-08-29 | 2022-05-27 | 索尼互动娱乐股份有限公司 | Personal device assisted point of regard optimization for TV streaming and rendered content |
CN112164090A (en) * | 2020-09-04 | 2021-01-01 | 杭州海康威视系统技术有限公司 | Data processing method and device, electronic equipment and machine-readable storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20140528 |