<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
On 6/04/2011 8:08 AM, Leon Wright wrote:
<blockquote
cite="mid:BANLkTik+_3SatZPB6VgZBMSN2fky+By+xg@mail.gmail.com"
type="cite">Just thinking out aloud, is there anyway to take
advantage of the GPU for encoding? With say CUDA or OpenCL?<br>
</blockquote>
Just looking around, I came across <i>Badaboom</i> on Windows, a
commercial product, and a comparison of it on AnandTech:<br>
<a class="moz-txt-link-freetext" href="http://www.anandtech.com/show/2586/5">http://www.anandtech.com/show/2586/5</a><br>
<br>
It kind of shows that on an average graphics card, it's no better
than an average CPU. <br>
<br>
I also came across a thread saying that the <i>x264</i> project (see
videolan.org) had someone spend some time on the idea (through <i>Anvil
Studios</i>), and they eventually gave up.<br>
<br>
Also, a comment on Whirlpool said:<br>
<blockquote><font color="#cc0000">PS there is no doubt it will be
useful for decoding (which is what Cyberlink, et al are using it
for), but for encoding you have to realise encoding is not a
completely "parallel" process (in other words, at some point,
500 GPU/CPU's won't be any faster then 100GPU/CPU's), current
macroblock data relying on previous macroblock data, etc </font><br>
</blockquote>
And there were comments from an Intel engineer saying something
similar: it's not truly parallel, because some parts of the scene
being compressed depend upon other parts. But of course, Intel want
to sell another CPU... :)<br>
<br>
Hm. <br>
<br>
<pre class="moz-signature" cols="72">--
Mobile: +61 422 166 708, Email: james_AT_rcpt.to</pre>
</body>
</html>