Thursday, March 22, 2012

TweetDeck Updates with Inline Image Previews, Twitter Lists, and More [Video]

Source: http://lifehacker.com/5895647/tweetdeck-updates-with-inline-image-previews-twitter-lists-and-more

TweetDeck's new non-AIR client was a little lacking in features when it first came out, but today it received a major update that makes it much more tempting. New features include full support for Twitter lists, inline media previews, a new format for retweets, and more.

The inline image previews are arguably the best part of this update, showing pictures and videos right in the stream without requiring you to click a link. Full support for lists is a welcome addition if you're a fan of Twitter lists, and the new Activity and Interactions columns are an interesting way to keep up with your Twitter friends. The retweet feature has also been updated, now letting you use the "RT @username tweet" format rather than the less popular quote format.

Its filters still aren't as advanced as the old TweetDeck's, and it's still missing a few of the AIR client's other great features, but if you've been waffling between the two, these new features might be enough to tip the scales. Hit the link to read more about the new version, or check out the video above for a quick tour.

TweetDeck Updated: Lists, Activity, Media and more | TweetDeck Blog

Read More...

Nvidia's GTX 680 Benchmarked: The Beast Lives Up to the Hype [Guts]

Source: http://gizmodo.com/5895491/nvidias-gtx-680-benchmarked-the-beast-lives-up-to-the-hype

Johannes Kepler once wrote, "Nature uses as little as possible of anything." Nvidia's latest GPU, code-named Kepler after the German mathematician, looks to be inspired by that quote as much as by the original Kepler's mathematical prowess. The new GPU, the GTX 680, offers superb graphics horsepower but requires only two 6-pin PCI Express power connectors. It's a big departure from the last-generation GTX 580, which was fast but power hungry.

We'll talk about performance shortly, but let's first look at Kepler's underlying architecture.


Smaller Equals Bigger

Kepler GPUs are built on a 28nm manufacturing process, allowing Nvidia to pack more circuitry into less die area.

Like Fermi, Kepler is a modular architecture, allowing Nvidia to scale the design up or down by adding or subtracting functional blocks. In Fermi, Streaming Multiprocessors (SMs for short) are the basic building blocks from which the GTX 500 line of GPUs was built. The CUDA core count inside an SM could vary: each SM in the GTX 560 Ti contained 48 CUDA cores, while the GTX 580's SMs held 32 cores apiece, giving the GTX 580 a total of 16 SMs and 512 CUDA cores.

Kepler's functional block is the SMX. The move to 28nm gave Nvidia's architects room to scale things differently, so the core count inside each Kepler SMX jumps to a stunning 192 CUDA cores.

The GTX 680 GPU is built from eight SMX blocks, arranged in paired groups called GPCs (Graphics Processing Clusters). This gives the GTX 680 a whopping 1,536 CUDA cores.
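
As a quick sanity check of that arithmetic, here's a minimal sketch using only the per-block figures quoted above:

```python
# Sanity check of the CUDA core counts quoted above.
fermi_sms, cores_per_sm = 16, 32        # GTX 580: 16 SMs of 32 cores
kepler_smx, cores_per_smx = 8, 192      # GTX 680: 8 SMX blocks of 192 cores

print("GTX 580 CUDA cores:", fermi_sms * cores_per_sm)       # 512
print("GTX 680 CUDA cores:", kepler_smx * cores_per_smx)     # 1536
```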


The SMX doesn't just house the CUDA cores, however. Built into each SMX is the new PolyMorph engine, which contains the hardware tessellation engine, setup, and related features. Also included are 16 texture units, giving the GTX 680 a total of 128 texture units (compared with the 64 built into the GTX 580). Interestingly, the cache has changed a bit: each SMX still has 64KB of L1 cache, part of which can be used as shared memory for GPU compute. However, total L1 cache has shrunk, since there are only eight SMX units in the GTX 680 rather than the 16 SMs of the GTX 580. The L2 cache is also smaller, at 512KB versus Fermi's 768KB.
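
Running the same per-block math for texture units and L1 cache (again just a sketch built from the figures quoted above) shows where the totals grow and where they shrink:

```python
# Whole-GPU totals from the per-block figures quoted above.
SMX_COUNT, TEX_PER_SMX, L1_KB_PER_SMX = 8, 16, 64    # GTX 680 (Kepler)
SM_COUNT, L1_KB_PER_SM = 16, 64                      # GTX 580 (Fermi)

print("GTX 680 texture units:", SMX_COUNT * TEX_PER_SMX)             # 128 (vs. 64 on GTX 580)
print("GTX 680 total L1 cache:", SMX_COUNT * L1_KB_PER_SMX, "KB")    # 512KB
print("GTX 580 total L1 cache:", SM_COUNT * L1_KB_PER_SM, "KB")      # 1024KB
```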

Another interesting change is that pre-decoding and dependency checking have been offloaded to software, whereas Fermi handled them in hardware. What Nvidia got in return was better instruction efficiency and more die space. Interestingly, the transistor count of the GTX 680 GPU is 3.5 billion, up only a little from the GTX 580's 3 billion. The die size has shrunk, however, to a much more manageable 294mm²; by contrast, Intel's 32nm quad-core Sandy Bridge CPU die is 216mm².

Textures, Antialiasing, and More

One of the cooler new features from an actual application perspective is bindless textures. Prior to Kepler, Nvidia GPUs were limited to 128 simultaneous textures; Kepler boosts that by allowing textures to be allocated as needed within the shader program, with up to 1 million simultaneous textures available. It's doubtful whether games will use that many textures, but certain types of architectural rendering might benefit. 
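
Conceptually, the shift is from a small fixed table of bound texture slots to handles a shader can reference directly. The sketch below is illustrative only (plain Python rather than an actual graphics API), and the names in it are made up:

```python
# Illustrative contrast only; not a real graphics API.
# Pre-Kepler model: shaders can only see textures placed into a fixed slot table.
MAX_BOUND_SLOTS = 128
bound_slots = [None] * MAX_BOUND_SLOTS        # hard cap of 128 simultaneous textures

# Bindless model: every texture gets a handle the shader can reference directly,
# so the practical ceiling rises to roughly a million simultaneous textures.
bindless_handles = {}                         # texture name -> opaque handle

def register_texture(name, handle):
    """Register a texture without consuming one of the fixed binding slots."""
    bindless_handles[name] = handle

register_texture("brick_diffuse", 0x1A2B)     # hypothetical handle values
register_texture("brick_normal", 0x1A2C)
print(len(bindless_handles), "textures registered with no slot-table limit")
```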

Nvidia continues to incorporate its proprietary FXAA antialiasing mode, but has added a new mode that it calls TXAA. The "T" stands for "temporal." TXAA in its standard mode is actually a variant of 2x multisampling AA, but it varies the sampling pattern over time (i.e., over multiple frames). The result is better edge quality than even 8x MSAA, with a performance hit closer to that of 2x multisampling.
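
The "temporal" part can be pictured as rotating a small sample pattern from frame to frame, so that averaged over several frames the coverage approaches a higher sample count. A rough sketch (the offsets below are invented for illustration; Nvidia hasn't published TXAA's exact pattern):

```python
# Rough illustration of temporally varying sample offsets. The offsets are made up;
# the real TXAA pattern is not publicly documented.
SAMPLE_PATTERNS = [
    [(-0.25, -0.25), (0.25, 0.25)],   # pattern used on even frames
    [(-0.25, 0.25), (0.25, -0.25)],   # pattern used on odd frames
]

def sample_offsets(frame_index):
    """Pick the 2-sample pattern for this frame; averaging across frames
    approximates the coverage of a higher sample count."""
    return SAMPLE_PATTERNS[frame_index % len(SAMPLE_PATTERNS)]

for frame in range(4):
    print("frame", frame, "->", sample_offsets(frame))
```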


Another cool new feature, one that will also eventually be supported on older Nvidia GPUs, is Adaptive Vsync. Currently, if you lock vertical sync to your monitor's refresh rate (typically 60Hz, but as high as 120Hz on some displays), you'll get smoother gameplay. However, when the GPU can't keep up with the refresh rate, you might see stutter as vsync forces the output frame rate down to 30fps or below. On the other hand, if you run with vsync off, you may see frame tearing, as new frames are sent to the display before the previous one has finished drawing.

Adaptive Vsync locks the frame rate to the vertical refresh rate, until the driver detects the frame rate dropping below the refresh rate. Vsync is then disabled temporarily, until the frame rate climbs above the monitor refresh rate. The overall result is much smoother performance from the user's point of view.
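
In pseudocode terms, the behavior described boils down to toggling vsync based on whether the last frame fit within the refresh interval. A minimal sketch, assuming a 60Hz display:

```python
# Minimal sketch of the Adaptive Vsync behavior described above:
# keep vsync on while frames arrive fast enough, drop it when they don't.
REFRESH_HZ = 60.0                     # assumed monitor refresh rate
FRAME_BUDGET_S = 1.0 / REFRESH_HZ     # ~16.7 ms per refresh

def vsync_enabled(last_frame_time_s):
    """Enable vsync only if the last frame rendered within the refresh budget."""
    return last_frame_time_s <= FRAME_BUDGET_S

for frame_time in (0.012, 0.016, 0.022, 0.035, 0.015):   # sample frame times in seconds
    state = "on" if vsync_enabled(frame_time) else "off"
    print(f"{frame_time * 1000:.0f} ms frame -> vsync {state}")
```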

Finally, Nvidia has beefed up the video engine, building in a dedicated encode engine capable of encoding H.264 high-profile video at 4x – 8x real time. Power usage is low in this mode, consuming single-digit watts, rather than the shader-driven tens of watts of past GPUs.

The GTX 680 Graphics Card

Nvidia built an improved circuit board to host the GTX 680 GPU. The board will ship with 2GB of GDDR5, with the default memory clock running at an effective 6008MHz, making it the first board to ship with 6GHz GDDR5. The GTX 680 also introduces GPU Boost, an idea borrowed from the world of x86 CPUs: GPU Boost increases the core clock speed when the thermal environment permits, letting games that present a lighter overall load pick up additional performance as needed. In another departure, the GTX 680 uses a single clock; the shader clock now runs at the same frequency as the core clock. Product boxes will likely list both the base and boost clocks. As with recently released AMD products, the GTX 680 is fully PCI Express 3.0 compliant.
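
GPU Boost, as described, simply trades unused thermal and power headroom for clock speed. A rough sketch of that idea follows; the clock and power thresholds here are assumptions for illustration, not figures from this article:

```python
# Rough sketch of the GPU Boost idea: raise the core clock only while there is
# power/thermal headroom. All numbers below are assumed for illustration.
BASE_MHZ = 1006
MAX_BOOST_MHZ = 1058
POWER_TARGET_W = 195

def boosted_clock(power_draw_w):
    """Pick a core clock based on how much headroom remains under the power target."""
    if power_draw_w < 0.8 * POWER_TARGET_W:
        return MAX_BOOST_MHZ                      # plenty of headroom: full boost
    if power_draw_w < POWER_TARGET_W:
        return (BASE_MHZ + MAX_BOOST_MHZ) // 2    # some headroom: partial boost
    return BASE_MHZ                               # at the limit: base clock only

for watts in (120, 170, 200):
    print(watts, "W ->", boosted_clock(watts), "MHz")
```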


A few notable things spring to mind when examining the specs. First, this is a 256-bit memory interface, as opposed to the 384-bit interface of AMD's Radeon HD 7970. Nvidia makes up for this with improved memory-controller efficiency and higher-clocked GDDR5. The frame buffer is "only" 2GB, but that was enough to run our most demanding benchmarks at 2560x1600 with all detail levels maxed out and 4x MSAA enabled.
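
A back-of-the-envelope comparison shows why the narrower bus isn't crippling: peak bandwidth is roughly bus width times effective data rate. The sketch below uses the 6008MHz figure quoted above for the GTX 680; the 7970's 5.5GT/s is an assumed stock rate, not a figure from this article:

```python
# Peak memory bandwidth, roughly: bus width in bytes * effective transfer rate.
def bandwidth_gb_s(bus_width_bits, effective_gt_s):
    return (bus_width_bits / 8) * effective_gt_s

print(f"GTX 680 (256-bit @ 6.008 GT/s): {bandwidth_gb_s(256, 6.008):.0f} GB/s")   # ~192
print(f"HD 7970 (384-bit @ 5.5 GT/s):   {bandwidth_gb_s(384, 5.5):.0f} GB/s")     # ~264 (assumed rate)
```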

Also worth calling out is Nvidia's new devotion to power efficiency. The GTX 680 is substantially more power efficient than its predecessor, with a maximum TDP of just 195W. Idle power is about 15W. We saw the power savings in our benchmarking.

The GTX 680 is also the first single-GPU card from Nvidia to support more than two displays. Users can add up to four displays using all four ports. Nvidia was strangely reticent about discussing its DisplayPort 1.2 implementation, which should allow for even more monitors once 1.2 capable monitors and hubs arrive on the scene later this year. 

The GTX 680 cooling system is a complete redesign, using a tapered fin stack, acoustic dampening, and a high-efficiency heat pipe. The card was very quiet under load, though perceptually about the same as the XFX Radeon HD 7970's twin-cooling-fan design. Of course, having a more power efficient GPU design is a big help. The GTX 680 is no DustBuster.

How Does It Perform?

We pitted the GTX 680 against two previous GTX 580 designs, the slightly overclocked EVGA GTX 580 SC and the more heavily overclocked EVGA GTX 580 Classified. The XFX Radeon HD 7970 Black Edition was also included. We ran our usual benchmark suite at 2560x1600 with 4x MSAA enabled, along with the FutureMark and Unigine synthetic tests.


The GTX 680 clearly takes most of the benchmarks, though the XFX HD 7970 eked out a couple of wins. Note that it's possible some of these benchmarks are actually becoming CPU limited, even with 4x MSAA, but it's hard to say for certain. That's very likely the case with HAWX 2, where the older GTX 580 Classified (albeit a heavily overclocked GTX 580) manages a 1fps advantage.

The GTX 680's idle power ratings are impressive, too. Total system power at idle was just 116W, 8W better than the XFX card. However, Nvidia doesn't incorporate anything like AMD's ZeroCore technology, which reduces power to a bare 3W when the display is turned off (as when Windows 7 blanks the screen). Even better is the power under load: the GTX 680 is the only GPU to run at under 300W at full load.

The GTX 680 we tested is Nvidia's reference card, and it's likely that some manufacturers will ship retail cards at higher core clock speeds. Retail cards will be available at launch (March 22). Nvidia is pricing the card at $500, though prices may vary a bit by manufacturer. That $500 price tag undercuts AMD's Radeon HD 7970 by as much as $100, which makes the GTX 680 look even better for high-end gamers.

The GTX 680 looks to win back for Nvidia the performance crown briefly held by AMD, and it's priced lower, to boot. What's most intriguing, however, is that Kepler likely has some power headroom left, which may allow Nvidia to ship an even higher-end GPU when needed. The performance horserace continues, and while the top spot now belongs to Nvidia, the company also needs to deliver midrange GPUs to compete with AMD's more recent product moves. In the long run, gamers will benefit from more choices and competition. It's a win all around. [Maximum PC]



Read More...

CamOne Infinity Is the First Action Cam With Interchangeable Lenses [Cameras]

Source: http://gizmodo.com/5895545/camone-infinity-is-the-first-action-cam-with-interchangeable-lenses

The CamOne Infinity is an action cam with a key difference that could help set it apart from the GoPro if it makes it to market: it has interchangeable lenses.

In addition to the standard lens, which shoots at two fields of view (170 degrees and 127 degrees), CamOne is reportedly going to release 142-degree and 96-degree lenses. It's also a tiny bit lighter and smaller than the GoPro, and it comes with an integrated LCD, but the real difference is the interchangeable lens system.

But does the world really need another tiny action camera? It's hard to say whether swappable lenses will actually make the footage better or more interesting. The GoPro HD Hero is already spectacular, even if it's really only good for one very specific kind of shot, and achieves a 96-degree field of view without changing lenses. The difference in the footage might very well be negligible.

If CamOne got a little more adventurous and designed some more interesting lenses for the camera, there's a lot of potential for shocking and experimental video. CamOne says it's still trying to iron out North American distribution, and there's no word on how much the camera will cost when it gets here. [CamOne via GizMag]

Read More...

Acer Iconia Tab A510 with Tegra 3, Android 4.0 arriving in the US and Canada for $450

Source: http://www.engadget.com/2012/03/22/acer-iconia-tab-a510-official/

The curious thing about the Acer Iconia Tab A510 is that it's been out in the open for months -- we've even handled it -- but for whatever reason, Acer's never publicly acknowledged it as the successor to last year's A500. When we got hands-on at CES, for example, it wasn't at Acer's suite, but NVIDIA's booth (this is Acer's first Tegra 3 tablet, don'tcha know). Well, the company's finally ready to come out and say, "Yes, we made this thing." The A510 is up for pre-order today in the US and Canada, with a price of $450. Though you can get it in black or white, it's available in one 32GB configuration for now. To recap, this is a quad-core slate with 1GB of RAM, a 10.1-inch (1280 x 800) display, 5-megapixel auto-focusing rear camera and a single-megapixel shooter up front. And though it loses the USB 2.0 port that made the A500 fairly distinctive, it gains a battery rated for 12 hours of video playback -- a good thing, since it'll have stiff competition from ASUS, Apple and Samsung in the endurance department. Acer also confirmed the tablet will ship with Android 4.0, with the company's usual light OS tweaks in tow. Still no word on when, exactly, it'll ship, but if you want to get a feel for it in the meantime be sure to hit up our hands-on from CES if you missed it the first time around.


Acer Iconia Tab A510 with Tegra 3, Android 4.0 arriving in the US and Canada for $450 originally appeared on Engadget on Thu, 22 Mar 2012 08:00:00 EDT.


Read More...

Lovefilm now streams more content than it mails

Source: http://www.engadget.com/2012/03/22/lovefilm-streaming/

Lovefilm has announced that, for the first time, more content was viewed via its instant streaming service than through its DVD, Blu-ray, and games rental divisions combined. The Amazon-owned company surpassed two million members in January, making it the biggest service of its kind in Europe. In just 12 months, internet viewers have increased by a whopping 400 percent, but let's not take that as a sign of the death of physical media just yet. Whilst the company itself sees its future in the streaming realm, the postal arm of the business also grew by 25 percent in the same period.


Lovefilm now streams more content than it mails originally appeared on Engadget on Thu, 22 Mar 2012 09:35:00 EDT.


Read More...