Wednesday, December 07, 2011

drag2share: Intel springs another leak, mobile Ivy Bridge CPUs abound

Source: http://www.engadget.com/2011/12/07/intel-springs-another-leak-mobile-ivy-bridge-cpus-abound/

Just yesterday, we caught a glimpse of what Intel has in store for Ivy Bridge, and those details were but a prelude to the bevy that leaked out today. The folks over at VR Zone got their hands on some of Chipzilla's internal documents showing a host of changes for its post-Sandy Bridge mobile CPUs. Apparently, we can expect quite a few new full-power models, including a 2.9GHz Core i7-3920XM -- clocked 200MHz faster than the Core i7-2960XM, Intel's reigning mobile chip champion -- along with two other quad-core Core i7s and a couple of Core i5 chips as well. For those who cherish battery life above all else, there's a dual-core Core i7-3667U clocked at 2.0GHz and a 1.8GHz Core i5-3427U coming down the pike. All the speedy new silicon comes with upgraded Intel HD 4000 graphics, and is slated for release in April and May of next year. If you can't wait until then for your next-gen CPU fix, head on over to the source for a heaping helping of Ivy Bridge charts and specs.

Intel springs another leak, mobile Ivy Bridge CPUs abound originally appeared on Engadget on Wed, 07 Dec 2011 00:03:00 EDT. Please see our terms for use of feeds.


---
drag2share - drag and drop RSS news items on your email contacts to share (click SEE DEMO)

Read More...

drag2share: Add Telephoto Powers to Your Lens for Cheap [Photography]

Source: http://gizmodo.com/5865588/add-telephoto-powers-to-your-lens-for-cheap

Zoom lenses are expensive. Telephoto lenses are really expensive. But if you want to shove a little extra optical oomph into your camera, this $50 2x telephoto adapter is a killer deal. Just screw 'er on and shoot.

It's as easy as it is cheap: each adapter is made for whichever lens you're seating it on (49mm through 77mm filter threads) and roughly doubles the focal length of your cam. A 50mm lens becomes about a 100mm. And for $50, that ain't bad! The thing is pretty, cheap, and portable: three wonderful adjectives, and a rare combination when it comes to photography. [Photojojo]
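The focal-length math here is simple multiplication; a quick sketch (the 2x factor comes from the adapter above, while the 1.5x crop factor is a hypothetical APS-C example, not something from the article):

```python
def effective_focal_length(lens_mm: float, adapter_factor: float = 2.0,
                           crop_factor: float = 1.0) -> float:
    """Effective (35mm-equivalent) focal length after screwing a
    front-mounted teleconverter adapter onto a lens."""
    return lens_mm * adapter_factor * crop_factor

# The article's example: a 50mm lens behind the 2x adapter.
print(effective_focal_length(50))            # 100.0
# Hypothetical: the same combo on a 1.5x APS-C body.
print(effective_focal_length(50, 2.0, 1.5))  # 150.0
```

The field-of-view narrows accordingly, but unlike a longer lens, a front adapter typically costs some sharpness and light at the edges.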


---

drag2share: 128Gb NAND Chips Promise SD Cards with Terabytes of Storage [Memory]

Source: http://gizmodo.com/5865748/128gb-nand-chips-promise-phones-with-terabytes-of-storage

Cell phones have taken another step toward becoming full-fledged pocket computers with an announcement from Micron and Intel. Get ready to carry even more of your digital life on your phone.

The 128Gb NAND device, a world's first, is the result of a multi-year collaboration between the memory and chip manufacturers. It uses MLC technology, has the potential to put as much as 2 terabytes of data on a 2.5-inch SSD (128Gb is gigabits, so each chip holds 16GB), and can perform as many as 333 megatransfers per second in an eight-die package. It goes on sale in January and is expected to quickly outpace the 64Gb version already in production.
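One unit trap in specs like these: chip capacities are quoted in gigabits (Gb) while drive capacities use gigabytes (GB), eight bits to the byte. A back-of-the-envelope sketch of the numbers above (decimal units, ignoring over-provisioning and controller overhead):

```python
def gigabits_to_gigabytes(gigabits: float) -> float:
    """Convert a NAND die capacity from gigabits (Gb) to gigabytes (GB)."""
    return gigabits / 8

def dies_for_capacity(target_tb: float, die_gigabits: float) -> int:
    """Number of NAND dies needed to reach a target drive capacity."""
    return int(target_tb * 1000 / gigabits_to_gigabytes(die_gigabits))

print(gigabits_to_gigabytes(128))  # 16.0 -> each 128Gb chip holds 16GB
print(dies_for_capacity(2, 128))   # 125 -> dies needed for a 2TB SSD
```

So a 2TB drive built from these chips would stack well over a hundred dies, which is why the per-die capacity jump matters so much for small form factors.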

You'll find NAND flash memory in most SD card formats and SSDs, as well as many USB drives, since it offers superior density and greater fault tolerance than NOR memory. So expect to see huge capacity gains in phones, cameras, thumb drives, and just about anything else with non-volatile memory. [Slashgear - ArsTechnica]

---

Tuesday, December 06, 2011

drag2share: CMU Researchers One-Up Google Image Search And Photosynth With Visual Similarity Engine

Source: http://techcrunch.com/2011/12/06/cmu-researchers-one-up-google-image-search-and-photosynth-with-visual-similarity-engine/


To search these days is a remarkably service-intensive process. Where before searching for something meant digging through drawers or folders by hand and inspecting things by eye, now it means simply producing a query and letting the vast computational engines of cloud services exert themselves in parallel, sifting through petabytes of data and instantly presenting your results, ordered and arranged like snacks on a platter. We’re spoiled, to say the least.

It’s not enough, however, to have computers blindly compare 1s and 0s; when humans search, they search intelligently. We’ve seen incredible leaps in the ability to do this, and in the area of visual search, we’ve seen some interesting and practical technologies in (respectively) Photosynth and Google’s search by image function. And now some researchers at CMU have taken another step in the education of our tools. Their work, being presented at SIGGRAPH Asia, cleaves even closer to human visual cognition, though there’s still a long way to go on that front.

The challenge, when comparing images for similarity, is how to determine the parts of the image that make it unique. For us this is child’s play, literally: we learn the basics of visual distinction when we are toddlers, and have decades of practice. Computer vision, on the other hand, has no such biological library to draw on and must work algorithmically.

To this end, the researchers at Carnegie Mellon have devised an interesting way of comparing images. Instead of comparing a given image head to head with other images and trying to determine a degree of similarity, they turned the problem around: they compared the target image with a great number of random images and recorded the ways in which it differed the most from them. If another image differs in similar ways, chances are it’s similar to the first image. Ingenious, isn’t it?

The results speak for themselves: not only are they, like Google’s search tools, able to find images with similar shapes or, like Photosynth, able to find images of the same object or location with variations in color or angle, but they are able to reliably match very different versions of an image, like sketches, paintings, or images from totally different seasons or what have you.

Their video explains it pretty well:

Essentially, it’s an image comparison tool that acts more like a human: identifying not the ways in which a scene is like other scenes, but how it is different from everything else in the world. It recognizes the dome of St. Peter’s whether it’s summer or winter, ballpoint pen or photo.

Naturally there are limitations. The process is not very efficient and is extremely CPU-intensive; while Google may have reasonably similar images returned to you in half a second, the CMU approach would take much longer due to the way it must sift through countless images and do complicated zone-based comparisons. But the results are much more accurate and reliable, it seems, and calculation time will only decrease.

What will happen next? The research will almost certainly continue, and as this is a hot space right now, I wouldn’t be surprised to see these guys snapped up by one of the majors (Google, Microsoft, Flickr) in a bid to outpace the others at visual search. Update: Google is in fact one of the funders of the project, though in what capacity and at what level is not disclosed.

The research team consists of Abhinav Shrivastava, Tomasz Malisiewicz, Abhinav Gupta, and Alexei A. Efros, who is leading the project. The full paper can be downloaded here (PDF) and there is some supplementary info and video at the project site if you’re interested.



---

drag2share: LG DoublePlay review

Source: http://www.engadget.com/2011/12/06/lg-doubleplay-review/

It's no secret that Android's dominance of the smartphone world is due in part to the sheer number of models available running the OS. This abundance of choice, while undoubtedly good for consumers, presents a challenge for OEMs as they design and build handsets: how do you craft a device that stands out from the crowd? At this point, we've seen slabs of all sizes, a legion of landscape sliders, and a dual-screen oddity join the Android family. Now, LG has created the DoublePlay, giving users a hint of the Echo's dual-screen experience along with a split physical keyboard for tactile typing. In doing so, the company has accomplished something we weren't sure was possible: building a genuinely unique Android phone. The question is, does this unusual form factor provide an improved user experience, or is it destined to go down in gadget history as a gimmick?

Continue reading LG DoublePlay review

LG DoublePlay review originally appeared on Engadget on Tue, 06 Dec 2011 16:00:00 EDT.


---