Friday, January 30, 2009

Solar to power a ventilation system that can cool the car without help from the engine.

Solar cars still a way off

Toyota's third-generation Prius, due at dealerships this spring, will have an optional solar panel on its roof. The panel will power a ventilation system that can cool the car without help from the engine, Toyota says.
But it's a long way from the 2010 Prius to a solar-powered car, experts told CNN. Most agree that there just isn't enough space on a production car to get full power from solar panels.

"Being able to power a car entirely with solar is a pretty far-reaching goal," said Tony Markel, a senior engineer at the federal government's National Renewable Energy Lab in Golden, Colorado.

In the new Prius, the solar panel will provide energy for a ventilation fan that will help cool the parked car on sunny, hot days. The driver can start the fan remotely before stepping into the car. Once the car is started, the air conditioning won't need as much energy from a battery to do the rest of the cooling.

"The best thing about using solar is that regardless of what you end up using it for, you're trying to use it to displace gasoline," added Markel.

The question is, how much gasoline can solar power offset? Markel said his lab has modified a Prius to use electricity from the grid for its main batteries and a solar panel for the auxiliary systems. He believes the car gets an additional 5 miles of electric range from the panel.
According to recent articles in Japan's Nikkei newspaper, Toyota has bigger plans for harnessing power from the sun. Nikkei reports that Toyota hopes to develop a vehicle powered entirely by solar panels. The project will take years, the paper reported.

When contacted by CNN, however, a Toyota spokeswoman denied the existence of the project.

"At this time there are no plans that we know of to produce a concept or production version of a solar-powered car," said Amy K. Taylor, a communications administrator in Toyota's Environmental, Safety & Quality division.

Motorists don't have to wait for a 2010 Prius to drive a solar-enhanced car, however. Greg Johanson, president of Solar Electric Vehicles in Westlake Village, California, said his company makes a roof-mounted panel for a standard Prius that enables the car to travel up to 15 additional miles a day.

The system costs $3,500, and it takes about a week to make one, Johanson said. Billy Bautista, a project coordinator at the company, said Solar Electric Vehicles gets so many requests for the system that there is a backlog of several months.

The company's Web site says motorists can install the panels themselves, although it recommends finding a "qualified technician."

The system delivers about 165 watts of power to an added battery, which helps power the electric motor, Johanson said.
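As a rough sanity check on the range claims above, here is a back-of-envelope sketch in Python. The sun-hours and watt-hours-per-mile figures are assumptions, not from the article; only the 165-watt panel output comes from Johanson.

```python
# Back-of-envelope estimate of the extra electric range a roof panel adds.
# Assumed figures (not from the article): 5 peak-sun-hours/day and a
# drivetrain consumption of ~250 Wh per mile.

PANEL_WATTS = 165          # panel output cited by Johanson
SUN_HOURS_PER_DAY = 5.0    # assumed average peak-sun hours
WH_PER_MILE = 250.0        # assumed drivetrain consumption

daily_wh = PANEL_WATTS * SUN_HOURS_PER_DAY   # 825 Wh/day
extra_miles = daily_wh / WH_PER_MILE         # ~3.3 miles/day

print(f"{daily_wh:.0f} Wh/day -> about {extra_miles:.1f} extra miles")
```

Under these assumptions the panel yields roughly 3 miles a day, which helps explain why NREL's estimate (5 miles) is far below the 15-mile figure claimed by Solar Electric Vehicles.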

But others said it would take a lot more power than that to replace an internal combustion engine.

Eric Leonhardt, director of the Vehicle Research Institute at Western Washington University, said that even if solar cells worked far better than they do today, they wouldn't generate enough power for driving substantial distances. The best cells operate at about 33 percent efficiency, but the ones used on vehicles are only about 18 percent efficient, he said.

Leonhardt said it would be more practical to use solar power to help charge a car's battery and use the more efficient panels mounted on a roof or over a parking area to supply the rest of the electricity needed to drive the engine.

"Solar panels really need a lot of area," he said.

Leonhardt thinks Toyota's new Prius is a good first step toward using renewable energy. Some cars get hotter than 150 degrees inside when parked in the sun, so reducing the temperature could mean Toyota could use a smaller AC unit, he added.

Johanson of Solar Electric Vehicles said he'd like to see Toyota bring the weight of a Prius down from 3,000 pounds to 2,000. He also hopes for a small gasoline engine and a larger electric motor. That will probably come in the future, when Toyota unveils a plug-in engine.

In the meantime, Solar Electric Vehicles sells its version of a plug-in Prius, with a solar panel installed, for $25,000, Bautista said.

Toyota is the largest automaker to incorporate solar power into a mass-produced car. But its solar panel is not the first for a car company. Audi uses one on its upscale A8 model, and Mazda tried one on its 929 in the 1990s.

In addition, a French motor company, Venturi, has produced an electric-solar hybrid. The Eclectic model costs $30,000, looks like a souped-up golf cart and uses roof-mounted solar panels to help power an electric engine. It has a range of about 30 miles and has a top speed of about 30 mph.

ABOUT Solar vehicle

Borealis III leads the way during the 2005 North American Solar Challenge, passing by Lake Benton, Minnesota.

A solar vehicle is an electric vehicle powered by solar energy obtained from solar panels on the surface (generally, the roof) of the vehicle. Photovoltaic (PV) cells convert the Sun's energy directly into electrical energy. Solar vehicles are not practical day-to-day transportation devices at present, but are primarily demonstration vehicles and engineering exercises, often sponsored by government agencies.

Solar cars
Solar cars combine technology typically used in the aerospace, bicycle, alternative energy and automotive industries. The design of a solar vehicle is severely limited by the energy input into the car (batteries and power from the sun). Virtually all solar cars ever built have been for the purpose of solar car races (with notable exceptions).

Like many race cars, the driver's cockpit usually only contains room for one person, although a few cars do contain room for a second passenger. They contain some of the features available to drivers of traditional vehicles such as brakes, accelerator, turn signals, rear view mirrors (or camera), ventilation, and sometimes cruise control. A radio for communication with their support crews is almost always included.

Solar cars are often fitted with gauges like those in conventional cars. Aside from keeping the car on the road, the driver's main priority is to keep an eye on these gauges to spot possible problems. Cars without gauges available to the driver almost always feature wireless telemetry, which lets the driver's team monitor the car's energy consumption, solar energy capture and other parameters, freeing the driver to concentrate on driving.

Electrical and mechanical systems
The electrical system is the most important part of the car's systems as it controls all of the power that comes into and leaves the system. The battery pack plays the same role in a solar car that a petrol tank plays in a normal car in storing power for future use. Solar cars use a range of batteries including lead-acid batteries, nickel-metal hydride batteries (NiMH), Nickel-Cadmium batteries (NiCd), Lithium ion batteries and Lithium polymer batteries.

Many solar race cars have complex data acquisition systems that monitor the whole electrical system while even the most basic cars have systems that provide information on battery voltage and current to the driver.

The mechanical systems of a solar car are designed to keep friction and weight to a minimum while maintaining strength. Designers normally use titanium and composites to ensure a good strength-to-weight ratio.

Solar cars usually have three wheels, but some have four. Three wheelers usually have two front wheels and one rear wheel: the front wheels steer and the rear wheel follows. Four wheel vehicles are set up like normal cars or similarly to three wheeled vehicles with the two rear wheels close together.


Thursday, January 29, 2009

Google's new online tools will diagnose your network connection and performance.

Google and a group of partners have released a set of tools designed to help broadband customers and researchers measure performance of Internet connections.
The set of tools, hosted by M-Lab, includes a network diagnostic tool, a network path diagnostic tool and a tool to measure whether the user's broadband provider is slowing BitTorrent peer-to-peer (P-to-P) traffic. Coming soon to the M-Lab applications is a tool to determine whether a broadband provider is giving some traffic a lower priority than other traffic, and a tool to determine whether a provider is degrading certain users or applications.

Think your Internet Service Provider (ISP) is messing with your connection performance? Now you can find out, with Google's new online tools that will diagnose your network connection.
Here's a quick walkthrough on how to make the best of them.
Google's broadband test tools are located on the M-Lab site. On that page, you'll see a first icon that says "Users: Test Your Internet Connection". Click that, and you'll be taken to a page where three tests are available, with two more listed as coming soon. However, of the three available tests, only one is fully automated and easy to use.
Glasnost, second on the list, will check whether your ISP is slowing down or blocking peer-to-peer (P2P) downloads from software such as BitTorrent, as Comcast has done. P2P apps are commonly used for downloading pirated software and media such as movies and music, but they also serve legal purposes, such as distributing large software packages to many users at once.
To use the measurement tool, you will be redirected to the Glasnost site. You'll need the latest version of Java installed, and you should stop any large downloads that you may have running before you begin the test. If you're on a Mac, a popup message will prompt you to trust the site's Java applet.
When you're ready to start, you can choose whether you want to run a full test (approximately 7 minutes long) or a simple test (4 minutes long). When I tried to test my connection, Glasnost's measurement servers were overloaded and an alternative server was offered, but that was overloaded as well. After a short while I was able to run the test.
In the tests of my connection (my provider is Vodafone At Home, in the UK), all results indicated that BitTorrent traffic is not blocked or throttled. But I'm looking forward to hearing in the comments how your ISP performed in Glasnost's diagnostics. Meanwhile, keep an eye on the other tests that will be available soon from M-Lab.
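For a sense of how a test like Glasnost reaches a verdict, here is a toy version of the decision rule in Python. The 20 percent tolerance and the sample rates are invented for illustration; the real tool emulates actual BitTorrent traffic and uses a more involved methodology.

```python
# A toy version of the comparison Glasnost performs: run the same transfer
# once looking like BitTorrent traffic and once as a control flow, then
# flag throttling if the BitTorrent-like flow is markedly slower.
# The measurement itself is stubbed out; only the decision rule is shown.

def is_throttled(bt_kbps: float, control_kbps: float,
                 tolerance: float = 0.2) -> bool:
    """Flag throttling when the BitTorrent-like flow runs more than
    `tolerance` (20% by default) below the control flow."""
    if control_kbps <= 0:
        return False
    return (control_kbps - bt_kbps) / control_kbps > tolerance

# Example readings (made up): 950 kbps control vs. 300 kbps BitTorrent-like.
print(is_throttled(300, 950))   # True  -> consistent with throttling
print(is_throttled(900, 950))   # False -> within normal variation
```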

Wednesday, January 28, 2009

Britannica reaches out to the web

The Encyclopaedia Britannica has unveiled a plan to let readers help keep the reference work up to date.
Under the plan, readers and contributing experts will help expand and maintain entries online.
Experts will also be enrolled in a reward scheme and given help to promote their command of a subject.
However, Britannica said it would not follow Wikipedia in letting a wide range of people make contributions to its encyclopaedia.
User choice
"We are not abdicating our responsibility as publishers or burying it under the now-fashionable 'wisdom of the crowds'," wrote Jorge Cauz, president of Encyclopaedia Britannica in a blog entry about the changes.
He added: "We believe that the creation and documentation of knowledge is a collaborative process but not a democratic one."
Britannica plans to do more with the experts that have already made contributions. They will be encouraged to keep articles up to date and be given a chance to promote their own expertise.
Selected readers will also be invited to contribute and many readers will be able to use Britannica materials to create their own works that will be featured on the site.
However, it warned these would sit alongside the encyclopaedia entries and the official material would carry a "Britannica Checked" stamp, to distinguish it from the user-generated content.
Alongside the move toward more openness will be a redesign of the Britannica site and the creation of web-based tools that visitors can use to put together their own reference materials.

Tech analysts expect Amazon (AMZN) to open the cover on Kindle 2

Get ready for the next chapter. Tech analysts expect Amazon (AMZN) to open the cover on Kindle 2, the second generation of its groundbreaking electronic reader. On Tuesday, Amazon invited members of the media to "an important" news conference Feb. 9 at New York City's Morgan Library & Museum.
Amazon CEO Jeff Bezos is expected to attend. But Amazon won't reveal the plot. "We are not sharing details," Amazon director of communications Drew Herdener wrote in an e-mail. The company has said there will be a new version of Kindle sometime this year.
Paperback-size e-book readers such as the Kindle or rival Sony Reader let bookworms cart a boatload of titles — more than 200 in the case of the first Kindle. But Kindle's real advance was in its wireless Whispernet network (built on top of Sprint's speedy EV-DO wireless network). Readers could search for and sample books, blogs and periodicals (including USA TODAY) right on the device and purchase new content in under a minute. Best sellers typically cost $9.99.
Amazon won't disclose Kindle sales. Mark Mahaney, director of Internet research at Citigroup Investment Research, estimates Amazon sold about 400,000 units last year and that Kindle hardware and book sales will contribute about $1 billion to Amazon's revenue in 2010. "It's pretty clear this is the iPod of the book world," he says. Mahaney also expects the new Kindle to drop to around $300, from $359. Minor design glitches will likely also be addressed. Pundits have criticized Kindle for its clumsy button layout and homely appearance.
Amazon underestimated demand for the first Kindle, which is still difficult to come by. Amazon's website says Kindle is sold out due to "heavy customer demand." Orders are expected to be shipped in four to six weeks, the website indicates.
What isn't clear, of course, is whether buyers will receive the first Kindle or the sequel. Whatever Amazon trots out, Tim Bajarin, president of the Creative Strategies consulting firm, doesn't expect shortages to be a major issue. "This time they at least know what the sales cycles have looked like," Bajarin says. "I have to believe they're going to be smarter about building and managing inventory."
Amazon's Kindle 2.0 could have a color screen, longer battery life and a sleeker design.

Is a Kindle 2.0 on the way? Amazon today set the stage for fingers to start tapping out online rumors about a new version of its e-book reader. The Seattle company sent out invitations for a Feb. 9 press event at the Morgan Library & Museum in New York. The last time Amazon held such an event was in 2007, to introduce the Kindle.
"We're fairly sure that it will be a new Kindle, one that will feature a color screen and a better battery life," said Richard Doherty, a consumer electronics analyst with the Envisioneering Group.
Doherty, who keeps close tabs on companies that supply parts for the Kindle and other devices, said Amazon had been working for much of 2008 on a successor to its unexpectedly popular reading device. But Amazon's plans to release the product in time for Christmas were derailed when the online merchant was overwhelmed with orders, Doherty said. As a result, those who ordered a Kindle in December were told to wait until February or March for the device.
The Boy Genius Report has some photos it says are of the next version of the Kindle.
Another possible change: a sleeker design that relocates the page-forward and page-back buttons so users would be less likely to hit them accidentally. That's a major complaint about the current Kindle, said Tim Bajarin, electronics analyst with Creative Strategies.

You ready for Kindle 2.0?

Amazon has sent out word of a press event Feb. 9 at the Morgan Library & Museum in New York. The last time it did something like this, it was for the release of the original Kindle in Nov. 2007.
The new device is expected to update the Kindle's rather clunky looks and add some design touches aimed at making it easier to use. It should probably get the color screen treatment but it's unclear if it will go to a touch screen. One of the gripes has been inadvertent page turns, which most observers expect will get addressed.
The Boy Genius Report has some pictures from last fall that show a new Kindle with rounded edges and buttons.
Despite its awkward looks, the Kindle has sold well even at its $359 price, down from its original $399 price. Amazon sold more than 250,000 units in the first year and the device is still shipping with a 4-6 week delay.
People have enjoyed the way the Kindle offers easy access to 225,000 books, which can be downloaded wirelessly over a cellular connection. The Kindle, however, faces competition from Sony's eReader and also down the road from devices like the iPhone and iPod Touch.

A Hidalgo County, Texas, city is considering a $500,000 project that would blanket it with a wireless Internet system

Pharr is considering a $500,000 project that would blanket the city with a wireless Internet system geared toward serving city workers and emergency responders.
Negotiations are still in extremely preliminary stages — and both the city and contractor say a timetable isn't set — but leaders have expressed intrigue at the prospect of a system that can seemingly meet their wildest high-tech fantasies.
"The possibilities for the future are really interesting," Pharr City Manager Fred Sandoval said.
Bobby Vassallo, a wireless Internet consultant, has met with the City Commission twice over the last six weeks to help pitch the concept of a wireless Internet "clothesline" that could help the city handle everything from police video surveillance to wireless water meter-reading.
Behind the pitch is Brownsville businessman Oscar Garza, who leads the corporation Valley Wireless Internet Holdings.
Sandoval emphasized that the city hasn't made any decisions yet.
"It's a very interesting concept," he said. "We definitely want to be at the forefront."
Pharr isn't alone in its consideration of wireless systems.
While wireless Internet is already the standard in some large cities, the technology now seems to be taking root in the Rio Grande Valley.
Cities across the region are pursuing high-tech, wireless Internet options that have the potential to promote efficiency in virtually all municipal departments by keeping workers in the field connected to City Hall.
Using wireless "mesh" systems, cities can provide Internet access over a large area to their employees through a series of nodes attached to structures like water towers or streetlights.
That means building inspectors could send reports back to City Hall from a work site, traffic citations could appear in court computers almost instantly, and police could set up surveillance cameras without fear of their cables being cut.
McAllen is already moving forward with plans to install up to 120 surveillance cameras throughout the city, which will be connected wirelessly to a fiber-optic cable running through the city.
The cameras would be served by a downtown wireless network, which could also provide support to other city workers in the area.
Last summer, a pilot program provided wireless to city workers in Bill Schupp Park. McAllen is currently soliciting proposals from vendors and is scheduled to meet with them today.
The focus of McAllen's project would be city usage, but eventually it could be opened up to residents, said Belinda Mercado, McAllen's information technology director.
Meanwhile, Hidalgo leaders are examining the possibility of creating a citywide blanket of wireless Internet similar to the one Pharr is examining. The system would provide access to emergency responders and residents on two separate networks, explained Rick Mendoza, Hidalgo's information technology director.
He said the talks are in preliminary stages and price estimates aren't available. But the city would like to offer Internet service to residents at no cost.
"We want to offer Internet service to members of our community who don't have the means of getting either DSL or cable," Mendoza said.
He added that a citywide wireless network would help Hidalgo compete with neighboring cities.
Edinburg leaders have also discussed the possibility of creating some sort of wireless system that would include various hot spots throughout the city, though they are only in discussions and the city hasn't started talks with any specific vendors.
Brownsville officials, meanwhile, expect their $6.6 million wireless project to be operational within four months, Mayor Pat Ahumada said.
The city is erecting signal towers, which will provide wireless access to city employees, utility workers and emergency responders, though it remains to be seen how much access the general public will have.
The systems don't come cheap, however.
The network being pitched to Pharr could cost as much as $500,000 for the initial infrastructure, $25,000 a month to operate and even more for cameras, wireless water meters and other high-tech equipment needed to actually take advantage of the system.
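The figures quoted above add up quickly. A quick arithmetic check, using only the numbers in the article (the up-front infrastructure cost and the monthly operating cost; cameras and wireless meters would be extra):

```python
# Total-cost arithmetic for the system pitched to Pharr, from the
# article's figures: $500,000 up front plus $25,000/month to operate.

UPFRONT = 500_000
MONTHLY = 25_000

for years in (1, 3, 5):
    total = UPFRONT + MONTHLY * 12 * years
    print(f"{years} yr: ${total:,}")
```

By year five, operating costs alone triple the initial outlay, which puts Finance Director Guerra's "Can we afford this?" question in perspective.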
At a time when cities across the region are struggling financially, at least some have questioned whether the cost of such an ambitious undertaking can be justified.
Pharr is just starting to climb out from under its financial woes after it wiped out its reserves last year.
"I believe the No. 1 question we should be asking, besides ‘Can we afford this?' is ‘Do we need it?'" said Pharr Finance Director Juan Guerra at a city workshop earlier this month. "From what I'm hearing ... I'm not sure if we do."
Interestingly, the Valley's pursuit of wireless comes as cities elsewhere are struggling with their Wi-Fi projects.
Internet service provider Earthlink, which has partnered with Philadelphia, Houston and other large cities on wireless programs, announced layoffs within its municipal division in November. The company told shareholders it no longer makes sense for Earthlink to invest in municipal wireless.
As a result, some community wireless projects have been put on hiatus.
Earlier in the decade, companies like Earthlink offered to provide wireless systems at virtually no cost to cities. In exchange, the networks were privately owned, and the companies could charge subscription fees to consumers or hit them with advertising.
That model is changing, as it has become apparent that broadband access is becoming more readily available and affordable to consumers.
Today, cities are designing the systems for themselves to meet their own needs, such as giving support to emergency workers or keeping public works employees connected while in the field.
Those purpose-driven networks — as opposed to ones that are simply designed to give residents Internet access — are the ones that are now poised to succeed, writes Governing magazine's Christopher Swope, an expert on municipal wireless systems.
Vassallo, the wireless Internet consultant, emphasized to Pharr leaders that the city could create some public hot spots, but providing all-encompassing Internet service to residents isn't worth the cost or stress to the city.
Regardless of how, exactly, Pharr's and other cities' projects take shape, advocates say it's high time the Valley embraced wireless.

Tuesday, January 27, 2009

Google will begin to offer browser-based offline access to its Gmail Webmail application

Google announced the release of a new system which allows users to access their accounts offline.
Google Delivers Offline Access for Gmail
Google will begin to offer browser-based offline access to its Gmail Webmail application, a much-awaited feature.
This functionality, which will allow people to use the Gmail interface when disconnected from the Internet, has been expected since mid-2007.
That's when Google introduced Gears, a browser plug-in designed to provide offline access to Web-hosted applications like Gmail.
Gears is currently used for offline access to several Web applications from Google, like the Reader RSS manager and the Docs word processor, and from other providers like Zoho, which uses it for offline access to its e-mail and word processing browser-based applications.
Rajen Sheth, senior product manager for Google Apps, said that applying Gears to Gmail has been a very complex task, primarily because of the high volume of messages accounts can store. "Gmail was a tough hurdle," he said.
Google ruled out the option of letting users replicate their entire Gmail inboxes to their PCs, which in many cases would translate into gigabytes of data flowing to people's hard drives. It instead developed algorithms that will automatically determine which messages should be downloaded to PCs, taking into consideration a variety of factors that reflect their level of importance to the user, he said. At this point, end-users will not be able to tweak these settings manually.
"We had to make it such that we're managing a sizable amount of information offline and doing it well in a way that's seamless to the end-user," he said.
For example, in Gmail, users can put labels on messages, as well as tag them with stars to indicate their importance, and Google can use that information to determine which messages to download. Sheth estimates that in most cases Gmail will download several thousand messages, preferring those that are more recent as well. Depending on the amount of messages users have on their accounts, they may get downloads going back two months or two years, he said.
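The prioritization Sheth describes could look roughly like the following sketch. Google has not published its actual algorithm, so the fields, weights and scoring here are purely illustrative of the idea: score messages by recency, stars and labels, then cache the top of the list.

```python
# Illustrative sketch of offline-cache selection: score each message by
# recency, stars and labels, then download the highest-scoring ones.
# The weights and fields are invented; Gmail's real heuristic is unpublished.

from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Message:
    subject: str
    received: datetime
    starred: bool = False
    labels: list = field(default_factory=list)

def score(msg: Message, now: datetime) -> float:
    days_old = (now - msg.received).days
    s = max(0.0, 100.0 - days_old)   # recency dominates
    if msg.starred:
        s += 50.0                    # stars signal importance
    s += 10.0 * len(msg.labels)      # labeled mail is kept longer
    return s

def pick_for_offline(messages, now, limit):
    """Return the `limit` highest-scoring messages to download."""
    return sorted(messages, key=lambda m: score(m, now), reverse=True)[:limit]
```

A scheme like this naturally produces the behavior Sheth describes: most users get their last few thousand recent messages, plus older mail they starred or labeled.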
Google will begin to roll out the Gmail offline functionality Tuesday evening and expects to make it available to everybody in a few days, whether they use Gmail in its standalone version or as part of the Apps collaboration and communication suite for organizations.
While the feature was "rigorously" tested internally at Google, it is a first, early release upon which Google expects to iterate and improve on. That's why it's being released under the Google Labs label. Users are encouraged to offer Google feedback.
Users have been able to manage their Gmail accounts offline via other methods for years, since Gmail supports the POP and IMAP protocols that let people download and send out messages using desktop e-mail software like Microsoft Outlook and others.
However, the Gears implementation will let people work within the Gmail interface without the need for a separate PC application. When offline, messages will be put in a Gears browser queue, and the desktop and online versions of the accounts will be synchronized automatically when users connect to the Internet again. This will come in handy for people who travel a lot and often find themselves without Internet access, Sheth said.
To activate the offline functionality, users of the standalone Gmail service and the standard Apps edition should click "Settings" after logging on to their Gmail account. There, they should click on the "Labs" tab, select "Enable" next to "Offline Gmail" and click "Save Changes." A new "Offline" link will then appear in the right-hand corner of the account interface. Users of the Education and Premier Apps versions will have to wait for their Apps administrators to enable Gmail Labs for everyone on the domain first.
Google is also rolling out Gears-based offline access for its Calendar application. However, it will be for now read-only and exclusively available to Google Apps account holders. Previously, Google introduced read-only offline access to the Spreadsheet and Presentation applications in Google Docs, which is also part of Google Apps.

Release of offline Gmail

The early version of the app is available now to users with the U.S./U.K. English version of Google Labs.
Pegged as an "experimental" feature, the app is aimed at maintaining Gmail's functionality even when you're not online. Built on Google's Gears platform, the feature, once enabled, downloads a cache of your mail to your PC. When you're logged on to the Web, it syncs the cache with the Gmail servers.
While you're offline, you can read, star, and label messages. If you send a message when you're offline, Gmail places it in your outbox and sends it as soon as you log back in. A special "flaky connection" setting splits the difference between on and offline modes ("when you're 'borrowing' your neighbor's wireless," says Google), utilizing a local cache while syncing it with the online version.
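The outbox behavior described above can be sketched as a simple queue: while offline, sends are held locally; on reconnect, they flush in order. This is an illustration of the idea, not Gmail's implementation.

```python
# Minimal sketch of an offline outbox: messages submitted while offline
# queue locally and are delivered, in order, when the connection returns.
# The `send` callback stands in for the real network call.

from collections import deque

class OfflineOutbox:
    def __init__(self, send):
        self.send = send        # function that delivers one message
        self.online = False
        self.queue = deque()

    def submit(self, message):
        if self.online:
            self.send(message)
        else:
            self.queue.append(message)   # hold until we reconnect

    def set_online(self, online: bool):
        self.online = online
        while self.online and self.queue:
            self.send(self.queue.popleft())

# Usage: messages submitted offline are delivered on reconnect, in order.
delivered = []
box = OfflineOutbox(delivered.append)
box.submit("draft 1")
box.submit("draft 2")
box.set_online(True)
print(delivered)   # ['draft 1', 'draft 2']
```

Gears's "flaky connection" mode effectively toggles between these two paths automatically, working from the local cache and syncing whenever the link is usable.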

UCLA researchers have reprogrammed human induced pluripotent stem cells

For the first time, UCLA researchers have reprogrammed human induced pluripotent stem (iPS) cells into the cells that eventually become eggs and sperm, possibly opening the door to new treatments for infertility using patient-specific cells.
The iPS cells were coaxed into forming germ line precursor cells which include genetic material that may be passed on to a child. The study appears today in the early online edition of the peer-reviewed journal Stem Cells.
“This finding could be important for people who are rendered infertile through disease or injury. We may, one day, be able to replace the germ cells that are lost,” said Amander Clark, a Broad Stem Cell Research Center scientist and senior author of the study. “And these germ cells would be specific and genetically related to that patient.”
Theoretically, an infertile patient’s skin cells, for example, could be taken and reprogrammed into iPS cells, which, like embryonic stem cells, have the ability to become every cell type in the human body. Those cells could then be transformed into germ line precursor cells that would eventually become eggs and sperm. Clark cautioned, however, that scientists are still many years from using these cells in patients to treat infertility. There is still much to be learned about the process of making high quality germ cells in the lab.

In another important finding, Clark’s team discovered that the germ line cells generated from human iPS cells were not the same as the germ line cells derived from human embryonic stem cells. Certain vital regulatory processes were not performed correctly in the human iPS derived germ cells, said Clark, an assistant professor of molecular, cell and developmental biology.
So it’s crucial, Clark contends, that work continue on the more controversial human embryonic stem cells that come from donated, excess material from in vitro fertilization that would otherwise be destroyed.
When germ cells are formed, they need to undergo a specific series of biological processes, an essential one being the regulation of imprinted genes. This is required for the germ cells to function correctly. If these processes are not performed, the resulting eggs or sperm are at high risk of not working as they should. This has significant consequences, given that the desired outcome is a healthy child.
“Further research is needed to determine if germ line cells derived from iPS cells, particularly those which have not been created by retroviral integration, have the ability to correctly regulate themselves like the cells derived from human embryonic stem cells do,” Clark said. “When we looked at the germ cells derived from embryonic stem cells, we found that they regulated as expected, whereas those from the iPS cells were not regulated in the same way. We need to do much more work on this to find out why.”
Humanitarian goals, science get new life
PRESIDENT Obama's inauguration has led to the resumption of aid to international groups that perform or give information about abortions and should open the door to important scientific research. Both developments are a boost to humanitarian and medical advances that should expand during the Obama administration.
The new president signed an executive order on Friday that ended the ban on giving taxpayer money to international family groups that offer abortions or provide related information. The assistance was available from the Agency for International Development during the Clinton administration but banned during the Reagan and both Bush administrations.
Obama also is expected to restore funding for the U.N. Population Fund, which George W. Bush had rejected on the contention that it supported a Chinese family planning policy of coercive abortion and involuntary sterilization, an allegation that the agency vehemently denied. In fact, the lifting of the bans will reduce unintended pregnancies, abortions and the deaths of women from high-risk pregnancies.
The signing came a day after the Food and Drug Administration allowed the world's first clinical trial of a treatment derived from human embryonic stem cells for spinal cord injury. The therapy uses an older embryonic stem cell line that was permitted under the Bush administration's rules, though the approval might have been delayed until Bush left office.
The Bush administration restricted federal financing for embryonic stem cell research because creation of the cells entailed destruction of human embryos, even though they had been destined for the trash. President Obama has pledged to remove some of the financial restrictions.

Research on stem cells is the subject of intense investigation, both from a basic science point of view, as well as a basis for cell-based therapies to treat disease. The ability to study and characterize stem cells has been aided by the identification of specific markers which allow researchers to characterize and enrich these cells. The use of immunophenotyping is an important technique to distinguish one population of cells from another. eBioscience is dedicated to providing you with a choice of innovative primary antibody reagents and fluorochromes to accelerate your stem cell research using multicolor flow cytometry.

"Camera Phone Predator Alert Act" to protect citizens from being photographed illegally, without us knowledge

Congress Intros Bill to Force Cell Camera Sounds
The Camera Phone Predator Alert Act (H.R. 414) is the real deal. Fresh off the legislative desk of New York Representative Peter King (R), the bill--currently cosponsored by goose egg--would require an audible tone to accompany all cellular phones with an installed camera that are created in the U.S. This tone, likely a clicking noise of some sort, would sound, "within a reasonable radius of the phone whenever a photograph is taken with the camera in such phone." And don't think that evildoers would be able to conceal their predatory ways by flicking an iPhone-style audio toggle switch. Any mobile phones built after the bill becomes a law would be prohibited from including any way to eliminate or reduce the volume of said noise.
Camera Click Sound to be Legal Requirement
The draft of the legislation also mentions that the click sound should be audible within a "reasonable" distance.
The US is reportedly readying the "Camera Phone Predator Alert Act" to protect citizens from being photographed illegally, without their knowledge. While the topic has been mulled over for years, it is only now that the country is planning to put forth legislation to make the camera click sound audible when a picture is taken. While some cell phone manufacturers already have compliant devices in place, there are others where simply putting the phone into silent mode would let voyeuristic photography go undetected. Even on those phones where the camera click sound cannot be turned off, users have been able to hack into the phone's firmware and remove the sound. The proposed bill would fall under the domain of the Consumer Product Safety Commission and is expected to be given the status of a "safety requirement". Additionally, the draft of the legislation mentions that the click sound should be audible within a "reasonable" distance. Similar laws are already in place in countries like Japan and Korea, and most device manufacturers have been able to comply with them.

Micro Camcorder - 'World's Smallest'
Things are getting ever smaller. If you doubt this, just check out the Micro Camcorder - a spy camera developed by Spy Gadget that claims the title of 'World's Smallest Camcorder'.

The camcorder is so small that it can be hidden in a chewing-gum pack. It has a one-touch record function and records video at 15 fps (frames per second). The captured video is stored on a flash microSD card. It has built-in batteries and charges via USB. The camera can record video for over 30 hours with a 1GB card installed. It is priced at USD 295 (Rs. 11,800).

Monday, January 26, 2009

The world's coolest earbuds

Skullcandy veered away from standard-issue black and white headphones - and struck gold.
Skullcandy is using fake alligator skin and rhinestones to shake up the headphone market, giving Philips and Sony a run for their money.

The half pipe tucked in a corner of the office is the first clue that Skullcandy is not your average company.
Other clues: In the teeth of the worst recession in generations, the five-year-old private company is growing like a weed. And it just scored a round of funding, from private-equity shop Goode Partners, at a time when investment dollars are scarce.
If the name Skullcandy doesn't register, it will with your kids (so will the term half pipe, which is a ramp, in this case for skateboarding, shaped like a pipe cut in half lengthwise).
Skullcandy's business is headphones, and it dominates the 12- to 25-year-old demographic with a lineup of gear covered in faux gator skin, gold foil, rhinestones and hip hop-inspired graphics. Pull back the hoody on any kid riding a snowboard in Park City, Utah, and chances are pretty good that a pair of Skullcandy headphones, probably the top-selling "Smokin' Buds," will be pumping music into their ears.
Making electronics cool
From a distant No. 10 three years ago, Skullcandy is now North America's third-largest manufacturer of headphones by unit sales, behind consumer electronics giants Philips Electronics (PHG) and Sony (SNE), according to NPD Group. "We'll be No. 2 soon," predicted Skullcandy president Jeremy Andrus, legs dangling from the office half pipe. "My guess is some time next year."
After that, Skullcandy and the band of snowboarders, skaters, surfers and DJs that founder Rick Alden has assembled in Park City, will be gunning for No. 1. That is, if Alden, the CEO and creative madman to Andrus' operations guru, can figure out a way to do it without diluting the company's cool factor.
Skullcandy didn't invent headphones; what the company has done is make them into a fashion item. Kids don't want one pair, they want five. "We're like sunglasses," Alden said. "Except we sit on top of your head, and you wear them a lot more."
Skullcandy headphones are not the type you will hear audiophiles gushing about. They are mostly solid-sounding pieces of affordable gear that, unlike Sony's grey and black headphones, or Apple's white, don't disappear into the background. On the contrary, they make a statement. The snowboard, surf and skate inspired graphics and colors ask for attention, and speak to a lifestyle, or in most cases, a wannabe lifestyle.
Successful clothing brands are able to evoke that lifestyle magic, but it is the rare consumer electronics company that does it. Apple (AAPL, Fortune 500) with its iPod is the obvious and most successful current example. Skullcandy has pulled it off so far, and in doing so sent revenue from essentially zero to approaching $100 million in just a few years. Sales more than doubled in 2008.
To put Skullcandy's momentum in perspective: when many consumer electronics companies saw sales fall off a cliff in November, Skullcandy's quadrupled year over year, according to Andrus.
That success is obviously gratifying to Alden, but it also has him worried about overexposure. "I was at the mountain riding with my son the other day, and everyone I saw was wearing Skullcandy headphones, I mean they were everywhere," Alden said. "I may go back to wearing black Sony's just to be different."
He's kidding, but his concern is real. Alden and his design team need to keep Skullcandy fresh, so it doesn't fall out of fashion and black becomes the new black. Fortunately the Skullcandy team has a secret weapon when they seek inspiration, design-wise and business-wise.
"We head to the mountain," Alden said, checking for the latest snowfall report on his laptop. "No good ideas ever come from sitting in an office, not around here at least."


The Potential of Earbuds
There is great disagreement about:
Whether earbuds could potentially sound good, given their small size.
Whether any actual earbuds sound good, or whether the whole idea needs further development.
Which earbuds sound good and which sound bad.
Which of the expensive ($40-$80) earbuds sound so good that the extra cost is justified.

After testing many headphones and earbuds and applying my extensive experience tweaking equalizers, I think that earbuds actually have the potential to sound even *better* than standard headphones. In any case, all headphones and earbuds need a new approach: a calibrated equalization curve built into the player, to yield flat response. Megabass is a step toward such a compensation curve.
Like the Etymotics, earbuds have the potential to have smoother response than even the best popular standard headphones, such as the Sennheiser 580's. I've dialed in some truly vibrant, open sound using equalization together with $10 earbuds. It is easy and straightforward to equalize earbuds; just do anti-rolloff to a greater or lesser degree, and leave the rest flat; there aren't mysterious jags hidden along the entire spectrum that need unique shapes of compensation. I'd rather trust my ears than the common assumption that earbuds are inferior. If the conditions are right and the appropriate, ordinary EQ compensations are made, earbuds can be superior, rather than inferior, to good standard headphones. It's simply a matter of starting with a decent earbud driver, and providing the inverse of the earbud driver's frequency response.
If someone shows me a measured response curve of an earbud and it's rough and jagged, I will change my view somewhat, but in any case, I think that eq-compensated earbuds at least *can sound* unusually smooth and natural. Players need more fancy curves to compensate for specific earbud models.
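The "anti-rolloff" compensation described above can be sketched as a standard low-shelf filter (the well-known Audio EQ Cookbook biquad form). The corner frequency and gain below are illustrative choices, not measurements from any particular earbud:

```python
import math

def low_shelf_coeffs(fs, f0, gain_db):
    """Audio-EQ-Cookbook low-shelf biquad: boosts everything below f0
    by gain_db, leaving the rest of the spectrum flat -- a crude
    'anti-rolloff' curve for earbuds that lose bass."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / 2 * math.sqrt(2)  # shelf slope S = 1
    cosw = math.cos(w0)
    sqA = math.sqrt(A)
    b0 = A * ((A + 1) - (A - 1) * cosw + 2 * sqA * alpha)
    b1 = 2 * A * ((A - 1) - (A + 1) * cosw)
    b2 = A * ((A + 1) - (A - 1) * cosw - 2 * sqA * alpha)
    a0 = (A + 1) + (A - 1) * cosw + 2 * sqA * alpha
    a1 = -2 * ((A - 1) + (A + 1) * cosw)
    a2 = (A + 1) + (A - 1) * cosw - 2 * sqA * alpha
    # Normalize so the leading denominator coefficient is 1
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

def biquad(samples, b, a):
    """Direct-form I: y[n] = b0*x[n]+b1*x[n-1]+b2*x[n-2]-a1*y[n-1]-a2*y[n-2]."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

b, a = low_shelf_coeffs(fs=44100, f0=200, gain_db=6)  # +6 dB below ~200 Hz
dc_gain = sum(b) / sum(a)  # filter's response at 0 Hz
print(round(20 * math.log10(dc_gain), 2))  # prints 6.0
```

In a portable player this would run per-sample on the output stream; a per-model compensation curve would simply pick f0 and gain_db from a measured response instead of by ear.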
"Though I like the R3 stock earbuds even better than the 888's, I can't stop seeking for even better sound, as I believe it can be a lot better. If I press against an earbud I get very powerful bass, so it is possible. I will keep on looking, and if I find something interesting I will let you know. Please let me know your findings on this matter." (from a private email to me)
Some people haven't been lucky and haven't heard the one or two models that are really good. No wonder they think earbuds are a poor packaging and sound poor. I was starting to suspect that *some* Sony stock earbuds (included with the player) sound great, and some sound lousy.

Internet Explorer 8 Focuses on better Security and Privacy

Some of the features of Release Candidate 1, now available to the public, are similar to functionality that’s already included in Firefox 3.
Microsoft's updated browser, Internet Explorer 8, promises an assortment of new features designed to help make Web browsing with IE safer, easier, and more compatible with Internet standards. We looked at the first release candidate of the new browser released to the public today, Release Candidate 1 (RC1). On the surface, IE 8 seems to be a lot like IE 7, but Microsoft has made a number of changes under the hood. You may have seen some of these new features already, however, in IE's no-longer-upstart competitor, Mozilla Firefox 3.
Tabbed Browsing

If you accidentally close a browser window in IE 8, you can opt to restore it when you reopen the program (just as you can in Firefox). IE 8 will use color coding to group related tabs together. If you open a link in a new tab, for example, it will open adjacent to the original tab, and the tabs themselves will have a matching color. You can move tabs from one group to another, but if you have three unrelated pages open, you cannot create a group out of them.
Perhaps the most novel addition in IE 8 is what Microsoft calls tab isolation. The feature is designed to prevent a buggy Web site from causing the entire Web browsing program to crash. Instead, only the tab displaying the problematic page will close, so you can continue browsing.
Of course, IE 8 RC1 retains some of the features introduced in the first beta, including WebSlices and accelerators; see "Updated Web Browsers: Which One Works Best?" for more details.

IE 8 can use multiple search engines besides Windows Live Search, and you can add other search engines to the mix. Also, IE 8 will give you search suggestions as you type. For example, I can type 'PC World' into the search field, and IE 8 RC1 will give me Live Search suggestions such as 'pc world magazine' or 'pc world reviews'. In addition, IE 8 lets you switch between search engines on the fly by clicking an icon at the bottom of the search field's drop-down menu. IE 8 can search Yahoo, and you can install add-ins that give IE 8 the capability to search Wikipedia, Amazon, and the New York Times, among other sites.
Improved Security
Microsoft touts IE 8 as its most secure browser to date, and Microsoft has indeed added a good number of security features to the mix, ranging from phishing detection to private browsing, plus a new feature to prevent clickjacking, an emerging data theft threat.
IE 8 RC1 includes two security features under the 'InPrivate' label: InPrivate Browsing and InPrivate Filtering. Both existed in earlier prerelease versions of IE 8, but IE 8 RC1 lets you use the two features separately, whereas before each relied on the other.
If you enable IE 8's InPrivate Browsing feature, the browser will not save any sensitive data--passwords, log-in info, history, and the like. Afterward it will be as if your browsing session had never happened. This feature is very similar to Private Browsing in Apple's Safari browser, except that an icon in IE's address bar makes InPrivate Browsing's active status more obvious.
InPrivate Filtering--called InPrivate Blocking in earlier IE 8 builds--prevents sites from being able to collect information about other Web sites you visit. This feature existed in IE 8 Beta 2, but you could use it only while using InPrivate Browsing. In RC1, you can use InPrivate Filtering at any time.
The browser's phishing filter--called SmartScreen--improves on its predecessor's filter with such features as more-thorough scrutiny of a Web page's address (to protect you from deceptively named look-alike sites) and a full-window warning when you stumble upon a suspected phishing site. SmartScreen relies largely on a database of known phishing sites, so new, unknown phishing sites may slip through the cracks.
IE 8 displays sites' domains in a darker text color, so you can more readily see whether you're visiting a genuine page, say, or a page simulating an eBay page on some site you've never heard of. Microsoft could still put a little more emphasis on the domain name (using a different color background, for example), but the highlighting is a welcome addition.
Finally, IE 8 RC1 includes a feature designed to prevent clickjacking, an attack in which a malicious page loads another site in a hidden frame and tricks visitors into clicking on it, hijacking their clicks or stealing information. When you use IE 8 to view such a page, IE 8 can identify an attempted clickjacking and will warn you of the attempt.
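IE 8's clickjacking defense is driven by the X-Frame-Options response header, which sites send to declare that their pages must not be rendered inside a frame. A minimal sketch of how a site might attach it (the helper function is hypothetical, not part of any real framework):

```python
# IE 8 honors the X-Frame-Options response header: a page served with
# it refuses to render inside another site's frame or iframe.
def clickjack_headers(policy="DENY"):
    """Headers a site sends to opt out of being framed by other origins.

    DENY blocks all framing; SAMEORIGIN allows frames from the same site.
    """
    if policy not in ("DENY", "SAMEORIGIN"):
        raise ValueError("unsupported X-Frame-Options policy")
    return {"X-Frame-Options": policy}

# Merged into a normal response alongside the usual headers:
response_headers = {"Content-Type": "text/html", **clickjack_headers()}
```

Before this header existed, sites relied on fragile JavaScript "frame-busting" scripts; the header moves the decision into the browser itself.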
Web Compatibility
Creating a site that looks identical in Internet Explorer, Firefox, and Safari can be a challenge. IE 8 Beta 2 offers better support for W3C Web standards--a set of guidelines developed to ensure that a Web page appears the same in all browsers. The downside is that IE 8 will break some pages designed for earlier Internet Explorer versions.
To counteract this problem, Microsoft has added a compatibility mode: Click a button in the toolbar, and IE 8 will display a page in the same way that IE 7 does. In my testing, I found that most pages worked fine with the standard (new) mode, and that most errors were minor cosmetic ones. Unfortunately, the Compatibility Mode toggle button may not be obvious to most users, because it's pretty small; a text label would have helped.
Though it probably won't convince many Firefox users to jump ship, Internet Explorer 8 Release Candidate 1 shows promise, and may be worth considering for people who have not yet solidified their browser loyalties. (Keep an eye out for our report on the final release of IE 8.)
See more like this: internet explorer, browser security, online privacy.
Microsoft on Monday released a near-final "release candidate" version of Internet Explorer 8, the next version of its Web browser.
The software maker plans to say more on its Web site around noon, but, as noted by enthusiast site Neowin, the code is already available from Microsoft's download center.

With IE 8, Microsoft is hoping to regain some lost ground by adding features such as private browsing, improved security, and a new type of add-ons, called accelerators.
On the security front, Microsoft is adding a cross-site scripting filter, as well as protections against a type of attack known as clickjacking.
In an interview, IE General Manager Dean Hachamovitch said there will be little change between the release candidate and the final version, though he declined to say when the final version will be released.
"The ecosystem should expect the final candidate to behave like the release candidate," Hachamovitch said.
Internet Explorer 8 will work with Windows XP (Service Pack 2 or later) and Windows Vista. A version of IE 8 is also being built into Windows 7.
However, the IE code in Windows 7 is a pre-release candidate version.
"Windows 7 enables unique features and functionality in Internet Explorer 8 including Windows Touch and Jump Lists which require additional product tests to ensure we are providing the best Windows experience for our customers," the software maker said in a statement. "Microsoft will continue to update the version of Internet Explorer 8 running on Windows 7 as the development cycles of Windows 7 progress.

The future: 3D holographic television edging closer to reality

Picture this: you're sitting down for the Football World Cup final, or a long-awaited sequel to the "Sex and the City" movie, and you're watching all the action unfold in 3-D on your coffee table.
It sounds a lot like a wacky dream, but don't be surprised if within our lifetime you find yourself discarding your plasma and LCD sets in exchange for a holographic 3-D television that can put Cristiano Ronaldo in your living room or bring you face-to-face with life-sized versions of your gaming heroes.
The reason for renewed optimism in three-dimensional technology is a breakthrough in rewritable and erasable holographic systems made earlier this year by researchers at the University of Arizona.
Dr Nasser Peyghambarian, chair of photonics and lasers at the university's Optical Sciences department, told CNN that scientists have broken a barrier by making the first updatable three-dimensional displays with memory.
"This is a prerequisite for any type of moving holographic technology. The way it works presently is not suitable for 3-D images," he said.
The researchers produced displays that can be erased and rewritten in a matter of minutes.

To create television sets, the images would need to change multiple times each second -- but Peyghambarian is very optimistic this can happen.
He said the University of Arizona team, which is now ten-strong, has been working on advancing hologram technology since 1990 -- so this is a major step forward. He believes that much of the difficulty in creating a holographic set has now been overcome.
"It took us a while to make that first breakthrough, but as soon as you have the first element of it working the rest often comes more rapidly," he said. "What we are doing now is trying to make the model better. What we showed is just one color, what we are doing now is trying to use three colors. The original display was four inches by four inches and now we're going for something at least as big as a computer screen."
There are no more great barriers to overcome now, he said.
The breakthrough has made some long-time researchers of the technology believe that it could now come to fruition.
Tung H. Jeong, a retired physics professor at Lake Forest College outside Chicago who has studied holography since the 1960s, said: "When we start talking about erasable and rewritable holograms, we are moving toward the possibility of holographic TV ... It has now been shown that physically, it's possible."
And what might these holographic televisions look like?
According to Peyghambarian, they could be constructed as a screen on the wall (like flat panel displays) that shows 3-D images, with all the image writing lasers behind the wall; or it could be like a horizontal panel on a table with holographic writing apparatus underneath.
So, if this project is realized, you really could have a football match on your coffee table, or horror-movie villains jumping out of your wall.
Peyghambarian is also optimistic that the technology could reach the market within five to ten years. He said progress towards a final product should be made much more quickly now that a rewriting method had been found.
However, it is fair to say not everyone is as positive about this prospect as Peyghambarian.
Justin Lawrence, a lecturer in Electronic Engineering at Bangor University in Wales, told CNN that small steps are being made on technology like 3-D holograms, but, he can't see it being ready for the market in the next ten years.
"It's one thing to demonstrate something in a lab but it's another thing to be able to produce it cheaply and efficiently enough to distribute it to the mass market," Lawrence said.
Yet, there are reasons to be optimistic that more resources will be channeled into developing this technology more quickly.
The Japanese Government is pushing huge financial and technical weight into the development of three-dimensional, virtual-reality television, and the country's Communications Ministry is aiming at having such technology available by 2020.
Peyghambarian said there are no major sponsors of the technology at present, but as the breakthroughs continued, he hopes that will change.
Even if no major electronics company commits itself, there is hope that backers could come from outside of the consumer electronics industry, he said.
"It could have some other applications. In training it's useful to show people three-dimensional displays. Also it would be good to show things in 3-D for defense command and control and for surgery," he said.

Sunday, January 25, 2009

Wireless power technologies are moving closer to becoming feasible options.

PowerMat Wireless Charging Plate

A vision of our wireless future, courtesy of PowerMat. The company teamed up with Michigan-based HoMedics to introduce more than a dozen products at this year's Consumer Electronics Show.

This year probably won't be the tipping point for wireless electricity. But judging from all the new techniques and applications of this awe-inspiring technology, getting power through the airwaves could soon be viable.
Fulton Innovations showcased blenders that whir wirelessly and laptops that power up without a battery at the Consumer Electronics Show (CES) earlier this month. The devices are all powered by electromagnetic coils built into the charging surface, and there's not a plug in sight.

Fulton's wireless electricity technology is called eCoupled, and the company hopes it can be used across a wide range of consumer devices. Fulton was one of half a dozen companies that wowed consumers at CES.
10 Wireless Electricity Technologies
eCoupled uses a wireless powering technique called "close proximity coupling," which uses circuit boards and coils to communicate and transmit energy using magnetic fields. The technology is efficient but only works at close ranges. Typically, the coils must be bigger than the distance the energy needs to travel. What it lacks in distance, it makes up in intelligence.
In conjunction with the Wireless Power Consortium, Fulton, a subsidiary of Amway, has developed a standard that can send digital messages back and forth using the same magnetic field used to power devices. These messages are used to distinguish devices that can and can't be charged wirelessly, and to relay information like power requirements or how much battery is left in a device.
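To see why the coils must be comparable in size to the gap they bridge, consider the textbook on-axis mutual inductance of two coaxial loops (a small-receiver approximation, not eCoupled's actual model -- the coil sizes and distances below are made up for illustration):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def mutual_inductance(r_tx, r_rx, d):
    """On-axis mutual inductance of two coaxial single-turn loops,
    assuming the receiver loop is small relative to the transmitter.
    Coupling falls off roughly as 1/d^3 once d exceeds the coil radius."""
    return MU0 * math.pi * r_tx**2 * r_rx**2 / (2 * (r_tx**2 + d**2) ** 1.5)

r = 0.05  # 5 cm transmit coil, 2 cm receive coil
near = mutual_inductance(r, 0.02, d=0.01)  # pad-style spacing, 1 cm
far = mutual_inductance(r, 0.02, d=0.15)   # 15 cm away, 3x the coil radius
print(f"coupling drops by {near / far:.0f}x")
```

Moving the receiver from pad-contact distance to a few coil radii away costs roughly a factor of thirty in coupling, which is why these systems are built as charging surfaces rather than room-range transmitters.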
Using this technique, an industrial van parked outside the Fulton booth at CES charged a set of power tools from within its carrying case. The van was tricked out by Leggett & Platt--a diversified manufacturing company based in Carthage, Mo., and an eCoupled licensee--and is designed to solve its customers' biggest headache: arriving at the job site with a dead set of tools. Fulton, which teamed up with Bosch to design the setup, already has test vehicles rolling around in the field and plans to sell them to utility and other industrial companies by the end of the year.


Bosch Wireless Powertool Set The eCoupled setup uses a technique called close proximity coupling, so the devices can remain in their case while charging. Generally, the efficiency of the wireless-electricity transfer decreases with distance.

In another area of the vast CES show, cellphones, videogame controllers and a laptop charged wirelessly on a silver and black mat created by Boulder, Colo.-based WildCharge.
The mat uses a conductive powering technique, which is more efficient than inductive powering but requires direct contact between the devices and the charging pad. Though most of the mats or pads on display are intended to power only a handful of devices at a time, WildCharge says the product design is certified for up to 150 watts--enough to power 30 laptops.
Across the room from WildCharge, PowerCast displayed Christmas ornaments and floor tiles glowing with LEDs powered by ambient radio waves. The devices harvest electromagnetic energy in ambient radio waves from a nearby low-power antenna. Because of the dangerous nature of electromagnetic waves in high doses, Pittsburgh-based PowerCast is targeting its application at small devices like ZigBee wireless chips, which require little power.
Perhaps the most promising wireless power technology was the latest iteration of WiTricity, the Watertown, Mass.-based brainchild of MIT physicist Marin Soljacic, on display in a private suite high in the Venetian hotel tower.
The technology uses a technique developed by Soljacic called "highly coupled magnetic resonance." As proof that it works, an LCD TV is powered by a coil hidden behind an oil painting located a few feet away. Across the hotel room, WiTricity Chief Executive Eric Giler walks in the direction of another coil holding an iPod Touch in the palm of his hand. Power hungry, it starts to charge when it gets within two meters.
Soljacic has already earned a $500,000 genius grant from the John D. and Catherine T. MacArthur Foundation for his work, but Giler said the technology is at least a year away. In the meantime, WiTricity has obtained an exclusive license from MIT to bring Soljacic's idea to market and hopes to have an estimated 200 patents.
But because Soljacic published his academic paper in Nature magazine, companies like Intel have been able to replicate the effect in their labs based on his principles.
Elsewhere at CES, PowerBeam showcased wireless lamps and picture frames. Located in Sunnyvale, Calif., the company uses yet another wireless-powering approach. Its technology beams optical energy into photovoltaic cells using laser diodes. Although the company says it can maintain a constant energy flow across long distances, the difficulty of targeting a laser means that it's not ideal for charging moving devices.
So, while 2009 may not be the year wireless electricity takes off, the nascent sector is certainly on its way.

Downadup worm replicates itself at astonishing speed!

Call it Conficker, Downadup or Kido - the fact is the nasty worm is spreading at a very rapid speed! There is no checking the pace at which it is infecting PCs; and with already more than 9 million victims, including corporate networks worldwide, the worm is still going strong!
The Downadup worm made its first appearance two months back, exploiting a critical Windows flaw in the way the Server Service handles RPC requests. A blended threat, the malware relies upon many attack vectors - from brute-force password guessing to hitching rides on USB sticks - for replicating itself to spread throughout a network.
The unique rate of speed at which the worm replicates has perplexed experts. Security researcher, Derek Brown, of TippingPoint's DVLabs Team, said: "The notion of using multiple attack vectors is not terribly new. The unique thing about this worm is the speed at which it has spread and I think that's a result of the big size of the Microsoft vulnerability."
Experts also opine that though the Downadup malware got started because of the Microsoft flaw, it later proliferated quickly through the unpatched Windows operating systems of the users.
Though the malicious worm knows no land barriers, the hardest-hit countries, per Symantec Security Response, are China and Argentina. According to Symantec vice president Alfred Huger, China accounts for almost 29 percent of the infections tracked; Argentina was next in line with over 11 percent.

Computer worm called 'authentic risk'

If you’ve never heard the words “Conficker” or “Downadup,” wait a few hours.

They’re rapidly becoming household words for personal computer owners.
Various major newspapers and television news shows reported Friday morning that the latest computer worm might now infect as many as 10 million computers worldwide.
According to a report in the Detroit Free Press, the worm is so virulent because it seems to “mutate” and launch “brute force attacks” that relentlessly try thousands of letter and number combinations in codes to steal personal passwords and login information.
Because most computer users choose passwords that they can remember easily, the words might also be something the worm can guess easily. Once in control of a computer, the worm can launch spam or phishing attacks, shut down the Internet with massive traffic, or access bank records.
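The "brute force" stage described above is essentially a dictionary attack: hash each candidate password and compare, at a cost of microseconds per guess. A toy sketch (the wordlist and target below are made up for illustration, and real systems hash credentials differently):

```python
import hashlib

# A handful of the kind of weak, memorable passwords such worms try first.
COMMON_PASSWORDS = ["123456", "password", "admin", "letmein", "qwerty"]

def crack(target_sha256):
    """Return the first wordlist entry whose SHA-256 matches, else None."""
    for guess in COMMON_PASSWORDS:
        if hashlib.sha256(guess.encode()).hexdigest() == target_sha256:
            return guess
    return None

stolen = hashlib.sha256(b"letmein").hexdigest()
print(crack(stolen))  # finds "letmein" after only a few guesses
```

A long passphrase that appears in no wordlist simply never matches, which is why the quarterly password-change advice later in this piece matters less than picking something unguessable in the first place.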
According to F-Secure, an antivirus software company, the Conficker worm is spreading at a rate of 1 million new machines a day. It can be spread by USB stick also.
F-Secure has updated its Downadup removal tool, and the United States Computer Emergency Readiness Team has issued Alert TA09-020A, which describes how to disable AutoRun on Microsoft Windows systems in order to help prevent the spread of Conficker/Downadup via USB drives.
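The AutoRun mitigation in Alert TA09-020A amounts to a registry policy that disables AutoRun for every drive type. One common form of that change is sketched below (0xFF covers all drive types); this is an illustration only -- some older Windows versions needed a Microsoft patch before the setting was fully honored, so the alert itself is the authoritative procedure.

```
Windows Registry Editor Version 5.00

; Disable AutoRun for all drive types (bitmask 0xFF = every type),
; preventing worms like Conficker/Downadup from launching off USB drives.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoDriveTypeAutoRun"=dword:000000ff
```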
According to Symantec, the top infected countries in order of infection are: China, 28.7 percent; Argentina, 11.3 percent; Taiwan, 6.7 percent; Brazil, 6.2 percent; India, 5.8 percent; Chile, 5.2 percent; Russia, 5 percent; Malaysia, 2.8 percent; Colombia, 2.1 percent; and Mexico, 1.9 percent.
Philip Templeton of PT Technologies in Athens said everyone should keep his or her virus protection and software updates current.
“I have seen in the last four to six months more people getting viruses,” said Templeton. “But no matter what antivirus software you buy, nothing is 100 percent. Make sure your Windows Firewall is on, and it doesn’t hurt to change passwords periodically. I usually advise to make this a quarterly chore.”

Saturday, January 24, 2009

Windows 7 beta to be offered through Feb. 10

Hints at weaker-than-expected demand since Jan. 10 launch

Microsoft announced Friday night that computer enthusiasts will have a while longer to get their hands on the beta version of Windows 7.

In a blog posting, Microsoft said that the test version of the operating system will be available for download through February 10. Previously, Microsoft had said that the OS would only be open through late this month.

"We are at a point where we have more than enough beta testers and feedback coming in to meet our engineering needs, so we are beginning to plan the end of general availability for Windows 7 Beta," Microsoft's Brandon LeBlanc said in the blog posting. "Because enthusiasm continues to be so high for the Windows 7 Beta and we don't want anyone to miss out, we will keep the Beta downloads open through February 10th."
Those who start the download process before February 10 will have until February 12 to finish the task.
The deadline applies to the general public, while members of Microsoft's TechNet and MSDN developer programs will continue to have access to the code, LeBlanc said.
CEO Steve Ballmer announced the beta of Windows 7 during his speech at the Consumer Electronics Show in Las Vegas on January 7. After a slight hiccup, Microsoft made the code available on January 10.

Keep your laptop data safe: fix it now

Follow InfoWorld's encryption-based data-protection plan, which can safeguard your most at-risk PCs.
The largest single type of security breach is the stolen or lost laptop, according to the Open Security Foundation, yet these computers are among the least protected of all IT assets. The costs of a data breach can be huge, including the loss of trade secrets, marketing plans, and other competitive information that could have long-term business damage, plus the immediate costs of having to notify people if their personal information was possibly at risk from the breach. Particularly in a recession, enterprise management can't afford to take these risks lightly.

There is a way for IT to protect those laptops and the confidential information they contain: encryption. Without the combination of password security and encryption, any halfway-competent hacker has no problem siphoning off hard drive contents and putting them to nefarious use.
Perhaps the most important advantage of full disk encryption, though -- beyond the peace of mind it gives your business's lawyers -- is the "safe harbor" immunity that accrues under many data privacy regulations. For example, credit card disclosure rules don't apply to encrypted data, and even California's strict data-disclosure statute makes an exception for encrypted records -- provided you can prove they're encrypted. That's trivial with full disk encryption but not so easy with partial encryption techniques, which depend on user education for safe operation.
A key challenge for IT in deploying encryption on its laptops is the sheer number of encryption options available. Some Windows Vista editions, as well as the forthcoming Windows 7, support Microsoft's built-in BitLocker encryption, and numerous third-party encryption products cover the range of mobile operating systems from XP through Windows 7, Linux, and Mac OS X. Encryption granularity is widely variable as well, ranging from protecting individual files to encrypting virtual disks to deploying fully armored, hardware-based full disk encryption. Prices range from free to moderately expensive.
If you've put off laptop data security due to perceived technical shortcomings or high costs, you need to take another look at the field -- before you lose another laptop.

The maximum encryption protection possible: TPM

Ideally, you'll deploy the full-metal-jacket approach to laptop data protection: full disk encryption using the Trusted Platform Module (TPM) technology. If you can afford the cost, waste no time with inferior methods. All you need is a laptop containing a TPM security coprocessor and, optionally, an encryption-enabled hard drive from one of the major hard drive manufacturers.
The TPM is a chip soldered on to the laptop's motherboard, providing hardware-based device authentication, tamper detection, and encryption key storage. The TPM generates encryption keys, keeping half of the key information to itself, making it impossible to recover data from an encrypted hard drive apart from the computer in which it was originally installed. Even if an attacker gets the user's part of the encryption key or disk password, the TPM-protected drive's contents can't be read when connected to another computer. Further, the TPM generates a unique digital signature from the motherboard in which it's embedded, foiling attempts to move the TPM chip itself to another machine.
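The "half the key stays in the chip" idea can be sketched with a simple two-share XOR split. This is a conceptual illustration only, not the actual TPM key-sealing protocol: neither share alone reveals anything about the disk key, so a drive moved to another machine carries only useless material.

```python
import secrets

def split_key(disk_key):
    """Split a key into two XOR shares; both are required to reconstruct it."""
    tpm_share = secrets.token_bytes(len(disk_key))            # stays in the chip
    drive_share = bytes(a ^ b for a, b in zip(disk_key, tpm_share))
    return tpm_share, drive_share

def recombine(tpm_share, drive_share):
    return bytes(a ^ b for a, b in zip(tpm_share, drive_share))

key = secrets.token_bytes(32)                                 # 256-bit disk key
tpm_share, drive_share = split_key(key)
assert recombine(tpm_share, drive_share) == key
# drive_share alone is statistically independent of the key, so an attacker
# who images the disk on another machine recovers nothing.
```

The same property explains the crypto-erase trick described later in the article: discard the TPM's share and the remaining ciphertext is permanently unrecoverable, with no need to zero the disk sectors.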

TPM-enabled full disk encryption, especially hardware-based implementations of it, provides one other key benefit to enterprises: data erasure upon laptop decommissioning or repurposing. A common bugaboo in the enterprise is the accidental disclosure of data when seemingly worthless outdated laptops are discarded or sold, or transferred to another employee. Erasing sensitive information in such situations is not trivial, and even removing and physically mangling a laptop's hard drive is no guarantee against disclosure. However, because TPM has absolute control over the encryption keys -- remember, half of the key information is stored with the TPM itself -- you can simply tell TPM to forget its keys, and the hard drive is instantly reformatted and effectively rendered nonrecoverable. Disk sectors aren't zeroed, but no computationally feasible method exists today to decrypt the residue.

A great many enterprise-class laptops manufactured in the last two to three years shipped with embedded TPM chips; Apple's Macs are a key exception, as none since 2006 include a TPM chip. But the TPM chips must be explicitly enabled to use them as the authentication mechanism for encryption.
If your laptops have a TPM chip, don't try enabling it without carefully following the vendor's instructions -- otherwise, you could accidentally wipe out the laptop's hard drive. Before enabling the TPM chip in a laptop, you must first take ownership of it, a process that establishes user and management-level passwords and generates the initial set of encryption keys. The management password lets IT administration monitor the inventory of TPM devices, recover lost user passwords, and keep track of usage.
A TPM works with the laptop's resident operating system to encrypt either the entire hard drive or most of it, depending on the OS encryption implementation. (Microsoft's BitLocker, for example, requires a small, unencrypted initial-boot partition). Alternatively, a TPM can interoperate with encryption-enabled hard drives to perform encryption entirely outside of, and transparent to, the operating system.
The TPM technology isn't perfect, but it provides very solid protection in the most common incident, where a laptop is lost or stolen and the user has not left it logged in. If the laptop is powered off, TPM protection is absolute. Most implementations use 256-bit AES encryption, which is considered uncrackable for the foreseeable future. Powering up the device requires entering pre-boot credentials in the form of a password, a PIN, a smartcard, biometric data, a one-time-password token, or any combination of these. If the lost laptop is powered on (but not logged in), or just powered off, an attacker would have to use extraordinary procedures to recover the encryption keys from live memory.
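Turning a pre-boot password into a 256-bit AES key is typically done with a key-derivation function such as PBKDF2. The sketch below shows the general approach; the salt size and iteration count are assumptions for illustration, not any specific product's settings.

```python
import hashlib
import secrets

def derive_key(password, salt, iterations=200_000):
    """Stretch a pre-boot password into a 256-bit key via PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = secrets.token_bytes(16)        # stored alongside the encrypted volume
key = derive_key("correct horse battery staple", salt)
assert len(key) == 32                 # 256 bits, suitable as an AES-256 key
```

The high iteration count deliberately slows each guess, so even a stolen drive forces an attacker to spend significant compute per password attempt.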
However, if a lost device is powered up and logged in, a TPM provides zero protection. An interloper can simply dump the data off the hard drive in the clear using ordinary file copies. Thus, it's essential that TPM-protected systems have noncircumventable log-in timeouts using administrator-protected settings.

Achieving the ultimate in full disk encryption protection requires hardware-enabled encryption on board the hard drive. Drive-based encryption closes all of TPM's loopholes, since the encryption key is no longer stored in OS-accessible memory. Hardware-based full disk encryption also eliminates the performance penalty incurred by software-based full disk encryption, although with today's fast processors, that software encryption overhead is not noticeable to most users.

The cost for TPM protection starts at zero for Microsoft's BitLocker, which is built into Vista Enterprise and Ultimate, Windows Server 2008, and the forthcoming Windows 7. Major laptop manufacturers also sell software bundles that enable TPM in any Windows version, including XP, such as Wave's Embassy Trust Suite and McAfee's SafeBoot. The advantage of bundled software is sole-source support and pre-tested configurations.
You can also roll your own software protection using stand-alone packages such as PGP Whole Disk Encryption.
All these products support a wide range of enterprise-class management tools that let you enforce uniform policies and centrally store encryption keys, including special data-recovery keys that solve the problem of lost passwords and prevent employees from locking employers out of their hard drives.

If you can't do TPM, here's your plan B for encryption

Although the deployment of TPM-based full disk encryption is ideal, you may count the cost of full disk encryption and come up short-funded, especially if you just refreshed your enterprise laptops with non-TPM models. Forklifting your entire laptop population is an undeniably expensive proposition, as is replacing the non-TPM laptops if your company has a mix of TPM and non-TPM laptops. If you can't go all TPM, there's a plan B that can give you much of the encryption benefit you need.
You might think that plan B involves partial disk encryption, typically deployed by designating specific folders on a laptop as encrypted; as files are moved into that folder, they are automatically encrypted. Apple and Microsoft have long offered this form of encryption, via FileVault on the Mac and the Encrypted File System tools in Windows XP and Vista. But this approach has a major flaw: It depends on users to properly store sensitive data only in encrypted form.

A variation of folder-level encryption is virtual disk encryption (VDE), in which a single disk file contains a virtual disk image that the user can mount when needed; this virtual disk collects all sensitive files in one location. Microsoft's BitLocker offers this feature in all Vista editions, as well as in Windows Server 2008 and Windows XP. Third-party products such as PGPDisk and even free open source software programs such as TrueCrypt have VDE capabilities. Many of these third-party utilities are easier to use than BitLocker, so they can save you some implementation expense.
Another form of partial disk encryption is to apply encryption to specific files, typically those residing on corporate servers that users want to open locally. In this approach, users must enter a password every time they open a protected file. IT not only is on the hook to ensure that all sensitive files get encrypted but also has no way to stop users from simply saving the opened file as an unencrypted copy. Still, this protection is better than nothing and is widely available via free disk utilities. But key management can be a problem, and these file-level encryption tools generally don't support multifactor authentication.

But the best plan B to TPM-enabled full disk encryption isn't any of these partial disk methods. The best plan is software-only full disk encryption, in which either the operating system or a third-party program performs the same encryption as with TPM but uses another method to store the encryption keys, such as a thumb drive or a smart card.

The good news is that virtually all TPM full disk encryption suppliers' offerings, including BitLocker, can operate in this software-only mode, which relies on a removable hardware token. That means you can use this approach for your non-TPM devices while having a consistent encryption method to manage across all your laptops.
It's true that software-based full disk encryption is less secure than if you have a TPM-equipped laptop: The entire drive can still be encrypted, but a determined hacker will have more opportunities to gain access through compromised keys. For example, if the key-storage token is left with the notebook computer (how likely is that?), the hacker may be able to simply plug the token in and gain access to the drive contents. Even multifactor authentication in this scenario is subject to attack by inspection, since the key token is not tightly bound to the system motherboard.
Still, when TPM-enabled encryption is not an option, pure software full disk encryption can still give you considerable peace of mind, as well as provide the "safe harbor" benefits afforded encrypted systems in data-privacy regulations. Software full disk encryption solutions have also been around long enough that they're available for most mobile computing platforms, including Linux and Mac OS X.
TPM technology changes to come

Although TPM full disk encryption with hardware-based encryption in the hard drive is the best you can do for data protection today, security researchers are constantly testing TPM's mettle and devising improvements to it.
One potential vulnerability of today's separate TPM chip implementation is that keys must be transported across conductors in the motherboard to the CPU for software-based full disk encryption, or to the hard drive for hardware-based full disk encryption. That could provide an entry point for a hacker. That's why a major vendor trend is to move all TPM-oriented data manipulation on to the CPU chip set in the form of customized silicon. Intel has advertised its vPro solution, which is part of the upcoming Danbury processor and Eaglelake chip set. This feature will perform all encryption and decryption for SATA and eSATA drives without involving the CPU, OS device drivers, or even the hard drive itself.

Such an approach could make TPM even more secure. But there's no reason to wait until such chips are standard in laptops. With today's TPM-equipped laptops, and with the software-based fallback option for non-TPM laptops, you have a platform for a consistent, manageable, secure deployment strategy.

Google Beats Estimates, Profit Takes a Hit

Just 13 Months Ago, Google's Stock Hit an All-Time High of $747; Today It's at $314.
Internet giant earns $5.10 a share, topping estimates, despite a dreary economy. Net income drops 68% on charges.
Internet advertising behemoth Google continued to show strong sales and profit against a thorny economic backdrop.
The Mountain View, Calif.-based company reported an 18% jump in fourth-quarter revenue to $5.7 billion for the period ended Dec. 31. That's up from $4.83 billion in the year-earlier quarter.
Excluding commissions paid to advertising partners, Google posted sales of $4.22 billion, better than the $4.12 billion in sales expected by analysts polled by Thomson Reuters.
Google reported fourth-quarter net income of $382 million, down 68% from $1.2 billion a year ago. However, excluding certain charges, such as the cost of employee stock options, the company earned $5.10 a share, much better than consensus estimates of $4.95 per share.
"We had tight control over costs" in the quarter, said Google chief executive Eric Schmidt in a conference call with analysts.

"We don't know how long this period will last," Schmidt said, but he added that Google remained focused on long-term growth.
Schmidt pointed to the scaling back of non-profitable Google projects such as Google Video, Google Notebook, and status update service Jaiku. He also mentioned a quarterly decline in costs paid to advertising partners.
"Google continues to take market share, and they continue to have any number of levers to pull on both the revenue and the cost side that makes them very formidable in any economic environment," said Derek Brown, analyst with brokerage Cantor Fitzgerald.
Over the last quarter of 2008, Google said it spent about $368 million on capital expenses - mostly on data centers, servers and networking equipment.
As of Dec. 31, Google said it employed 20,222 full-time workers, slightly up from the 20,123 it employed at the end of September.
In order to retain employees, Google also announced that it would be starting a stock option exchange program from the end of January through early March.


Google joined Apple and IBM as one of the few tech companies to report good news in its most recent earnings report. The search giant beat analysts' estimates today, though it reported a sharp drop in net income for the fourth quarter to $382 million, or $1.21 a share, well below the $1.2 billion, or $3.79 a share from a year ago.
Revenue for the fourth quarter rose 18 percent from the same period last year to $5.70 billion and 3 percent from the previous quarter. Google (NASDAQ: GOOG) also suffered significant non-cash impairment charges of $1.09 billion related primarily to its investments in AOL and Clearwire, a wireless broadband service that has partnered with Intel to build WiMax services across the country.
"The results were better than I expected," IDC analyst Karsten Weide said. "Google is doing great because about half of the online ad spend in the U.S. is search and they have about half that market. They are leveraging the biggest market out there."
On a conference call with financial analysts, Google CEO Eric Schmidt noted "strong search query growth year on year." He also credited "tight control over costs that may have eluded us in the past, but I think we've got the formula down now."

Schmidt acknowledged AOL and Clearwire were "significant writedowns," but thinks there's a longer term payoff to come. "Both deals made sense for us and continue to fit with our business philosophy."
While neither Schmidt nor other Google executives on the call got very specific about new initiatives or product plans, Schmidt did say the company is looking at new ways to recognize the contextual meaning of a search phrase, which it would roll into its market-leading search engine.
The past year saw Google branch out significantly beyond its original model of text-only results. In 2008 Google tripled the number of non-text results, which include video, images, blogs and books, said Jonathan Rosenberg, Google's senior vice president of product management.
He also said Google's $125 million settlement in October with the Authors Guild and the Association of American Publishers promises to make content from millions of out-of-print books accessible online and even create a new market for the sale of those books.

Analyst Weide said Google still faces challenges growing the display side of its ad business. He said Yahoo (NASDAQ: YHOO) is the online display ad leader with about a 16 percent share in a very fragmented market. "Search advertising isn't always going to be the biggest segment. It probably will still be in five years, but not always and right now Google is essentially a one-trick pony," he said.
Weide also said YouTube's been "a sinkhole" for Google, which bought the video site for over $1.65 billion in 2006. "User generated content can work as an advertising source, but it's going to take a while," said Weide. "I think Google is going to have to acquire long form, professional content because big name advertisers want premium content not grainy, amateur video."
Rosenberg said Google continues to experiment with different ad approaches for YouTube. "It's hard to match the right format with the right content," he said. "We have to come up with a standard format to make it easier."
New employee stock options
Google also announced a new stock options plan, beginning January 29, that's designed to help retain employees. Schmidt said about 85 percent of its 20,000 employees had stock options "under water," or priced higher than the current trading price of stock.
Under the voluntary plan, employees can exchange all or a portion of their existing stock options for the same number of new options. Google said it expects the new options to have an exercise price equal to the closing price per share on March 2, 2009. Stock options with exercise prices above the March 2 closing price would be eligible for exchange, though Google said details of how the plan will work could change.
In after hours trading Thursday, Google shares were down $8.18 to $298.32.

Friday, January 23, 2009

New York City & Google starting a new trend in city-oriented tourist Web sites

Google and New York City Mayor Michael Bloomberg launched a new initiative today, aimed at helping tourists and residents get around New York without feeling they've missed any of the city's exciting places and events.
In what could be a model for other cities, New York has partnered with Google to launch a Web site for tourists and a high-tech visitor's center. Interactive tables and a video wall let visitors explore New York, get local opinions, and save information. One forecaster called New York's high-tech center an evolution of "out-of-home" marketing.

New York City may be starting a new trend in city-oriented tourist Web sites. In partnership with Google, it has launched a portal to promote tourism and opened a high-tech information center for visitors. The Web site uses Google Maps and other information to make it easy for a visitor or a local resident to quickly find things to do, places to go, restaurants and other points of interest. The site also provides discounts and promotions.
Just Ask the Locals
The information center at 810 Seventh Avenue offers touch-sensitive horizontal screen tables that also use Google Maps. In a statement on The Official Google Blog, New York City Mayor Michael Bloomberg wrote that the new Web site and information center will "help make it easier for both visitors and residents to explore the energy, excitement and diversity of New York City's five boroughs."
Visitors can move around a table's map of the city's five boroughs. If the user has selected a category such as Museums & Galleries or Dining, the map will flag those places as a token is moved around. Each flagged item can then be opened to reveal photos and more information.
Since there are probably 10 million opinions about the city, no visitor's center would be complete without at least a few virtual New Yorkers. A visitor can browse a Just Ask The Locals section, where famous New Yorkers give recommendations.
'Custom Itinerary Flyover'
Visitors can save sites, recommendations and more to a physical disk and take it to a Video Wall where a "custom itinerary flyover" soars virtually over a detailed, three-dimensional map of the city. The wall also offers yet more advice from celebrities and local experts, and the visitor can send the itinerary to his or her cell phone, e-mail, or print it.
Andrew Frank, an analyst with Gartner, said such a high-tech center for visitors could be a marketing tool for other cities.
If done with an eye toward ease of use, as New York's appears to be, Frank indicated that such centers could appeal to the wide range of technological sophistication among visitors and locals in any city. He also said New York's center is another indication of "the evolution of out-of-home" marketing experiences, which increasingly are accompanied by ways to measure how people use them.
But, Frank noted, an issue with these centers -- and even Web sites -- is keeping them up to date, not only with data, but with the latest technology and fastidious, shining surfaces.

Google powers new NYC information hub

Google Maps and Google Earth are the centerpiece of NYCGo, a new information and reference project launched by the New York City government to provide resources to both visitors and locals. Wednesday's launch announced the debut of the site, a Google Maps-fueled local search and reference portal, as well as the unveiling of the renovated New York City Information Center a few blocks north of the tourist-heavy Times Square district. The site contains not just Google map and search data, but also travel deals from Travelocity and local content from what-to-do powerhouse Time Out New York, nightlife culture magazine Paper, the New York Observer, and eco-living guide Greenopia.
The information center, located on Seventh Avenue between 52nd and 53rd streets, is equally Googly. The city's technocratic mayor, Michael Bloomberg, even contributed a guest post to the official Google blog to announce it: "The Information Center features interactive map tables, powered by the Google Maps API for Flash, that let you navigate venues and attractions as well as create personalized itineraries, which can be printed, emailed or sent to mobile devices," the blog post explained. "Additionally, there's a gigantic video wall that utilizes Google Earth to display a 3D model of New York City on which you can map out personalized itineraries."
Bloomberg has been aggressive about promoting tech initiatives during his time in office, from a wind power plan (part of the much bigger "GreeNYC" project) to a city-run venture firm. Under his watch, the Mountain View, Calif.-based Google opened its New York satellite office, taking over several floors of the historic former Port Authority building downtown.